
io-ts - TypeScript compatible runtime type system for IO decoding/encoding

  •    TypeScript

A value of type Type<A, O, I> (called a "runtime type") is the runtime representation of the static type A. Decoding with a runtime type returns an Either; the Either type is defined in fp-ts, a library containing implementations of common algebraic types in TypeScript.

ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform

  •    C

ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has taken deployment and use on mobile phones into account from the beginning of its design. ncnn has no third-party dependencies, is cross-platform, and runs faster than all known open-source frameworks on mobile phone CPUs. With ncnn, developers can easily deploy deep learning models to mobile platforms, create intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.

NNPACK - Acceleration package for neural networks on multi-core CPUs

  •    C

NNPACK is an acceleration package for neural network computations. NNPACK aims to provide high-performance implementations of convnet layers for multi-core CPUs. NNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives leveraged in leading deep learning frameworks, such as PyTorch, Caffe2, MXNet, tiny-dnn, Caffe, Torch, and Darknet.

jetson-inference - Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson

  •    C++

Welcome to our training guide for the inference and deep vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2. This repo uses NVIDIA TensorRT to deploy neural networks efficiently onto the embedded platform, improving performance and power efficiency through graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.
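
As a quick flavor of the workflow, here is a minimal classification sketch assuming the project's Python bindings (jetson.inference / jetson.utils) are installed on a Jetson device; the module and function names follow the project's image-recognition example and may differ between versions.

```python
# Minimal image-classification sketch using jetson-inference's Python bindings.
# Assumes the bindings are installed on a Jetson; names follow the project's
# imagenet example and may differ between versions.
import jetson.inference
import jetson.utils

# Load a TensorRT-optimized classification network (the engine is built on first use).
net = jetson.inference.imageNet("googlenet")

# Load an image into GPU memory and classify it.
img = jetson.utils.loadImage("example.jpg")
class_id, confidence = net.Classify(img)

print(net.GetClassDesc(class_id), confidence)
```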

delta - DELTA is a deep learning based natural language and speech processing platform.

  •    Python

DELTA is a deep learning based end-to-end natural language and speech processing platform. DELTA aims to provide easy and fast experiences for using, deploying, and developing natural language processing and speech models for both academia and industry use cases. DELTA is mainly implemented using TensorFlow and Python 3. For details of DELTA, please refer to this paper.

Pyro - Deep universal probabilistic programming with Python and PyTorch

  •    Python

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling.
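
As a small taste of the API, here is a minimal sketch of a Pyro model and stochastic variational inference; the data and variable names are illustrative only.

```python
# Minimal Pyro sketch: Bayesian estimate of a coin's bias via SVI.
# Data and variable names are illustrative only.
import torch
from torch.distributions import constraints
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

data = torch.tensor([1., 1., 0., 1., 0., 1., 1., 1., 0., 1.])

def model(data):
    # Beta prior over the latent coin bias.
    bias = pyro.sample("bias", dist.Beta(2.0, 2.0))
    with pyro.plate("flips", len(data)):
        pyro.sample("obs", dist.Bernoulli(bias), obs=data)

def guide(data):
    # Variational Beta posterior with learnable parameters.
    a = pyro.param("a", torch.tensor(2.0), constraint=constraints.positive)
    b = pyro.param("b", torch.tensor(2.0), constraint=constraints.positive)
    pyro.sample("bias", dist.Beta(a, b))

svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step(data)
```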


gpu-rest-engine - A REST API for Caffe using Docker and Go

  •    C++

This repository shows how to implement a REST server for low-latency image classification (inference) using NVIDIA GPUs. This is an initial demonstration of the GRE (GPU REST Engine) software that will allow you to build your own accelerated microservices. This repository is a demo; it is not intended to be a generic solution that can accept any trained model. Code customization will be required for your use cases.
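
For illustration, a client might send an image to the server's classification endpoint roughly as follows; the host, port, and endpoint path are assumptions based on the demo and may differ in your deployment.

```python
# Hypothetical client for the GPU REST Engine demo: POST raw image bytes to
# the classification endpoint. The URL and path are assumptions and may need
# adjusting for your deployment.
import requests

with open("cat.jpg", "rb") as f:
    image_bytes = f.read()

resp = requests.post("http://127.0.0.1:8000/api/classify", data=image_bytes)
resp.raise_for_status()
print(resp.text)  # classification results returned by the server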

atomspace - The OpenCog hypergraph database, query system and rule engine

  •    C++

The OpenCog AtomSpace is a knowledge representation (KR) database and the associated query/reasoning engine used to fetch and manipulate that data and perform reasoning on it. Data is represented in the form of graphs, and more generally hypergraphs; thus the AtomSpace is a kind of graph database, the query engine is a general graph-rewriting system, and the rule engine is a generalized rule-driven inferencing system. The vertices and edges of a graph, known as "Atoms", are used to represent not only "data" but also "procedures"; thus, many graphs are executable programs as well as data structures. The AtomSpace is a platform for building Artificial General Intelligence (AGI) systems and provides the central knowledge representation component for OpenCog. As such, it is a fairly mature component on which many other systems are built, and those systems depend on it for stable, correct operation in a day-to-day production environment.
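
As a small illustration of the Atom/hypergraph idea, a sketch using the AtomSpace Python bindings might look like the following; the module and call names follow the classic add_node/add_link API and may differ between OpenCog versions.

```python
# Sketch of representing knowledge as Atoms via the AtomSpace Python bindings.
# Module and call names follow the classic API and may differ across versions.
from opencog.atomspace import AtomSpace, types

atomspace = AtomSpace()

# Vertices ("nodes") and edges ("links") are both Atoms.
cat = atomspace.add_node(types.ConceptNode, "cat")
animal = atomspace.add_node(types.ConceptNode, "animal")

# An InheritanceLink is itself an Atom relating the two nodes,
# which is what makes the store a hypergraph rather than a plain graph.
inheritance = atomspace.add_link(types.InheritanceLink, [cat, animal])
print(inheritance)
```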

filetype.py - Small Python package to infer the file type by checking the magic numbers signature

  •    Python

Small and dependency-free Python package to infer file type and MIME type by checking the magic numbers signature of a file or buffer. This is a Python port of the filetype Go package.
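
A minimal usage sketch (the file path is illustrative):

```python
# Infer a file's type from its magic number signature using filetype.
import filetype

kind = filetype.guess("example.png")
if kind is None:
    print("Cannot guess file type")
else:
    print(kind.extension)  # e.g. "png"
    print(kind.mime)       # e.g. "image/png"
```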

mxnet-model-server - Model Server for Apache MXNet is a tool for deploying neural net models for inference

  •    Python

Model Server for Apache MXNet (MMS) is a flexible and easy-to-use tool for serving deep learning models. Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
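
Once a model is being served, a client can hit the inference endpoint over HTTP; a rough sketch is shown below, where the host, port, endpoint path, and model name are assumptions that depend on how the server was started.

```python
# Hypothetical MMS client: POST an image to a model's prediction endpoint.
# Host, port, endpoint path, and model name depend on your server setup
# and are assumptions here.
import requests

with open("kitten.jpg", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8080/predictions/squeezenet",
        data=f.read(),
    )

resp.raise_for_status()
print(resp.json())  # model predictions returned as JSON
```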

cordova-plugin-tensorflow - On-device image recognition via TensorFlow/Inception

  •    Objective-C++

The plugin provides a TensorFlow class that can be used to initialize graphs and run the inference algorithm. To use a custom model, follow the steps to retrain the model and optimize it for mobile use. Put the .pb and .txt files in an HTTP-accessible zip file, which will be downloaded via the FileTransfer plugin. If you use the generic Inception model, it will be downloaded from the TensorFlow website on first use.

infer - 🔮 Use TensorFlow models in Go to evaluate Images (and more soon!)

  •    Go

Infer is a Go package for running predictions with TensorFlow models. This package provides abstractions for running inference on TensorFlow models for common types. At the moment it only has methods for images; however, it can certainly support more in the future.

json-autotype - Automatic Haskell type inference from JSON input

  •    Haskell

Takes JSON-formatted input and automatically generates Haskell type declarations. Parser and printer instances are derived using Aeson.

bayesian-bandit.js - Bayesian bandit implementation for Node and the browser.

  •    JavaScript

This is an adaptation of the Bayesian Bandit code from Probabilistic Programming and Bayesian Methods for Hackers, specifically d3bandits.js. The code has been rewritten to be more idiomatic and also usable as a browser script or npm package. Additionally, unit tests are included.
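
The underlying algorithm is Thompson sampling with a Beta posterior over each arm's success probability; a conceptual sketch of that algorithm (not the package's JavaScript API) is shown below.

```python
# Conceptual sketch of the Bayesian (Thompson sampling) bandit algorithm this
# package implements -- not the package's JavaScript API.
import random

class BetaBandit:
    def __init__(self, num_arms):
        # Beta(1, 1) prior for each arm's success probability.
        self.successes = [1] * num_arms
        self.failures = [1] * num_arms

    def select_arm(self):
        # Sample from each arm's posterior and play the highest draw.
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return samples.index(max(samples))

    def update(self, arm, reward):
        # Update the chosen arm's posterior with the observed outcome.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1
```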

yais - A C++ Library for Developing Compute Intensive Asynchronous Micro-Services using gRPC

  •    C++

C++ library for developing compute-intensive asynchronous services built on gRPC. YAIS provides a bootstrap for CUDA, TensorRT, and gRPC functionality so developers can focus on implementing the server-side RPC without a lot of boilerplate code.

go-mxnet-predictor - go binding for mxnet c_predict_api to do inference with pre-trained model

  •    Go

To run this example, you need to download the model files, mean.bin, and an input image, and then put them in the correct paths. These files are shared via Dropbox and Baidu storage services.

neuroJS - Neural network implementation in JavaScript.

  •    JavaScript

neuroJS is a neural network library written in JavaScript. Use the library by opening test.html in either Chrome or Firefox and opening the console.

spark-ml-serving - Spark ML Lib serving library

  •    Scala

Contextless ML implementation of Spark ML. To serve small ML pipelines there is no need to create a SparkContext or use cluster-related features. In this project we provide our own implementations of ML Transformers; some of them call context-independent Spark methods.