A value of type Type<A, O, I> (called a "runtime type") is the runtime representation of the static type A. Note: the Either type is defined in fp-ts, a library of implementations of common algebraic types in TypeScript.
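io-ts itself is a TypeScript library, but the runtime-type idea — a value that can validate untrusted input against a static type and return an Either-style result — can be sketched in a few lines of Python. All names below are illustrative, not the io-ts API:

```python
# Conceptual sketch of a "runtime type": a value that validates
# untrusted input at runtime and reports success/failure in an
# Either-like result. Illustrative only; io-ts is TypeScript.
from dataclasses import dataclass
from typing import Any, Callable, Generic, TypeVar

A = TypeVar("A")

@dataclass
class Right(Generic[A]):   # successful decode, carries the value
    value: A

@dataclass
class Left:                # failed decode, carries an error message
    error: str

@dataclass
class RuntimeType(Generic[A]):
    name: str
    is_: Callable[[Any], bool]  # runtime type guard

    def decode(self, value: Any):
        if self.is_(value):
            return Right(value)
        return Left(f"Invalid value {value!r} for type {self.name}")

string = RuntimeType("string", lambda v: isinstance(v, str))

print(string.decode("hello"))  # Right(value='hello')
print(string.decode(42))       # Left(error="Invalid value 42 for type string")
```

The `decode` result plays the role of fp-ts's Either: callers branch on success or failure instead of catching exceptions.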
Tags: typescript validation inference types runtime

ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has been designed with deployment and use on mobile phones in mind from the beginning. It has no third-party dependencies, is cross-platform, and runs faster than all known open source frameworks on mobile phone CPUs. Developers can easily deploy deep learning models to mobile platforms using the efficient ncnn implementation, create intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.
Tags: neural-network inference high-performance simd arm-neon deep-learning artificial-intelligence android ios

NNPACK is an acceleration package for neural network computations. It aims to provide high-performance implementations of convnet layers for multi-core CPUs. NNPACK is not intended to be used directly by machine learning researchers; instead, it provides low-level performance primitives leveraged by leading deep learning frameworks such as PyTorch, Caffe2, MXNet, tiny-dnn, Caffe, Torch, and Darknet.
Tags: neural-network neural-networks convolutional-layers inference high-performance high-performance-computing simd cpu multithreading fast-fourier-transform winograd-transform matrix-multiplication

The Knowledge Graph
Tags: grakn graql knowledge-base knowledge-graph knowledge-representation reasoning relational-databases hyper-relational database graph graph-database graph-visualization logic deductions knowledge-engineering enterprise-knowledge-graph knowledge-engine query-language hyper-relational-database inference

Welcome to our training guide for the inference and deep-vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2. This repo uses NVIDIA TensorRT to deploy neural networks efficiently onto the embedded platform, improving performance and power efficiency through graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.
Tags: deep-learning inference computer-vision embedded image-recognition object-detection segmentation jetson jetson-tx1 jetson-tx2

DELTA is a deep learning based end-to-end natural language and speech processing platform. DELTA aims to provide an easy and fast experience for using, deploying, and developing natural language processing and speech models, for both academic and industry use cases. DELTA is mainly implemented with TensorFlow and Python 3. For details of DELTA, please refer to this paper.
Tags: nlp deep-learning tensorflow speech sequence-to-sequence seq2seq speech-recognition text-classification speaker-verification nlu text-generation emotion-recognition tensorflow-serving tensorflow-lite inference asr serving front-end

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling.
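Pyro automates posterior inference for rich, deep probabilistic models. As a minimal plain-Python illustration of the Bayesian updating it generalizes (this is not Pyro code), consider the conjugate Beta-Bernoulli model:

```python
# Bayesian inference by hand: the Beta-Bernoulli conjugate update.
# Pyro automates this kind of posterior computation for far richer
# models; this sketch only shows the underlying idea.
def beta_bernoulli_update(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior with a list of 0/1 observations."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Beta(1, 1) is a uniform prior over a coin's bias.
alpha, beta = beta_bernoulli_update(1, 1, [1, 1, 0, 1, 1])
posterior_mean = alpha / (alpha + beta)  # (1 + 4) / (2 + 5) = 5/7
print(alpha, beta, round(posterior_mean, 3))  # 5 2 0.714
```

In Pyro the same model would be written as a stochastic function, with inference (e.g. variational inference) handled by the library rather than a closed-form update.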
Tags: pytorch machine-learning bayesian webppl inference probabilistic-programming probabilistic-graphical-models bayesian-inference variational-inference uber

This repository shows how to implement a REST server for low-latency image classification (inference) using NVIDIA GPUs. It is an initial demonstration of the GRE (GPU REST Engine) software that will allow you to build your own accelerated microservices. This repository is a demo; it is not intended to be a generic solution that can accept any trained model. Code customization will be required for your use cases.
Tags: caffe gpu inference inference-server docker deep-learning

The OpenCog AtomSpace is a knowledge representation (KR) database and the associated query/reasoning engine used to fetch and manipulate that data and perform reasoning on it. Data is represented in the form of graphs and, more generally, hypergraphs; thus the AtomSpace is a kind of graph database, the query engine is a general graph-rewriting system, and the rule engine is a generalized rule-driven inferencing system. The vertices and edges of a graph, known as "Atoms", are used to represent not only "data" but also "procedures"; thus, many graphs are executable programs as well as data structures. The AtomSpace is a platform for building Artificial General Intelligence (AGI) systems and provides the central knowledge representation component for OpenCog. As such, it is a fairly mature component on which many other systems are built, and which depend on it for stable, correct operation in a day-to-day production environment.
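The hypergraph representation described above can be sketched conceptually: every Atom is either a Node (a named vertex) or a Link (an edge that may connect any number of other Atoms, including other Links). This is an illustrative Python sketch, not the OpenCog API:

```python
# Conceptual sketch of an AtomSpace-style hypergraph: Links may point
# at Nodes or at other Links, which is what makes it a hypergraph
# rather than an ordinary graph. Illustrative only, not OpenCog code.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    type: str
    name: str

@dataclass(frozen=True)
class Link:
    type: str
    outgoing: Tuple  # ordered tuple of Atoms (Nodes or Links)

# "Cats are animals", represented as a hypergraph edge:
cat = Node("ConceptNode", "cat")
animal = Node("ConceptNode", "animal")
inheritance = Link("InheritanceLink", (cat, animal))

# A Link pointing at another Link: a statement about a statement.
evaluation = Link("EvaluationLink",
                  (Node("PredicateNode", "believed"), inheritance))
print(evaluation)
```

Rule-driven inference then amounts to rewriting such hypergraphs: pattern-matching sub-hypergraphs and producing new Atoms from them.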
Tags: graph-database rule-engine knowledge-representation query-engine logic-programming knowledge-graph knowledge-base query-language relational-database relational-algebra reasoning rewrite-system rewriting inference-engine inference inference-rules

A small, dependency-free Python package to infer the file type and MIME type by checking the magic-number signature of a file or buffer. This is a Python port of the filetype Go package.
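The magic-number idea behind the package: compare a file's first bytes against known format signatures. A self-contained toy sketch (the real package covers many more formats and exposes its own `guess` API):

```python
# Toy magic-number detection: match a buffer's leading bytes against
# known file signatures. Illustrative sketch, not the filetype API.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ("png", "image/png"),
    b"\xff\xd8\xff":      ("jpg", "image/jpeg"),
    b"GIF87a":            ("gif", "image/gif"),
    b"GIF89a":            ("gif", "image/gif"),
    b"%PDF-":             ("pdf", "application/pdf"),
}

def guess(buf: bytes):
    """Return (extension, mime) for a byte buffer, or None if unknown."""
    for magic, kind in SIGNATURES.items():
        if buf.startswith(magic):
            return kind
    return None

print(guess(b"\x89PNG\r\n\x1a\n...."))  # ('png', 'image/png')
print(guess(b"hello"))                  # None
```

Because only the first few bytes are inspected, detection works on a buffer prefix without reading the whole file, which is why the approach is cheap and dependency-free.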
Tags: magic-numbers filetype mime extension type inference

Model Server for Apache MXNet (MMS) is a flexible and easy-to-use tool for serving deep learning models. Use the MMS server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
Tags: mxnet deep-learning inference ai

The plugin provides a TensorFlow class that can be used to initialize graphs and run the inference algorithm. To use a custom model, follow the steps to retrain the model and optimize it for mobile use. Put the .pb and .txt files in an HTTP-accessible zip file, which will be downloaded via the FileTransfer plugin. If you use the generic Inception model, it will be downloaded from the TensorFlow website on first use.
Tags: cordova phonegap tensorflow inception image-recognition neural-network machine-learning ai inference classification imagerecognition neuralnetworks machinelearning ecosystem:cordova cordova-android

Infer is a Go package for running predictions with TensorFlow models. It provides abstractions for running inference on TensorFlow models for common input types. At the moment it only has methods for images, but it may support more types in the future.
Tags: tensorflow prediction machine-learning inference

Takes JSON-format input and automatically generates Haskell type declarations. Parser and printer instances are derived using Aeson.
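The core idea — walk a JSON value and infer a type skeleton for it — can be sketched in a few lines. The real tool emits Haskell declarations with Aeson instances; this Python toy (illustrative names, not json-autotype's output format) just names the shapes:

```python
# Toy JSON type inference: recursively describe the type of a JSON
# value. Illustrative sketch of the idea behind json-autotype.
import json

def infer(value):
    if isinstance(value, bool):          # check bool before int:
        return "Bool"                    # bool is a subclass of int
    if isinstance(value, (int, float)):
        return "Number"
    if isinstance(value, str):
        return "String"
    if value is None:
        return "Null"
    if isinstance(value, list):
        inner = {infer(v) for v in value} or {"a"}  # "a" = type variable
        return "[" + " | ".join(sorted(inner)) + "]"
    if isinstance(value, dict):
        fields = ", ".join(f"{k} :: {infer(v)}" for k, v in value.items())
        return "{ " + fields + " }"

doc = json.loads('{"name": "Ada", "age": 36, "tags": ["x", "y"]}')
print(infer(doc))  # { name :: String, age :: Number, tags :: [String] }
```

json-autotype goes further: it unifies the types inferred from many sample documents (hence the "unification" tag) before emitting declarations.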
Tags: haskell json hackage json-autotype inference unification parse

This is an adaptation of the Bayesian bandit code from Probabilistic Programming and Bayesian Methods for Hackers, specifically d3bandits.js. The code has been rewritten to be more idiomatic and usable as either a browser script or an npm package. Unit tests are also included.
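The Bayesian bandit strategy in that book is Thompson sampling: keep a Beta posterior per arm, sample from each posterior, and pull the arm with the highest sample. A plain-Python sketch of the same idea (the repo itself is JavaScript):

```python
# Thompson sampling for a Bernoulli multi-armed bandit: illustrative
# plain-Python sketch of the strategy, not the repo's JS code.
import random

def thompson_step(successes, failures, true_rewards, rng):
    """One round: choose an arm, observe a reward, update its posterior."""
    # Sample each arm's payout probability from its Beta posterior
    # (Beta(1, 1) prior, so +1 on both counts).
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    arm = samples.index(max(samples))
    reward = 1 if rng.random() < true_rewards[arm] else 0
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1
    return arm

rng = random.Random(0)
true_rewards = [0.2, 0.5, 0.8]   # hidden payout probabilities
wins, losses = [0, 0, 0], [0, 0, 0]
for _ in range(2000):
    thompson_step(wins, losses, true_rewards, rng)

# After many rounds, pulls concentrate on the best arm (index 2).
pulls = [w + l for w, l in zip(wins, losses)]
print(pulls)
```

Exploration falls out of the posterior sampling itself: uncertain arms occasionally produce high samples and get tried, while clearly bad arms are sampled less and less often.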
Tags: machine learning bayes bayesian inference multi-armed n-armed armed bandit reinforcement statistics

A C++ library for developing compute-intensive asynchronous services built on gRPC. YAIS provides a bootstrap for CUDA, TensorRT, and gRPC functionality so developers can focus on implementing the server-side RPC without a lot of boilerplate code.
Tags: deep-learning inference tensorrt cuda grpc

To run this example, you need to download the model files, mean.bin, and an input image, then place them at the correct paths. These files are shared via Dropbox and Baidu storage services.
Tags: machine-learning mxnet cgo inference deep-learning

neuroJS is a neural network library written in JavaScript. Use the library by opening test.html in either Chrome or Firefox and opening the console.
Tags: npm module neural network machine learning math inference pattern recognition supervised

A contextless implementation of Spark ML. To serve small ML pipelines there is no need to create a SparkContext or use cluster-related features, so this project provides its own implementations of the ML Transformers; some of them call context-independent Spark methods.
Tags: spark serving scoring inference