Open Neural Network Exchange (ONNX) is the first step toward an open ecosystem that empowers AI developers to choose the right tools as their projects evolve. ONNX provides an open-source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Initially, we focus on the capabilities needed for inferencing (evaluation). Caffe2, PyTorch, Microsoft Cognitive Toolkit, Apache MXNet and other tools are developing ONNX support. Enabling interoperability between different frameworks and streamlining the path from research to production will increase the speed of innovation in the AI community. ONNX is at an early stage, and we invite the community to submit feedback and help us evolve it further.
deep-learning deep-neural-networks neural-network onnx pytorch caffe2 cntk

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g. `-v /host/data:/data -v /host/config:/config`. This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chance of your containerized experiments overwriting or using the wrong data.
deep-learning jupyter lasagne caffe tensorflow sonnet keras theano chainer torch pytorch mxnet cntk dockerfile-generator docker-image caffe2 onnx

Multi Model Server (MMS) is a flexible and easy-to-use tool for serving deep learning models trained using any ML/DL framework. Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
ai deep-learning server neural-network mxnet inference onnx

Coach is a Python reinforcement learning framework containing implementations of many state-of-the-art algorithms. It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so extending and reusing existing components is fairly painless.
reinforcement-learning deep-learning mxnet tensorflow openai-gym rl starcraft imitation-learning hierarchical-reinforcement-learning coach mujoco starcraft2 onnx roboschool carla starcraft2-ai distributed-reinforcement-learning

Distiller is an open-source Python package for neural network compression research. Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
deep-neural-networks jupyter-notebook pytorch regularization pruning quantization group-lasso distillation onnx truncated-svd network-compression pruning-structures early-exit automl-for-compression

Welcome to the open-source repository for the Intel® nGraph™ Library. Our code base provides a compiler and runtime suite of tools (APIs) designed to give developers maximum flexibility for their software design, allowing them to create or customize a scalable solution using any framework while also avoiding the device-level hardware lock-in that is so common with many AI vendors. A neural network model compiled with nGraph can run on any of our currently-supported backends, and it will be able to run on any backends we support in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware's capabilities to stay at the forefront of your industry. The nGraph Compiler is Intel's graph compiler for Artificial Neural Networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of backends, including Intel® Architecture Processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs like Movidius, and many others. The default CPU backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations to further accelerate training or inference in whatever scenario you need.
ngraph tensorflow mxnet deep-learning compiler performance onnx paddlepaddle neural-network deep-neural-networks pytorch caffe2

The ONNX Model Zoo is a collection of pre-trained, state-of-the-art deep learning models, available in the ONNX format. Accompanying each model are Jupyter notebooks for model training and for running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture. The notebooks can be exported and run as Python (.py) files. The Open Neural Network eXchange (ONNX) is an open format to represent deep learning models. With ONNX, developers can move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners.
onnx models download pretrained deep-learning

Translate is a library for machine translation written in PyTorch. It provides training for sequence-to-sequence models. Translate relies on fairseq, a general sequence-to-sequence library, which means that models implemented in both Translate and Fairseq can be trained. Translate also provides the ability to export some models to Caffe2 graphs via ONNX and to load and run these models from C++ for production purposes. Currently, we export components (encoder, decoder) to Caffe2 separately, and beam search is implemented in C++. In the near future, we will be able to export beam search as well. We also plan to add export support for more models. Provided you have CUDA installed, you should be good to go.
artificial-intelligence machine-learning onnx pytorch

A lightweight, portable, pure C99 ONNX inference engine for embedded devices with hardware acceleration support. The library's .c and .h files can be dropped into a project and compiled along with it. Before use, a struct onnx_context_t * should be allocated, and you can pass an array of struct resolver_t * for hardware acceleration.
lightweight machine-learning library embedded ai deep-learning neural-network portable inference embedded-systems hardware-acceleration baremetal onnx deep-neural-networks

Paddle2ONNX enables users to convert models from PaddlePaddle to ONNX. Apache-2.0 license.
ocr detection deploy classification paddlepaddle onnx ppyolo paddlepaddle-models ppocr

This is an add-on package for ONNX support in Chainer. Using onnx-caffe2 is a simple way to run the exported models.
chainer onnx deep-learning onnx-chainer onnx-support caffe onnx-format

nGraph Backend for ONNX. This repository contains tools to run ONNX models using the Intel® nGraph™ Library as a backend.
ngraph onnx onnx-support onnx-backend

This is the R Interface to Open Neural Network Exchange (ONNX). Please visit here for tutorials and API reference.
rstats deep-learning onnx deep-neural-networks cran

ONNX-TF requires ONNX (Open Neural Network Exchange) as an external dependency; for any issues related to ONNX installation, we refer our users to the ONNX project repository for documentation and help. Notably, please ensure that protoc is available if you plan to install ONNX via pip. The specific ONNX release version that we support in the master branch of ONNX-TF can be found here. This version requirement is automatically encoded in setup.py, so users needn't worry about it when installing ONNX-TF.
deep-neural-networks deep-learning tensorflow onnx

Parses ONNX models for execution with TensorRT. See also the TensorRT documentation.
onnx deep-learning nvidia

Clone this repository on your local machine. If you choose to install onnxmltools from its source code, you must set the environment variable ONNX_ML=1 before installing the onnx package.
machine-learning python-library onnx scikit-learn

ONNX Runtime is an open-source scoring engine for Open Neural Network Exchange (ONNX) models. ONNX is an open format for machine learning (ML) models that is supported by various ML and DNN frameworks and tools. This format makes it easier to interoperate between frameworks and to maximize the reach of your hardware optimization investments. Learn more about ONNX at https://onnx.ai or view the GitHub repo.
deep-learning onnx neural-networks machine-learning ai-framework hardware-acceleration

This repository is a work in progress.
onnx gorgonia graph machine-learning

This is the Go Interface to Open Neural Network Exchange (ONNX). It contains a compiled version of the ONNX protobuf definition file.
onnx neural-network open-source

X2Paddle is a toolkit for converting trained models to PaddlePaddle from other deep learning frameworks.
paddlepaddle tensorflow caffe model-converter onnx pytorch