
onnx - Open Neural Network Exchange

  •    Python

Open Neural Network Exchange (ONNX) is the first step toward an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Initially, the focus is on the capabilities needed for inferencing (evaluation). Caffe2, PyTorch, Microsoft Cognitive Toolkit, Apache MXNet and other tools are developing ONNX support. Enabling interoperability between different frameworks and streamlining the path from research to production will increase the speed of innovation in the AI community. ONNX is at an early stage, and the community is invited to submit feedback and help evolve it further.
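
As a minimal sketch of working with the format, the onnx Python package can load a serialized model, validate it against the built-in operator and type definitions, and print its computation graph (the model path below is a placeholder):

    import onnx

    # Load a serialized ONNX model from disk (path is a placeholder)
    model = onnx.load("model.onnx")

    # Validate the model against ONNX's operator and type definitions
    onnx.checker.check_model(model)

    # Print a human-readable description of the computation graph
    print(onnx.helper.printable_graph(model.graph))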

deepo - A series of Docker images (and their generator) that allows you to quickly set up your deep learning research environment

  •    Python

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g. -v /host/data:/data -v /host/config:/config. This makes /host/data on the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using the wrong data.
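
For those scripting container startup from Python, the same mounts can be expressed through the Docker SDK for Python. This is a hedged sketch: the image name ufoym/deepo is assumed from the project's Docker Hub page, and the workload command is hypothetical:

    import docker

    client = docker.from_env()

    # Equivalent of: docker run -v /host/data:/data -v /host/config:/config ufoym/deepo ...
    container = client.containers.run(
        "ufoym/deepo",              # assumed image name
        command="python train.py",  # hypothetical workload
        volumes={
            "/host/data": {"bind": "/data", "mode": "rw"},
            "/host/config": {"bind": "/config", "mode": "rw"},
        },
        detach=True,
    )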

multi-model-server - Multi Model Server is a tool for serving neural net models for inference

  •    Java

Multi Model Server (MMS) is a flexible and easy-to-use tool for serving deep learning models trained using any ML/DL framework. Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
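
Once a model is served, inference is a plain HTTP request. A minimal sketch, assuming a server already running on localhost:8080 with a model registered under the name squeezenet; the host, port, route layout, model name, and image file are all assumptions, not part of the text above:

    import requests

    # POST an image to the model's inference endpoint (URL layout assumed)
    with open("kitten.jpg", "rb") as f:
        response = requests.post(
            "http://localhost:8080/predictions/squeezenet",
            data=f,
        )

    print(response.json())  # e.g. class probabilities for the image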

coach - Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state of the art Reinforcement Learning algorithms

  •    Python

Coach is a Python reinforcement learning framework containing implementations of many state-of-the-art algorithms. It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so extending and reusing existing components is fairly painless.

distiller - Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research

  •    Jupyter

Distiller is an open-source Python package for neural network compression research. Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
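
To make the sparsity idea concrete, here is a minimal plain-PyTorch sketch (deliberately not Distiller's own API) that reports the fraction of zero-valued weights per layer, the kind of statistic compression research tracks:

    import torch
    import torch.nn as nn

    # Toy model standing in for a network under compression
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

    def layer_sparsity(model):
        """Return the fraction of zero-valued entries in each weight tensor."""
        stats = {}
        for name, param in model.named_parameters():
            if "weight" in name:
                zeros = (param == 0).sum().item()
                stats[name] = zeros / param.numel()
        return stats

    print(layer_sparsity(model))  # ~0.0 everywhere before any pruning is applied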

ngraph - nGraph is an open source C++ library, compiler and runtime for Deep Learning frameworks

  •    C++

Welcome to the open-source repository for the Intel® nGraph™ Library. The code base provides a compiler and runtime suite of tools (APIs) designed to give developers maximum flexibility in their software design, allowing them to create or customize a scalable solution using any framework while avoiding the device-level hardware lock-in that is common among AI vendors. A neural network model compiled with nGraph can run on any of the currently supported backends, and it will be able to run on any backends supported in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware capabilities to stay at the forefront of your industry.

The nGraph Compiler is Intel's graph compiler for artificial neural networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of backends, including Intel® Architecture processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs such as Movidius, and many others. The default CPU backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations to further accelerate training or inference in whatever scenario you need.

models - The ONNX Model Zoo is a collection of pre-trained, state-of-the-art models in the ONNX format

  •    Jupyter

The ONNX Model Zoo is a collection of pre-trained, state-of-the-art deep learning models, available in the ONNX format. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture. The notebooks can be exported and run as Python (.py) files. The Open Neural Network eXchange (ONNX) is an open format to represent deep learning models. With ONNX, developers can move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners.

translate - Translate - a PyTorch Language Library

  •    Python

Translate is a library for machine translation written in PyTorch. It provides training for sequence-to-sequence models. Translate relies on fairseq, a general sequence-to-sequence library, which means that models implemented in both Translate and fairseq can be trained. Translate also provides the ability to export some models to Caffe2 graphs via ONNX, and to load and run these models from C++ for production purposes. Currently, the encoder and decoder components are exported to Caffe2 separately, and beam search is implemented in C++. In the near future, the beam search will be exportable as well, and export support is planned for more models. Provided you have CUDA installed, you should be good to go.
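
The Caffe2 export path builds on PyTorch's standard ONNX export mechanism. A generic, hedged sketch of that mechanism follows; the toy module stands in for a trained Translate component, and this is not Translate's own export script:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained encoder component
    encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
    dummy_input = torch.randn(1, 16)

    # Trace the module and serialize it to the ONNX format
    torch.onnx.export(encoder, dummy_input, "encoder.onnx")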

libonnx - A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support

  •    C

A lightweight, portable, pure C99 ONNX inference engine for embedded devices with hardware acceleration support. The library's .c and .h files can be dropped into a project and compiled along with it. Before use, a struct onnx_context_t * must be allocated; an array of struct resolver_t * can be passed in to enable hardware acceleration.

onnx-chainer - Add-on package for ONNX format support in Chainer

  •    Python

This is an add-on package for ONNX support in Chainer. It exports Chainer models to the ONNX format; using onnx-caffe2 is a simple way to run the exported models.
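
A minimal export sketch along the lines of the project's README; the pre-trained VGG16 model is used here as an illustrative example:

    import numpy as np
    import chainer.links as L
    import onnx_chainer

    # Pre-trained VGG16 from Chainer's model zoo
    model = L.VGG16Layers()

    # A dummy input with VGG16's expected shape, used to trace the graph
    x = np.zeros((1, 3, 224, 224), dtype=np.float32)

    # Export the traced model to the ONNX format
    onnx_chainer.export(model, x, filename="vgg16.onnx")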

ngraph-onnx - nGraph™ Backend for ONNX

  •    Python

nGraph Backend for ONNX. This repository contains tools to run ONNX models using the Intel® nGraph™ library as a backend.
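
A hedged usage sketch, modeled on an early version of the project's README; the importer module path and the structure returned by import_onnx_model changed between releases, so treat the exact names as assumptions:

    import onnx
    import numpy as np
    import ngraph as ng
    from ngraph_onnx.onnx_importer.importer import import_onnx_model

    # Convert a loaded ONNX protobuf into an nGraph model (path is a placeholder)
    model = import_onnx_model(onnx.load("model.onnx"))[0]

    # Compile for the CPU backend and run on a dummy input
    runtime = ng.runtime(backend_name="CPU")
    computation = runtime.computation(model["output"], *model["inputs"])
    input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape must match the model
    print(computation(input_data))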

onnx-r - R Interface to Open Neural Network Exchange (ONNX)

  •    R

This is the R interface to the Open Neural Network Exchange (ONNX). Tutorials and an API reference are available in the project's documentation.

onnx-tensorflow - Tensorflow Backend and Frontend for ONNX

  •    Python

ONNX-TF requires ONNX (Open Neural Network Exchange) as an external dependency; for any issues related to ONNX installation, we refer users to the ONNX project repository for documentation and help. Notably, please ensure that protoc is available if you plan to install ONNX via pip. The specific ONNX release version supported on the master branch of ONNX-TF is listed in the project's documentation. This version requirement is automatically encoded in setup.py, so users need not worry about it when installing ONNX-TF.
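
A minimal sketch of running an ONNX model through the TensorFlow backend; the model path and input shape are placeholders and must match your model:

    import onnx
    import numpy as np
    from onnx_tf.backend import prepare

    # Load the ONNX model and prepare a TensorFlow representation of it
    model = onnx.load("model.onnx")
    tf_rep = prepare(model)

    # Run inference on a dummy input (shape is a placeholder)
    input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)
    outputs = tf_rep.run(input_data)
    print(outputs)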

onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX

  •    C++

Parses ONNX models for execution with TensorRT. See also the TensorRT documentation.
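
A usage sketch modeled on the project's Python backend API; the device string and input shape are illustrative, and a CUDA-capable GPU is required:

    import onnx
    import numpy as np
    import onnx_tensorrt.backend as backend

    # Parse an ONNX model and build a TensorRT engine for it
    model = onnx.load("model.onnx")
    engine = backend.prepare(model, device="CUDA:0")

    # Run inference on a dummy batch (shape must match the model's input)
    input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
    output_data = engine.run(input_data)[0]
    print(output_data.shape)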

onnxmltools - ONNXMLTools enables conversion of models to ONNX. Currently supports CoreML and SciKit

  •    Python

Clone this repository on your local machine. If you choose to install onnxmltools from its source code, you must set the environment variable ONNX_ML=1 before installing the onnx package.
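
A hedged conversion sketch for the Core ML path; the file names are placeholders, and coremltools must be installed:

    import coremltools
    import onnxmltools

    # Load a Core ML model specification (path is a placeholder)
    coreml_model = coremltools.utils.load_spec("model.mlmodel")

    # Convert the Core ML model to ONNX
    onnx_model = onnxmltools.convert_coreml(coreml_model, "Example Model")

    # Serialize the converted model to disk
    onnxmltools.utils.save_model(onnx_model, "model.onnx")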

onnxruntime - ONNX Runtime

  •    C++

ONNX Runtime is an open-source scoring engine for Open Neural Network Exchange (ONNX) models. ONNX is an open format for machine learning (ML) models that is supported by various ML and DNN frameworks and tools. This format makes it easier to interoperate between frameworks and to maximize the reach of your hardware optimization investments. Learn more about ONNX at https://onnx.ai or view the GitHub repo.
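
A minimal scoring sketch with the onnxruntime Python API; the model path and input shape are placeholders:

    import numpy as np
    import onnxruntime as ort

    # Create an inference session for a serialized ONNX model
    # (newer onnxruntime versions may require an explicit providers argument)
    session = ort.InferenceSession("model.onnx")

    # Feed a dummy input under the model's declared input name
    input_name = session.get_inputs()[0].name
    input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)
    outputs = session.run(None, {input_name: input_data})
    print(outputs[0].shape)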

onnx-go - Go Interface to Open Neural Network Exchange (ONNX)

  •    Go

This is the Go interface to the Open Neural Network Exchange (ONNX). It provides a compiled version of the ONNX protobuf definition file.





