
Caffe-HRT - Heterogeneous Run Time version of Caffe

  •    C++

Caffe-HRT is a project maintained by OPEN AI LAB. It uses a heterogeneous computing infrastructure framework to speed up Caffe and provides utilities to debug, profile, and tune application performance. The underlying Caffe version is commit 793bd96351749cb8df16f1581baf3e7d8036ac37.

Tengine - Tengine is a lightweight, high-performance, modular inference engine for embedded devices

  •    C++

Tengine, developed by OPEN AI LAB, is a lightweight, high-performance, and modular inference engine for embedded devices. Tengine is composed of six modules: core, operator, serializer, executor, driver, and wrapper.

ofxDlib - An openFrameworks wrapper for dlib. http://dlib.net/

  •    C++

This project is currently under development, so please post questions to the issues for now. For more, see docs/GETTING_STARTED.md.

dnn - A light-weight deep learning framework implemented in C++.

  •    C++

The Deep Neural Nets (DNN) library is a deep learning framework designed to be small in size, computationally efficient, and portable. We started the project as a fork of the popular OpenCV library, removing components not tightly related to the deep learning framework. Compared to Caffe and many other implementations, DNN is relatively independent of third-party libraries (no Boost or database systems need to be installed before crafting your own network models), so it can be more easily ported to mobile systems such as iOS, Android, and Raspberry Pi. More importantly, DNN is powerful: it supports both convolutional networks and recurrent networks, as well as combinations of the two.

CHaiDNN - HLS-based Deep Neural Network Accelerator Library for Xilinx UltraScale+ MPSoCs

  •    C++

CHaiDNN is a Xilinx Deep Neural Network library for accelerating deep neural networks on Xilinx UltraScale+ MPSoCs. It is designed for maximum compute efficiency at the 6-bit integer data type and also supports the 8-bit integer data type. The design goal of CHaiDNN is to achieve the best accuracy with maximum performance. Inference in CHaiDNN works in the fixed-point domain for better performance: all feature maps and trained parameters are converted from single precision to fixed point according to precision parameters specified by the user. These precision parameters can vary considerably depending on the network and dataset, or even across layers within the same network. The accuracy of a network depends on the precision parameters used to represent the feature maps and trained parameters; well-crafted precision parameters are expected to give accuracy close to that of the single-precision model.

MXNet-HRT - Heterogeneous Run Time version of MXNet

  •    C++

MXNet-HRT is a project maintained by OPEN AI LAB. It uses the Arm Compute Library (NEON + GPU) to speed up MXNet and provides utilities to debug, profile, and tune application performance. The underlying MXNet version is commit 26b1cb9ad0bcde9206863a6f847455ff3ec3c266.

TensorFlow-HRT - Heterogeneous Run Time version of TensorFlow

  •    C++

TensorFlow-HRT is a project maintained by OPEN AI LAB. It uses a heterogeneous computing infrastructure framework to speed up TensorFlow and provides utilities to debug, profile, and tune application performance. Note that there are some compatibility issues between ACL and TensorFlow ops.