Anakin - High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices


Welcome to the Anakin GitHub. Anakin is a cross-platform, high-performance inference engine, originally developed by Baidu engineers and applied at large scale in industrial products.

https://anakin.baidu.com/
https://github.com/PaddlePaddle/Anakin

Related Projects

FeatherCNN - FeatherCNN is a high performance inference engine for convolutional neural networks.

  •    C++

FeatherCNN, developed by Tencent TEG AI Platform, is a high-performance, lightweight CNN inference library. FeatherCNN currently targets ARM CPUs and can be extended to other devices in the future. It delivers state-of-the-art inference performance on a wide range of devices, including mobile phones (iOS/Android), embedded devices (Linux), and ARM-based servers (Linux).

Tengine - Tengine is a lightweight, high-performance, modular inference engine for embedded devices

  •    C++

Tengine, developed by OPEN AI LAB, is a lightweight, high-performance, modular inference engine for embedded devices. Tengine is composed of six modules: core, operator, serializer, executor, driver, and wrapper.
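
The modular design above maps to a small C-style API. The following is a minimal, hypothetical sketch of a Tengine inference flow; the model file name, input shape, and the "tengine" model-format tag are placeholder assumptions, and header paths vary between releases.

```cpp
#include "tengine_c_api.h"  // header name differs in newer releases (tengine/c_api.h)
#include <vector>

int main()
{
    init_tengine();

    // Load a converted .tmfile model (path and format tag are placeholders).
    graph_t graph = create_graph(nullptr, "tengine", "mobilenet.tmfile");

    // Bind an input shape and buffer, then pre-run to allocate resources.
    tensor_t input = get_graph_input_tensor(graph, 0, 0);
    int dims[4] = {1, 3, 224, 224};
    set_tensor_shape(input, dims, 4);

    std::vector<float> data(1 * 3 * 224 * 224);
    set_tensor_buffer(input, data.data(), data.size() * sizeof(float));

    prerun_graph(graph);
    run_graph(graph, 1);  // blocking run

    tensor_t output = get_graph_output_tensor(graph, 0, 0);
    // ... read scores from the output tensor buffer ...

    postrun_graph(graph);
    destroy_graph(graph);
    release_tengine();
    return 0;
}
```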

ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform

  •    C

ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has taken deployment and use on mobile phones into deep consideration from the beginning of its design. ncnn has no third-party dependencies, is cross-platform, and runs faster than all known open-source frameworks on mobile-phone CPUs. Developers can easily deploy deep learning models to mobile platforms using efficient ncnn implementations, create intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.
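
As a sketch of what deploying a model with ncnn looks like in practice, here is a minimal C++ classification flow; the model files and the blob names "data" and "prob" are placeholders that depend on the converted network.

```cpp
#include "net.h"  // ncnn
#include <vector>

int main()
{
    ncnn::Net net;
    net.load_param("squeezenet.param");  // placeholder model files
    net.load_model("squeezenet.bin");

    // Wrap a raw BGR pixel buffer (e.g. a decoded camera frame)
    // into an ncnn::Mat at the network's input resolution.
    std::vector<unsigned char> pixels(227 * 227 * 3);
    ncnn::Mat in = ncnn::Mat::from_pixels(pixels.data(), ncnn::Mat::PIXEL_BGR, 227, 227);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);     // blob names come from the .param file

    ncnn::Mat out;
    ex.extract("prob", out);  // out holds the class scores
    return 0;
}
```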

TNN - TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server

  •    C++

TNN is a high-performance, lightweight neural network inference framework open-sourced by Tencent Youtu Lab, with outstanding features such as cross-platform support, high performance, model compression, and code tailoring. Building on the earlier Rapidnet and ncnn frameworks, TNN further strengthens support for, and performance optimization on, mobile devices; it also draws on the high performance and good scalability of the industry's mainstream open-source frameworks and extends support to x86 and NVIDIA GPUs. On mobile phones, TNN is already used by many applications such as Mobile QQ, Weishi, and Pitu. As the basic acceleration framework for Tencent Cloud AI, TNN has provided acceleration support for the rollout of many businesses. Everyone is welcome to participate and help improve the TNN inference framework. The Chinese OCR demo is the TNN implementation of the chineseocr_lite project; it is lightweight and supports tilted, rotated, and vertical text recognition.

ngraph - nGraph is an open source C++ library, compiler and runtime for Deep Learning frameworks

  •    C++

Welcome to the open-source repository for the Intel® nGraph™ Library. Our code base provides a compiler and a runtime suite of tools (APIs) designed to give developers maximum flexibility in their software design, allowing them to create or customize a scalable solution using any framework while avoiding the device-level hardware lock-in that is so common among AI vendors. A neural network model compiled with nGraph can run on any of the currently supported backends, and it will be able to run on any backend supported in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware capabilities to stay at the forefront of your industry. The nGraph Compiler is Intel's graph compiler for artificial neural networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of backends, including Intel® Architecture processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs like Movidius, and many others. The default CPU backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations that further accelerate training or inference in whatever scenario you need.
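
A minimal sketch of building and running a graph through the nGraph C++ API, loosely following the project's introductory "a + b" example; exact class names and the compile/call interface changed across releases, so treat this as illustrative.

```cpp
#include "ngraph/ngraph.hpp"

using namespace ngraph;

int main()
{
    // Define f(a, b) = a + b over 2x2 float tensors.
    auto a = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
    auto b = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
    auto f = std::make_shared<Function>(std::make_shared<op::Add>(a, b),
                                        ParameterVector{a, b});

    // Compile for a backend; "CPU" could be swapped for any supported backend.
    auto backend = runtime::Backend::create("CPU");
    auto t_a = backend->create_tensor(element::f32, Shape{2, 2});
    auto t_b = backend->create_tensor(element::f32, Shape{2, 2});
    auto t_r = backend->create_tensor(element::f32, Shape{2, 2});
    // (Fill t_a and t_b via their write() methods before calling.)

    auto exec = backend->compile(f);
    exec->call_with_validate({t_r}, {t_a, t_b});
    return 0;
}
```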


RadeonProRender-Baikal

  •    C++

The Baikal initiative started as a sample application demonstrating the use of the AMD® RadeonRays intersection engine, but it has evolved into a fully functional rendering engine aimed at graphics researchers, educational institutions, and open-source enthusiasts in general. Baikal is a fast and efficient GPU-based global illumination renderer implemented in OpenCL and relying on the AMD® RadeonRays intersection engine. It is cross-platform and vendor-independent; the only requirement it imposes on the hardware is OpenCL 1.2 support. Baikal maintains a high level of performance across all vendors, but it is specifically optimized for AMD® GPUs and APUs.

LiteApp - LiteApp is a high-performance mobile cross-platform framework; its cross-platform functionality is based on webview, with new ideas and solutions for improving webview performance

  •    JavaScript

LiteApp is a high-performance mobile cross-platform framework. Its cross-platform functionality is based on webview, improved with new ideas and solutions for better performance. LiteApp aims to let developers use modern web development technology to build applications on both Android and iOS from a single codebase. More specifically, you can use JavaScript and the modern front-end framework Vue.js to develop mobile apps with LiteApp, so productivity and performance can coexist: the application you build runs on the web with performance close to native. This is achieved by decoupling the render engine from the syntax layer.

ONE - On-device Neural Engine

  •    Python

ONE (On-device Neural Engine) aims to provide a high-performance, on-device neural network (NN) inference framework that performs inference of a given NN model on processors such as CPUs, GPUs, DSPs, or NPUs.

hprose-java - Hprose is a cross-language RPC. This project is Hprose 2.0 for Java

  •    Java

Hprose is a High Performance Remote Object Service Engine. It is a modern, lightweight, cross-language, cross-platform, object-oriented, high-performance remote dynamic communication middleware. It is not only easy to use but also powerful. With just a little time to learn, you can use it to easily build cross-language, cross-platform distributed application systems. Hprose supports many programming languages, for example: AAuto Quicker, ActionScript, ASP, C++, Dart, Delphi/Free Pascal, .NET (C#, Visual Basic, ...), Golang, Java, JavaScript, Node.js, Objective-C, Perl, PHP, Python, Ruby, and more. Through Hprose, you can conveniently and efficiently intercommunicate between these programming languages. This project is the implementation of Hprose for Java.

jetson-inference - Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson

  •    C++

Welcome to our training guide for the inference and deep-vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2. This repo uses NVIDIA TensorRT to deploy neural networks efficiently onto the embedded platform, improving performance and power efficiency through graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.
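
A rough sketch of the classification flow this guide walks through, using the repo's imageNet wrapper around TensorRT. The image file is a placeholder, and both the header paths and the loadImageRGBA signature have changed between releases, so this is illustrative rather than exact.

```cpp
#include "imageNet.h"    // from jetson-inference
#include "loadImage.h"   // from jetson-utils; header paths vary by install

#include <cstdio>

int main()
{
    // Create a TensorRT-optimized GoogleNet classifier.
    imageNet* net = imageNet::Create(imageNet::GOOGLENET);
    if (!net) return 1;

    // Load an image into shared CPU/GPU memory (file name is a placeholder).
    float4* img = nullptr;
    int width = 0, height = 0;
    if (!loadImageRGBA("my_image.jpg", &img, &width, &height))
        return 1;

    float confidence = 0.0f;
    const int cls = net->Classify((float*)img, width, height, &confidence);
    std::printf("class %d (%.2f%% confidence)\n", cls, confidence * 100.0f);

    delete net;
    return 0;
}
```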

DeepSpeech - A PaddlePaddle implementation of DeepSpeech2 architecture for ASR.

  •    Python

DeepSpeech2 on PaddlePaddle is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, based on Baidu's Deep Speech 2 paper, built on the PaddlePaddle platform. Our vision is to empower both industrial applications and academic research on speech recognition via an easy-to-use, efficient, and scalable implementation, including training, inference, and testing modules, distributed PaddleCloud training, and demo deployment. In addition, several pre-trained models for both English and Mandarin are released. To avoid the trouble of environment setup, running in a Docker container is highly recommended; otherwise, follow the project's guidelines to install the dependencies manually.

DALI - A library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications

  •    C++

Today's deep learning applications include complex, multi-stage data pre-processing pipelines with compute-intensive steps that are mainly carried out on the CPU. For instance, steps such as loading data from disk, decoding, cropping, random resizing, color and spatial augmentation, and format conversion are carried out on CPUs, limiting the performance and scalability of training and inference tasks. In addition, today's deep learning frameworks have multiple data pre-processing implementations, which creates challenges for the portability of training and inference workflows and for code maintainability. The NVIDIA Data Loading Library (DALI) is a collection of highly optimized building blocks and an execution engine that accelerates input-data pre-processing for deep learning applications. DALI provides both the performance and the flexibility to accelerate different data pipelines as a single library that can be easily integrated into different deep learning training and inference applications.

PaddleSeg - End-to-end image segmentation kit based on PaddlePaddle.

  •    Python

Welcome to PaddleSeg! PaddleSeg is an end-to-end image segmentation development kit built on PaddlePaddle, covering a large number of high-quality segmentation models along different directions such as high performance and lightweight design. Thanks to its modular design, we provide two usage modes, configuration-driven and API calling, so you can conveniently complete an entire image segmentation application, from training to deployment, through configuration files or API calls. High-performance models: based on high-performance backbones trained with Baidu's self-developed semi-supervised label knowledge distillation scheme (SSLD) and combined with state-of-the-art segmentation techniques, we provide 50+ high-quality pre-trained models that outperform other open-source implementations.

AirSim - Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research

  •    C++

AirSim is a simulator for drones (and soon other vehicles) built on Unreal Engine. It is open-source and cross-platform, and it supports hardware-in-the-loop with popular flight controllers such as PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment you want.
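
On the API side, controlling a simulated drone looks roughly like the project's hello-drone example. A minimal sketch (coordinates are NED; the numeric values are placeholders):

```cpp
#include "vehicles/multirotor/api/MultirotorRpcLibClient.hpp"

int main()
{
    // Connect to a running AirSim instance over RPC.
    msr::airlib::MultirotorRpcLibClient client;
    client.confirmConnection();
    client.enableApiControl(true);
    client.armDisarm(true);

    client.takeoffAsync(5)->waitOnLastTask();  // take off (5 s timeout)
    // Fly to (-10, 10, -10) in NED coordinates at 5 m/s, then land.
    client.moveToPositionAsync(-10, 10, -10, 5)->waitOnLastTask();
    client.landAsync()->waitOnLastTask();
    return 0;
}
```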

hprose-golang - Hprose is a cross-language RPC. This project is Hprose 2.0 for Golang.

  •    Go

Hprose is a High Performance Remote Object Service Engine. It is a modern, lightweight, cross-language, cross-platform, object-oriented, high-performance remote dynamic communication middleware. It is not only easy to use but also powerful. With just a little time to learn, you can use it to easily build cross-language, cross-platform distributed application systems.

hprose-php - Hprose is a cross-language RPC. This project is Hprose 2.0 for PHP

  •    PHP

Hprose is a High Performance Remote Object Service Engine. It is a modern, lightweight, cross-language, cross-platform, object-oriented, high-performance remote dynamic communication middleware. It is not only easy to use but also powerful. With just a little time to learn, you can use it to easily build cross-language, cross-platform distributed application systems.

Hippy - Hippy is designed for Web developers to easily build cross-platform, high-performance apps

  •    C++

Hippy is a cross-platform development framework that aims to help developers write once and run on three platforms (iOS, Android, and Web). Hippy is quite friendly to Web developers, especially those familiar with React or Vue; with Hippy, developers can easily create cross-platform apps. Hippy is now used in 27+ Tencent apps, such as Mobile QQ, Mobile QQ Browser, Tencent Video, QQ Music, and Tencent News, reaching hundreds of millions of users.

Flutter - Framework for building high-performance, high-fidelity iOS and Android apps

  •    Dart

Flutter is Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. Flutter’s widgets incorporate all critical platform differences such as scrolling, navigation, icons and fonts, and your Flutter code is compiled to native ARM machine code using Dart's native compilers.





