
mace - MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms

  •    C++

MACE Model Zoo contains several common neural networks and models, which are built daily against a list of mobile phones. The benchmark results can be found on the CI result page (choose the latest passed pipeline and click the release step to see the benchmark results). Any kind of contribution is welcome: for bug reports and feature requests, please just open an issue without any hesitation; for code contributions, it is strongly suggested to open an issue for discussion first. For more details, please refer to the contribution guide.

neon - Intel® Nervana™ reference deep learning framework committed to best performance on all hardware

  •    Python

neon is Intel's reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility. For fast iteration and model exploration, neon has the fastest performance among deep learning libraries (2x the speed of cuDNNv4; see the benchmarks).

Simd - C++ image processing library using SIMD: SSE, SSE2, SSE3, SSSE3, SSE4

  •    C++

The Simd Library is a free open source image processing library, designed for C and C++ programmers. It provides many useful high-performance algorithms for image processing, such as pixel format conversion, image scaling and filtration, extraction of statistical information from images, motion detection, object detection (HAAR and LBP classifier cascades) and classification, and neural networks. The algorithms are optimized using different SIMD CPU extensions. In particular, the library supports the following CPU extensions: SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2 and AVX-512 for x86/x64, VMX (Altivec) and VSX (Power7) for PowerPC (big-endian), and NEON for ARM.
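
As a rough illustration of how the C++ wrappers are used, a minimal filtering sketch is shown below. The header path, the View type and the GaussianBlur3x3 wrapper are quoted from memory and may differ between releases, so treat the names as assumptions and check the library's documentation.

```cpp
// Minimal sketch, assuming the Simd C++ wrapper API (header, View type and
// GaussianBlur3x3 name are assumptions; verify against the installed release).
#include "Simd/SimdLib.hpp"

int main()
{
    typedef Simd::View<Simd::Allocator> View;      // 8-bit grayscale image views
    View src(640, 480, View::Gray8), dst(640, 480, View::Gray8);

    // One of the library's filtration primitives; internally the library
    // dispatches to the best available SIMD implementation (SSE..AVX-512,
    // VMX/VSX or NEON) detected on the current CPU.
    Simd::GaussianBlur3x3(src, dst);
    return 0;
}
```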




libsimdpp - Portable header-only zero-overhead C++ low level SIMD library

  •    C++

libsimdpp is a portable header-only zero-overhead C++ low-level SIMD library. The library presents a single interface over the SIMD instruction sets present in x86, ARM, PowerPC and MIPS architectures. On architectures that support different SIMD instruction sets, the library allows the same source files to be compiled for each instruction set and then hooked into an internal or third-party dynamic dispatch mechanism. This allows the capabilities of the processor to be queried at runtime and the most efficient implementation to be selected. The library sits somewhere in the middle between programming directly in SIMD intrinsics and even higher-level SIMD libraries: as much control as possible is given to the developer, so that it is possible to predict exactly what code the compiler will generate.
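
To give a feel for the interface, the sketch below adds two float arrays four lanes at a time. It assumes a 4-lane float32 vector and the documented load_u/store_u helpers; the arch-selection macro is taken from the library's documentation, but the dispatch setup itself is omitted.

```cpp
// Sketch of libsimdpp usage (assumes a 4-lane float32 vector is available).
// Compile with an instruction-set macro, e.g. -DSIMDPP_ARCH_X86_SSE2; real
// code would typically be built once per instruction set and dispatched.
#include <simdpp/simd.h>

void add_arrays(const float* a, const float* b, float* out, unsigned n)
{
    using namespace simdpp;
    unsigned i = 0;
    for (; i + 4 <= n; i += 4) {
        float32<4> va = load_u(a + i);   // unaligned loads
        float32<4> vb = load_u(b + i);
        store_u(out + i, va + vb);       // operator+ maps to one SIMD add
    }
    for (; i < n; ++i)                   // scalar tail for the remainder
        out[i] = a[i] + b[i];
}
```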

Vc - SIMD Vector Classes for C++

  •    C++

Recent generations of CPUs, and GPUs in particular, require data-parallel codes for full efficiency. Data parallelism requires that the same sequence of operations is applied to different input data. CPUs and GPUs can thus reduce the necessary hardware for instruction decoding and scheduling in favor of more arithmetic and logic units, which execute the same instructions synchronously.

On CPU architectures this is implemented via SIMD registers and instructions. A single SIMD register can store N values and a single SIMD instruction can execute N operations on those values. On GPU architectures N threads run in perfect sync, fed by a single instruction decoder/scheduler. Each thread has local memory and a given index to calculate the offsets in memory for loads and stores.

Current C++ compilers can do automatic transformation of scalar codes to SIMD instructions (auto-vectorization). However, the compiler must reconstruct an intrinsic property of the algorithm that was lost when the developer wrote a purely scalar implementation in C++. Consequently, C++ compilers cannot vectorize any given code to its most efficient data-parallel variant. Especially larger data-parallel loops, spanning over multiple functions or even translation units, will often not be transformed into efficient SIMD code.
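
The "N values per register, N operations per instruction" idea is easiest to see with raw intrinsics. The sketch below is plain SSE, not Vc's own vector-class API, and simply shows one instruction performing four float additions at once.

```cpp
// Plain SSE intrinsics (not Vc's API): a single _mm_add_ps executes four
// float additions, which is the N-way data parallelism that Vc's vector
// classes expose as portable C++ types and operators.
#include <xmmintrin.h>

void add4(const float* a, const float* b, float* out)
{
    __m128 va = _mm_loadu_ps(a);             // load 4 floats
    __m128 vb = _mm_loadu_ps(b);             // load 4 floats
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  // 4 additions, 1 instruction
}
```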

xsimd - Modern, portable C++ wrappers for SIMD intrinsics and parallelized, optimized math implementations

  •    C++

SIMD (Single Instruction, Multiple Data) is a feature of microprocessors that has been available for many years. SIMD instructions perform a single operation on a batch of values at once, and thus provide a way to significantly accelerate code execution. However, these instructions differ between microprocessor vendors and compilers. xsimd gives library authors a unified means of using these features: it enables manipulation of batches of numbers with the same arithmetic operators as for single values, and it also provides accelerated implementations of common mathematical functions operating on batches.
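
The sketch below illustrates the batch-arithmetic style; the names assume a recent xsimd release where xsimd::batch<float> sizes itself to the target architecture, so verify them against the installed version.

```cpp
// Sketch of xsimd-style batch arithmetic (names assume a recent xsimd
// release). Computes out[i] = sqrt(a[i]*a[i] + b[i]*b[i]) one batch at a time.
#include <xsimd/xsimd.hpp>
#include <cmath>
#include <cstddef>

void hypot_batches(const float* a, const float* b, float* out, std::size_t n)
{
    using batch = xsimd::batch<float>;
    constexpr std::size_t lanes = batch::size;   // lanes per batch on this arch

    std::size_t i = 0;
    for (; i + lanes <= n; i += lanes) {
        batch va = batch::load_unaligned(a + i);
        batch vb = batch::load_unaligned(b + i);
        batch r  = xsimd::sqrt(va * va + vb * vb);   // accelerated math on a batch
        r.store_unaligned(out + i);
    }
    for (; i < n; ++i)                               // scalar tail
        out[i] = std::sqrt(a[i] * a[i] + b[i] * b[i]);
}
```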


libsodium-neon - Node.js bindings to rust_sodium.

  •    Rust

This repository is part of the source code of Wire. You can find more information at wire.com or by contacting opensource@wire.com. You can find the published source code at github.com/wireapp.

neon-lang - Implementation of a simple programming language

  •    C++

Neon aims to avoid common beginner errors that have been identified through many years of participating on Stack Overflow, answering the same kinds of questions over and over. See Common Errors for a full list of similar common errors.

neon - 🍸 Encodes and decodes NEON file format.

  •    PHP

NEON is very similar to YAML. The main difference is that NEON supports "entities" (so it can be used e.g. to parse phpDoc annotations) and tab characters for indentation. NEON syntax is a little simpler and the parsing is faster. Documentation can be found on the website.

fluorine - flow-based programming abstraction

  •    Javascript

Fluorine - Flow-based programming abstraction. Fluorine can simply be thought of as an abstraction or a DSL. It is a code structure in which you can manage complex asynchronous code with ease.

Prophecy - 👛 The first mobile NEO wallet

  •    Javascript

The first open-source mobile wallet for the NEO blockchain. Prophecy is based on Ockham, a mobile app framework built on React, Cordova and the Onsen UI component library.

base64simd - Base64 coding and decoding with SIMD instructions (SSE/AVX2/AVX512F/AVX512BW/AVX512VBMI/ARM Neon)

  •    C++

This repository contains code for encoding and decoding base64 using SIMD instructions. Depending on the CPU architecture, vectorized encoding is faster than the scalar version by a factor of 2 to 4; decoding is 2 to 2.7 times faster. Daniel Lemire and I also wrote the paper "Faster Base64 Encoding and Decoding Using AVX2 Instructions", which was published in ACM Transactions on the Web.
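
For reference, the scalar baseline that the vectorized routines are compared against packs every 3 input bytes into 4 output characters. The sketch below is a minimal scalar encoder written for illustration, not code taken from the repository.

```cpp
// Minimal scalar base64 encoder (illustrative sketch, not the repository's
// SIMD code): every 3 input bytes become 4 output characters, with '='
// padding for a trailing 1- or 2-byte group.
#include <cstddef>
#include <cstdint>
#include <string>

std::string base64_encode(const std::uint8_t* data, std::size_t len)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve((len + 2) / 3 * 4);

    std::size_t i = 0;
    for (; i + 3 <= len; i += 3) {              // full 3-byte groups
        std::uint32_t v = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += tbl[(v >> 6) & 63];
        out += tbl[v & 63];
    }
    if (i < len) {                              // 1 or 2 trailing bytes
        std::uint32_t v = data[i] << 16;
        if (i + 1 < len) v |= data[i + 1] << 8;
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        out += '=';
    }
    return out;
}
```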

sse4-strstr - SIMD (SWAR/SSE/SSE4/AVX2/AVX512F/ARM Neon) implementations of a modified Karp-Rabin algorithm

  •    C++

Sample programs for the article "SIMD-friendly algorithms for substring searching" (http://0x80.pl/articles/simd-strfind.html). The root directory contains C++11 procedures implemented using intrinsics for SSE, SSE4, AVX2, AVX512F, AVX512BW and ARM Neon (both ARMv7 and ARMv8).
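
The core idea from the article is easy to show in scalar form: filter candidate positions by comparing only the first and last bytes of the needle, then verify the interior; the SIMD procedures perform the same border comparisons on whole 16/32/64-byte blocks at once. The sketch below is a scalar rendition of that idea, not code from the repository.

```cpp
// Scalar rendition of the "first and last character" filter (the repository's
// procedures do the same comparisons block-wise with SIMD). Returns the offset
// of needle in haystack, or std::string::npos if it is not found.
#include <cstddef>
#include <cstring>
#include <string>

std::size_t simple_strfind(const std::string& haystack, const std::string& needle)
{
    if (needle.empty())
        return 0;
    if (needle.size() > haystack.size())
        return std::string::npos;

    const char first = needle.front();
    const char last  = needle.back();
    const std::size_t k = needle.size();

    for (std::size_t i = 0; i + k <= haystack.size(); ++i) {
        // Cheap filter: both border characters must match before paying for
        // a full comparison of the needle's interior.
        if (haystack[i] == first && haystack[i + k - 1] == last) {
            if (k < 3 ||
                std::memcmp(haystack.data() + i + 1, needle.data() + 1, k - 2) == 0)
                return i;
        }
    }
    return std::string::npos;
}
```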

sse2neon - C/C++ header converting Intel SSE intrinsics to ARM NEON intrinsics

  •    C++

A C/C++ header file that converts Intel SSE intrinsics to ARM NEON intrinsics. Intel's SIMD instruction set, known as SSE, is used in many applications for improved performance. ARM has also introduced a SIMD instruction set, called NEON, for its processors. Rewriting code written for SSE to work on NEON is very time-consuming, and this header file can automatically convert some of the SSE intrinsics into NEON intrinsics.
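
Typical usage is to leave the SSE code untouched and swap in the translation header when building for ARM. The sketch below assumes the header is vendored into the include path as "sse2neon.h".

```cpp
// The same SSE intrinsics compile natively on x86 and, via the sse2neon.h
// translation header, on ARM (assumes the header is available in the include
// path as "sse2neon.h").
#if defined(__ARM_NEON) || defined(__aarch64__)
#include "sse2neon.h"     // maps _mm_* intrinsics onto NEON equivalents
#else
#include <xmmintrin.h>    // native SSE on x86/x64
#endif

void scale4(const float* in, float factor, float* out)
{
    __m128 v = _mm_loadu_ps(in);              // load 4 floats
    v = _mm_mul_ps(v, _mm_set1_ps(factor));   // multiply all 4 lanes
    _mm_storeu_ps(out, v);                    // store 4 floats
}
```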

react-neon-ssr - React neon (Rust) powered server-side renderer

  •    Rust

A set of test cases for quickly identifying issues with server-side rendering. The start command runs a webpack dev server and a server-side rendering server in development mode with hot reloading.




