distiller - Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research

  •    Python

Distiller is an open-source Python package for neural network compression research. Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
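One of the sparsity-inducing methods a toolkit like this covers is magnitude pruning: the smallest-magnitude weights are zeroed to make the network sparse. A minimal, library-agnostic sketch in plain Python (the function name and flat weight list are illustrative, not Distiller's API):

```python
# Illustrative magnitude pruning: zero out the smallest-magnitude
# fraction of weights to induce sparsity. Not Distiller's API.
def magnitude_prune(weights, sparsity):
    """Zero the `sparsity` fraction of weights with smallest magnitude."""
    ranked = sorted(weights, key=abs)
    cutoff = int(len(weights) * sparsity)
    threshold = abs(ranked[cutoff - 1]) if cutoff > 0 else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], sparsity=0.5)
```

Real toolkits apply this per layer (or per channel) and usually fine-tune the remaining weights afterwards to recover accuracy.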

model-optimization - A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning

  •    Python

The TensorFlow Model Optimization Toolkit is a suite of tools that users, both novice and advanced, can use to optimize machine learning models for deployment and execution. Supported techniques include quantization and pruning for sparse weights. There are APIs built specifically for Keras.
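Post-training quantization, one of the supported techniques, maps floating-point values onto 8-bit integers via a scale and a zero point. A minimal sketch of the underlying affine-quantization arithmetic (not the toolkit's API):

```python
# Illustrative affine quantization: map reals in [min, max] onto the
# integer range [0, 2^bits - 1] via a scale and zero point.
def quantize_affine(xs, num_bits=8):
    lo, hi = min(xs), max(xs)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax or 1.0   # avoid zero scale for constant inputs
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate real values from quantized integers."""
    return [(v - zero_point) * scale for v in q]
```

The round trip loses at most about half a quantization step per value, which is the precision/size trade-off these tools manage automatically.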

pngquant - Command-line utility and a library for lossy compression of PNG images

  •    C

pngquant is a PNG compressor that significantly reduces file sizes by converting images to a more efficient 8-bit PNG format with alpha channel (often 60-80% smaller than 24/32-bit PNG files). Compressed images are fully standards-compliant and are supported by all web browsers and operating systems.
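The core of palette-based compression is mapping each full-color pixel to the nearest entry in a small palette. A toy sketch of that step (pngquant's actual algorithm is far more sophisticated, with perceptual weighting and dithering):

```python
# Toy nearest-palette lookup: map an RGBA pixel to the index of the
# closest palette entry by squared Euclidean distance.
def nearest_palette_index(pixel, palette):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(palette)), key=lambda i: dist2(pixel, palette[i]))

palette = [(0, 0, 0, 255), (255, 255, 255, 255), (255, 0, 0, 255)]
idx = nearest_palette_index((250, 10, 5, 255), palette)  # near-red pixel
```

An 8-bit indexed PNG stores one palette index per pixel instead of four channel bytes, which is where the 60-80% size reduction comes from.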


nlp-architect - A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks

  •    Python

NLP Architect is an open source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding Neural Networks. NLP Architect is an NLP library designed to be flexible, easy to extend, allow for easy and rapid integration of NLP models in applications and to showcase optimized models.

brevitas - Brevitas: quantization-aware training in PyTorch

  •    Python

Brevitas is a PyTorch research library for quantization-aware training (QAT). Brevitas is currently under active development. Documentation, examples, and pretrained models will be progressively released.
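The central trick in QAT is "fake quantization": during training, the forward pass rounds values onto a low-precision grid so the network learns to tolerate the rounding. A scalar sketch of that forward computation (illustrative only, not Brevitas's API; gradients in practice flow through via a straight-through estimator):

```python
# Illustrative fake quantization for QAT: clip to [-max_val, max_val],
# then round onto a symmetric signed grid with 2^(bits-1) - 1 levels.
def fake_quantize(x, num_bits=4, max_val=1.0):
    levels = 2 ** (num_bits - 1) - 1
    scale = max_val / levels
    clipped = max(-max_val, min(max_val, x))
    return round(clipped / scale) * scale
```

At inference time the same grid is realized with true integer arithmetic, so the trained network matches the deployed low-precision one.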

libimagequant - Palette quantization library that powers pngquant and other PNG optimizers

  •    C

Small, portable C library for high-quality conversion of RGBA images to 8-bit indexed-color (palette) images. It powers pngquant2. For Free/Libre Open Source Software it is available under GPL v3 or later, with additional copyright notices for older parts of the code.


libimagequant-rust - libimagequant (pngquant) bindings for the Rust language

  •    Rust

The imagequant library converts RGBA images to 8-bit indexed images with a palette, including the alpha component. It's ideal for generating tiny PNG images (although image I/O isn't handled by the library itself). This wrapper makes the library usable from Rust.

imagecolors - A node module that pulls useful color information out of an image through a combination of ImageMagick color quantization algorithms and human fiddling

  •    JavaScript

A Node.js module that pulls useful color information out of an image through a combination of ImageMagick color quantization algorithms and human fiddling. You can install it via npm.

neuquant - JavaScript port of the NeuQuant image quantization algorithm

  •    JavaScript

A JavaScript port of Anthony Dekker's NeuQuant image quantization algorithm, including a pixel-stream interface. It returns a buffer containing a palette of 256 RGB colors for the input RGB image. The quality parameter defaults to 10 and can be changed to trade quality for performance: the lower the number, the higher the quality.

llquantize - log/linear quantization for Node.js

  •    JavaScript

For more information on log/linear quantization, see this blog post. To summarize: log/linear quantization addresses the problem of using the wrong aggregation resolution, which leads to "clogging the system with unnecessarily fine-grained data, or discarding valuable information in overly coarse-grained data".
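A rough sketch of the idea in Python (a hypothetical helper, not the llquantize API): small values keep full resolution, while larger values fall into a fixed number of linear steps within each logarithmic magnitude, so no range is over- or under-resolved.

```python
# Illustrative log/linear bucketing: values below one magnitude keep raw
# resolution; larger values are grouped into `steps` linear buckets
# within their order of magnitude. Hypothetical helper, not the real API.
def llbucket(value, base=10, steps=10):
    """Return the lower bound of the log/linear bucket containing value."""
    if value < base:
        return value                     # below one magnitude: keep raw value
    magnitude = base
    while value >= magnitude * base:     # find the order of magnitude
        magnitude *= base
    step = magnitude * base // steps     # width of a linear step here
    return (value // step) * step
```

So 7 stays 7, 55 lands in the 50-59 bucket, and 4321 lands in the 4000-4999 bucket: coarse where values are large, fine where they are small.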

colorquant - Go library for color quantization and dithering

  •    Go

Colorquant is an image/color quantization library written in Go. It can be considered a replacement for the quantization and dithering part of the draw method in the core image library, for various reasons (see below). The purpose of color quantization is to reduce the color palette of an image to a fraction of its initial colors (usually 256) while preserving its representative colors and eliminating visual artifacts. Even with the best set of 256 colors, many images look bad: they show visible contouring in regions where the color changes slowly.
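Dithering fights that contouring by diffusing each pixel's quantization error onto its neighbors. A minimal 1-D grayscale sketch of error diffusion (the idea behind dithering kernels such as Floyd-Steinberg; illustrative, not colorquant's code):

```python
# Illustrative 1-D error diffusion: quantize each grayscale pixel to the
# nearest of `levels` values and push the rounding error onto the next
# pixel, so the row's average brightness is preserved.
def dither_row(pixels, levels=2):
    step = 255 / (levels - 1)
    out, error = [], 0.0
    for p in pixels:
        target = p + error
        q = round(max(0.0, min(255.0, target)) / step) * step
        out.append(int(q))
        error = target - q   # carry the rounding error forward
    return out
```

A flat mid-gray row quantized to pure black/white comes out as an alternating checker pattern rather than a solid block, which is exactly the banding-free effect dithering is for.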

LQ-Nets - LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

  •    Python

By Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua. Microsoft Research Asia (MSRA).


local-search-quantization - State-of-the-art method for large-scale ANN search as of Oct 2016

  •    Julia

The code in this repository was written mostly by Julieta Martinez and Joris Clement. It is mostly Julia and should run under version 0.6 or later. To get Julia, go to the Julia downloads page and install the latest stable release.

qkeras - QKeras: a quantization deep learning library for Keras

  •    Python

QKeras is a quantization extension to Keras that provides drop-in replacements for some Keras layers, especially the ones that create parameters and activations and perform arithmetic operations, so that a deep quantized version of a Keras network can be created quickly. Keras has a simple, consistent interface optimized for common use cases, and it provides clear and actionable feedback on user errors.

graffitist - Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow

  •    Python

Graffitist is a flexible and scalable framework built on top of TensorFlow to process low-level graph descriptions of deep neural networks (DNNs) for accurate and efficient inference on fixed-point hardware. It comprises a (growing) library of transforms that apply various neural network compression techniques such as quantization, pruning, and compression. Each transform consists of unique pattern-matching and manipulation algorithms that, when run sequentially, produce an optimized output graph. The approach is described in: "Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks", Sambhav R. Jain, Albert Gural, Michael Wu, Chris H. Dick, arXiv preprint arXiv:1903.08066, 2019.

quantize - 🎨 Simple color palette quantization using MMCQ

  •    Go

As an example, we can reduce a photo of the Go Gopher (source) into a color palette. This library is distributed under the MIT License, see LICENSE.txt for more information.
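MMCQ belongs to the median-cut family of palette builders: recursively split the pixel set along its widest color channel, then average each resulting box into one palette color. A condensed sketch of plain median cut (illustrative; MMCQ adds refinements such as volume-weighted box selection):

```python
# Condensed median-cut palette builder: split the pixel set along the
# channel with the widest range, recurse, and average each final box
# into a single RGB palette color. Illustrative sketch only.
def median_cut(pixels, depth):
    if depth == 0 or len(pixels) <= 1:
        n = len(pixels)
        return [tuple(sum(p[c] for p in pixels) // n for c in range(3))]
    widest = max(range(3),
                 key=lambda c: max(p[c] for p in pixels) - min(p[c] for p in pixels))
    ordered = sorted(pixels, key=lambda p: p[widest])
    mid = len(ordered) // 2
    return median_cut(ordered[:mid], depth - 1) + median_cut(ordered[mid:], depth - 1)

palette = median_cut([(0, 0, 0), (10, 0, 0), (200, 0, 0), (250, 0, 0)], depth=1)
```

Depth d yields up to 2^d palette colors, so a depth of 8 produces the familiar 256-color palette.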





