LZ4 - Extremely Fast Compression algorithm


LZ4 is a very fast lossless compression algorithm based on the well-known LZ77 (Lempel-Ziv) family, providing compression speeds of 300 MB/s per core, scalable with multi-core CPUs. It also features an extremely fast decoder, with speeds at and beyond 1 GB/s per core, typically reaching RAM speed limits on multi-core systems.

http://fastcompression.blogspot.in/p/lz4.html
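
For a sense of the core API, here is a minimal round trip in C. It assumes liblz4 is installed and linked with -llz4; buffer sizes are illustrative.

    /* Minimal LZ4 round trip (sketch; error handling kept short). */
    #include <lz4.h>
    #include <stdio.h>

    int main(void) {
        const char src[] = "LZ4 is a very fast lossless compression algorithm.";
        const int srcSize = (int)sizeof src;            /* include the NUL */

        char cBuf[128];
        /* Returns the compressed size, or 0 on failure. */
        int cSize = LZ4_compress_default(src, cBuf, srcSize, (int)sizeof cBuf);
        if (cSize <= 0) return 1;

        char dBuf[128];
        /* The "safe" decoder validates input and bounds-checks the output. */
        int dSize = LZ4_decompress_safe(cBuf, dBuf, cSize, (int)sizeof dBuf);
        if (dSize != srcSize) return 1;

        printf("%d -> %d bytes, round trip ok\n", srcSize, cSize);
        return 0;
    }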

Related Projects

lz4 - Extremely Fast Compression algorithm


LZ4 is a lossless compression algorithm, providing compression speeds of 400 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speeds of multiple GB/s per core, typically reaching RAM speed limits on multi-core systems. Speed can be tuned dynamically by selecting an "acceleration" factor which trades compression ratio for more speed. At the other end of the spectrum, a high-compression derivative, LZ4_HC, is also provided, trading CPU time for an improved compression ratio. All versions feature the same decompression speed.
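
A minimal sketch of those two knobs in C, assuming liblz4 with its lz4hc module (link with -llz4); the numbers are illustrative:

    #include <lz4.h>
    #include <lz4hc.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char src[] = "Speed can be tuned with an acceleration factor.";
        const int srcSize = (int)strlen(src);
        char dst[256];

        /* Acceleration > 1 trades compression ratio for extra speed (1 = default). */
        int fastSize = LZ4_compress_fast(src, dst, srcSize, (int)sizeof dst, 8);
        printf("fast, accel 8: %d bytes\n", fastSize);

        /* LZ4_HC spends more CPU time for a better ratio; its output is plain
           LZ4 format, so it decompresses at the same speed as the fast variant. */
        int hcSize = LZ4_compress_HC(src, dst, srcSize, (int)sizeof dst,
                                     LZ4HC_CLEVEL_DEFAULT);
        printf("HC, default level: %d bytes\n", hcSize);
        return 0;
    }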

lz4-java - LZ4 compression for Java


LZ4 compression for Java, based on Yann Collet's work available at http://code.google.com/p/lz4/. The streams produced by its two compression methods (fast and high compression) use the same compression format, are very fast to decompress, and can be decompressed by the same decompressor instance.

Basic Compression Library


Basic Compression Library is a set of open source implementations of RLE (Run Length Encoding), Huffman, Rice, Lempel-Ziv (LZ77) and Shannon-Fano compression algorithms.
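
As a taste of the simplest of these codecs, below is a minimal run-length encoder in C. It illustrates the general RLE technique only; it is not BCL's actual API or on-disk format.

    #include <stddef.h>

    /* Emit (count, byte) pairs, with runs capped at 255. Worst case output
       is 2*n bytes, when no byte repeats. */
    size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out) {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            unsigned char run = 1;
            while (i + run < n && run < 255 && in[i + run] == in[i]) run++;
            out[o++] = run;      /* run length */
            out[o++] = in[i];    /* the repeated byte */
            i += run;
        }
        return o;                /* compressed size in bytes */
    }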

Zstandard - Fast real-time compression algorithm


Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-offs, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
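
A one-shot round trip with the reference C library might look as follows (assuming libzstd, linked with -lzstd; level 3 is the usual default):

    #include <zstd.h>
    #include <stdio.h>

    int main(void) {
        const char src[] = "Zstandard offers a wide compression/speed trade-off.";
        const size_t srcSize = sizeof src;

        char cBuf[256];        /* ZSTD_compressBound() gives the exact worst case */
        size_t cSize = ZSTD_compress(cBuf, sizeof cBuf, src, srcSize, 3);
        if (ZSTD_isError(cSize)) return 1;

        char dBuf[256];
        size_t dSize = ZSTD_decompress(dBuf, sizeof dBuf, cBuf, cSize);
        if (ZSTD_isError(dSize) || dSize != srcSize) return 1;

        printf("%zu -> %zu bytes\n", srcSize, cSize);
        return 0;
    }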

SHARC - Fastest lossless compression algorithm


SHARC is an extremely fast lossless dictionary-based compression algorithm, capable of compression speeds of more than 500 MB/s per core on modern Intel CPUs. It scales across multiple cores/CPUs, is written in pure C99, and is easily portable to many platforms.

FastPFor - The FastPFOR C++ library: Fast integer compression


A research library with integer compression schemes. It is broadly applicable to the compression of arrays of 32-bit integers where most integers are small. The library seeks to exploit SIMD instructions (SSE) whenever possible. This library can decode at least 4 billion compressed integers per second on most desktop or laptop processors. That is, it can decompress data at a rate of 15 GB/s, significantly faster than generic codecs like gzip, LZO, Snappy or LZ4.
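
The binary packing at the heart of such schemes is easy to sketch in scalar C (the real library adds SIMD and per-block bit widths). Here each 32-bit value is assumed to fit in 4 bits, so two values share a byte:

    #include <stdint.h>
    #include <stddef.h>

    void pack4(const uint32_t *in, size_t n, uint8_t *out) {
        for (size_t i = 0; i < n; i += 2)
            out[i / 2] = (uint8_t)((in[i] & 0xF) |
                                   ((i + 1 < n ? in[i + 1] & 0xF : 0) << 4));
    }

    void unpack4(const uint8_t *in, size_t n, uint32_t *out) {
        for (size_t i = 0; i < n; i += 2) {
            out[i] = in[i / 2] & 0xF;                   /* low nibble  */
            if (i + 1 < n) out[i + 1] = in[i / 2] >> 4; /* high nibble */
        }
    }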


SIMDCompressionAndIntersection - A C++ library to compress and intersect sorted lists of integers using SIMD instructions


As the name suggests, this is a C/C++ library for fast compression and intersection of lists of sorted integers using SIMD instructions. The library focuses on innovative techniques and very fast schemes, with particular attention to differential coding. It introduces new SIMD intersection schemes such as SIMD Galloping. This library can decode at least 4 billion compressed integers per second on most desktop or laptop processors. That is, it can decompress data at a rate of 15 GB/s, significantly faster than generic codecs like gzip, LZO, Snappy or LZ4.
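
Differential coding itself is simple; the sketch below shows the scalar idea the library's SIMD schemes build on. Sorted input becomes a sequence of small gaps, which then bit-pack well:

    #include <stdint.h>
    #include <stddef.h>

    void delta_encode(uint32_t *a, size_t n) {
        if (n == 0) return;
        for (size_t i = n - 1; i > 0; i--)
            a[i] -= a[i - 1];     /* sorted input -> small non-negative gaps */
    }

    void delta_decode(uint32_t *a, size_t n) {
        for (size_t i = 1; i < n; i++)
            a[i] += a[i - 1];     /* prefix sum restores the original list */
    }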

ocaml-lz4 - OCaml bindings for LZ4, a very fast lossless compression algorithm


OCaml bindings for LZ4, a very fast lossless compression algorithm

node-lz4 - LZ4 fast compression algorithm for NodeJS


LZ4 fast compression algorithm for NodeJS

4zip - Extremely Fast Compression Program Based On lz4


Extremely Fast Compression Program Based On lz4

brotli - Brotli compression format


Brotli is a general-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd-order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers denser compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932.
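
A one-shot round trip with the reference C library might look like this (assuming libbrotli, linked with -lbrotlienc -lbrotlidec):

    #include <brotli/encode.h>
    #include <brotli/decode.h>
    #include <stdio.h>

    int main(void) {
        const uint8_t src[] = "Brotli combines LZ77, Huffman coding and context modeling.";
        const size_t srcSize = sizeof src;

        uint8_t cBuf[256];
        size_t cSize = sizeof cBuf;       /* in: capacity, out: encoded size */
        if (!BrotliEncoderCompress(BROTLI_DEFAULT_QUALITY, BROTLI_DEFAULT_WINDOW,
                                   BROTLI_MODE_TEXT, srcSize, src, &cSize, cBuf))
            return 1;

        uint8_t dBuf[256];
        size_t dSize = sizeof dBuf;
        if (BrotliDecoderDecompress(cSize, cBuf, &dSize, dBuf)
                != BROTLI_DECODER_RESULT_SUCCESS || dSize != srcSize)
            return 1;

        printf("%zu -> %zu bytes\n", srcSize, cSize);
        return 0;
    }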

lzjb - A fast pure JavaScript implementation of LZJB compression/decompression.


A fast pure JavaScript implementation of LZJB compression/decompression.

ncompress - Fast compression and decompression utilities


Fast compression and decompression utilities

pithy - Fast compression / decompression library.


Fast compression / decompression library.

wflz-erlang-nif - An Erlang NIF wrapper for wfLZ fast compression / decompression library.


An Erlang NIF wrapper for wfLZ fast compression / decompression library.

ngx_brotli - NGINX module for Brotli compression


Brotli is a general-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd-order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers denser compression. Both the Brotli library and the nginx module are under active development.
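
A typical configuration snippet might look like the following (directive names per the module's README; values are illustrative):

    # Enable on-the-fly Brotli responses and pre-compressed .br files.
    brotli            on;
    brotli_comp_level 6;     # 0-11; higher = smaller output, more CPU
    brotli_types      text/css application/javascript application/json;
    brotli_static     on;    # serve existing .br files when the client accepts them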

zlib - A Massively Spiffy Yet Delicately Unobtrusive Compression Library


zlib is a general-purpose data compression library. All the code is thread safe. It has been ported to different programming languages such as Java, C#, Python and Perl.
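
The classic one-shot API is tiny; a round trip in C, assuming zlib is installed (link with -lz):

    #include <zlib.h>
    #include <stdio.h>

    int main(void) {
        const Bytef src[] = "zlib is a general-purpose data compression library.";
        const uLong srcSize = (uLong)sizeof src;

        Bytef cBuf[256];
        uLongf cSize = sizeof cBuf;      /* in: capacity, out: compressed size */
        if (compress(cBuf, &cSize, src, srcSize) != Z_OK) return 1;

        Bytef dBuf[256];
        uLongf dSize = sizeof dBuf;
        if (uncompress(dBuf, &dSize, cBuf, cSize) != Z_OK || dSize != srcSize)
            return 1;

        printf("%lu -> %lu bytes\n", (unsigned long)srcSize, (unsigned long)cSize);
        return 0;
    }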

QuickLZ - Fastest Compression Library in C, C# and Java


QuickLZ is the world's fastest compression library, reaching 308 MB/s per core. It supports a streaming mode for optimal compression ratios on small packets down to 200-300 bytes in size.

dictionary - High-performance dictionary coding


Suppose you want to compress a large array of values with (relatively) few distinct values. For example, maybe you have 16 distinct 64-bit values. Only four bits are needed to store a value in the range [0,16) using binary packing, so with long arrays it is possible to save 60 bits per value (compressing the data by a factor of 16). We consider the following (simple) form of dictionary coding: we have a dictionary of 64-bit values (which could be pointers) stored in an array. In the compression phase, we convert the values to indexes and binary pack them. In the decompression phase, we try to recover the dictionary-coded values as fast as possible.
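
A scalar sketch of exactly that scheme in C (the project itself benchmarks SIMD variants; this shows only the idea). With 16 dictionary entries, each index fits in 4 bits, so two indexes pack into one byte:

    #include <stdint.h>
    #include <stddef.h>

    /* Compression: map each value to its 4-bit dictionary index
       (linear scan for brevity) and pack two indexes per byte. */
    void dict_compress(const uint64_t *in, size_t n,
                       const uint64_t dict[16], uint8_t *out) {
        for (size_t i = 0; i < n; i += 2) {
            uint8_t lo = 0, hi = 0;
            for (uint8_t k = 0; k < 16; k++) {
                if (dict[k] == in[i]) lo = k;
                if (i + 1 < n && dict[k] == in[i + 1]) hi = k;
            }
            out[i / 2] = (uint8_t)(lo | (hi << 4));
        }
    }

    /* Decompression: each 4-bit index is a lookup into the dictionary. */
    void dict_decompress(const uint8_t *in, size_t n,
                         const uint64_t dict[16], uint64_t *out) {
        for (size_t i = 0; i < n; i += 2) {
            out[i] = dict[in[i / 2] & 0xF];
            if (i + 1 < n) out[i + 1] = dict[in[i / 2] >> 4];
        }
    }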