node-lz4 - LZ4 fast compression algorithm for NodeJS

LZ4 is a very fast compression and decompression algorithm. This Node.js module provides a JavaScript implementation of the decoder as well as native bindings to the LZ4 functions. Node.js streams are also supported for compression and decompression. NB: version 0.2 only supports the "LZ4 Streaming Format 1.4", not the legacy format; use version 0.1 if you need the legacy format.

http://github.com/pierrec/node-lz4

Dependencies:

cuint : ^0.2.0
nan : ^2.0.9
xxhashjs : ^0.1.1
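
A minimal usage sketch of the stream API described above, adapted from the project README (the createEncoderStream/createDecoderStream factory names are as documented there; treat this as illustrative):

    var lz4 = require('lz4')
    var fs = require('fs')

    // Compress a file: pipe plain bytes through an LZ4 encoder stream.
    fs.createReadStream('file.txt')
      .pipe(lz4.createEncoderStream())
      .pipe(fs.createWriteStream('file.txt.lz4'))

    // Decompress it back: pipe the LZ4 frame through a decoder stream.
    fs.createReadStream('file.txt.lz4')
      .pipe(lz4.createDecoderStream())
      .pipe(fs.createWriteStream('file.txt.out'))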

Related Projects

lz4-java - LZ4 compression for Java

  •    Java

LZ4 compression for Java, based on Yann Collet's work available at http://code.google.com/p/lz4/. This library provides access to two compression methods that both generate a valid LZ4 stream: fast scan (LZ4) and high compression (LZ4 HC). The streams produced by these two compression algorithms use the same compression format, are very fast to decompress, and can be decompressed by the same decompressor instance.

lizard - Lizard (formerly LZ5) is an efficient compressor with very fast decompression

  •    C

The Lizard library is based on Yann Collet's frequently used LZ4 library, but the Lizard compression format is not compatible with LZ4. Lizard is provided as open-source software under the BSD 2-Clause license. Its high compression/decompression speed is achieved without any SSE and AVX extensions. The project's published benchmark results were obtained with lzbench and -t16,16, using 1 core of an Intel Core i5-4300U, Windows 10 64-bit (MinGW-w64 compilation under gcc 6.2.0), on silesia.tar, which contains the tarred files of the Silesia compression corpus.

lz4 - Extremely Fast Compression algorithm

  •    C

LZ4 is a lossless compression algorithm, providing compression speeds of 400 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speeds of multiple GB/s per core, typically reaching RAM speed limits on multi-core systems. Speed can be tuned dynamically by selecting an "acceleration" factor which trades compression ratio for more speed. At the other end, a high-compression derivative, LZ4_HC, is also provided, trading CPU time for an improved compression ratio. All versions feature the same decompression speed.
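
The fast vs. high-compression trade-off can also be seen through the node-lz4 bindings described at the top of this page; a short sketch, assuming node-lz4's documented highCompression option (option names may differ between versions):

    var lz4 = require('lz4')

    var input = Buffer.from(new Array(200).join('fairly repetitive sample text '))

    // Fast scan (default): prioritizes compression speed.
    var fast = lz4.encode(input)

    // LZ4 HC: spends extra CPU time for a better compression ratio.
    var hc = lz4.encode(input, { highCompression: true })

    // Both produce standard LZ4 frames, so the same decoder reads either.
    console.log(fast.length, hc.length)          // hc is usually smaller
    console.log(lz4.decode(hc).equals(input))    // true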

Lz4 - Extremely Fast Compression algorithm

  •    C

LZ4 is a very fast lossless compression algorithm based on the well-known LZ77 (Lempel-Ziv) scheme, providing compression speeds of 300 MB/s per core, scalable with multi-core CPUs. It also features an extremely fast decoder, with speeds at and beyond 1 GB/s per core, typically reaching RAM speed limits on multi-core systems.

7-Zip-zstd - 7-Zip with support for Brotli, Fast-LZMA2, Lizard, LZ4, LZ5 and Zstandard

  •    C

Zstandard v1.3.7 is a real-time compression algorithm providing high compression ratios. It offers a very wide range of compression/speed trade-offs while being backed by a very fast decoder. Brotli v1.0.7 is a general-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding, and 2nd-order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers denser compression.


lz4net - LZ4 - ultra fast compression algorithm - for all .NET platforms

  •    CSharp

This library is no longer maintained and has been replaced by K4os.Compression.LZ4, a port of lz4 1.8.1 (latest stable as of 2018-02-01) for .NET Standard that handles both BLOCK and STREAM modes.

MessagePack-CSharp - Extremely Fast MessagePack Serializer for C#

  •    CSharp

An extremely fast MessagePack serializer for C#, 10x faster than MsgPack-Cli and the best-performing of the C# serializers compared. MessagePack for C# has built-in LZ4 compression, which can achieve super fast operation and small binary sizes. Performance is always important, for games, distributed computing, microservices, storing data in Redis, etc. MessagePack has a compact binary size and a full set of general-purpose expressiveness. Please see the section comparing it with JSON, protobuf, and ZeroFormatter. If you want to know why MessagePack for C# is the fastest, please see the performance section.

Redisson - Redis based In-Memory Data Grid for Java

  •    Java

Redisson - distributed Java objects and services (Set, Multimap, SortedSet, Map, List, Queue, BlockingQueue, Deque, BlockingDeque, Semaphore, Lock, AtomicLong, Map Reduce, Publish / Subscribe, Bloom filter, Spring Cache, Executor service, Tomcat Session Manager, Scheduler service, JCache API) on top of Redis server. Rich Redis client.

FastPFor - The FastPFOR C++ library: Fast integer compression

  •    C++

A research library with integer compression schemes, broadly applicable to the compression of arrays of 32-bit integers where most integers are small. The library seeks to exploit SIMD instructions (SSE) whenever possible. It can decode at least 4 billion compressed integers per second on most desktop or laptop processors; that is, it can decompress data at a rate of 15 GB/s, significantly faster than generic codecs like gzip, LZO, Snappy or LZ4.

Snappy for .NET

.NET P/Invoke wrapper for Snappy, a fast compression library.

Libarchive - C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other

  •    C

The libarchive project develops a portable, efficient C library that can read and write streaming archives in a variety of formats. It also includes implementations of the common tar, cpio, and zcat command-line tools that use the libarchive library.

wal-g - Archival and Restoration for Postgres

  •    Go

WAL-G is an archival and restoration tool for Postgres, and the successor of WAL-E with a number of key differences: WAL-G uses LZ4, LZMA, or Brotli compression, multiple processors, and non-exclusive base backups. More information on the design and implementation of WAL-G can be found in the Citus Data blog post "Introducing WAL-G by Citus: Faster Disaster Recovery for Postgres".

archiver - Easily create and extract archive files

  •    Go

Package archiver makes it trivially easy to make and extract common archive formats such as .zip and .tar.gz; simply name the input and output file(s). Files are put into the root of the archive; directories are recursively added, preserving structure.

compressjs - Pure JavaScript de/compression (bzip2, etc) for node.js, volo, and the browser.

  •    Javascript

compressjs contains fast pure-JavaScript implementations of various de/compression algorithms, including bzip2, Charles Bloom's LZP3, a modified LZJB, PPM-D, and an implementation of Dynamic Markov Compression. compressjs is written by C. Scott Ananian. The range coder used is a JavaScript port of Michael Schindler's C range coder. Bits were also borrowed from Yuta Mori's SAIS implementation, and the Bzip2 compression and decompression code draws on work by Eli Skeggs, Kevin Kwok, Rob Landley, James Taylor, and Matthew Francis. "Bear" wrote the original JavaScript LZJB; the version here is based on the node lzjb module. The project README lists representative speeds and sizes for the algorithms implemented in this package; the times were measured with node 0.8.22 on the author's laptop, but they should be valid for inter-algorithm comparisons.
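
The per-algorithm API is uniform across the bundled codecs; a short sketch adapted from the compressjs README (each algorithm is exposed as a property with compressFile/decompressFile methods):

    var compressjs = require('compressjs')

    // Pick one of the bundled algorithms, e.g. Bzip2, Lzp3 or Lzjb.
    var algorithm = compressjs.Bzip2

    var data = new Buffer('Example input, repeated enough to be compressible. ' +
                          'Example input, repeated enough to be compressible.', 'utf8')

    var compressed = algorithm.compressFile(data)
    var decompressed = algorithm.decompressFile(compressed)

    // decompressFile returns bytes; wrap them in a Buffer to recover the string.
    console.log(new Buffer(decompressed).toString('utf8'))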

lzham_codec - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression

  •    C++

LZHAM is a lossless data compression codec written in C/C++ (specifically C++03), with a compression ratio similar to LZMA but with 1.5x-8x faster decompression speed. It officially supports Linux x86/x64, Windows x86/x64, OSX, and iOS, with Android support on the way. LZHAM is a lossless (LZ based) data compression codec optimized for particularly fast decompression at very high compression ratios with a zlib compatible API. It's been developed over a period of 3 years and alpha versions have already shipped in many products. (The alpha is here: https://code.google.com/p/lzham/) LZHAM's decompressor is slower than zlib's, but generally much faster than LZMA's, with a compression ratio that is typically within a few percent of LZMA's and sometimes better.

Snappy Java - Snappy compressor/decompressor for Java

  •    Java

snappy-java is a Java port of Snappy, a fast C++ compressor/decompressor developed by Google. It offers fast compression/decompression at around 200-400 MB/s with low memory usage (SnappyOutputStream uses only 32 KB by default), compression/decompression of Java primitive arrays (float[], double[], int[], short[], long[], etc.), and a lot more.

XZ for Java

  •    Java

This aims to be a complete implementation of XZ data compression in pure Java. Single-threaded streamed compression and decompression and random access decompression have been fully implemented. Threading is planned but it is unknown when it will be implemented.

HU01 Decompression

Hotmail's DeltaSync protocol uses a custom compression scheme known as hm-compression or HU01. It is very similar to DEFLATE encoding, which is standardized as RFC 1951, but the bit stream format is different. Earlier work on decompressing these streams was published by Daniel P...

c-blosc - A blocking, shuffling and loss-less compression library that can be faster than `memcpy()`

  •    C

Blosc is a high-performance compressor optimized for binary data. It has been designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. Blosc is the first compressor (that I'm aware of) that is meant not only to reduce the size of large datasets on-disk or in-memory, but also to accelerate memory-bound computations. It uses the blocking technique to reduce activity on the memory bus as much as possible. In short, this technique works by dividing datasets into blocks that are small enough to fit in the caches of modern processors and performing compression/decompression there. It also leverages, if available, SIMD instructions (SSE2, AVX2) and the multi-threading capabilities of CPUs in order to accelerate the compression/decompression process as much as possible.