Basic Compression Library

  •    C

Basic Compression Library is a set of open-source implementations of the RLE (Run-Length Encoding), Huffman, Rice, Lempel-Ziv (LZ77), and Shannon-Fano compression algorithms.

http://bcl.comli.eu/
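
The library exposes a simple compress/uncompress function pair per algorithm. Below is a minimal round-trip sketch using the RLE pair; the function names and signatures are assumed to match the library's rle.h header, so check them against your copy:

```c
#include <stdio.h>
#include "rle.h"  /* Basic Compression Library RLE header (assumed name) */

int main(void)
{
    unsigned char in[] = "aaaaaaaaaabbbbbbbbbbcccccccccc";
    unsigned int insize = (unsigned int)(sizeof(in) - 1);
    /* RLE output can exceed the input in the worst case, so size generously */
    unsigned char packed[128], unpacked[128];

    int packedsize = RLE_Compress(in, packed, insize);
    RLE_Uncompress(packed, unpacked, (unsigned int)packedsize);

    printf("%u -> %d bytes\n", insize, packedsize);
    return 0;
}
```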

Related Projects

c-blosc - A blocking, shuffling and loss-less compression library that can be faster than `memcpy()`

  •    C

Blosc is a high-performance compressor optimized for binary data. It is designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch via memcpy(). Blosc is the first compressor (that the authors are aware of) meant not only to reduce the size of large datasets on disk or in memory, but also to accelerate memory-bound computations. It uses a blocking technique to reduce memory-bus activity as much as possible: datasets are divided into blocks small enough to fit in the caches of modern processors, and compression/decompression is performed there. It also leverages SIMD instructions (SSE2, AVX2) and the multi-threading capabilities of CPUs, where available, to accelerate compression and decompression as far as possible.
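
A minimal sketch of that block-oriented usage, written against the classic Blosc 1.x C API (blosc_compress/blosc_decompress; names assumed to match the blosc.h shipped with c-blosc):

```c
#include <stdio.h>
#include <blosc.h>

int main(void)
{
    float data[1000], back[1000];
    for (int i = 0; i < 1000; i++) data[i] = (float)i;  /* compressible pattern */

    char packed[sizeof(data) + BLOSC_MAX_OVERHEAD];

    blosc_init();
    /* clevel 5, byte shuffle on; typesize = sizeof(float) lets the shuffle
       filter regroup bytes of equal significance before compression */
    int csize = blosc_compress(5, BLOSC_SHUFFLE, sizeof(float),
                               sizeof(data), data, packed, sizeof(packed));
    int dsize = blosc_decompress(packed, back, sizeof(back));
    printf("compressed %zu -> %d bytes, decompressed %d bytes\n",
           sizeof(data), csize, dsize);
    blosc_destroy();
    return 0;
}
```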

lzham_codec - Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression

  •    C++

LZHAM is a lossless (LZ-based) data compression codec written in C/C++ (specifically C++03), optimized for very fast decompression at very high compression ratios, with a zlib-compatible API. Its compression ratio is typically within a few percent of LZMA's, and sometimes better, while decompression is 1.5x-8x faster; the decompressor is slower than zlib's but generally much faster than LZMA's. It officially supports Linux x86/x64, Windows x86/x64, OSX, and iOS, with Android support on the way. It has been developed over a period of three years, and alpha versions (https://code.google.com/p/lzham/) have already shipped in many products.

Zstandard - Fast real-time compression algorithm

  •    C

Zstandard is a real-time compression algorithm providing high compression ratios. It offers a very wide range of compression/speed trade-offs, backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
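
With the one-shot API, the speed/ratio trade-off is a single level parameter. A minimal round-trip sketch using the stable public API (ZSTD_compressBound/ZSTD_compress/ZSTD_decompress); the dictionary mode mentioned above follows the same pattern via ZDICT_trainFromBuffer and ZSTD_compress_usingDict:

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
    const char *src = "zstandard zstandard zstandard zstandard";
    size_t srcSize = strlen(src) + 1;

    size_t bound = ZSTD_compressBound(srcSize);   /* worst-case output size */
    void *dst = malloc(bound);
    size_t csize = ZSTD_compress(dst, bound, src, srcSize, 3 /* level */);
    if (ZSTD_isError(csize)) {
        fprintf(stderr, "%s\n", ZSTD_getErrorName(csize));
        return 1;
    }

    char back[64];
    size_t dsize = ZSTD_decompress(back, sizeof(back), dst, csize);
    printf("%zu -> %zu -> %zu bytes\n", srcSize, csize, dsize);
    free(dst);
    return 0;
}
```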

lizard - Lizard (formerly LZ5) is an efficient compressor with very fast decompression

  •    C

The Lizard library is based on Yann Collet's widely used LZ4 library, but the Lizard compression format is not compatible with LZ4. Lizard is open-source software under the BSD 2-Clause license. Its high compression and decompression speed is achieved without any SSE or AVX extensions. The project's benchmark results were obtained with lzbench (-t16,16) on one core of an Intel Core i5-4300U under 64-bit Windows 10 (MinGW-w64, gcc 6.2.0), using silesia.tar, a tar of the Silesia compression corpus.

XZ for Java - Complete implementation of XZ data compression in pure Java

  •    Java

This aims to be a complete implementation of XZ data compression in pure Java. Single-threaded streamed compression and decompression, as well as random-access decompression, are fully implemented. Threading is planned, but it is unknown when it will be implemented.


zlib - A Massively Spiffy Yet Delicately Unobtrusive Compression Library

  •    C

zlib is a general-purpose data compression library. All of the code is thread-safe. It has been ported to, or has bindings for, many programming languages, including Java, C#, Python, and Perl.
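
A minimal round-trip using zlib's one-shot utility functions (compress/uncompress from zlib.h):

```c
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    const unsigned char src[] = "hello hello hello hello hello";
    uLong srcLen = (uLong)sizeof(src);

    /* 128 bytes comfortably exceeds compressBound() for this input */
    unsigned char packed[128];
    uLongf packedLen = sizeof(packed);
    if (compress(packed, &packedLen, src, srcLen) != Z_OK)
        return 1;

    unsigned char back[sizeof(src)];
    uLongf backLen = sizeof(back);
    if (uncompress(back, &backLen, packed, packedLen) != Z_OK)
        return 1;

    printf("%lu -> %lu -> %lu bytes\n", (unsigned long)srcLen,
           (unsigned long)packedLen, (unsigned long)backLen);
    return 0;
}
```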

QuickLZ - Fastest Compression Library in C, C# and Java

  •    C

QuickLZ bills itself as the world's fastest compression library, reaching 308 MB/s per core. It supports a streaming mode for optimal compression ratios on small packets, down to 200-300 bytes in size.

lzfse - LZFSE compression library and command line tool

  •    C

This is a reference C implementation of the LZFSE compressor introduced in the Compression library shipped with OS X 10.11 and iOS 9. LZFSE is a Lempel-Ziv style data compression algorithm that uses Finite State Entropy coding. It targets compression rates similar to zlib's deflate at higher compression and decompression speed.
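
A minimal round-trip sketch using the buffer API from lzfse.h; passing NULL for the scratch buffer is assumed to make the library allocate its own workspace (lzfse_encode_scratch_size() can be used to preallocate one):

```c
#include <stdio.h>
#include <stdint.h>
#include "lzfse.h"

int main(void)
{
    const uint8_t src[] = "lzfse lzfse lzfse lzfse lzfse lzfse";
    uint8_t packed[256], back[256];

    /* NULL scratch: the library manages its own workspace (assumed) */
    size_t csize = lzfse_encode_buffer(packed, sizeof(packed),
                                       src, sizeof(src), NULL);
    size_t dsize = lzfse_decode_buffer(back, sizeof(back),
                                       packed, csize, NULL);
    printf("%zu -> %zu -> %zu bytes\n", sizeof(src), csize, dsize);
    return 0;
}
```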

lz4-java - LZ4 compression for Java

  •    Java

LZ4 compression for Java, based on Yann Collet's work available at http://code.google.com/p/lz4/. This library provides access to two compression methods that both generate a valid LZ4 stream: fast scan (LZ4) and high compression (LZ4 HC). The streams produced by these two methods use the same compression format, are very fast to decompress, and can be decompressed by the same decompressor instance.

node-lz4 - LZ4 fast compression algorithm for NodeJS

  •    Javascript

LZ4 is a very fast compression and decompression algorithm. This Node.js module provides a JavaScript implementation of the decoder as well as native bindings to the LZ4 functions. Node.js streams are also supported for compression and decompression. N.B. version 0.2 supports only the "LZ4 Streaming Format 1.4", not the legacy format; use version 0.1 if you need it.

LZF Compressor - High-performance, streaming/chunking Java LZF codec, compatible with standard C LZF package

  •    Java

LZF-compress is a Java library for encoding and decoding data in LZF format, written by Tatu Saloranta. The LZF algorithm itself is optimized for speed, with somewhat more modest compression. Compared to standard Deflate (the algorithm gzip uses), LZF can be 5-6 times as fast to compress and twice as fast to decompress. The compression ratio is lower, since no Huffman coding is applied after the Lempel-Ziv substring elimination.

compressjs - Pure JavaScript de/compression (bzip2, etc) for node.js, volo, and the browser.

  •    Javascript

compressjs contains fast pure-JavaScript implementations of various de/compression algorithms, including bzip2, Charles Bloom's LZP3, a modified LZJB, PPM-D, and an implementation of Dynamic Markov Compression. compressjs is written by C. Scott Ananian. The range coder used is a JavaScript port of Michael Schindler's C range coder. Bits were also borrowed from Yuta Mori's SAIS implementation; Eli Skeggs, Kevin Kwok, Rob Landley, James Taylor, and Matthew Francis contributed bzip2 compression and decompression code. "Bear" wrote the original JavaScript LZJB; the version here is based on the node lzjb module. The project publishes representative speeds and sizes for the algorithms in the package, measured with node 0.8.22; the absolute numbers are dated, but they remain valid for inter-algorithm comparisons.

Lz4 - Extremely Fast Compression algorithm

  •    C

LZ4 is a very fast lossless compression algorithm based on the well-known LZ77 (Lempel-Ziv) scheme, providing compression speed of 300 MB/s per core, scalable across multi-core CPUs. It also features an extremely fast decoder, with speeds at and beyond 1 GB/s per core, typically reaching RAM speed limits on multi-core systems.
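
A minimal round-trip sketch using the core one-shot API from lz4.h (LZ4_compress_default plus the bounds-checked LZ4_decompress_safe):

```c
#include <stdio.h>
#include <lz4.h>

int main(void)
{
    const char src[] = "lz4 lz4 lz4 lz4 lz4 lz4 lz4 lz4";
    int srcSize = (int)sizeof(src);

    /* LZ4_COMPRESSBOUND gives the worst-case compressed size */
    char packed[LZ4_COMPRESSBOUND(sizeof(src))];
    int csize = LZ4_compress_default(src, packed, srcSize, (int)sizeof(packed));
    if (csize <= 0) return 1;

    char back[sizeof(src)];
    /* The _safe variant validates the stream and never overruns back[] */
    int dsize = LZ4_decompress_safe(packed, back, csize, (int)sizeof(back));
    printf("%d -> %d -> %d bytes\n", srcSize, csize, dsize);
    return 0;
}
```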

SHARC - Fastest lossless compression algorithm

  •    C

SHARC is an extremely fast lossless dictionary-based compression algorithm, capable of compression speeds of more than 500 MB/s per core on modern Intel CPUs. It scales across multiple cores/CPUs, is written in pure C99, and is easily portable to many platforms.

heatshrink - data compression library for embedded/real-time systems

  •    C

A data compression/decompression library for embedded/real-time systems. There is a standalone command-line program, heatshrink, but the encoder and decoder can also be used as libraries, independent of each other. To do so, copy heatshrink_common.h, heatshrink_config.h, and either heatshrink_encoder.c or heatshrink_decoder.c (plus its header) into your project. For projects that use both, static libraries are built in static- and dynamic-allocation variants.
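
A sketch of the encoder's sink/poll/finish pattern, assuming a build with dynamic allocation enabled (HEATSHRINK_DYNAMIC_ALLOC) and the API declared in heatshrink_encoder.h; the exact function and enum names are assumptions, so check them against your copy of the header:

```c
#include <stdio.h>
#include <stdint.h>
#include "heatshrink_encoder.h"

int main(void)
{
    uint8_t in[] = "abcabcabcabcabcabcabcabcabcabc";
    uint8_t out[128];
    size_t in_len = sizeof(in), out_len = 0;
    size_t sunk = 0, count = 0;

    /* window 2^8 = 256 bytes, lookahead 2^4 = 16 bytes: small-RAM settings */
    heatshrink_encoder *hse = heatshrink_encoder_alloc(8, 4);
    if (hse == NULL) return 1;

    while (sunk < in_len) {
        /* Push input; count reports how many bytes were actually accepted */
        heatshrink_encoder_sink(hse, &in[sunk], in_len - sunk, &count);
        sunk += count;
        /* Drain whatever output is ready */
        while (heatshrink_encoder_poll(hse, &out[out_len],
                                       sizeof(out) - out_len,
                                       &count) == HSER_POLL_MORE) {
            out_len += count;
        }
        out_len += count;  /* bytes written by the final poll call */
    }
    /* Signal end of input and drain the remainder */
    while (heatshrink_encoder_finish(hse) == HSER_FINISH_MORE) {
        heatshrink_encoder_poll(hse, &out[out_len], sizeof(out) - out_len, &count);
        out_len += count;
    }
    printf("%zu -> %zu bytes\n", in_len, out_len);
    heatshrink_encoder_free(hse);
    return 0;
}
```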

CyberChef - The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis

  •    Javascript

CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR or Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more. The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years.

Snappy Java - Snappy compressor/decompressor for Java

  •    Java

snappy-java is a Java port of Snappy, a fast C++ compressor/decompressor developed by Google. It provides fast compression and decompression at around 200-400 MB/s with low memory usage (SnappyOutputStream uses only about 32 KB by default), supports compression/decompression of Java primitive arrays (float[], double[], int[], short[], long[], etc.), and more.

7-Zip - File archiver with a high compression ratio

  •    C

7-Zip is a file archiver with a high compression ratio. The program supports 7z, XZ, BZIP2, GZIP, TAR, ZIP, WIM, ARJ, CAB, CHM, CPIO, CramFS, DEB, DMG, FAT, HFS, ISO, LZH, LZMA, MBR, MSI, NSIS, NTFS, RAR, RPM, SquashFS, UDF, VHD, XAR, and Z.

lz4 - Extremely Fast Compression algorithm

  •    C

LZ4 is a lossless compression algorithm, providing compression speed of 400 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speeds of multiple GB/s per core, typically reaching RAM speed limits on multi-core systems. Speed can be tuned dynamically by selecting an "acceleration" factor, which trades compression ratio for more speed. At the other end, a high-compression derivative, LZ4_HC, is also provided, trading CPU time for an improved compression ratio. All versions feature the same decompression speed.
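
A sketch of the two ends of that trade-off, using LZ4_compress_fast with an acceleration factor and LZ4_compress_HC from lz4hc.h; both outputs decode with the same LZ4_decompress_safe:

```c
#include <stdio.h>
#include <lz4.h>
#include <lz4hc.h>

int main(void)
{
    const char src[] = "lz4 lz4 lz4 lz4 lz4 lz4 lz4 lz4";
    char fast[LZ4_COMPRESSBOUND(sizeof(src))];
    char hc[LZ4_COMPRESSBOUND(sizeof(src))];

    /* Acceleration > 1 trades compression ratio for extra speed... */
    int fast_size = LZ4_compress_fast(src, fast, (int)sizeof(src),
                                      (int)sizeof(fast), 8 /* acceleration */);
    /* ...while LZ4_HC spends more CPU time for a better ratio. Both emit
       the same stream format, decodable by LZ4_decompress_safe(). */
    int hc_size = LZ4_compress_HC(src, hc, (int)sizeof(src),
                                  (int)sizeof(hc), LZ4HC_CLEVEL_DEFAULT);
    printf("fast: %d bytes, HC: %d bytes\n", fast_size, hc_size);
    return 0;
}
```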

pgzip - Go parallel gzip (de)compression

  •    Go

Go parallel gzip compression/decompression. This is a fully gzip-compatible drop-in replacement for "compress/gzip". It splits compression into blocks that are compressed in parallel, which can be useful when compressing large amounts of data. The output is a standard gzip file.