SHARC - Fastest lossless compression algorithm

SHARC is an extremely fast lossless dictionary-based compression algorithm, capable of compression speeds of more than 500 MB/s per core on modern Intel CPUs. It scales across multiple cores and CPUs, is written in pure C99, and is easily portable to many platforms.



Related Projects

LZ4 - Extremely Fast Compression Algorithm

LZ4 is a very fast lossless compression algorithm based on the well-known LZ77 (Lempel-Ziv) scheme, providing compression speeds of 300 MB/s per core and scaling with multi-core CPUs. It also features an extremely fast decoder, with speeds at and beyond 1 GB/s per core, typically reaching RAM speed limits on multi-core systems.
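As a quick illustration of how the library is typically used, here is a minimal round-trip sketch against the C API (LZ4_compress_default / LZ4_decompress_safe); the sample string and buffer sizes are arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <lz4.h>               /* liblz4 public header; link with -llz4 */

int main(void)
{
    const char src[] = "LZ4 is a very fast lossless compression algorithm.";
    const int  src_size = (int)sizeof src;

    char compressed[256];      /* LZ4_compressBound(src_size) fits here */
    char restored[256];

    /* One-shot compression into the destination buffer. */
    const int c_size = LZ4_compress_default(src, compressed, src_size,
                                            (int)sizeof compressed);
    if (c_size <= 0) return 1;

    /* Safe decompression: bounded by the destination capacity. */
    const int d_size = LZ4_decompress_safe(compressed, restored, c_size,
                                           (int)sizeof restored);
    if (d_size != src_size || memcmp(src, restored, (size_t)src_size) != 0) return 1;

    printf("original %d bytes -> compressed %d bytes\n", src_size, c_size);
    return 0;
}
```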

zlib - A Massively Spiffy Yet Delicately Unobtrusive Compression Library

zlib is a general-purpose data compression library. All of its code is thread safe. It has been ported to, and has bindings in, many programming languages, including Java, C#, Python, and Perl.
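For a flavour of the API, here is a minimal one-shot round trip using compress2() and uncompress(); the fixed buffer sizes are arbitrary, and real code should size the destination with compressBound().

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>              /* link with -lz */

int main(void)
{
    const Bytef src[] = "zlib is a general-purpose data compression library.";
    uLong src_len = (uLong)sizeof src;

    /* One-shot compression at the highest compression level. */
    Bytef compressed[256];
    uLong comp_len = sizeof compressed;
    if (compress2(compressed, &comp_len, src, src_len, Z_BEST_COMPRESSION) != Z_OK)
        return 1;

    /* One-shot decompression back into a scratch buffer. */
    Bytef restored[256];
    uLong rest_len = sizeof restored;
    if (uncompress(restored, &rest_len, compressed, comp_len) != Z_OK)
        return 1;

    printf("%lu bytes -> %lu bytes -> %lu bytes\n", src_len, comp_len, rest_len);
    return memcmp(src, restored, src_len) != 0;
}
```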

DotNetZip Library

DotNetZip is a FAST, FREE class library and toolset for manipulating zip files. Use VB, C# or any .NET language to easily create, extract, or update zip files.

Zopfli - Compression Algorithm from Google

Zopfli Compression Algorithm is a new zlib (gzip, deflate) compatible compressor. This compressor takes more time (~100x slower), but compresses around 5% better than zlib and better than any other zlib-compatible compressor we have found.
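A minimal sketch of driving the C API is shown below; the header path and the iteration count are assumptions about a typical install, and the output is an ordinary gzip stream that any zlib/gzip decoder can read.

```c
#include <stdio.h>
#include <stdlib.h>
#include "zopfli/zopfli.h"     /* header path may differ per install */

int main(void)
{
    const unsigned char in[] =
        "Zopfli trades compression time for a smaller gzip/deflate stream.";
    unsigned char *out = NULL;
    size_t outsize = 0;

    ZopfliOptions options;
    ZopfliInitOptions(&options);
    options.numiterations = 15;    /* more iterations: slower but denser output */

    /* Produce a gzip-compatible stream; Zopfli allocates the output buffer. */
    ZopfliCompress(&options, ZOPFLI_FORMAT_GZIP, in, sizeof in, &out, &outsize);

    printf("%zu bytes -> %zu bytes\n", sizeof in, outsize);
    free(out);
    return 0;
}
```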

Basic Compression Library

Basic Compression Library is a set of open source implementations of RLE (Run Length Encoding), Huffman, Rice, Lempel-Ziv (LZ77) and Shannon-Fano compression algorithms.
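To illustrate the simplest of these techniques, here is a tiny run-length encoding sketch (not the library's own API, just the underlying idea): each run of identical bytes is replaced by a (count, byte) pair.

```c
#include <stdio.h>
#include <stddef.h>

/* Naive run-length encoding: emit (count, byte) pairs, runs capped at 255. */
static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        unsigned char run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255) run++;
        out[o++] = run;
        out[o++] = in[i];
        i += run;
    }
    return o;                  /* encoded length in bytes */
}

int main(void)
{
    const unsigned char data[] = "aaaaabbbcccccccd";
    unsigned char packed[64];  /* worst case is 2x the input size */
    size_t len = rle_encode(data, sizeof data - 1, packed);
    printf("%zu bytes -> %zu bytes\n", sizeof data - 1, len);
    return 0;
}
```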

QuickLZ - Fastest Compression Library in C, C# and Java

QuickLZ is the world's fastest compression library, reaching 308 MB/s per core. It supports a streaming mode for optimal compression ratios on small packets down to 200-300 bytes in size.

pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines

pigz, which stands for parallel implementation of gzip, compresses using threads to make use of multiple processors and cores. The input is broken up into 128 KB chunks, each of which is compressed in parallel. The individual check value for each chunk is also calculated in parallel. The compressed data is written in order to the output, and a combined check value is calculated from the individual check values.
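The sketch below shows the chunking idea in a simplified, single-threaded form using zlib's one-shot API; pigz itself runs each block's deflate on a worker thread and stitches the results into a single gzip stream, which this example does not attempt.

```c
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>              /* link with -lz */

#define CHUNK (128 * 1024)     /* pigz-style 128 KB input blocks */

int main(void)
{
    /* Some repetitive input to split into chunks. */
    size_t total = 4 * CHUNK;
    unsigned char *in = malloc(total);
    if (!in) return 1;
    for (size_t i = 0; i < total; i++) in[i] = (unsigned char)(i % 64);

    uLong bound = compressBound(CHUNK);
    unsigned char *out = malloc(bound);
    if (!out) return 1;

    /* Each chunk is compressed independently; pigz hands these units to
       worker threads and then writes the results out in order. */
    size_t compressed_total = 0;
    for (size_t off = 0; off < total; off += CHUNK) {
        uLong out_len = bound;
        uLong in_len = (total - off < CHUNK) ? (uLong)(total - off) : CHUNK;
        if (compress2(out, &out_len, in + off, in_len, Z_DEFAULT_COMPRESSION) != Z_OK)
            return 1;
        compressed_total += out_len;
    }

    printf("%zu bytes in %zu-byte chunks -> %zu bytes\n",
           total, (size_t)CHUNK, compressed_total);
    free(in);
    free(out);
    return 0;
}
```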

HBase - Hadoop database

HBase provides Bigtable-like capabilities at scale - billions of rows by millions of columns. It is a scalable, distributed, versioned, column-oriented store modeled after Google's Bigtable that runs on top of HDFS (the Hadoop Distributed Filesystem). It features compression and in-memory operation on a per-column-family basis, and data can be replicated between nodes. HBase is used at Facebook and Twitter.

PartImage - Disk Backup Software

Partimage is open source disk backup software. It saves partitions with a supported filesystem to an image file on a sector basis. Although it runs only under Linux, both Windows and most Linux filesystems are supported. The image file can be compressed to save disk space and transfer time, and it can be split into multiple files to be copied onto CDs or DVDs.

UPX - a powerful executable packer

UPX is a portable, extendable, high-performance executable packer for several different executable formats, including Windows and Linux executables and DLLs. It achieves an excellent compression ratio and offers *very* fast decompression.
