DotNetZip Library


DotNetZip is a FAST, FREE class library and toolset for manipulating zip files. Use VB, C# or any .NET language to easily create, extract, or update zip files.

lz4 - Extremely Fast Compression algorithm


LZ4 is a lossless compression algorithm, providing compression speeds of 400 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speeds in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems. Speed can be tuned dynamically by selecting an "acceleration" factor, which trades compression ratio for more speed. At the other end, a high-compression derivative, LZ4_HC, is also provided, trading CPU time for an improved compression ratio. All versions feature the same decompression speed.
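
As a rough sketch of the speed/ratio knob described above, here is a minimal C example using the block API from lz4.h (LZ4_compress_fast takes an explicit acceleration factor); the sample text and the acceleration value of 4 are arbitrary choices for illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "lz4.h"

    int main(void) {
        const char *src = "LZ4 trades a little ratio for a lot of speed. "
                          "LZ4 trades a little ratio for a lot of speed.";
        int srcSize = (int)strlen(src) + 1;

        int maxDst = LZ4_compressBound(srcSize);   /* worst-case output size */
        char *dst = malloc(maxDst);

        /* acceleration 1 is the default; larger values favor speed over ratio */
        int cSize = LZ4_compress_fast(src, dst, srcSize, maxDst, 4);

        char *back = malloc(srcSize);
        int dSize = LZ4_decompress_safe(dst, back, cSize, srcSize);

        printf("%d -> %d -> %d bytes\n", srcSize, cSize, dSize);
        free(dst);
        free(back);
        return (cSize > 0 && dSize == srcSize) ? 0 : 1;
    }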

Zstandard - Fast real-time compression algorithm


Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-offs, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
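
A minimal sketch of the one-shot API from zstd.h, where the compression-level parameter is the speed/ratio trade-off the description mentions; dictionary compression is omitted here for brevity.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zstd.h>

    int main(void) {
        const char *src = "Zstandard offers a wide range of speed/ratio trade-offs. "
                          "Zstandard offers a wide range of speed/ratio trade-offs.";
        size_t srcSize = strlen(src) + 1;

        size_t bound = ZSTD_compressBound(srcSize);   /* worst-case output size */
        void *dst = malloc(bound);

        /* level 3 is the default; higher levels trade speed for ratio */
        size_t cSize = ZSTD_compress(dst, bound, src, srcSize, 3);
        if (ZSTD_isError(cSize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(cSize)); return 1; }

        char *back = malloc(srcSize);
        size_t dSize = ZSTD_decompress(back, srcSize, dst, cSize);
        if (ZSTD_isError(dSize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(dSize)); return 1; }

        printf("%zu -> %zu -> %zu bytes\n", srcSize, cSize, dSize);
        free(dst);
        free(back);
        return 0;
    }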

draco - Draco is a library for compressing and decompressing 3D geometric meshes and point clouds


Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics. Draco was designed and built for compression efficiency and speed. The code supports compressing points, connectivity information, texture coordinates, color information, normals, and any other generic attributes associated with geometry. With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users, this means apps can be downloaded faster, 3D graphics in the browser can load more quickly, and VR and AR scenes can be transmitted with a fraction of the bandwidth and rendered quickly.

lepton - Lepton is a tool and file format for losslessly compressing JPEGs by an average of 22%.


Lepton is a tool and file format for losslessly compressing JPEGs by an average of 22%. This can be used to archive large photo collections, or to serve images live and save 22% of bandwidth.

Lz4 - Extremely Fast Compression algorithm


LZ4 is a very fast lossless compression algorithm based on the well-known LZ77 (Lempel-Ziv) algorithm, providing compression speeds of 300 MB/s per core, scalable with multi-core CPUs. It also features an extremely fast decoder, with speeds up to and beyond 1 GB/s per core, typically reaching RAM speed limits on multi-core systems.

7-Zip - File archiver with a high compression ratio


7-Zip is a file archiver with a high compression ratio. The program supports 7z, XZ, BZIP2, GZIP, TAR, ZIP, WIM, ARJ, CAB, CHM, CPIO, CramFS, DEB, DMG, FAT, HFS, ISO, LZH, LZMA, MBR, MSI, NSIS, NTFS, RAR, RPM, SquashFS, UDF, VHD, XAR, Z.

zlib - A Massively Spiffy Yet Delicately Unobtrusive Compression Library


zlib is a general-purpose data compression library. All of the code is thread-safe. It has been ported to many other programming languages, including Java, C#, Python, and Perl.
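
For a concrete picture, here is a minimal sketch using zlib's one-shot convenience calls (compress2/uncompress); real applications often use the streaming deflate/inflate interface instead.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *src = "zlib implements the DEFLATE format used by gzip and zip. "
                          "zlib implements the DEFLATE format used by gzip and zip.";
        uLong srcLen = (uLong)strlen(src) + 1;

        uLong bound = compressBound(srcLen);   /* worst-case output size */
        Bytef *dst = malloc(bound);
        uLongf dstLen = bound;

        /* Z_BEST_SPEED .. Z_BEST_COMPRESSION select the speed/ratio trade-off */
        if (compress2(dst, &dstLen, (const Bytef *)src, srcLen, Z_DEFAULT_COMPRESSION) != Z_OK)
            return 1;

        Bytef *back = malloc(srcLen);
        uLongf backLen = srcLen;
        if (uncompress(back, &backLen, dst, dstLen) != Z_OK)
            return 1;

        printf("%lu -> %lu -> %lu bytes\n", srcLen, (unsigned long)dstLen, (unsigned long)backLen);
        free(dst);
        free(back);
        return 0;
    }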


PeaZip - Cross-platform file and archive manager


PeaZip is a free file archiver utility and RAR extractor for Windows and Linux. It works with 150+ archive types and variants (7z, ace, arc, bz2, cab, gz, iso, paq, pea, rar, tar, wim, zip, zipx...), handles spanned archives, and supports multiple archive encryption standards. The project aims to provide a cross-platform, portable GUI frontend for multiple open-source technologies (7-Zip, FreeArc, PAQ, PEA, UPX) focused on file and archive management and security.

boxing - Android multi-media selector based on MVP mode.


Core version: contains only the core functionality. UI version: adds a UI implementation built on top of the core version.

digital_video_introduction - A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding)


A gentle introduction to video technology. Although it's aimed at software developers and engineers, we want to make it easy for anyone to learn. This idea was born during a mini workshop for newcomers to video technology. The goal is to introduce some digital video concepts with a simple vocabulary, lots of visual elements, and practical examples where possible, and to make this knowledge available everywhere. Please feel free to send corrections and suggestions to improve it.

makeself - A self-extracting archiving tool for Unix systems, in 100% shell script.


makeself.sh is a small shell script that generates a self-extractable compressed tar archive from a directory. The resulting file appears as a shell script (many of these have a .run suffix) and can be launched as is. The archive then uncompresses itself to a temporary directory, and an optional arbitrary command is executed (for example, an installation script). This is quite similar to archives generated with WinZip Self-Extractor in the Windows world. Makeself archives also include checksums for integrity self-validation (CRC and/or MD5/SHA256 checksums).

The makeself.sh script itself is used only to create the archives from a directory of files. The resulting archive is actually a compressed (using gzip, bzip2, or compress) TAR archive with a small shell script stub at the beginning. This small stub performs all the steps of extracting the files, running the embedded command, and removing the temporary files when done. All the user has to do to install the software contained in such an archive is to "run" the archive, i.e. sh nice-software.run. I recommend using the ".run" suffix (which was introduced by some Makeself archives released by Loki Software) or ".sh" for such archives, so as not to confuse users: they will know they are actually shell scripts (with quite a lot of binary data attached to them, though!).

essential-image-optimization - Essential Image Optimization - an eBook


Bring up a terminal and type node --version. Node should respond with a version at or above 0.10.x. If you require Node, go to nodejs.org and click on the big green Install button.

FiniteStateEntropy - New generation entropy codecs : Finite State Entropy and Huff0


Huff0 is a Huffman codec designed for modern CPUs, featuring OoO (Out of Order) execution on multiple ALUs (Arithmetic Logic Units) and achieving extremely fast compression and decompression speeds. FSE is a new kind of entropy encoder, based on the ANS theory from Jarek Duda, achieving precise compression accuracy (like arithmetic coding) at much higher speeds.
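
A hedged sketch of how the block-level entry points declared in fse.h might be used; the FSE_compress/FSE_decompress signatures and the special return values (0 for incompressible input, 1 for a single repeated symbol) are assumptions taken from the project's header documentation and should be checked against the version you build.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "fse.h"

    int main(void) {
        /* skewed symbol distribution, the case entropy coding helps with */
        const char *src = "aaaaaabbbbbbbbccccddddddddddddddeeee";
        size_t srcSize = strlen(src);

        size_t bound = FSE_compressBound(srcSize);
        void *dst = malloc(bound);

        size_t cSize = FSE_compress(dst, bound, src, srcSize);
        if (FSE_isError(cSize)) { fprintf(stderr, "%s\n", FSE_getErrorName(cSize)); return 1; }
        if (cSize <= 1) { printf("input not compressible by the FSE block API\n"); return 0; }

        /* the decoder needs to be told the original size */
        char *back = malloc(srcSize);
        size_t dSize = FSE_decompress(back, srcSize, dst, cSize);
        if (FSE_isError(dSize)) { fprintf(stderr, "%s\n", FSE_getErrorName(dSize)); return 1; }

        printf("%zu -> %zu -> %zu bytes\n", srcSize, cSize, dSize);
        free(dst);
        free(back);
        return 0;
    }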

Snappy - A fast compressor/decompressor


Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger.
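
A minimal sketch using the C bindings the library ships in snappy-c.h; as the description notes, the output format is Snappy's own and is not interchangeable with zlib or gzip.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <snappy-c.h>

    int main(void) {
        const char *src = "Snappy favors speed over density. Snappy favors speed over density.";
        size_t srcLen = strlen(src) + 1;

        size_t cLen = snappy_max_compressed_length(srcLen);   /* worst-case output size */
        char *dst = malloc(cLen);
        if (snappy_compress(src, srcLen, dst, &cLen) != SNAPPY_OK) return 1;

        size_t backLen;
        if (snappy_uncompressed_length(dst, cLen, &backLen) != SNAPPY_OK) return 1;
        char *back = malloc(backLen);
        if (snappy_uncompress(dst, cLen, back, &backLen) != SNAPPY_OK) return 1;

        printf("%zu -> %zu -> %zu bytes\n", srcLen, cLen, backLen);
        free(dst);
        free(back);
        return 0;
    }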

Basic Compression Library


Basic Compression Library is a set of open source implementations of RLE (Run Length Encoding), Huffman, Rice, Lempel-Ziv (LZ77) and Shannon-Fano compression algorithms.
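
To make the RLE idea concrete, here is a toy run-length encoder in C. It illustrates the concept only; it is not the Basic Compression Library's API, and the (count, byte) encoding is an arbitrary choice for the example.

    #include <stdio.h>
    #include <string.h>

    /* Toy run-length encoder: emits (count, byte) pairs, runs capped at 255.
       Illustrative only -- not the Basic Compression Library's actual API. */
    static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out) {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            unsigned char b = in[i];
            size_t run = 1;
            while (i + run < n && in[i + run] == b && run < 255) run++;
            out[o++] = (unsigned char)run;
            out[o++] = b;
            i += run;
        }
        return o;
    }

    int main(void) {
        const unsigned char data[] = "aaaaaaaabbbbcccccccccccc";
        unsigned char packed[2 * sizeof(data)];   /* worst case: 2 output bytes per input byte */
        size_t n = strlen((const char *)data);
        size_t packedLen = rle_encode(data, n, packed);
        printf("%zu -> %zu bytes\n", n, packedLen);
        return 0;
    }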

Libarchive - C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other


The libarchive project develops a portable, efficient C library that can read and write streaming archives in a variety of formats. It also includes implementations of the common tar, cpio, and zcat command-line tools that use the libarchive library.
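
A minimal sketch of the read side of the API, listing the entries of any archive format the library recognizes; it follows the usual archive_read_* call sequence.

    #include <stdio.h>
    #include <archive.h>
    #include <archive_entry.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s archive\n", argv[0]); return 1; }

        struct archive *a = archive_read_new();
        archive_read_support_filter_all(a);   /* gzip, bzip2, xz, ... */
        archive_read_support_format_all(a);   /* tar, cpio, zip, iso, ... */

        if (archive_read_open_filename(a, argv[1], 10240) != ARCHIVE_OK) {
            fprintf(stderr, "%s\n", archive_error_string(a));
            return 1;
        }

        struct archive_entry *entry;
        while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
            printf("%s\n", archive_entry_pathname(entry));
            archive_read_data_skip(a);        /* headers only; skip file contents */
        }

        archive_read_free(a);
        return 0;
    }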

QuickLZ - Fastest Compression Library in C, C# and Java


QuickLZ is the world's fastest compression library, reaching 308 Mbyte/s per core. It supports a streaming mode for an optimal compression ratio on small packets down to 200 to 300 bytes in size.
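
A minimal sketch, assuming the qlz_compress/qlz_decompress entry points from quicklz.h and the documented rule that the destination buffer needs 400 bytes of headroom; check the exact signatures against the QuickLZ version you use.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "quicklz.h"

    int main(void) {
        const char *src = "QuickLZ aims at raw speed. QuickLZ aims at raw speed.";
        size_t srcLen = strlen(src) + 1;

        /* destination must allow for srcLen + 400 bytes in the worst case */
        char *dst = malloc(srcLen + 400);
        qlz_state_compress *cstate = calloc(1, sizeof(qlz_state_compress));
        size_t cLen = qlz_compress(src, dst, srcLen, cstate);

        char *back = malloc(qlz_size_decompressed(dst));
        qlz_state_decompress *dstate = calloc(1, sizeof(qlz_state_decompress));
        size_t backLen = qlz_decompress(dst, back, dstate);

        printf("%zu -> %zu -> %zu bytes\n", srcLen, cLen, backLen);
        free(dst); free(back); free(cstate); free(dstate);
        return 0;
    }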

pigz - A parallel implementation of gzip for modern multi-processor, multi-core machines


pigz, which stands for parallel implementation of gzip, compresses using threads to make use of multiple processors and cores. The input is broken up into 128 KB chunks, each of which is compressed in parallel. The individual check value for each chunk is also calculated in parallel. The compressed data is written in order to the output, and a combined check value is calculated from the individual check values.
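
The "combined check value" step can be illustrated with zlib's crc32_combine: the CRC of two concatenated chunks can be computed from the chunks' individual CRCs, which is what lets each chunk be checksummed independently. A minimal sketch (sequential here, where pigz would do the per-chunk work in threads):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *a = "first chunk of input ";
        const char *b = "second chunk of input";

        /* per-chunk CRCs, computable independently (and thus in parallel) */
        uLong crc_a = crc32(0L, (const Bytef *)a, (uInt)strlen(a));
        uLong crc_b = crc32(0L, (const Bytef *)b, (uInt)strlen(b));

        /* combine them into the CRC of the concatenated data */
        uLong combined = crc32_combine(crc_a, crc_b, (z_off_t)strlen(b));

        /* same value as a single pass over the whole input */
        uLong whole = crc32(0L, (const Bytef *)a, (uInt)strlen(a));
        whole = crc32(whole, (const Bytef *)b, (uInt)strlen(b));

        printf("combined=%lx whole=%lx\n", combined, whole);
        return combined == whole ? 0 : 1;
    }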