
Squid - HTTP reverse proxy optimizes web delivery

  •    C++

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages. Squid has extensive access controls and makes a great server accelerator. Because cached content is served locally, users see faster downloads for frequently requested content.

ForestDB - A Fast Key-Value Storage Engine Based on Hierarchical B+-Tree Trie

  •    C++

ForestDB is a key-value storage engine developed by the Couchbase Caching and Storage Team. Its main index structure is a trie built from hierarchical B+-trees, called an HB+-Trie. The ForestDB paper has been published in IEEE Transactions on Computers.

Hyperdex - A Searchable Distributed Key-Value Store

  •    C++

HyperDex is a distributed, searchable key-value store. It provides a unique search primitive that enables searches over stored values. By design, HyperDex retains the performance of traditional key-value stores while adding support for the search operation. It is fast, scalable, consistent, and fault tolerant.
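The difference between plain key lookup and HyperDex's search primitive can be illustrated with a short conceptual sketch (this is not the HyperDex API, just a hypothetical C++ class showing the idea): a traditional key-value store only answers get-by-key, while a searchable store can also return all entries whose values match a predicate.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Conceptual sketch of a "searchable" key-value store (not the HyperDex
// API): besides the usual get-by-key, it can query stored values.
struct SearchableStore {
    std::map<std::string, int> data;

    void put(const std::string& key, int value) { data[key] = value; }

    // Classic key-value lookup.
    const int* get(const std::string& key) const {
        auto it = data.find(key);
        return it == data.end() ? nullptr : &it->second;
    }

    // Search primitive: return keys whose values satisfy a predicate.
    // HyperDex makes this efficient through its data layout; this naive
    // sketch simply scans every stored value.
    std::vector<std::string> search(const std::function<bool(int)>& pred) const {
        std::vector<std::string> hits;
        for (const auto& kv : data)
            if (pred(kv.second)) hits.push_back(kv.first);
        return hits;
    }
};
```

The point of the sketch is the extra `search` operation; retaining key-value performance while making such searches fast is the part HyperDex's design addresses.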

Coral CDN - Content Distribution Network

  •    C++

Coral is a peer-to-peer content distribution network. Sites that run Coral automatically replicate content. Using modern peer-to-peer indexing techniques, CoralCDN will efficiently find a cached object if it exists anywhere in the network.

libmc - Fast and light-weight memcached client for C++, Python, and Go

  •    C++

libmc is a memcached client library for Python with no other runtime dependencies. It is mainly written in C++ and Cython and can be considered a drop-in replacement for libmemcached and python-libmemcached. libmc is developed and maintained by Douban Inc. It currently runs in production, powering all web traffic on douban.com. Real-time benchmark results are available on Travis.

dyld_cache_extract - A macOS utility to extract dynamic libraries from the dyld_shared_cache of macOS and iOS

  •    C++

A macOS utility to extract dynamic libraries from the dyld_shared_cache of macOS and iOS. The project is available as a macOS application (with GUI) and as a command line tool.

node-shared-cache - An interprocess shared LRU cache module for Node.JS

  •    C++

It supports automatic memory management and fast object serialization, using a hashmap and an LRU cache internally to maintain its contents. You can install it with npm: npm i node-shared-cache.

easyhttpcpp - A cross-platform HTTP client library with a focus on usability and speed

  •    C++

A cross-platform HTTP client library with a focus on usability and speed. Under the hood, EasyHttp uses the POCO C++ Libraries and derives many of its design inspirations from okHttp, a well-known HTTP client for Android and Java applications. Please check out the Wiki for details. Modern network applications need a powerful HTTP client. While there are already many well-known C++ HTTP clients, such as Poco::Net, Boost.Asio, and cpprestsdk, they often lack features like a powerful response cache, HTTP connection pooling, and debuggability, which are taken for granted in libraries targeting Android or iOS. EasyHttp tries to fill that gap.

sphinx - A fast in-memory key-value store, compatible with Memcached

  •    C++

Sphinx is a fast in-memory key-value store that is compatible with the Memcached wire protocol. Sphinx partitions data between logical cores, similar to MICA (Lim et al., 2014), so that a specific core manages each key. Sphinx also partitions connection sockets between cores. If a remote core manages a request key, Sphinx uses message passing to execute the request on that core. To manage key-value pairs, Sphinx uses an in-memory, log-structured memory allocator, similar to RAMCloud (Rumble et al., 2014).
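The key-to-core partitioning described above can be sketched in a few lines (a hypothetical helper, not Sphinx's actual code): the owning core is a pure function of the key's hash, so every core computes the same owner and knows when a request must be forwarded to a remote core.

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Sketch of key-to-core partitioning in the style Sphinx describes
// (hypothetical helper, not Sphinx source): the owning core for a key
// is derived deterministically from the key's hash, so any core can
// compute which core manages a given key.
std::size_t owning_core(const std::string& key, std::size_t num_cores) {
    return std::hash<std::string>{}(key) % num_cores;
}

// A core receiving a request checks ownership; when the key belongs to
// a remote core, the request would be sent there via message passing.
bool is_local(const std::string& key, std::size_t my_core, std::size_t num_cores) {
    return owning_core(key, num_cores) == my_core;
}
```

Because the mapping is deterministic and shared-nothing per core, no locks are needed on the key-value data itself; only the message-passing channel between cores is shared.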

caches - LRU, LFU, FIFO cache C++ implementations

  •    C++

Using this library is simple: include the header with the cache implementation (the cache.hpp file) and, if needed, the header with the desired cache policy. If no policy is specified, a non-specialized algorithm is used, which removes the element whose key comes last in the internal container. The only requirement is a C++11-compatible compiler.

lru-cache - :dizzy: A feature complete LRU cache implementation in C++

  •    C++

A feature-complete LRU cache implementation in C++. A least recently used (LRU) cache is a fixed-size cache that behaves just like a regular lookup table but remembers the order in which elements are accessed. Once its (user-defined) capacity is reached, it uses this information to replace the least recently used element with a newly inserted one. This is ideal for caching function return values, where fast lookup of expensive computations is desirable but a memory blowup from caching all (input, output) pairs must be avoided.
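The mechanism described above can be sketched in a minimal C++ class (an illustration of the standard technique, not this library's implementation): a doubly linked list keeps entries in recency order, and a hash map points each key at its list node, giving O(1) lookup, reordering, and eviction.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal LRU cache sketch (not this library's code): the list holds
// entries in recency order (front = most recently used), and the map
// gives O(1) access to each key's list node.
class LruCache {
    std::size_t capacity_;
    std::list<std::pair<std::string, int>> items_;
    std::unordered_map<std::string,
                       std::list<std::pair<std::string, int>>::iterator> index_;

public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    void put(const std::string& key, int value) {
        auto it = index_.find(key);
        if (it != index_.end()) items_.erase(it->second);  // replace existing
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
        if (items_.size() > capacity_) {  // evict the least recently used
            index_.erase(items_.back().first);
            items_.pop_back();
        }
    }

    // Returns nullptr on a miss; a hit moves the entry to the front,
    // marking it as most recently used.
    const int* get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return nullptr;
        items_.splice(items_.begin(), items_, it->second);
        return &it->second->second;
    }
};
```

Note that std::list::splice relocates a node without invalidating iterators, which is what keeps the map entries valid as the recency order changes.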