HyperMinHash-java - Union, intersection, and set cardinality in loglog space


HyperMinHash is a probabilistic data structure that can approximate union, intersection, and set cardinalities as well as Jaccard indices of very large sets with high accuracy, in loglog space, and in a streaming fashion.

https://github.com/LiveRamp/HyperMinHash-java

Dependencies:

org.apache.commons:commons-math3:3.4.1
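A minimal usage sketch of the idea. This is hypothetical: the class name, the (p, r) precision parameters, and the offer/cardinality/union/intersection methods are assumptions modeled on typical sketch libraries, not the repository's confirmed API; see its README for the real interface.

```java
import java.nio.charset.StandardCharsets;

public class HyperMinHashSketchExample {
  public static void main(String[] args) {
    // Hypothetical API, not confirmed against HyperMinHash-java.
    HyperMinHash left = new HyperMinHash(14, 10);   // assumed (p, r) precision params
    HyperMinHash right = new HyperMinHash(14, 10);

    for (int i = 0; i < 1_000_000; i++) {
      left.offer(("user-" + i).getBytes(StandardCharsets.UTF_8));
    }
    for (int i = 500_000; i < 1_500_000; i++) {
      right.offer(("user-" + i).getBytes(StandardCharsets.UTF_8));
    }

    // All three estimates come from the fixed-size sketches alone; the key
    // identity behind MinHash-style intersection is |A ∩ B| = J(A, B) * |A ∪ B|.
    System.out.println(left.cardinality());                            // ~1,000,000
    System.out.println(HyperMinHash.union(left, right).cardinality()); // ~1,500,000
    System.out.println(HyperMinHash.intersection(left, right));        // ~500,000
  }
}
```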


Related Projects

hyperloglog - HyperLogLog with lots of sugar (Sparse, LogLog-Beta bias correction and TailCut space reduction)

  •    Go

An improved version of HyperLogLog for the count-distinct problem, approximating the number of distinct elements in a multiset using 20-50% less space than typical HyperLogLog implementations. This work is based on "Better with fewer bits: Improving the performance of cardinality estimation of large data streams" by Qingjun Xiao, You Zhou, and Shigang Chen.

ccard-lib - C library for estimating cardinality in streams for which it is infeasible to store all events in memory

  •    C

C library for estimating cardinality in data streams for which it is infeasible to store all events in memory. The library implements a series of cardinality estimation algorithms such as Linear Counting, LogLog Counting, HyperLogLog Counting, and Adaptive Counting. For more information about these algorithms, please read the Reference section.
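Of the algorithms listed, Linear Counting is the simplest to sketch. A minimal Java illustration of the estimator (ccard-lib itself is C; this is not its API):

```java
import java.util.BitSet;

/** A minimal Linear Counting sketch: estimate n as -m * ln(z / m). */
public class LinearCounter {
  private final int m;        // bitmap size in bits
  private final BitSet bits;

  public LinearCounter(int m) {
    this.m = m;
    this.bits = new BitSet(m);
  }

  public void offer(String item) {
    // Spread hashCode's bits before reducing modulo m; a production
    // version would use a stronger hash such as Murmur3.
    int h = item.hashCode() * 0x9E3779B9;
    bits.set(Math.floorMod(h, m));
  }

  /** z is the number of still-unset bits; as items arrive, z shrinks. */
  public double estimate() {
    int zeros = m - bits.cardinality();
    if (zeros == 0) return Double.POSITIVE_INFINITY; // bitmap saturated
    return -m * Math.log((double) zeros / m);
  }
}
```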

datasketch - MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++

  •    Python

datasketch gives you probabilistic data structures that can process and search very large amounts of data super fast, with little loss of accuracy. datasketch requires Python 2.7 or above and NumPy 1.11 or above. SciPy is optional, but with it LSH initialization can be much faster.
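To make the MinHash part concrete, here is a toy version of the Jaccard estimate in Java (illustrative only; datasketch's actual Python API differs):

```java
import java.util.Arrays;

/** Toy MinHash: the fraction of matching per-slot minima estimates J(A, B). */
public class MinHash {
  private final long[] mins;
  private final long[] seeds;

  // Two sketches are comparable only if built with the same k and seedBase.
  public MinHash(int k, long seedBase) {
    mins = new long[k];
    seeds = new long[k];
    Arrays.fill(mins, Long.MAX_VALUE);
    for (int i = 0; i < k; i++) seeds[i] = seedBase + i * 0x9E3779B97F4A7C15L;
  }

  public void offer(String item) {
    for (int i = 0; i < mins.length; i++) {
      long h = mix(item.hashCode() ^ seeds[i]); // one "permutation" per slot
      if (h < mins[i]) mins[i] = h;
    }
  }

  public static double jaccard(MinHash a, MinHash b) {
    int matches = 0;
    for (int i = 0; i < a.mins.length; i++) if (a.mins[i] == b.mins[i]) matches++;
    return (double) matches / a.mins.length;
  }

  private static long mix(long z) { // SplitMix64 finalizer
    z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
    z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
    return z ^ (z >>> 31);
  }
}
```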

hlld - C network daemon for HyperLogLogs

  •    C

hlld is a high-performance C server which exposes HyperLogLog sets and operations over them to networked clients. It uses a simple, human-readable ASCII protocol similar to memcached. HyperLogLogs are a relatively new sketching data structure. They are used to estimate cardinality, i.e. the number of unique items in a set. They are based on the observation that any bit in a "good" hash function is independent of any other bit, and that the probability of getting a string of N bits all set to the same value is 1/(2^N). There is a lot more to the math, but that is the basic intuition. What is even more incredible is that the storage required to do the counting is log(log(N)). So with a 6-bit register, we can count well into the trillions. For more information, it's best to read the papers referenced at the end.
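The bit-run intuition is easy to check empirically. A small Java experiment (not hlld code, which is C):

```java
import java.util.SplittableRandom;

public class RegisterIntuition {
  public static void main(String[] args) {
    SplittableRandom rnd = new SplittableRandom(42);
    // rho = position of the first 1-bit in a uniformly random 64-bit hash.
    // P(rho >= k) = 1/2^(k-1), so the largest rho seen over a stream grows
    // like log2(n), and rho itself fits in a 6-bit register; that is where
    // the log(log(N)) storage bound comes from.
    int maxRho = 0;
    long n = 10_000_000L;
    for (long i = 0; i < n; i++) {
      int rho = Long.numberOfLeadingZeros(rnd.nextLong()) + 1;
      maxRho = Math.max(maxRho, rho);
    }
    // HyperLogLog averages many such registers (harmonically) rather than
    // relying on a single maximum, which is what makes the estimate stable.
    System.out.printf("n = %d, max rho = %d (log2(n) ~ %.1f)%n",
        n, maxRho, Math.log(n) / Math.log(2));
  }
}
```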

hyperloglog - HyperLogLog and HyperLogLog++ implementation in Go/Golang.

  •    Go

Implements the HyperLogLog and HyperLogLog++ algorithms.
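For reference, the estimator at the core of both algorithms, sketched in Java rather than Go to match the main project's language (HyperLogLog++ additionally applies bias correction and a sparse representation):

```java
/** The standard HyperLogLog estimate, computed from a register array. */
public final class HllEstimate {
  /** alpha_m for m >= 128, per Flajolet et al. */
  static double alpha(int m) { return 0.7213 / (1 + 1.079 / m); }

  static double estimate(int[] registers) {
    int m = registers.length;
    double sum = 0;
    int zeros = 0;
    for (int r : registers) {
      sum += Math.pow(2.0, -r);   // harmonic mean of 2^register
      if (r == 0) zeros++;
    }
    double raw = alpha(m) * m * m / sum;
    // Small-range correction: fall back to Linear Counting while many
    // registers are still zero.
    if (raw <= 2.5 * m && zeros > 0) {
      return m * Math.log((double) m / zeros);
    }
    return raw;
  }
}
```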


postgresql-hll - PostgreSQL extension adding HyperLogLog data structures as a native data type

  •    C

This Postgres module introduces a new data type hll which is a HyperLogLog data structure. HyperLogLog is a fixed-size, set-like structure used for distinct value counting with tunable precision. For example, in 1280 bytes hll can estimate the count of tens of billions of distinct values with only a few percent error. In addition to the algorithm proposed in the original paper, this implementation is augmented to improve its accuracy and memory use without sacrificing much speed. See below for more details.
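A usage sketch over JDBC, using the hll functions the extension documents (hll_empty, hll_add, hll_hash_integer, hll_cardinality); the connection details and table name are placeholders, and the PostgreSQL JDBC driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HllExample {
  public static void main(String[] args) throws Exception {
    try (Connection c = DriverManager.getConnection(
            "jdbc:postgresql://localhost/test", "user", "secret"); // placeholders
         Statement s = c.createStatement()) {
      s.execute("CREATE EXTENSION IF NOT EXISTS hll"); // needs privileges
      s.execute("CREATE TABLE IF NOT EXISTS daily_uniques (day date, users hll)");
      // hll_add folds a hashed value into the fixed-size sketch.
      s.execute("INSERT INTO daily_uniques VALUES " +
                "(CURRENT_DATE, hll_add(hll_empty(), hll_hash_integer(12345)))");
      // hll_cardinality reads the distinct-count estimate back out.
      try (ResultSet rs = s.executeQuery(
              "SELECT hll_cardinality(users) FROM daily_uniques")) {
        while (rs.next()) System.out.println(rs.getDouble(1));
      }
    }
  }
}
```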

stream-lib - Stream summarizer and cardinality estimator.

  •    Java

A library for summarizing data in streams for which it is infeasible to store all events.

skizze - A probabilistic data structure service and storage

  •    Go

Skizze ([ˈskɪt͡sə]: German for sketch) is a sketch data store for problems around counting and sketching using probabilistic data structures. Unlike a key-value store, Skizze does not store values, but rather appends values to defined sketches, allowing one to solve frequency and cardinality queries in near O(1) time, with a minimal memory footprint.

logswan - Fast Web log analyzer using probabilistic data structures

  •    C

Logswan is a fast Web log analyzer using probabilistic data structures. It is targeted at very large log files, typically API logs. It has constant memory usage regardless of log file size, and takes approximately 4 MB of RAM. Unique visitor counting is performed using two HyperLogLog counters (one for IPv4 and another for IPv6), providing a relative accuracy of 0.10%. String representations of IP addresses are used, as they offer better precision.

Redisson - Redis based In-Memory Data Grid for Java

  •    Java

Redisson - distributed and scalable Java objects and services on top of Redis: Set, Multimap, SortedSet, Map, List, Queue, BlockingQueue, Deque, BlockingDeque, Semaphore, Lock, AtomicLong, CountDownLatch, HyperLogLog, Bloom filter, Map Reduce, Publish / Subscribe, Spring Cache, Executor service, Tomcat Session Manager, Scheduler service, and the JCache API. It supports more than 28 data structures and services, with synchronous, asynchronous, and reactive interfaces. A rich Redis client.
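A minimal sketch of the HyperLogLog wrapper, based on Redisson's documented RHyperLogLog interface (the server address is a placeholder):

```java
import org.redisson.Redisson;
import org.redisson.api.RHyperLogLog;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonHllExample {
  public static void main(String[] args) {
    Config config = new Config();
    config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // placeholder
    RedissonClient redisson = Redisson.create(config);

    RHyperLogLog<String> visitors = redisson.getHyperLogLog("visitors");
    visitors.add("alice");
    visitors.add("bob");
    visitors.add("alice");                 // duplicates do not inflate the count
    System.out.println(visitors.count());  // estimated distinct count, ~2

    redisson.shutdown();
  }
}
```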

walrus - Lightweight Python utilities for working with Redis

  •    Python

Lightweight Python utilities for working with Redis. The purpose of walrus is to make working with Redis in Python a little easier by wrapping rich objects in Pythonic containers. It consists of wrappers for Redis object types like Hash, List, Set, Sorted Set, HyperLogLog, and Array.

BoomFilters - Probabilistic data structures for processing continuous, unbounded streams.

  •    Go

Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash. Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.

bloom - Go package implementing Bloom filters

  •    Go

A Bloom filter is a representation of a set of n items, where the main requirement is to make membership queries, i.e., whether an item is a member of the set. A Bloom filter has two parameters: m, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and k, the number of hashing functions on elements of the set. (The actual hashing functions are important too, but they are not a parameter of this implementation.) A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo m). Set membership is tested by checking whether the bits at each value of the hashing functions (again, modulo m) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0), but it is susceptible to false positives. The art is to choose k and m correctly, as shown in the sketch below.
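Choosing k and m comes down to two standard formulas. A small Java helper (generic Bloom filter math, not this Go package's API):

```java
/** Choosing Bloom filter parameters m (bits) and k (hash functions). */
public final class BloomSizing {
  /** Bits needed for n items at false-positive rate p: m = -n ln p / (ln 2)^2. */
  static long bits(long n, double p) {
    return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
  }

  /** Optimal hash count: k = (m / n) ln 2. */
  static int hashes(long m, long n) {
    return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
  }

  public static void main(String[] args) {
    long n = 1_000_000;  // expected set size
    double p = 0.01;     // target false-positive rate
    long m = bits(n, p);
    // Roughly 9.6 bits per element and k = 7 for a 1% false-positive rate.
    System.out.printf("m = %d bits (~%.1f MiB), k = %d%n",
        m, m / 8.0 / (1 << 20), hashes(m, n));
  }
}
```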

ResNeXt - Implementation of a classification framework from the paper Aggregated Residual Transformations for Deep Neural Networks

  •    Lua

This repository contains a Torch implementation for the ResNeXt algorithm for image classification. The code is based on fb.resnet.torch. ResNeXt is a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call “cardinality” (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width.

Gato (Graph Animation Toolbox)

  •    Python

Gato (Graph Animation Toolbox): animate graph algorithms, for example computing shortest paths, minimal spanning trees, maximum flows, or maximum-cardinality or maximum-weight matchings. Create your own animations using the Animated Data Structures (ADS).

benchm-ml - A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.)

  •    R

This project aims at a minimal benchmark for scalability, speed and accuracy of commonly used implementations of a few machine learning algorithms. The target of this study is binary classification with numeric and categorical inputs (of limited cardinality, i.e. not very sparse) and no missing data, perhaps the most common problem in business applications (e.g. credit scoring, fraud detection or churn prediction). If the input matrix is n x p, n is varied as 10K, 100K, 1M, 10M, while p is ~1K (after expanding the categoricals into dummy variables/one-hot encoding). This particular type of data structure/size (the largest) stems from this author's interest in some particular business applications. Note: while a large part of this benchmark was done in Spring 2015, reflecting the state of ML implementations at that time, this repo is updated when implementations change significantly or new ones become widely available (e.g. lightgbm). Also, please find a summary of the progress and learnings from this benchmark at the end of this repo.

Akumuli - Time-series database

  •    C++

Akumuli is a time-series database for modern hardware. It can be used to capture, store and process time-series data in real-time. The word "akumuli" can be translated from Esperanto as "accumulate".

tf-pose-estimation - Deep Pose Estimation implemented using Tensorflow with Custom Architectures for fast inference

  •    Python

'OpenPose' human pose estimation has been implemented using TensorFlow. The project also provides several variants that change the network structure for real-time processing on the CPU or on low-power embedded devices. 2018.5.21: the post-processing part is implemented in C++; compiling that part is required. See: https://github.com/ildoonet/tf-pose-estimation/tree/master/src/pafprocess 2018.2.7: arguments in the run.py script changed; dynamic input size is supported.