go-datastructures

Go-datastructures is a collection of useful, performant, and threadsafe Go data structures. Among them is an interval tree for collision detection in n-dimensional ranges, implemented as an augmented red-black tree. Extra dimensions are handled in simultaneous inserts/queries to save space, although this may result in suboptimal time complexity. Intersections are determined using bit arrays. In a single dimension, inserts, deletes, and queries run in O(log n) time.

https://github.com/Workiva/go-datastructures
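
As a rough illustration of what the interval tree accelerates, the sketch below shows the 1-D intersection test on a plain slice; the types and names are my own and are not the go-datastructures API. The augmented red-black tree replaces the linear scan with an O(log n + k) query.

```go
package main

import "fmt"

// interval is a simple 1-D closed interval used only for this illustration;
// it is not a go-datastructures type.
type interval struct{ low, high int64 }

// overlaps reports whether two 1-D intervals intersect.
func overlaps(a, b interval) bool {
	return a.low <= b.high && b.low <= a.high
}

func main() {
	stored := []interval{{1, 5}, {10, 20}, {15, 25}}
	query := interval{12, 16}

	// A naive linear scan is O(n); an augmented red-black tree, as used by
	// go-datastructures, answers the same query in O(log n + k).
	for _, iv := range stored {
		if overlaps(iv, query) {
			fmt.Printf("query %v intersects %v\n", query, iv)
		}
	}
}
```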

Related Projects

golang-set - A simple set type for the Go language. Also used in Docker.

  •    Go

The missing set collection for the Go language. Until Go has sets built in... use this. Credit for helping seed the idea goes to a post on Stack Overflow.
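
The underlying idiom that golang-set wraps is the familiar "map as set" trick from that Stack Overflow discussion. The snippet below is the bare idiom only, not the library's API; the library layers a dedicated Set type with operations such as union and intersection on top of it.

```go
package main

import "fmt"

func main() {
	// A map with empty-struct values acts as a set.
	set := map[string]struct{}{}

	set["red"] = struct{}{}
	set["green"] = struct{}{}

	_, ok := set["red"]
	fmt.Println("contains red:", ok) // true

	delete(set, "red")
	_, ok = set["red"]
	fmt.Println("contains red:", ok) // false
}
```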

immutable - Immutable collections for Go

  •    Go

This repository contains immutable collection types for Go, including List, Map, and SortedMap implementations. Immutable collections can provide efficient, lock-free sharing of data by requiring that edits to a collection return a new collection. The collection types in this library are meant to mimic Go built-ins such as slice and map. The primary usage difference is that immutable collections always return a new collection on mutation, so you will need to save the new reference.
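
A minimal sketch of that copy-on-write contract, using a plain slice rather than the library's types (real immutable collections use structural sharing instead of full copies):

```go
package main

import "fmt"

// appendCopy never modifies its input; callers must keep the returned value,
// mirroring the "save the new reference" rule described above.
func appendCopy(xs []int, v int) []int {
	out := make([]int, len(xs), len(xs)+1)
	copy(out, xs)
	return append(out, v)
}

func main() {
	a := []int{1, 2, 3}
	b := appendCopy(a, 4)

	fmt.Println(a) // [1 2 3]   -- original untouched
	fmt.Println(b) // [1 2 3 4] -- new reference must be saved
}
```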

gota - Gota: DataFrames and data wrangling in Go (Golang)

  •    Go

This is an implementation of DataFrames, Series, and data-wrangling methods for the Go programming language. The API is still in flux, so use at your own risk. The term DataFrame typically refers to a tabular dataset that can be viewed as a two-dimensional table. Often the columns of such a dataset correspond to a list of features, while the rows represent individual measurements. Because real-world data is not perfect, DataFrames support missing measurements (NaN elements).
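
Conceptually, a DataFrame column is a series of values in which NaN marks a missing measurement. The sketch below uses a plain float64 slice to show that idea; it is not the gota Series type.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// One "column" of measurements; NaN stands in for a missing value.
	temperature := []float64{21.5, math.NaN(), 23.1, 22.8}

	sum, n := 0.0, 0
	for _, v := range temperature {
		if math.IsNaN(v) {
			continue // skip missing measurements
		}
		sum += v
		n++
	}
	fmt.Printf("mean of %d valid measurements: %.2f\n", n, sum/float64(n))
}
```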

gods - GoDS (Go Data Structures)

  •    Go

Implementation of various data structures and algorithms in Go. Containers are either ordered or unordered. All ordered containers provide stateful iterators, and some of them allow enumerable functions.
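
A stateful iterator keeps its position between calls, so traversal is driven by the caller. The sketch below is illustrative only and does not reproduce the gods interfaces.

```go
package main

import "fmt"

// iterator is a minimal stateful iterator of the kind ordered containers
// typically expose: Next advances and reports whether a value is available.
type iterator struct {
	items []string
	index int
}

func (it *iterator) Next() bool {
	it.index++
	return it.index < len(it.items)
}

func (it *iterator) Value() string { return it.items[it.index] }

func main() {
	it := &iterator{items: []string{"a", "b", "c"}, index: -1}
	for it.Next() {
		fmt.Println(it.Value())
	}
}
```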

BoomFilters - Probabilistic data structures for processing continuous, unbounded streams.

  •    Go

Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash. Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.
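
A back-of-the-envelope sketch of that "filling up" effect, using the standard false-positive approximation (1 - e^(-kn/m))^k for a fixed-size filter; the numbers here are arbitrary assumptions for illustration.

```go
package main

import (
	"fmt"
	"math"
)

// falsePositiveRate is the textbook approximation for a classic Bloom filter
// with m bits, k hash functions, and n inserted items.
func falsePositiveRate(m, k, n float64) float64 {
	return math.Pow(1-math.Exp(-k*n/m), k)
}

func main() {
	const m, k = 1 << 20, 7.0 // fixed-size filter: ~1M bits, 7 hashes
	for _, n := range []float64{1e4, 1e5, 1e6, 1e7} {
		fmt.Printf("n=%.0e  p=%.4f\n", n, falsePositiveRate(m, k, n))
	}
	// As n grows past the filter's design capacity, p approaches 1:
	// this is what Stable and Scalable Bloom filters are built to avoid.
}
```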


cosmos - Algorithms that run our universe | Your personal library of every algorithm and data structure code that you will ever encounter | Ask us anything at our forum

  •    C++

Cosmos is your personal offline collection of every algorithm and data structure you will ever encounter and use in a lifetime. It provides solutions in various languages spanning C, C++, Java, JavaScript, Swift, Python, Go, and others. The project is maintained by a community of hundreds of people and is a massive collaborative effort to bring readily available coding knowledge offline.

bitset - Go package implementing bitsets

  •    Go

Package bitset implements bitsets, a mapping between non-negative integers and boolean values. It should be more efficient than map[uint]bool. It provides methods for setting, clearing, flipping, and testing individual integers.
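
The core idea is one bit per non-negative integer, packed into uint64 words. The sketch below shows that representation with Set/Test only; it is an illustration, not the package's API (the real package adds flipping, iteration, set operations, and more).

```go
package main

import "fmt"

// bitSet packs one bit per index into uint64 words.
type bitSet struct{ words []uint64 }

func (b *bitSet) Set(i uint) {
	for uint(len(b.words)) <= i/64 {
		b.words = append(b.words, 0) // grow to cover the word holding bit i
	}
	b.words[i/64] |= 1 << (i % 64)
}

func (b *bitSet) Test(i uint) bool {
	if i/64 >= uint(len(b.words)) {
		return false
	}
	return b.words[i/64]&(1<<(i%64)) != 0
}

func main() {
	var b bitSet
	b.Set(3)
	b.Set(200)
	fmt.Println(b.Test(3), b.Test(4), b.Test(200)) // true false true
}
```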

bloom - Go package implementing Bloom filters

  •    Go

A Bloom filter is a representation of a set of n items, where the main requirement is to make membership queries; i.e., whether an item is a member of a set. A Bloom filter has two parameters: m, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and k, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo m). Set membership is done by testing whether the bits at each value of the hashing functions (again, modulo m) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose k and m correctly.
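
Choosing m and k "correctly" follows standard formulas: m = -n·ln(p)/(ln 2)² bits and k = (m/n)·ln 2 hash functions for n expected items and target false-positive rate p. The helper below is my own sketch of that calculation, not this package's API (though the package documents a similar estimator).

```go
package main

import (
	"fmt"
	"math"
)

// estimateParameters returns the textbook Bloom filter sizing for n items
// and false-positive probability p.
func estimateParameters(n uint, p float64) (m uint, k uint) {
	mf := math.Ceil(-float64(n) * math.Log(p) / (math.Ln2 * math.Ln2))
	kf := math.Round(mf / float64(n) * math.Ln2)
	return uint(mf), uint(kf)
}

func main() {
	m, k := estimateParameters(1_000_000, 0.01)
	fmt.Printf("1M items at 1%% FPR: m=%d bits (~%.1f MB), k=%d hashes\n",
		m, float64(m)/8/1e6, k)
}
```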

hyperloglog - HyperLogLog with lots of sugar (Sparse, LogLog-Beta bias correction and TailCut space reduction)

  •    Go

An improved version of HyperLogLog for the count-distinct problem, approximating the number of distinct elements in a multiset using 20-50% less space than other usual HyperLogLog implementations. This work is based on the paper "Better with fewer bits: Improving the performance of cardinality estimation of large data streams" by Qingjun Xiao, You Zhou, and Shigang Chen.
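
The intuition behind HyperLogLog is that the longest run of leading zero bits seen among hashed items grows with the number of distinct items. The toy below is a single-register, Flajolet-Martin-style illustration of that idea only; real HyperLogLog keeps many registers and applies the bias corrections (LogLog-Beta, sparse encoding, TailCut) that this library implements.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/bits"
)

func main() {
	maxZeros := 0
	for i := 0; i < 100000; i++ {
		h := fnv.New64a()
		fmt.Fprintf(h, "item-%d", i) // hash each distinct element
		if z := bits.LeadingZeros64(h.Sum64()); z > maxZeros {
			maxZeros = z
		}
	}
	// Crude cardinality estimate: 2^maxZeros (very high variance on its own).
	fmt.Printf("max leading zeros: %d, rough estimate: %d\n",
		maxZeros, uint64(1)<<uint(maxZeros))
}
```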

Capsule - The Capsule Hash Trie Collections Library

  •    Java

Capsule aims to become a full-fledged (immutable) collections library for Java 8+ that is solely built around persistent tries. The library is designed for standalone use and for being embedded in domain-specific languages. Capsule still has to undergo some incubation before it can ship as a well-rounded collection library. Nevertheless, the code is stable and performance is solid.

pyrsistent - Persistent/Immutable/Functional data structures for Python

  •    Python

Pyrsistent is a collection of persistent data structures (sometimes called functional data structures). Persistent in the sense that they are immutable: all methods that would normally mutate a structure instead return a new copy containing the requested updates, leaving the original untouched.

Data-Structures-and-Algorithms - Data Structures and Algorithms implementation in Go

  •    Go

Several data structures and algorithms are implemented in this project, and the list will grow over time. The library is not intended for direct use via import; the authors recommend copying the necessary implementations and adapting them to your case.

Eclipse Collections - Best Java Collection Framework

  •    Java

Eclipse Collections is a collections framework for Java. It has JDK-compatible List, Set, and Map implementations with a rich API, additional types not found in the JDK such as Bags and Multimaps, and a set of utility classes that work with any JDK-compatible Collections, Arrays, Maps, or Strings. The iteration protocol was inspired by the Smalltalk collection framework, and the APIs make full use of Java 8 lambda expressions and method references directly on your collections.

bifurcan - impure functional data structures

  •    Java

This library provides high-quality Java implementations of mutable and immutable data structures, each sharing a common API. Rather than using the existing collection interfaces in java.util such as List or Map, it provides its own interfaces (IList, IMap, ISet) that provide functional semantics - each update to a collection returns a reference to a new collection.

set - Set data structure for Go

  •    Go

Set is a basic and simple, hash-based set data structure implementation in Go (Golang). It provides both threadsafe and non-threadsafe implementations of a generic set. The thread safety encompasses all operations on a single set; operations on multiple sets are consistent in that the elements of each set used were valid at exactly one point in time between the start and the end of the operation. Because it is thread safe, you can use it concurrently from your goroutines.
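
A minimal sketch of the threadsafe flavour described above, guarding every operation on one set with a single mutex. The type and method names here are illustrative assumptions, not this package's implementation or API.

```go
package main

import (
	"fmt"
	"sync"
)

// safeSet guards a hash-based set with one RWMutex so that all operations on
// this single set are safe to call from multiple goroutines.
type safeSet struct {
	mu sync.RWMutex
	m  map[string]struct{}
}

func newSafeSet() *safeSet { return &safeSet{m: map[string]struct{}{}} }

func (s *safeSet) Add(v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[v] = struct{}{}
}

func (s *safeSet) Has(v string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	_, ok := s.m[v]
	return ok
}

func main() {
	s := newSafeSet()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) { defer wg.Done(); s.Add(fmt.Sprintf("item-%d", i)) }(i)
	}
	wg.Wait()
	fmt.Println(s.Has("item-2")) // true
}
```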

mapstructure - Go library for decoding generic map values into native Go structures.

  •    Go

mapstructure is a Go library for decoding generic map values to structures and vice versa, while providing helpful error handling. This library is most useful when decoding values from some data stream (JSON, Gob, etc.) where you don't quite know the structure of the underlying data until you read a part of it. You can therefore read a map[string]interface{} and use this library to decode it into the proper underlying native Go structure.
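
A small usage sketch, assuming the library's documented Decode entry point and its default case-insensitive matching of map keys to struct fields; the Person type and the input data are made up for illustration.

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// Person is a hypothetical target structure for this example.
type Person struct {
	Name string
	Age  int
}

func main() {
	// Data whose shape is only known at runtime, e.g. freshly decoded JSON.
	input := map[string]interface{}{
		"name": "Alice",
		"age":  30,
	}

	var p Person
	if err := mapstructure.Decode(input, &p); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p) // {Name:Alice Age:30}
}
```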

DynamicData - Reactive collections based on Rx.Net

  •    CSharp

Dynamic Data is a portable class library that brings the power of Reactive Extensions (Rx) to collections. Rx is extremely powerful but out of the box provides nothing to assist with managing collections. In most applications there is a need to update collections dynamically: typically a collection is loaded and, after the initial load, asynchronous updates are received that the original collection must reflect. In simple scenarios the code is simple, but typical applications are much more complicated and may apply a filter, transform the original DTO, and apply a sort. Even with these simple everyday operations the complexity of the code is quickly magnified. Dynamic Data was developed to remove the tedious code of dynamically maintaining collections. It has grown to become functionally very rich, with at least 60 collection-based operations that, among other things, enable filtering, sorting, grouping, joining different sources, transforms, binding, pagination, data virtualisation, expiration, and disposal management.
