Atomix - Scalable, fault-tolerant distributed systems protocols and primitives for the JVM

  •    Java

Atomix is an event-driven framework for coordinating fault-tolerant distributed systems built on the Raft consensus algorithm. It provides the building blocks that solve many common distributed systems problems including group membership, leader election, distributed concurrency control, partitioning, and replication.

braft - An industrial-grade C++ implementation of RAFT consensus algorithm based on brpc, widely used inside Baidu to build highly-available distributed systems

  •    C++

An industrial-grade C++ implementation of the RAFT consensus algorithm and replicated state machine based on brpc. braft is designed and implemented for scenarios that demand heavy workloads and low latency overhead, with concepts kept easy to understand so that engineers inside Baidu can build their own distributed systems independently and correctly. To get started, build brpc, which is braft's main dependency.

dragonboat - A feature complete and high performance multi-group Raft library in Go.

  •    Go

Dragonboat is a high performance multi-group Raft consensus library in Go with C++11 binding support. Consensus algorithms such as Raft provide fault tolerance by allowing a system to continue operating as long as a majority of its member servers are available. For example, a Raft cluster of 5 servers can make progress even if 2 servers fail. To clients, the cluster appears as a single node that always provides strong data consistency. All running servers can be used to initiate read requests for aggregated read throughput.
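As a rough illustration of the majority rule mentioned above (illustrative arithmetic only, not part of the Dragonboat API): a cluster of n servers needs floor(n/2) + 1 of them reachable to make progress, so it tolerates the failure of the remaining minority.

```python
# Illustrative Raft quorum arithmetic; not Dragonboat code.
def quorum(cluster_size: int) -> int:
    """Minimum number of servers that must be reachable for progress."""
    return cluster_size // 2 + 1

for n in (3, 5, 7):
    print(f"{n} servers: quorum {quorum(n)}, tolerates {n - quorum(n)} failed servers")
```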

PySyncObj - A library for replicating your python class between multiple servers, based on raft protocol

  •    Python

PySyncObj replicates a Python class across multiple servers; once it is set up, you can call incCounter on serverA and check the counter value on serverB, and they will be synchronized (a sketch follows below). You can look at the batteries implementation, examples, and unit tests for more use cases. There is also API documentation. Feel free to create proposals and/or pull requests with new batteries, features, etc. Join our gitter chat if you have any questions.
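A minimal sketch of such a counter, assuming the standard SyncObj/@replicated pattern; the server addresses are placeholders:

```python
# Minimal PySyncObj counter sketch; server addresses are placeholders.
from pysyncobj import SyncObj, replicated

class MyCounter(SyncObj):
    def __init__(self, self_addr, partner_addrs):
        super().__init__(self_addr, partner_addrs)
        self._counter = 0

    @replicated
    def incCounter(self):
        # Goes through the Raft log, so every replica applies it in the same order.
        self._counter += 1

    def getCounter(self):
        return self._counter

# On serverA: counter = MyCounter('serverA:4321', ['serverB:4321', 'serverC:4321'])
# counter.incCounter()
# On serverB the replicated counter converges to the incremented value.
```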

Rafty - Implementation of RAFT consensus in .NET core

  •    CSharp

Rafty is an implementation of the Raft consensus algorithm (see here) created using C# and .NET Core. Rafty is the algorithm only and does not provide a usable implementation of the transport between nodes, the state machine, or the log; instead, Rafty provides interfaces that you will need to implement. I recommend at least 5 nodes in your cluster for Rafty to operate optimally, and this is basically all I've tested. Bring the Rafty package into your project using NuGet.

azure-docker4azureoms - :new: :rocket: ☁:star: :whale2: :penguin: Docker for Azure with OMS and some more stacks

  •    

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

leto - A key value storage example powered by hashicorp raft and BadgerDB

  •    Go

In Greek mythology, Leto (/ˈliːtoʊ/) is a daughter of the Titans Coeus and Phoebe and the sister of Asteria. Leto is another reference example of using HashiCorp Raft. The API is Redis protocol compatible.
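Because leto speaks the Redis protocol, a standard Redis client should be able to talk to it; the host, port, and supported command subset below are assumptions, not taken from the leto docs.

```python
# Hypothetical session against a running leto node; host/port are assumptions.
import redis

client = redis.Redis(host="localhost", port=6379)  # assumed leto address
client.set("greeting", "hello")                    # write replicated by leto via Raft
print(client.get("greeting"))                      # b'hello'
```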

dragonboat-example - Examples for Dragonboat

  •    Go

This repo contains examples for dragonboat. Please first make sure you have the dragonboat library installed, instructions can be found here.

KVRaftDB - a distributed Key/Value Database based on Raft. MIT-6.824

  •    Go

A replicated service achieves fault tolerance by storing complete copies of its state (i.e., data) on multiple replica servers. Replication allows the service to continue operating even if some of its servers experience failures (crashes or a broken or flaky network). The challenge is that failures may cause the replicas to hold differing copies of the data.

Raft manages a service's state replicas, and in particular it helps the service sort out what the correct state is after failures. Raft implements a replicated state machine. It organizes client requests into a sequence, called the log, and ensures that all the replicas agree on the contents of the log. Each replica executes the client requests in the log in the order they appear in the log, applying those requests to the replica's local copy of the service's state. Since all the live replicas see the same log contents, they all execute the same requests in the same order, and thus continue to have identical service state. If a server fails but later recovers, Raft takes care of bringing its log up to date.

Raft will continue to operate as long as at least a majority of the servers are alive and can talk to each other. If there is no such majority, Raft will make no progress, but will pick up where it left off as soon as a majority can communicate again.
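As a rough sketch of the replicated state machine idea described above (this is illustrative only, not code from KVRaftDB): every replica applies the same committed log of client requests, in the same order, to its local copy of the data, so all replicas converge on identical state.

```python
# Illustrative replicated-state-machine sketch; not code from KVRaftDB.
class KVReplica:
    def __init__(self):
        self.state = {}    # local copy of the service's data
        self.applied = 0   # index of the last applied log entry

    def apply_log(self, log):
        """Apply committed log entries in order; replicas that see the same
        log end up with identical state."""
        for op, key, value in log[self.applied:]:
            if op == "put":
                self.state[key] = value
            self.applied += 1

# Two replicas applying the same committed log converge on the same state.
log = [("put", "x", 1), ("put", "y", 2), ("put", "x", 3)]
a, b = KVReplica(), KVReplica()
a.apply_log(log)
b.apply_log(log)
assert a.state == b.state == {"x": 3, "y": 2}
```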