Patroni - A template for PostgreSQL High Availability with ZooKeeper, etcd, or Consul

  •    Python

Patroni is a template for you to create your own customized, high-availability solution using Python and, for maximum accessibility, a distributed configuration store like ZooKeeper, etcd, Consul, or Kubernetes. Database engineers, DBAs, DevOps engineers, and SREs who are looking to quickly deploy HA PostgreSQL in the datacenter, or anywhere else, will hopefully find it useful.
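A minimal Patroni setup is a single YAML file per node that names the cluster, points at the DCS, and describes the local PostgreSQL instance. The sketch below is illustrative only; the etcd endpoint, addresses, paths, and passwords are placeholders.

```yaml
# patroni.yml - minimal sketch; hosts, paths and passwords are placeholders
scope: demo-cluster            # cluster name, identical on every member
name: node1                    # unique name of this member

restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008

etcd:
  host: 10.0.0.10:2379         # the distributed configuration store

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10

postgresql:
  data_dir: /var/lib/postgresql/data
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  authentication:
    replication:
      username: replicator
      password: change-me
    superuser:
      username: postgres
      password: change-me
```

Starting `patroni patroni.yml` on each node with the same scope but a unique name is enough for the members to coordinate leader election through the configuration store.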

https://github.com/zalando/patroni

Related Projects

Spilo - Highly available elephant herd: HA PostgreSQL cluster using Docker

  •    Python

Spilo is a Docker image that provides PostgreSQL and Patroni bundled together. Patroni is a template for PostgreSQL HA. Multiple Spilo instances can form a resilient, highly available PostgreSQL cluster; for this, you will need to start all participating Spilos with identical etcd addresses and cluster names.
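As a hedged sketch of what that looks like with Docker: the image name, tag, and the SCOPE and ETCD_HOST environment variable names below follow Spilo's documentation conventions but should be verified against the current README for your version.

```sh
# Start two Spilo members of the same cluster; both point at the same etcd
# and use the same cluster name. Image name/tag and variable names are
# assumptions - check the Spilo README before relying on them.
docker run -d --name spilo1 \
  -e SCOPE=demo-cluster \
  -e ETCD_HOST=10.0.0.10:2379 \
  registry.opensource.zalan.do/acid/spilo-15:latest

docker run -d --name spilo2 \
  -e SCOPE=demo-cluster \
  -e ETCD_HOST=10.0.0.10:2379 \
  registry.opensource.zalan.do/acid/spilo-15:latest
```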

governor - Runners to orchestrate a high-availability PostgreSQL

  •    Python

Compose is no longer maintaining Governor as an active project. We are pleased to say that Governor seeded the Patroni project, which Compose has now adopted as its HA solution, and we recommend Patroni to anyone seeking similar functionality to Governor. We have archived the project on GitHub; you are free to use it and fork it, but we will not be accepting issues or pull requests. There are many ways to run high availability with PostgreSQL; here we present a template for you to create your own custom-fit high-availability solution using etcd and Python for maximum accessibility.

stolon - PostgreSQL cloud native High Availability and more.

  •    Go

Stolon is under active development and used in a variety of environments. Its on-disk format (store hierarchy and key contents) will probably change in the future to support new features. If a breaking change is needed, it will be documented in the release notes and an upgrade path will be provided. In any case, it is quite easy to reset a cluster from scratch while keeping the current master instance working and without losing any data.

pg_auto_failover - Postgres extension and service for automated failover and high-availability

  •    C

pg_auto_failover is an extension and service for PostgreSQL that monitors and manages automated failover for a Postgres cluster. It is optimized for simplicity and correctness and supports Postgres 10 and newer. pg_auto_failover supports several Postgres architectures and implements a safe automated failover for your Postgres service. It is possible to get started with only two data nodes which will be given the roles of primary and secondary by the monitor.
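The workflow is driven by the pg_autoctl command-line tool. The following is a rough sketch of bringing up a monitor and a data node; host names and data directories are placeholders, and flag spellings can differ between pg_auto_failover releases.

```sh
# On the monitor host (placeholder data directory)
pg_autoctl create monitor --pgdata /var/lib/pgsql/monitor

# On each of the two data nodes, pointing at the monitor's connection string
pg_autoctl create postgres \
    --pgdata /var/lib/pgsql/node \
    --monitor postgres://autoctl_node@monitor.example.com/pg_auto_failover

# Keep the node under supervision so the monitor can orchestrate failover
pg_autoctl run --pgdata /var/lib/pgsql/node
```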

Dqlite - High-availability SQLite

  •    C

Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices. Dqlite (“distributed SQLite”) extends SQLite across a cluster of machines, with automatic failover and high availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQLite’s outstanding efficiency and tiny footprint.


traefik - The Cloud Native Application Proxy

  •    Go

Traefik (pronounced traffic) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, ...) and configures itself automatically and dynamically. Pointing Traefik at your orchestrator should be the only configuration step you need.
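For instance, a minimal Traefik v2 static configuration that watches the local Docker daemon could look like the sketch below; the file name and values are illustrative.

```yaml
# traefik.yml - static configuration sketch
entryPoints:
  web:
    address: ":80"

providers:
  docker:
    exposedByDefault: false   # only route to containers that opt in via labels
```

Individual containers then opt in with labels such as traefik.http.routers.myapp.rule=Host(`myapp.example.com`), and Traefik picks the routes up dynamically without a restart.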

yoke - Postgres high-availability cluster with auto-failover and automated cluster recovery.

  •    Go

Yoke is a Postgres redundancy/auto-failover solution that provides a high-availability PostgreSQL cluster that's simple to manage. Yoke is configured through a single ini file; the file can be named anything and reside anywhere, since all Yoke needs on startup is the /path/to/config.ini.

voyager - ✈️️ Secure Ingress Controller for Kubernetes

  •    Go

Voyager is a HAProxy-backed secure L7 and L4 ingress controller for Kubernetes developed by AppsCode. It can be used with any Kubernetes cloud provider, including AWS, GCE, GKE, Azure, and ACS, as well as with bare-metal Kubernetes clusters. Voyager provides L7 and L4 load balancing using a custom Kubernetes Ingress resource. It is built on top of HAProxy to support high availability, sticky sessions, and name- and path-based virtual hosting, and it also supports configurable application ports with all the options available in a standard Kubernetes Ingress. The project documentation includes a complex Ingress example that shows how the various features can be used, along with the HAProxy configuration it generates.
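A simple resource sketch, assuming the voyager.appscode.com/v1beta1 API group the project used at the time; host, service names, and ports are placeholders.

```yaml
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: "80"
```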

kubeadm-ha - Kubernetes high availability deployment based on kubeadm (for v1

  •    Smarty

The Kubernetes control-plane components involved are:

  •    kube-apiserver: exposes the Kubernetes API and is the front end of the control plane. It is designed to scale horizontally, that is, by deploying more instances.
  •    etcd: Kubernetes' backing store, where all cluster data is kept. Always have a backup plan for etcd's data in your cluster.
  •    kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.
  •    kube-controller-manager: runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
  •    kubelet: the primary node agent. It watches for pods that have been assigned to its node, either by the apiserver or via a local configuration file.
  •    kube-proxy: enables the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding.

For high availability, a keepalived cluster provides a virtual IP address (192.168.20.10) that points at k8s-master01, k8s-master02, and k8s-master03, while an nginx service load-balances those masters' apiservers. The other nodes' Kubernetes services connect to the keepalived virtual IP (192.168.20.10) and the nginx exposed port (16443) to communicate with the master cluster's apiservers, as sketched below.
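A minimal sketch of that nginx layer, assuming the three masters' apiservers listen on the default port 6443 and nginx is built with the stream module; the master IP addresses are placeholders, and only the VIP and port 16443 come from the description above.

```nginx
# nginx.conf fragment - TCP load balancing in front of the apiservers
stream {
    upstream kube_apiserver {
        server 192.168.20.11:6443;   # k8s-master01 (placeholder address)
        server 192.168.20.12:6443;   # k8s-master02 (placeholder address)
        server 192.168.20.13:6443;   # k8s-master03 (placeholder address)
    }

    server {
        listen 16443;                # the port nodes reach via the keepalived VIP
        proxy_pass kube_apiserver;
    }
}
```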

etcd - Distributed reliable key-value store for the most critical data of a distributed system

  •    Go

Note: The master branch may be in an unstable or even broken state during development. Please use releases instead of the master branch in order to get stable binaries. etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log.
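A quick way to exercise etcd from application code is the third-party python-etcd3 client; this sketch assumes an etcd member listening on localhost:2379.

```python
import etcd3  # third-party client: pip install etcd3

# Connect to a local etcd member (assumed to listen on 2379)
client = etcd3.client(host="localhost", port=2379)

# Write and read back a key
client.put("/config/feature_flag", "on")
value, metadata = client.get("/config/feature_flag")
print(value)  # b'on'

# Keys can also be watched for changes, which is how systems such as
# Patroni react to updates of the leader key.
```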

consul-haproxy - Consul HAProxy connector for real-time configuration

  •    Go

Deprecated! This project is deprecated. Consul HAProxy has been replaced by Consul Template. This repository is kept for history and legacy purposes; please use Consul Template instead. This project provides consul-haproxy, a daemon for dynamically configuring HAProxy using data from Consul.

HAProxy - The Reliable, High Performance TCP/HTTP Load Balancer

  •    C

HAProxy is a fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware.
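In the Patroni context above, HAProxy is commonly used to route client connections to whichever node currently holds the leader lock. The fragment below is a sketch, assuming Patroni's REST API on port 8008 answers the health check with HTTP 200 only on the primary; node addresses are placeholders.

```
# haproxy.cfg fragment - route writes to the current Patroni primary
listen postgres_primary
    bind *:5000
    mode tcp
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.11:5432 check port 8008
    server node2 10.0.0.12:5432 check port 8008
```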

PolarDB - Share-nothing, Distributed database based on PostgreSQL

  •    C

PolarDB for PostgreSQL (PolarDB for short) is an open-source database system based on PostgreSQL. It extends PostgreSQL to become a share-nothing distributed database, which supports global data consistency and ACID across database nodes, distributed SQL processing, and data redundancy and high availability through Paxos based replication.

Træfik - A Modern Reverse Proxy

  •    Go

Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and a lot more) to manage its configuration automatically and dynamically.

rqlite - The lightweight, distributed relational database built on SQLite

  •    Go

rqlite is a lightweight, distributed relational database, which uses SQLite as its storage engine. Forming a cluster is very straightforward: it gracefully handles leader elections and tolerates failures of machines, including the leader. rqlite gives you the functionality of a rock-solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation. With it you've got a lightweight and reliable distributed relational data store. Think etcd or Consul, but with relational data modelling also available.
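Interaction goes over a plain HTTP API, so a short Python sketch (assuming a single rqlite node on localhost:4001) is enough to see the data model.

```python
import requests

BASE = "http://localhost:4001"  # assumed single-node rqlite instance

# Statements are POSTed as a JSON array to /db/execute
requests.post(f"{BASE}/db/execute",
              json=["CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)"])
requests.post(f"{BASE}/db/execute",
              json=["INSERT INTO foo(name) VALUES('fiona')"])

# Reads go through /db/query
resp = requests.get(f"{BASE}/db/query", params={"q": "SELECT * FROM foo"})
print(resp.json())
```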

doorman - Doorman: Global Distributed Client Side Rate Limiting.

  •    Go

Doorman is a solution for global distributed client-side rate limiting. Clients that talk to a shared resource (such as a database, a gRPC service, a RESTful API, or whatever) can use Doorman to voluntarily limit their use (usually in requests per second) of the resource. Doorman is written in Go and uses gRPC as its communication protocol. For some high-availability features it needs a distributed lock manager; etcd is currently supported, but it should be relatively simple to use ZooKeeper instead. The Doorman master server remembers all clients that currently have capacity, and whenever a client asks for capacity it inserts the client's request into its memory and runs the algorithm to figure out what this client should get.

zetcd - Serve the Apache Zookeeper API but back it with an etcd cluster

  •    Go

A ZooKeeper "personality" for etcd. Point a ZooKeeper client at zetcd to dispatch the operations on an etcd cluster.Protocol encoding and decoding heavily based on go-zookeeper.

redis_failover - redis_failover is a ZooKeeper-based automatic master/slave failover solution for Ruby

  •    Ruby

redis_failover provides a full automatic master/slave failover solution for Ruby. Redis does not currently provide an automatic failover capability when configured for master/slave replication: when the master node dies, a new master must be manually brought online and assigned as the slaves' new master. This manual switch-over is not desirable in high-traffic sites where Redis is a critical part of the overall architecture. The existing standard Redis client for Ruby also only supports configuration for a single Redis server, whereas with master/slave replication it is desirable to have all writes go to the master and all reads go to one of the N configured slaves. This gem (built using ZK) attempts to address these failover scenarios.

One or more Node Manager daemons run as background processes and monitor all of your configured master/slave nodes. When a daemon starts up, it automatically discovers the current master and slaves, and background watchers are set up for each of the Redis nodes. As soon as a node is detected as being offline, it is moved to an "unavailable" state. If the node that went offline was the master, one of the slaves is promoted as the new master, and all existing slaves are automatically reconfigured to point to the new master for replication. All nodes marked as unavailable are periodically checked to see if they have been brought back online; if so, the newly available nodes are configured as slaves and brought back into the list of available nodes.

Note that detection of a node going down should be nearly instantaneous, since the mechanism used to keep tabs on a node is a blocking Redis BLPOP call (no polling), which fails nearly immediately when the node actually goes offline. To avoid false positives (i.e., intermittent flaky network interruptions), the Node Manager will only mark a node as unavailable if it fails to communicate with it 3 times (configurable via --max-failures; see the project's configuration options). Note that you can (and should) deploy multiple Node Manager daemons, since they each report periodic health reports/snapshots of the Redis servers. A "node strategy" is used to determine whether a node is actually unavailable; by default a majority strategy is used, but "consensus" or "single" can be configured as well.

SkyDNS - DNS service discovery for etcd

  •    Go

SkyDNS is a distributed service for announcement and discovery of services built on top of etcd. It utilizes DNS queries to discover available services. This is done by leveraging SRV records in DNS, with special meaning given to subdomains, priorities and weights.
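Service lookups are therefore ordinary DNS queries. Below is a hedged sketch using the dnspython resolver; the service name and the skydns.local domain are placeholders for whatever has been announced into etcd.

```python
import dns.resolver  # third-party package: dnspython

# Query SkyDNS (assumed reachable via the system resolver) for the
# SRV records of a hypothetical announced service.
answers = dns.resolver.resolve("api.production.skydns.local", "SRV")
for record in answers:
    # priority and weight drive selection; target/port locate the instance
    print(record.priority, record.weight, record.target, record.port)
```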
