patroni - A template for PostgreSQL High Availability with ZooKeeper, etcd, or Consul


You can find a version of this documentation that is searchable and also easier to navigate at patroni.readthedocs.io. There are many ways to run high availability with PostgreSQL; for a list, see the PostgreSQL Documentation.
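
As a quick illustration of how surrounding tooling (for example HAProxy health checks) typically interacts with a Patroni cluster, here is a minimal sketch that polls the REST API Patroni exposes on each node to find the current leader. The node addresses are hypothetical, and port 8008 is assumed to be the default REST API port.

```python
# Minimal sketch: poll each member's Patroni REST API to see who is leader.
import requests

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical cluster members

for host in NODES:
    try:
        # GET /master returns 200 on the current leader and 503 on replicas,
        # which is why HAProxy health checks commonly use this endpoint.
        r = requests.get(f"http://{host}:8008/master", timeout=2)
        role = "leader" if r.status_code == 200 else "replica"
    except requests.RequestException:
        role = "unreachable"
    print(f"{host}: {role}")
```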

https://github.com/zalando/patroni

Related Projects

governor - Runners to orchestrate a high-availability PostgreSQL


Compose is no longer maintaining Governor as an active project. We are pleased to say that Governor seeded the Patroni project, which Compose has now adopted as its HA solution, and we recommend it to anyone seeking similar functionality to Governor. We have archived the project on GitHub; you are free to use it and fork it, but we will not be accepting issues or pull requests. There are many ways to run high availability with PostgreSQL; here we present a template for you to create your own custom-fit high-availability solution using etcd and Python for maximum accessibility.
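
Since this template style boils down to a leader lock held in a distributed store, here is a minimal sketch, assuming etcd's v2 HTTP API, of the TTL-based leader-election loop such templates are built around. The key path, member name, and TTL are illustrative; this is not Governor's actual code.

```python
# Each node tries to create a TTL'd key in etcd; whoever succeeds is leader
# and must keep refreshing the key before the TTL expires.
import time
import requests

ETCD = "http://127.0.0.1:2379/v2/keys/service/pgcluster/leader"
MY_NAME = "postgres-node-1"   # hypothetical member name
TTL = 30                      # seconds before the leader key expires

def try_acquire_leader():
    """Create the leader key only if it does not exist yet (201 = created)."""
    r = requests.put(ETCD, params={"prevExist": "false"},
                     data={"value": MY_NAME, "ttl": TTL})
    return r.status_code == 201

def refresh_leadership():
    """Refresh the TTL, but only if we still own the key (200 = updated)."""
    r = requests.put(ETCD, params={"prevExist": "true", "prevValue": MY_NAME},
                     data={"value": MY_NAME, "ttl": TTL})
    return r.status_code == 200

while True:
    if try_acquire_leader() or refresh_leadership():
        pass  # we hold the leader lock: run/keep PostgreSQL as the primary
    else:
        pass  # someone else is leader: run PostgreSQL as a replica
    time.sleep(TTL // 3)
```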

voyager - ✈️️ Secure Ingress Controller for Kubernetes


Voyager is an HAProxy-backed, secure L7 and L4 ingress controller for Kubernetes developed by AppsCode. It can be used with any Kubernetes cloud provider, including AWS, GCE, GKE, Azure, and ACS, as well as with bare-metal Kubernetes clusters. Voyager provides L7 and L4 load balancing using a custom Kubernetes Ingress resource. It is built on top of HAProxy to support high availability, sticky sessions, and name- and path-based virtual hosting, and it also supports configurable application ports with all the options available in a standard Kubernetes Ingress. Here is a complex ingress example that shows how various features can be used. You can find the generated HAProxy configuration here.

Traefik - A Modern Reverse Proxy


Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and a lot more) to manage its configuration automatically and dynamically.

consul-haproxy - Consul HAProxy connector for real-time configuration


Deprecated! This project is deprecated. Consul HAProxy has been replaced by Consul Template. This repository is kept for history and legacy purposes; please use Consul Template instead. This project provides consul-haproxy, a daemon for dynamically configuring HAProxy using data from Consul.
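
As a rough sketch of the underlying idea (not consul-haproxy itself), the following reads healthy instances of a hypothetical "web" service from Consul's /v1/health/service endpoint and renders them as HAProxy server lines.

```python
# Query Consul for passing instances and print HAProxy backend entries.
import requests

CONSUL = "http://127.0.0.1:8500"
SERVICE = "web"                      # hypothetical service name

resp = requests.get(f"{CONSUL}/v1/health/service/{SERVICE}",
                    params={"passing": "true"})
resp.raise_for_status()

lines = []
for entry in resp.json():
    svc = entry["Service"]
    # Fall back to the node address when the service has no explicit address.
    addr = svc["Address"] or entry["Node"]["Address"]
    lines.append(f"    server {svc['ID']} {addr}:{svc['Port']} check")

# A real connector would write this into haproxy.cfg and reload HAProxy;
# here we just print the generated backend.
print("backend web_backend")
print("\n".join(lines))
```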

HAProxy - The Reliable, High Performance TCP/HTTP Load Balancer


HAProxy is a fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware.


rqlite - The lightweight, distributed relational database built on SQLite.


rqlite is a distributed relational database which uses SQLite as its storage engine. rqlite uses Raft to achieve consensus across all the instances of the SQLite databases, ensuring that every change made to the system is made to a quorum of SQLite databases, or none at all. It also gracefully handles leader elections and tolerates failures of machines, including the leader. rqlite is available for Linux, OSX, and Microsoft Windows. rqlite gives you the functionality of a rock-solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation. With it you've got a lightweight and reliable distributed relational data store. Think etcd or Consul, but with relational data modelling also available.
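
A minimal sketch of talking to rqlite over its HTTP API, assuming the default port 4001; the table, statements, and host are illustrative.

```python
# Writes go to /db/execute as a JSON array of SQL statements; reads go to
# /db/query. Writes must reach the Raft leader (non-leader nodes may redirect).
import requests

RQLITE = "http://localhost:4001"

requests.post(f"{RQLITE}/db/execute", json=[
    "CREATE TABLE IF NOT EXISTS foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)",
    "INSERT INTO foo(name) VALUES('fiona')",
])

resp = requests.get(f"{RQLITE}/db/query", params={"q": "SELECT * FROM foo"})
print(resp.json())
```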

doorman - Doorman: Global Distributed Client Side Rate Limiting.


Doorman is a solution for global distributed client-side rate limiting. Clients that talk to a shared resource (such as a database, a gRPC service, a RESTful API, or whatever) can use Doorman to voluntarily limit their use (usually in requests per second) of the resource. Doorman is written in Go and uses gRPC as its communication protocol. For some high-availability features it needs a distributed lock manager; etcd is currently supported, but it should be relatively simple to use ZooKeeper instead. The Doorman master server remembers all clients that currently have capacity, and whenever a client asks for capacity, it inserts the client's request into its memory and runs the algorithm to figure out what this client should get.
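
Doorman's real clients are Go/gRPC; the sketch below only illustrates the client-side pattern described above, with a hypothetical get_capacity_from_master() standing in for the RPC and a tiny local limiter throttling to the granted rate.

```python
# Conceptual sketch of client-side rate limiting against a granted capacity.
import time

def get_capacity_from_master(resource: str, wants: float) -> float:
    """Hypothetical stand-in for the capacity request to the Doorman master."""
    return min(wants, 10.0)   # pretend the master grants at most 10 qps

class LocalRateLimiter:
    """Minimal pacing limiter driven by the granted requests-per-second."""
    def __init__(self, qps: float):
        self.interval = 1.0 / qps
        self.next_allowed = time.monotonic()

    def wait(self):
        now = time.monotonic()
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval

granted = get_capacity_from_master("shared-database", wants=50.0)
limiter = LocalRateLimiter(granted)
for i in range(5):
    limiter.wait()                 # stay within the granted rate
    print("issuing request", i)    # call the shared resource here
```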

zetcd - Serve the Apache Zookeeper API but back it with an etcd cluster


A ZooKeeper "personality" for etcd. Point a ZooKeeper client at zetcd to dispatch the operations on an etcd cluster. Protocol encoding and decoding are heavily based on go-zookeeper.
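
A minimal sketch, assuming zetcd is listening on the usual ZooKeeper port 2181: an ordinary ZooKeeper client such as kazoo can be pointed at it unchanged, and the operations land in the backing etcd cluster.

```python
# Talk to zetcd exactly as you would to a ZooKeeper ensemble.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # zetcd endpoint, backed by etcd
zk.start()

if not zk.exists("/demo/greeting"):
    zk.create("/demo/greeting", b"hello from etcd", makepath=True)

value, stat = zk.get("/demo/greeting")
print(value, stat.version)

zk.stop()
```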

etcd-operator - etcd operator creates/configures/manages etcd clusters atop Kubernetes


Major planned features have been completed, and while no breaking API changes are currently planned, we reserve the right to address bugs and API changes in a backwards-incompatible way before the project is declared stable. See the upgrade guide for the safe upgrade process. Currently, user-facing etcd cluster objects are created as Kubernetes Custom Resources; however, taking advantage of User Aggregated API Servers to improve reliability, validation and versioning is planned. The use of Aggregated APIs should be minimally disruptive to existing users but may change what Kubernetes objects are created or how users deploy the etcd operator.
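
A hedged sketch of what "etcd clusters as Custom Resources" looks like from a client's point of view, using the Kubernetes Python client; the group/version/spec values follow the operator's published examples and should be checked against the operator version you actually run.

```python
# Create an EtcdCluster custom resource; the operator reconciles it into pods.
from kubernetes import client, config

config.load_kube_config()           # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "etcd.database.coreos.com/v1beta2",
    "kind": "EtcdCluster",
    "metadata": {"name": "example-etcd-cluster"},
    "spec": {"size": 3, "version": "3.2.13"},
}

api.create_namespaced_custom_object(
    group="etcd.database.coreos.com",
    version="v1beta2",
    namespace="default",
    plural="etcdclusters",
    body=cluster,
)
```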

SkyDNS - DNS service discovery for etcd


SkyDNS is a distributed service for announcement and discovery of services built on top of etcd. It utilizes DNS queries to discover available services. This is done by leveraging SRV records in DNS, with special meaning given to subdomains, priorities and weights.
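
A rough sketch of the flow, assuming SkyDNS is backed by etcd's v2 API and is serving the skydns.local. zone; the service path, host, and port are illustrative.

```python
# 1. Register a service by writing JSON under etcd's /skydns prefix
#    (the key path is the domain in reverse: service1.east.skydns.local).
# 2. Discover it with an ordinary DNS SRV query against SkyDNS.
import json
import requests
import dns.resolver   # pip install dnspython

requests.put(
    "http://127.0.0.1:2379/v2/keys/skydns/local/skydns/east/service1",
    data={"value": json.dumps({"host": "10.0.0.11", "port": 8080})},
)

resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.1"]          # SkyDNS address (assumption)
for record in resolver.resolve("service1.east.skydns.local", "SRV"):
    print(record.target, record.port, record.priority, record.weight)
```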

redis_failover - redis_failover is a ZooKeeper-based automatic master/slave failover solution for Ruby


redis_failover provides a fully automatic master/slave failover solution for Ruby. Redis does not currently provide an automatic failover capability when configured for master/slave replication: when the master node dies, a new master must be manually brought online and assigned as the slaves' new master. This manual switch-over is not desirable in high-traffic sites where Redis is a critical part of the overall architecture. The existing standard Redis client for Ruby also only supports configuration for a single Redis server, whereas with master/slave replication it is desirable to have all writes go to the master and all reads go to one of the N configured slaves. This gem (built using ZK) attempts to address these failover scenarios.

One or more Node Manager daemons run as background processes and monitor all of your configured master/slave nodes. When a daemon starts up, it automatically discovers the current master and slaves, and background watchers are set up for each of the Redis nodes. As soon as a node is detected as being offline, it is moved to an "unavailable" state. If the node that went offline was the master, then one of the slaves is promoted as the new master, and all existing slaves are automatically reconfigured to point to the new master for replication. All nodes marked as unavailable are periodically checked to see if they have been brought back online; if so, the newly available nodes are configured as slaves and brought back into the list of available nodes.

Note that detection of a node going down should be nearly instantaneous, since the mechanism used to keep tabs on a node is a blocking Redis BLPOP call (no polling); this call fails nearly immediately when the node actually goes offline. To avoid false positives (i.e., intermittent flaky network interruptions), the Node Manager only marks a node as unavailable if it fails to communicate with it 3 times (configurable via --max-failures, see configuration options below). Note that you can (and should) deploy multiple Node Manager daemons, since they each report periodic health reports/snapshots of the Redis servers. A "node strategy" is used to determine if a node is actually unavailable; by default a majority strategy is used, but "consensus" or "single" can be configured as well.
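
redis_failover itself is Ruby; the Python sketch below only illustrates the liveness-check idea described above, using a blocking BLPOP and a failure threshold mirroring the gem's default of 3. The key name and addresses are hypothetical.

```python
# A node is declared unavailable only after several consecutive failures.
import time
import redis

MAX_FAILURES = 3

def node_is_available(host: str, port: int = 6379) -> bool:
    r = redis.Redis(host=host, port=port, socket_timeout=5)
    failures = 0
    while failures < MAX_FAILURES:
        try:
            # Blocks until the key is pushed to or the timeout expires;
            # either outcome proves the node is still answering.
            r.blpop("redis_failover:heartbeat", timeout=1)
            return True
        except (redis.ConnectionError, redis.TimeoutError):
            failures += 1
            time.sleep(1)   # brief pause before retrying
    return False

print(node_is_available("10.0.0.21"))
```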

raftd - A reference implementation for using the go-raft library for distributed consensus.


NOTE: This project is unmaintained. If you are using goraft in a project and want to carry the project forward, please file an issue with your ideas and intentions. The original project authors have created new Raft implementations now used in etcd and InfluxDB. If you want to see a simple Raft key-value store, see the etcd/raft example.

raft - UNMAINTAINED: A Go implementation of the Raft distributed consensus protocol.


NOTE: This project is unmaintained. If you are using goraft in a project and want to carry the project forward, please file an issue with your ideas and intentions. The original project authors have created new Raft implementations now used in etcd and InfluxDB. This is a Go implementation of the Raft distributed consensus protocol. Raft is a protocol by which a cluster of nodes can maintain a replicated state machine. The state machine is kept in sync through the use of a replicated log.
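
This is not goraft's API, just a tiny illustration of the replicated-log idea: every node applies the same committed entries in the same order, so every copy of the state machine converges to the same state.

```python
# Replaying a committed log deterministically against a dict "state machine".
committed_log = [
    {"op": "set", "key": "x", "value": 1},
    {"op": "set", "key": "y", "value": 2},
    {"op": "delete", "key": "x"},
]

def apply(state: dict, entry: dict) -> None:
    if entry["op"] == "set":
        state[entry["key"]] = entry["value"]
    elif entry["op"] == "delete":
        state.pop(entry["key"], None)

# Two "nodes" replaying the same log end up identical.
node_a, node_b = {}, {}
for entry in committed_log:
    apply(node_a, entry)
    apply(node_b, entry)

assert node_a == node_b == {"y": 2}
```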

kubeform - Form your :boat: Kubernetes :anchor: cluster anywhere using CoreOS, Terraform and Ansible


Deploy a high-availability Kubernetes cluster in minutes, built on Terraform, CoreOS and Ansible. These are our recipes for bootstrapping HA Kubernetes clusters on any cloud or on-premise.

kubespray - Setup a kubernetes cluster


If you have questions, join us on the Kubernetes Slack, channel #kubespray. Note: Upstart/SysV-init-based OS types are not supported.

tectonic-installer - Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more


Tectonic is built on pure-upstream Kubernetes but has an opinion on the best way to install and run a Kubernetes cluster. This project helps you install a Kubernetes cluster the "Tectonic Way". It provides good defaults, enables install automation, and is customizable to meet your infrastructure needs. Check the ROADMAP for details on where the project is headed.

Kong - The Microservice API Gateway


Kong is a cloud-native, fast, scalable, and distributed microservice abstraction layer (also known as an API gateway, API middleware or, in some cases, a service mesh). Backed by battle-tested NGINX with a focus on high performance, Kong was made available as an open-source platform in 2015. Under active development, Kong is used in production at thousands of organizations, from startups to Global 5000 companies and government organizations.

memcached - A fully featured Memcached client built on top of Node


memcached is a fully featured Memcached client for Node.js, built with scaling, high availability and exceptional performance in mind. It uses consistent hashing to store the data across different nodes. Consistent hashing is a scheme that provides hash-table functionality in such a way that adding or removing a server node does not significantly change the mapping of keys to server nodes; the algorithm used for consistent hashing is the same as libketama. There are different ways to handle errors: for example, when a server becomes unavailable you can configure the client to treat all requests to that server as cache misses until it comes back online. It is also possible to automatically remove the affected server from the consistent hashing algorithm, or to provide memcached with a failover server that can take the place of the unresponsive server.
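
The client's actual ring is libketama-compatible; the sketch below is only a stripped-down hash ring showing why consistent hashing keeps most keys on the same server when nodes are added or removed. Server names are illustrative.

```python
# Minimal hash ring: each node is placed at many points on a circle, and a
# key maps to the first node clockwise from the key's hash.
import bisect
import hashlib

def _h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=100):
        self.ring = sorted((_h(f"{n}#{i}"), n)
                           for n in nodes for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    def get_node(self, key: str) -> str:
        idx = bisect.bisect(self.keys, _h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.get_node("session:42"))
# Removing one of three nodes remaps only roughly a third of the keys,
# instead of nearly all of them as with modulo hashing.
```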