Patroni is a template for you to create your own customized, high-availability solution using Python and, for maximum accessibility, a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes. Database engineers, DBAs, DevOps engineers, and SREs who are looking to quickly deploy HA PostgreSQL in the datacenter (or anywhere else) will hopefully find it useful.
https://github.com/zalando/patroni
Tags | postgresql high-availability etcd zookeeper consul raft failover haproxy kubernetes |
Implementation | Python |
License | MIT |
Platform | Windows Linux |
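Patroni itself is written in Python, but the primitive this kind of template builds on is a leader lock held in the DCS under a short TTL. Below is a minimal sketch of that idea in Go against etcd; the endpoint, key prefix and node name are placeholders and do not reflect Patroni's actual key layout.

```go
// Sketch only: the DCS leader-lock pattern a Patroni-style template relies on,
// shown with the official etcd Go client. Endpoint, key prefix and node name
// are placeholders, not Patroni's real keys.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder DCS endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session keeps a lease alive; if this process dies, the lease
	// expires and another node can take over the leader key.
	sess, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	election := concurrency.NewElection(sess, "/service/demo/leader")
	if err := election.Campaign(context.Background(), "node-1"); err != nil {
		log.Fatal(err)
	}
	log.Println("acquired leader lock; this node would now run as primary")
}
```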
Spilo is a Docker image that provides PostgreSQL and Patroni bundled together. Patroni is a template for PostgreSQL HA. Multiple Spilos can form a resilient, highly available PostgreSQL cluster; for this, you'll need to start all participating Spilos with identical etcd addresses and cluster names.
docker-image postgresql high-availability patroni postgresql-cluster
Compose are no longer maintaining Governor as an active project. We are pleased to say that Governor seeded the Patroni project, which Compose has now adopted as their HA solution, and we recommend it to anyone seeking similar functionality to Governor. We have archived the project on GitHub; you are free to use it and fork it, but we will not be accepting issues or pull requests. There are many ways to run high availability with PostgreSQL; here we present a template for you to create your own custom-fit high-availability solution using etcd and Python for maximum accessibility.
Stolon is under active development and used in different environments. Its on-disk format (store hierarchy and key contents) will probably change in the future to support new features. If a breaking change is needed, it'll be documented in the release notes and an upgrade path will be provided. In any case, it's quite easy to reset a cluster from scratch while keeping the current master instance working and without losing any data.
postgresql high-availability kubernetes docker cloud-native data-consistency synchronous-replication declarative-config standby-cluster disaster-recovery etcd consul
pg_auto_failover is an extension and service for PostgreSQL that monitors and manages automated failover for a Postgres cluster. It is optimized for simplicity and correctness and supports Postgres 10 and newer. pg_auto_failover supports several Postgres architectures and implements a safe automated failover for your Postgres service. It is possible to get started with only two data nodes, which will be given the roles of primary and secondary by the monitor.
postgres postgresql high-availability postgresql-extension auto-failover
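Because the monitor keeps exactly one writable primary at any time, clients usually connect with a libpq-style multi-host URI and target_session_attrs=read-write so they always land on the current primary after a failover. A minimal sketch with pgx follows; the node names, credentials and database are placeholders.

```go
// Sketch only: connecting to a pg_auto_failover formation from Go with a
// multi-host URI. pgx honours target_session_attrs=read-write, so the
// connection follows whichever node is currently the primary. Hosts,
// credentials and database are placeholders.
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	uri := "postgres://app:secret@node1:5432,node2:5432/appdb?target_session_attrs=read-write"

	conn, err := pgx.Connect(context.Background(), uri)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	var addr string
	// inet_server_addr() simply shows which host was actually reached.
	if err := conn.QueryRow(context.Background(),
		"SELECT coalesce(inet_server_addr()::text, 'local socket')").Scan(&addr); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to current primary at", addr)
}
```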
Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and edge devices. Dqlite ("distributed SQLite") extends SQLite across a cluster of machines, with automatic failover and high availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQLite's outstanding efficiency and tiny footprint.
sqlite database embedded-database distributed-database embedded distributed
Traefik (pronounced traffic) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, ...) and configures itself automatically and dynamically. Pointing Traefik at your orchestrator should be the only configuration step you need.
letsencrypt docker kubernetes microservice consul load-balancer zookeeper marathon etcd mesos reverse-proxy
Yoke is a Postgres redundancy/auto-failover solution that provides a high-availability PostgreSQL cluster that's simple to manage. Note: the ini file can be named anything and reside anywhere. All Yoke needs is the /path/to/config.ini on startup.
postgres auto-failover redundancy nanobox availability-cluster cluster nanopack
Voyager is an HAProxy-backed, secure L7 and L4 ingress controller for Kubernetes developed by AppsCode. It can be used with any Kubernetes cloud provider, including aws, gce, gke, azure, and acs, and also with bare-metal Kubernetes clusters. Voyager provides L7 and L4 load balancing using a custom Kubernetes Ingress resource. It is built on top of HAProxy to support high availability, sticky sessions, and name- and path-based virtual hosting, and it also supports configurable application ports with all the options available in a standard Kubernetes Ingress.
kubernetes haproxy tls ingress appscode voyager
kube-apiserver: exposes the Kubernetes API. It is the front end for the Kubernetes control plane. It is designed to scale horizontally, that is, it scales by deploying more instances.
etcd: is used as Kubernetes' backing store. All cluster data is stored here. Always have a backup plan for etcd's data for your Kubernetes cluster.
kube-scheduler: watches newly created pods that have no node assigned, and selects a node for them to run on.
kube-controller-manager: runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
kubelet: is the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file).
kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
The keepalived cluster configures a virtual IP address (192.168.20.10) that points to k8s-master01, k8s-master02, and k8s-master03. An nginx service acts as the load balancer in front of the apiservers of k8s-master01, k8s-master02, and k8s-master03. Kubernetes services on the other nodes connect to the keepalived virtual IP address (192.168.20.10) and the nginx exposed port (16443) to communicate with the master cluster's apiservers.
kubernetes kubeadm ha high-availability cluster nginx keepalived
Note: The master branch may be in an unstable or even broken state during development. Please use releases instead of the master branch in order to get stable binaries. etcd is written in Go and uses the Raft consensus algorithm to manage a highly available replicated log.
etcd raft distributed-systems kubernetes database key-value consensus distributed-database
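A minimal sketch of the client API, using the official Go client against a single local member; the endpoint and key are placeholders. Writes are committed through the Raft-replicated log, so a quorum of members must be reachable for them to succeed.

```go
// Sketch only: a minimal put/get against a single-node etcd using the
// official clientv3 package; endpoint and key are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// The write is replicated to a quorum before it is acknowledged.
	if _, err := cli.Put(ctx, "/demo/greeting", "hello"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "/demo/greeting")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```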
Deprecated! This project is deprecated. Consul HAProxy has been replaced by Consul Template. This repository is kept for history and legacy purposes. Please use Consul Template instead. This project provides consul-haproxy, a daemon for dynamically configuring HAProxy using data from Consul.
HAProxy is a fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware.
load-balancer proxy high-availability
PolarDB for PostgreSQL (PolarDB for short) is an open-source database system based on PostgreSQL. It extends PostgreSQL into a shared-nothing distributed database, which supports global data consistency and ACID across database nodes, distributed SQL processing, and data redundancy and high availability through Paxos-based replication.
database distributed-database rdbms postgresql
Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and a lot more) to manage its configuration automatically and dynamically.
reverse-proxy load-balancer http proxy docker microservice marathon mesos consul etcd
rqlite is a lightweight, distributed relational database which uses SQLite as its storage engine. Forming a cluster is very straightforward: it gracefully handles leader elections and tolerates failures of machines, including the leader. rqlite gives you the functionality of a rock-solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation. With it you've got a lightweight and reliable distributed relational data store. Think etcd or Consul, but with relational data modelling also available.
sqlite distributed-systems database relational-database raft consensus distributed-database sql
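rqlite is driven over a plain HTTP API. The sketch below assumes a node listening on the default HTTP address (port 4001) and uses the documented /db/execute and /db/query endpoints; the table and data are placeholders.

```go
// Sketch only: talking to a single rqlite node over its HTTP API. The node
// address, table and data are placeholders.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	base := "http://127.0.0.1:4001" // default rqlite HTTP address

	// Statements are sent as a JSON array; rqlite replicates them through
	// Raft before applying them to the underlying SQLite database.
	stmts := []byte(`[
		"CREATE TABLE IF NOT EXISTS people (id INTEGER NOT NULL PRIMARY KEY, name TEXT)",
		"INSERT INTO people(name) VALUES('fiona')"
	]`)
	resp, err := http.Post(base+"/db/execute", "application/json", bytes.NewReader(stmts))
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// Reads go through /db/query with a URL-encoded SQL statement.
	q := url.QueryEscape("SELECT id, name FROM people")
	resp, err = http.Get(base + "/db/query?q=" + q)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```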
Doorman is a solution for global distributed client-side rate limiting. Clients that talk to a shared resource (such as a database, a gRPC service, a RESTful API, or whatever) can use Doorman to voluntarily limit their use (usually in requests per second) of the resource. Doorman is written in Go and uses gRPC as its communication protocol. For some high-availability features it needs a distributed lock manager; we currently support etcd, but it should be relatively simple to make it use ZooKeeper instead. The Doorman master server remembers all clients that currently have capacity, and whenever a client asks for capacity it inserts the client's request into its memory and runs the algorithm to figure out what this client should get.
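The sketch below is not the Doorman client API; it only illustrates the client-side half of the idea with a generic token-bucket limiter (golang.org/x/time/rate): once the master has granted a client some capacity, the client enforces that rate locally before touching the shared resource. The granted rate is hard-coded here as a placeholder.

```go
// Generic illustration only, not the Doorman client API: a client that has
// been granted some capacity (say 10 requests per second) enforces that
// limit locally with a token bucket before touching the shared resource.
package main

import (
	"context"
	"log"
	"time"

	"golang.org/x/time/rate"
)

func callSharedResource(i int) {
	log.Printf("request %d sent to the shared resource", i)
}

func main() {
	grantedRPS := 10.0 // in Doorman this value would come from the master
	limiter := rate.NewLimiter(rate.Limit(grantedRPS), 1)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	for i := 0; ; i++ {
		// Wait blocks until the local token bucket allows another request.
		if err := limiter.Wait(ctx); err != nil {
			return // deadline reached; stop issuing requests
		}
		callSharedResource(i)
	}
}
```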
A ZooKeeper "personality" for etcd. Point a ZooKeeper client at zetcd to dispatch the operations on an etcd cluster.Protocol encoding and decoding heavily based on go-zookeeper.
zookeeper etcd
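Because zetcd speaks the ZooKeeper wire protocol, an unmodified ZooKeeper client can be pointed at it. A minimal sketch using the go-zookeeper client follows; the listen address and znode path are placeholders for whatever zetcd was started with.

```go
// Sketch only: an ordinary ZooKeeper client talking to zetcd, which proxies
// the operations onto an etcd cluster. Address and znode path are placeholders.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/go-zookeeper/zk"
)

func main() {
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Create a znode and read it back; zetcd maps both calls onto etcd keys.
	path, err := conn.Create("/demo", []byte("hello"), 0, zk.WorldACL(zk.PermAll))
	if err != nil {
		log.Fatal(err)
	}
	data, _, err := conn.Get(path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", path, data)
}
```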
redis_failover provides a fully automatic master/slave failover solution for Ruby. Redis does not currently provide an automatic failover capability when configured for master/slave replication: when the master node dies, a new master must be manually brought online and assigned as the slave's new master. This manual switch-over is not desirable in high-traffic sites where Redis is a critical part of the overall architecture. The existing standard Redis client for Ruby also only supports configuration for a single Redis server, while with master/slave replication it is desirable to have all writes go to the master and all reads go to one of the N configured slaves.

This gem (built using ZK) attempts to address these failover scenarios. One or more Node Manager daemons run as background processes and monitor all of your configured master/slave nodes. When a daemon starts up, it automatically discovers the current master and slaves, and background watchers are set up for each of the Redis nodes. As soon as a node is detected as being offline, it is moved to an "unavailable" state. If the node that went offline was the master, one of the slaves is promoted as the new master, and all existing slaves are automatically reconfigured to point to the new master for replication. All nodes marked as unavailable are periodically checked to see if they have been brought back online; if so, the newly available nodes are configured as slaves and brought back into the list of available nodes.

Note that detection of a node going down should be nearly instantaneous, since the mechanism used to keep tabs on a node is a blocking Redis BLPOP call (no polling); this call fails nearly immediately when the node actually goes offline. To avoid false positives (i.e., intermittent flaky network interruptions), the Node Manager will only mark a node as unavailable if it fails to communicate with it 3 times (this is configurable via --max-failures). Note that you can (and should) deploy multiple Node Manager daemons, since they each report periodic health reports/snapshots of the Redis servers. A "node strategy" is used to determine whether a node is actually unavailable; by default a majority strategy is used, but you can also configure "consensus" or "single".
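The gem itself is Ruby; purely to illustrate the detection mechanism described above (a blocking BLPOP that errors almost immediately when the connection drops, retried up to a failure threshold), here is the same idea sketched with a Go Redis client. The address, key and threshold are placeholders, not the gem's code.

```go
// Generic illustration only, not the redis_failover gem: a watcher keeps a
// blocking BLPOP open against the monitored node, so a dropped connection
// surfaces as an error almost immediately instead of waiting for a poll.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	const maxFailures = 3 // mirrors the gem's --max-failures idea
	failures := 0

	for {
		// Block for up to 30s on a key nobody ever pushes to; a healthy
		// node simply returns redis.Nil when the timeout expires.
		_, err := rdb.BLPop(context.Background(), 30*time.Second, "__watchdog__").Result()
		switch {
		case err == redis.Nil:
			failures = 0 // node answered: it is alive
		case err != nil:
			failures++
			log.Printf("node check failed (%d/%d): %v", failures, maxFailures, err)
			if failures >= maxFailures {
				log.Println("marking node unavailable; a manager would now promote a slave")
				return
			}
			time.Sleep(time.Second)
		}
	}
}
```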
SkyDNS is a distributed service for announcement and discovery of services built on top of etcd. It utilizes DNS queries to discover available services. This is done by leveraging SRV records in DNS, with special meaning given to subdomains, priorities and weights.
docker service-discovery etcd
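Since announcements come back as ordinary SRV records, any DNS resolver can consume them. A minimal sketch with Go's standard resolver follows; the service name is a placeholder and the system resolver is assumed to point at SkyDNS.

```go
// Sketch only: resolving a SkyDNS-announced service via a plain SRV lookup
// with Go's standard resolver. The service name is a placeholder.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Passing empty service/proto makes LookupSRV query the name as-is,
	// which matches SkyDNS's subdomain-based naming scheme.
	_, addrs, err := net.LookupSRV("", "", "rails.production.skydns.local")
	if err != nil {
		log.Fatal(err)
	}
	for _, srv := range addrs {
		fmt.Printf("%s:%d (priority %d, weight %d)\n",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}
```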