metallb - A network load-balancer implementation for Kubernetes using BGP and ARP


MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. Check out MetalLB's website for more information.

https://metallb.universe.tf
https://github.com/google/metallb
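
As a hedged illustration of how MetalLB is consumed (a sketch built with the k8s.io/api types, not taken from the MetalLB docs): on a bare metal cluster with MetalLB installed, creating a Service of type LoadBalancer is what triggers MetalLB to assign an external IP from its configured address pool and announce it via ARP or BGP.

```go
// Sketch of the kind of object MetalLB acts on: a Service of type
// LoadBalancer. The service name, selector, and ports are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: corev1.ServiceSpec{
			// MetalLB only acts on Services of type LoadBalancer.
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "nginx"},
			Ports: []corev1.ServicePort{
				{Port: 80, TargetPort: intstr.FromInt(80)},
			},
		},
	}
	fmt.Printf("%+v\n", svc)
}
```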


Related Projects

metallb - A network load-balancer implementation for Kubernetes using standard routing protocols

  •    CSS

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. Check out MetalLB's website for more information.

glb-director - GitHub Load Balancer Director and supporting tooling.

  •    C

The GitHub Load Balancer (GLB) Director is a set of components that provide a scalable set of stateless Layer 4 load balancer servers capable of line rate packet processing in bare metal datacenter environments, and is used in production to serve all traffic from GitHub's datacenters. GLB Director is designed to be used in datacenter environments where multiple servers can announce the same IP address via BGP and have network routers shard traffic amongst those servers using ECMP routing. While ECMP shards connections per-flow using consistent hashing, addition or removal of nodes will generally cause some disruption to traffic as state isn't stored for each flow. A split L4/L7 design is typically used to allow the L4 servers to redistribute these flows back to a consistent server in a flow-aware manner. GLB Director implements the L4 (director) tier of a split L4/L7 load balancer design.
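
As a sketch of the flow-consistency idea (not GLB's actual code; GLB builds a static forwarding table, and the hashing here is a simplified stand-in), rendezvous (highest-random-weight) hashing maps a flow key to a server so that removing a server only remaps the flows that server was handling:

```go
// Sketch of rendezvous (highest-random-weight) hashing: each (flow, server)
// pair gets a score, and the flow goes to the highest-scoring server.
// Removing a server only remaps the flows that server was winning.
package main

import (
	"fmt"
	"hash/fnv"
)

func score(flowKey, server string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(flowKey))
	h.Write([]byte(server))
	return h.Sum64()
}

// pick returns the server with the highest score for this flow key.
func pick(flowKey string, servers []string) string {
	best, bestScore := "", uint64(0)
	for _, s := range servers {
		if sc := score(flowKey, s); sc >= bestScore {
			best, bestScore = s, sc
		}
	}
	return best
}

func main() {
	servers := []string{"director-1", "director-2", "director-3"} // hypothetical
	// In a real director the flow key would be the connection 4-tuple.
	fmt.Println(pick("10.0.0.7:51234->198.51.100.1:443", servers))
}
```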

kubeadm-ha - Kubernetes high availability deployment based on kubeadm (for v1

  •    Smarty

- kube-apiserver: exposes the Kubernetes API. It is the front end for the Kubernetes control plane and is designed to scale horizontally, that is, by deploying more instances.
- etcd: used as Kubernetes' backing store; all cluster data is stored here. Always have a backup plan for etcd's data in your Kubernetes cluster.
- kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.
- kube-controller-manager: runs controllers, the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
- kubelet: the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file).
- kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.

The keepalived cluster configures a virtual IP address (192.168.20.10) that points to k8s-master01, k8s-master02, and k8s-master03, and an nginx service acts as the load balancer for those masters' apiservers. Kubernetes services on the other nodes connect to the keepalived virtual IP address (192.168.20.10) and the nginx exposed port (16443) to communicate with the master cluster's apiservers, as in the sketch below.
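
For illustration only (the guide itself uses keepalived and nginx, not custom code), here is a minimal Go sketch of what the nginx stream load balancer on the virtual IP does on port 16443: accept TCP connections and forward each one to one of the three masters' apiservers. The master addresses are hypothetical.

```go
// Minimal TCP load-balancer sketch: accept on :16443 and forward each
// connection to one of the masters' apiservers in round-robin fashion.
package main

import (
	"io"
	"log"
	"net"
)

var apiservers = []string{ // hypothetical master addresses
	"192.168.20.20:6443", // k8s-master01
	"192.168.20.21:6443", // k8s-master02
	"192.168.20.22:6443", // k8s-master03
}

func main() {
	ln, err := net.Listen("tcp", ":16443")
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; ; i++ {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go proxy(client, apiservers[i%len(apiservers)])
	}
}

func proxy(client net.Conn, backend string) {
	defer client.Close()
	server, err := net.Dial("tcp", backend)
	if err != nil {
		log.Printf("dial %s: %v", backend, err)
		return
	}
	defer server.Close()
	go io.Copy(server, client) // client -> apiserver
	io.Copy(client, server)    // apiserver -> client
}
```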

keepalived - Keepalived

  •    C

The main goal of the keepalived project is to add a strong and robust keepalive facility to the Linux Virtual Server (LVS) project. It implements multilayer TCP/IP stack checks: the framework is based on three families of checks, at Layer 3, Layer 4, and Layer 5. This framework gives the daemon the ability to check the state of an LVS server pool; keepalived can be summarized as an LVS driving daemon. The implementation is based on an I/O multiplexer that supports a strong multi-threading framework, and all event processing uses this I/O multiplexer.
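
A rough sketch of a Layer 4 check (keepalived itself is written in C; this illustrates the idea, not its implementation): a real server passes the check if a TCP connection to it succeeds within a timeout.

```go
// Sketch of a Layer 4 health check in the spirit of keepalived's TCP_CHECK:
// a real server is considered healthy if a TCP connection to it succeeds
// within the given timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func tcpCheck(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Hypothetical LVS real-server address.
	fmt.Println(tcpCheck("10.0.0.5:80", 3*time.Second))
}
```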

Kong - The Microservice API Gateway

  •    Lua

Kong is a cloud-native, fast, scalable, and distributed Microservice Abstraction Layer (also known as an API Gateway, API Middleware, or in some cases a Service Mesh). Backed by the battle-tested NGINX with a focus on high performance, Kong was made available as an open-source platform in 2015. Under active development, Kong is used in production at thousands of organizations, from startups to Global 5000 companies and government organizations.


matchbox - Network boot and provision Container Linux clusters (e.g. etcd3, Kubernetes, more)

  •    Go

matchbox is a service that matches bare-metal machines (based on labels like MAC, UUID, etc.) to profiles that PXE boot and provision Container Linux clusters. Profiles specify the kernel/initrd, kernel arguments, iPXE config, GRUB config, Container Linux Config, or other configs a machine should use. Matchbox can be installed as a binary, RPM, container image, or deployed on a Kubernetes cluster and it provides an authenticated gRPC API for clients like Terraform.
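
A sketch of the matching idea (not matchbox's code, which prefers the most specific matching group; this simplified version takes the first match): a machine's reported labels are compared against each group's selector, and the matching group decides the boot profile.

```go
// Sketch of matchbox-style matching: a machine's labels (MAC, UUID, region,
// ...) are compared against each group's selector; a group matches when all
// of its selector pairs are present in the machine's labels.
package main

import "fmt"

type Group struct {
	Name     string
	Profile  string
	Selector map[string]string
}

func matches(labels map[string]string, g Group) bool {
	for k, v := range g.Selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	machine := map[string]string{ // hypothetical machine labels
		"mac":    "52:54:00:89:d8:10",
		"region": "us-east",
	}
	groups := []Group{
		{Name: "node1", Profile: "etcd3", Selector: map[string]string{"mac": "52:54:00:89:d8:10"}},
		{Name: "default", Profile: "worker", Selector: map[string]string{}},
	}
	for _, g := range groups {
		if matches(machine, g) {
			fmt.Println("boot profile:", g.Profile)
			break
		}
	}
}
```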

k8s-on-raspbian - Kubernetes on Raspbian (Raspberry Pi)

  •    Shell

This guide is part of a larger blog post: Build your own bare-metal ARM cluster. Once you're up and running, please share your clusters on Twitter with @alexellisuk.

dhcplb - dhcplb is Facebook's implementation of a load balancer for DHCP.

  •    Go

dhcplb is Facebook's implementation of a DHCP v4/v6 relayer with load balancing capabilities. Facebook currently uses it in production, and it is deployed at global scale across all of its data centers. Facebook uses DHCP to provide network configuration to bare-metal machines during the provisioning phase and to assign IPs to out-of-band interfaces.
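
A sketch of the balancing idea (not Facebook's implementation): hashing a stable client identifier such as the MAC address means retransmissions and renewals from the same client consistently reach the same DHCP server.

```go
// Sketch of DHCP load balancing by stable hashing: the relayer hashes a
// stable client identifier (here the MAC address) so the same client
// consistently lands on the same DHCP server.
package main

import (
	"fmt"
	"hash/fnv"
)

func pickServer(mac string, servers []string) string {
	h := fnv.New32a()
	h.Write([]byte(mac))
	return servers[h.Sum32()%uint32(len(servers))]
}

func main() {
	servers := []string{"10.1.0.1:67", "10.1.0.2:67"} // hypothetical DHCP servers
	fmt.Println(pickServer("52:54:00:12:34:56", servers))
}
```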

typhoon - Minimal and free Kubernetes distribution

  •    HCL

Typhoon is a minimal and free Kubernetes distribution. Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.

Kubernetes-GPU-Guide - This guide should help fellow researchers and hobbyists to easily automate and accelerate their deep learning training with their own Kubernetes GPU cluster

  •    Shell

This guide should help fellow researchers and hobbyists to easily automate and accelerate their deep learning training with their own Kubernetes GPU cluster. To that end, I will explain how to easily set up a GPU cluster on multiple Ubuntu 16.04 bare metal servers and provide some useful scripts and .yaml files that do the entire setup for you. By the way: if you need a Kubernetes GPU cluster for other reasons, this guide might be helpful to you as well.

voyager - ✈️️ Secure Ingress Controller for Kubernetes

  •    Go

Voyager is an HAProxy-backed secure L7 and L4 ingress controller for Kubernetes developed by AppsCode. It can be used with any Kubernetes cloud provider, including AWS, GCE, GKE, Azure, and ACS, as well as with bare metal Kubernetes clusters. Voyager provides L7 and L4 load balancing using a custom Kubernetes Ingress resource. It is built on top of HAProxy to support high availability, sticky sessions, and name- and path-based virtual hosting, and it also supports configurable application ports along with all the options available in a standard Kubernetes Ingress.
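
As a hedged illustration (Voyager's own resource lives in the voyager.appscode.com API group; this sketch uses the standard Kubernetes Ingress types from k8s.io/api instead), here is the shape of an Ingress that routes a host and path to a backend Service:

```go
// Sketch of the standard Kubernetes Ingress shape that an ingress controller
// such as Voyager turns into HAProxy configuration: host/path rules mapped
// to backend Services. Host, service name, and port are hypothetical.
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pathType := networkingv1.PathTypePrefix
	ing := networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "example.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "web-svc",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", ing)
}
```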

kubespray - Setup a kubernetes cluster

  •    Python

If you have questions, join us on the Kubernetes Slack, channel #kubespray. Note: Upstart/SysV init based OS types are not supported.

kube-cert-manager - Manage Let's Encrypt certificates for a Kubernetes cluster.

  •    Go

This is not an official Google project. The secrets created by the Kubernetes Certificate Manager can be used to configure any TLS terminating load balancer.
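
A sketch of how such a secret is consumed (assuming the secret's tls.crt and tls.key are mounted into the pod as files, the usual way Kubernetes exposes TLS secrets; the /etc/tls mount path is hypothetical):

```go
// Sketch of terminating TLS with a certificate from a Kubernetes TLS secret:
// the secret's tls.crt and tls.key, mounted at a hypothetical /etc/tls path,
// are loaded into a standard Go TLS server.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over TLS\n"))
	})
	// /etc/tls/tls.crt and /etc/tls/tls.key are hypothetical mount paths.
	log.Fatal(http.ListenAndServeTLS(":443", "/etc/tls/tls.crt", "/etc/tls/tls.key", nil))
}
```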

kubespray - Deploy a Production Ready Kubernetes Cluster

  •    Python

An ansible-playbook run may fail with an error probably pointing at a task that depends on a module present in requirements.txt (i.e. "unseal vault"). One way of solving this would be to uninstall the Ansible package and then install it via pip, but that is not always possible. A workaround consists of setting the ANSIBLE_LIBRARY and ANSIBLE_MODULE_UTILS environment variables to the ansible/modules and ansible/module_utils subdirectories of the pip packages' installation location, which can be found in the Location field of the output of pip show [package], before executing ansible-playbook.
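
A sketch of that workaround as a small wrapper (the environment variable names come from the text above; the pip-parsing helper is a hypothetical convenience, and plain shell exports work just as well):

```go
// Sketch of the workaround described above: discover the pip installation
// location of Ansible via `pip show ansible`, point ANSIBLE_LIBRARY and
// ANSIBLE_MODULE_UTILS at its modules/module_utils subdirectories, then run
// ansible-playbook with the remaining arguments.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func pipLocation(pkg string) (string, error) {
	out, err := exec.Command("pip", "show", pkg).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "Location:") {
			return strings.TrimSpace(strings.TrimPrefix(line, "Location:")), nil
		}
	}
	return "", fmt.Errorf("no Location field in `pip show %s` output", pkg)
}

func main() {
	loc, err := pipLocation("ansible")
	if err != nil {
		log.Fatal(err)
	}
	os.Setenv("ANSIBLE_LIBRARY", filepath.Join(loc, "ansible", "modules"))
	os.Setenv("ANSIBLE_MODULE_UTILS", filepath.Join(loc, "ansible", "module_utils"))

	cmd := exec.Command("ansible-playbook", os.Args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```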

ovn-kubernetes - Kubernetes integration for OVN

  •    Go

This document describes how to use Open Virtual Networking with Kubernetes 1.8.0 or later. It assumes that you have installed Open vSwitch by following INSTALL.rst or by using distribution packages such as .deb or .rpm. OVN provides network virtualization to containers. In the "overlay" mode, OVN can create a logical network amongst containers running on multiple hosts. In this mode, OVN programs the Open vSwitch instances running inside your hosts. These hosts can be bare-metal machines or vanilla VMs.

Traefik - A Modern Reverse Proxy

  •    Go

Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and a lot more) to manage its configuration automatically and dynamically.

Calico - A pure layer 3 approach for Virtual Networking for highly scalable data centers

  •    Python

Project Calico represents a new approach to virtual networking, based on the same scalable IP networking principles as the Internet. Unlike other virtual networking approaches, Calico does not use overlays, instead providing a pure Layer 3 approach to data center networking. Calico is simple to deploy and diagnose, provides a rich security policy, supports both IPv4 and IPv6 and can be used across a combination of bare-metal, VM and container workloads.

contour - Contour is a Kubernetes ingress controller for Lyft's Envoy proxy.

  •    Go

Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Unlike other Ingress controllers, Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile. Contour also introduces a new ingress API (IngressRoute) which is implemented via a Custom Resource Definition (CRD). Its goal is to expand upon the functionality of the Ingress API to allow for a richer user experience as well as solve shortcomings in the original design.

loadcat - NGINX load balancer configurator

  •    Go

Loadcat is an Nginx configurator that allows you to use Nginx as a load balancer. The project is inspired by the various Nginx load balancing tutorial articles available online and by the existence of Linode's load balancer service, NodeBalancers. So far the tool covers some of the HTTP and HTTPS load balancing features, such as SSL termination, adding servers on the fly, marking them as unavailable or backup as necessary, and setting their weights to distribute load fairly. Loadcat parses a TOML encoded configuration file; if one is not found, Loadcat will create one with some sane defaults. The location of the configuration file can be specified with the -config flag.
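
As an illustration of that configuration flow (a sketch using the widely used github.com/BurntSushi/toml library; the Config fields are hypothetical, not loadcat's actual settings):

```go
// Sketch of parsing a TOML config file selected by a -config flag, in the
// style described above.
package main

import (
	"flag"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Address string // hypothetical: address the configurator listens on
	DataDir string // hypothetical: where state is stored
}

func main() {
	path := flag.String("config", "loadcat.conf", "path to TOML config file")
	flag.Parse()

	var cfg Config
	if _, err := toml.DecodeFile(*path, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```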