
cilium - HTTP, gRPC, and Kafka Aware Security and Networking for Containers with BPF and XDP

  •    Go

Cilium is open source software for transparently providing and securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services, and at Layer 7 to protect and secure the use of modern application protocols such as HTTP, gRPC, and Kafka. Cilium is integrated into common orchestration frameworks such as Kubernetes and Mesos. At the foundation of Cilium is a Linux kernel technology called BPF, which supports dynamic insertion of BPF bytecode into the kernel at various integration points such as network I/O, application sockets, and tracepoints, in order to implement security, networking, and visibility logic. BPF is highly efficient and flexible. To learn more about BPF, read our extensive BPF and XDP Reference Guide.

kube-ovn - An OVN-based Kubernetes Network Fabric for Enterprises

  •    Go

Kube-OVN integrates OVN-based network virtualization with Kubernetes, offering an advanced container network fabric for enterprises. The switch, router, and firewall shown in the diagram are all distributed across all nodes, so there is no single point of failure in the cluster network.

sdn-handbook - SDN Handbook (SDN网络指南)

  •    C

SDN (Software Defined Networking), one of today's most important emerging technologies, has by now gained broad consensus. Although materials and books on SDN are abundant, getting started with and learning SDN remains difficult. This book collects basic theory and practical case-study notes from SDN practice, in the hope of inspiring readers; attention and contributions are welcome.

multus-cni - Multi-homed pod cni

  •    Go

Please check the CNI documentation for more information on container networking. Multus may be deployed as a DaemonSet; this guide deploys it along with Flannel. Flannel is deployed as a pod-to-pod network that serves as the "default network". Each additional network attachment is made in addition to this default network.

kube-spawn - A tool for creating multi-node Kubernetes clusters on a Linux machine using kubeadm & systemd-nspawn

  •    Go

kube-spawn is a tool for creating multi-node Kubernetes (>= 1.8) clusters on a single Linux machine. It was created mostly for developers of Kubernetes, but it is also a Certified Kubernetes Distribution and is therefore well suited to running and testing deployments locally. It attempts to mimic production setups by using OS containers to set up nodes.

circuit - Container Network Management

  •    Go

Circuit manages networks for runc. Circuit has been designed for flexibility; for example, the controller is designed to be replaceable. Circuit leverages CNI to set up networking using various plugins such as bridge, ptp, etc. You can define multiple CNI networks, connect and disconnect containers, load balance, and more.

bond-cni - Bond-cni provides fail-over and high availability of networking in cloud-native orchestration

  •    Go

It is recommended to build this plugin with Go 1.7.5, which has been fully tested. Note: in the project's example configuration, the required "ipam" section is provided implicitly by the flannel plugin.

cni-benchmarks - A simple program to benchmark various container networking (CNI) plugins.

  •    Go

This does not benchmark network speed; it benchmarks the creation, setup, and deletion of networks in the network namespace. The benchmarks are run with go. You will need to use sudo, since the benchmarks create network namespaces.
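
The general shape of such a benchmark can be sketched with Go's standard benchmarking machinery. This is not the project's actual code: the setup/teardown body is a stand-in, since creating real network namespaces requires root.

```go
// Sketch of a setup/teardown benchmark in the style of cni-benchmarks.
// setupAndTeardown is a placeholder for the real work: create a netns,
// run the plugin's ADD, run DEL, delete the netns.
package main

import (
	"fmt"
	"testing"
)

func setupAndTeardown() {
	// placeholder: netns create -> CNI ADD -> CNI DEL -> netns delete
}

func main() {
	// testing.Benchmark picks an iteration count N and times the loop.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			setupAndTeardown()
		}
	})
	fmt.Println(res.N > 0) // true once the benchmark has run
}
```

In the real benchmarks each iteration exercises a full network lifecycle, which is why they must run as root.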


terway - CNI plugin for alibaba cloud VPC/ENI

  •    Go

After setting up the Kubernetes cluster, change the iptables FORWARD chain's default policy to ACCEPT on every node of the cluster: iptables -P FORWARD ACCEPT. Make sure the cluster is up and healthy with kubectl get cs.

coil - CNI IPAM + intra-node routing plugin in favor of UNIX philosophy

  •    Go

Coil is a CNI plugin that automates IP address management (IPAM) and programs intra-node Pod routing for Kubernetes. Coil is designed in the spirit of the UNIX philosophy: it is not tightly integrated with routing daemons such as BIRD, and it does not implement Kubernetes Network Policies either.

linen-cni - A CNI plugin designed for overlay networks with Open vSwitch

  •    Go

A CNI plugin designed for overlay networks with Open vSwitch. Linen provides a convenient way to set up networking between pods across nodes. To support multi-host overlay networking and large-scale isolation, VXLAN tunnel end points (VTEPs) are used instead of GRE. Linen creates an OVS bridge and adds it as a port to the Linux bridge.

CNI-Genie - CNI-Genie for choosing pod network of your choice during deployment time

  •    Go

Without CNI-Genie, the orchestrator is bound to a single CNI plugin. In the case of Kubernetes, for example, kubelet is bound to the single CNI plugin passed to it at start. CNI-Genie allows multiple CNI plugins to co-exist at runtime.

ctnr - rootless runc-based container engine

  •    Go

ctnr is a CLI built on top of runc to manage and build OCI images as well as containers on Linux. ctnr aims to ease system container creation and execution as an unprivileged user, and it is also a tool for experimenting with runc features. Container networking is limited: with plain ctnr/runc, only the host network can be used, and the standard CNI plugins require root privileges. One workaround is to map ports on the host network using PRoot, accepting poor performance. A better solution is slirp4netns, which efficiently emulates the TCP/IP stack in a user namespace; it can be used with ctnr via the slirp-cni-plugin. Once container initialization is also moved into a user namespace with slirp, the standard CNI plugins can be used again. For instance, the bridge plugin can be used to achieve communication between containers (see user-mode networking).

api-cni-cleanup - Kubernetes CNI cleanner

  •    Go

This application must run inside a Kubernetes cluster. It is recommended to run it as a DaemonSet in order to access all nodes where the CNI files are located.

ovs-cni - Open vSwitch CNI plugin

  •    Shell

This plugin allows users to define Kubernetes networks on top of Open vSwitch bridges available on the nodes. IPAM is currently not supported. There is no scheduling involved; the desired bridges must be pre-created on all nodes. Also, ovs-cni does not configure bridges; it is up to the user to connect them to L2, L3, or an overlay network. Finally, please note that Open vSwitch must be installed and running on the host. In order to use this plugin, Multus must be installed on all hosts and a NetworkAttachmentDefinition CRD created.

midonet-cni - A CNI plugin written in Go which makes MidoNet talk to Kubernetes, with support for multiple namespaces

  •    Go

A CNI plugin written in Go which makes MidoNet talk to Kubernetes, with support for multiple namespaces.
