CoreOS - Linux for Massive Server Deployments

  •        1403

CoreOS is essentially just the Linux kernel plus systemd. It is not intended for desktop PCs, laptops, or tablets; it is designed to run hundreds of thousands of servers. Its automatic update mechanism is modeled on Google's Chrome OS: the operating system updates itself every few weeks. It provides built-in service discovery and configuration sharing through etcd, which implements the Raft distributed consensus algorithm. CoreOS is made for companies building platforms.

https://coreos.com/
https://github.com/coreos
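
CoreOS's service discovery and configuration sharing are backed by etcd, its distributed key-value store. The Go sketch below is a rough illustration (not taken from the CoreOS docs) that uses the etcd clientv3 package to register a service address and then discover all instances under a key prefix; the endpoint address and key names are assumptions for the example.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to an etcd endpoint; 127.0.0.1:2379 is an assumed local default.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Publish this machine's service address under a well-known key...
	if _, err := cli.Put(ctx, "/services/web/host1", "10.0.0.12:8080"); err != nil {
		panic(err)
	}

	// ...and let other machines discover every instance by prefix.
	resp, err := cli.Get(ctx, "/services/web/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}
```

A watch on the same prefix would let other nodes react as service instances come and go.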


Related Projects

RancherOS - Tiny Linux distro that runs the entire OS as Docker containers

  •    Go

RancherOS is a minimalist Linux distribution built for running Docker containers. It runs Docker directly on top of the kernel and delivers Linux services as containers, including only the services needed to run Docker. RancherOS reduces the hassle of updating, patching, and maintaining your container host operating system.

tack - Terraform module for creating a Kubernetes cluster running on Container Linux by CoreOS in an AWS VPC

  •    HCL

An opinionated Terraform module for creating a highly available Kubernetes cluster running on Container Linux by CoreOS (any channel) in an AWS Virtual Private Cloud (VPC). With the prerequisites installed, running make all will spin up a default cluster; and because it is based on Terraform, customization is much easier than with CloudFormation. The default configuration includes Kubernetes add-ons: DNS, Dashboard, and UI.

Dragonfly - Dragonfly is an intelligent P2P based file distribution system.

  •    Java

Dragonfly is an intelligent P2P-based file distribution system. It solves problems such as low efficiency, low success rates, and wasted network bandwidth in large-scale file distribution scenarios such as application deployment, large-scale cache file distribution, data file distribution, and image distribution. At Alibaba, the system handles 2 billion transfers and distributes 3.4 PB of data every month, making it one of the company's most important pieces of infrastructure, with reliability up to 99.9999%. DevOps benefits greatly from container technologies, but they also bring challenges, particularly the efficiency of image distribution when many applications need images at the same time. Dragonfly works extremely well with both Docker and Pouch, and it is compatible with other container technologies without any modification of the container engine.

Admiral - Container management solution with an emphasis on modeling containerized applications and providing placement based on dynamic policy allocation

  •    Java

Admiral™ is a highly scalable and very lightweight container management platform for deploying and managing container-based applications. It is designed to have a small footprint and to boot extremely quickly. Admiral™ provides automated deployment and lifecycle management of containers. It can help reduce complexity and deliver benefits including simplified and automated application delivery, optimized resource utilization, business governance through the application of business policies, and overall data center integration.

Moby Project - An open framework to assemble specialized container systems

  •    Go

Moby is an open-source project created by Docker to advance the software containerization movement. It provides a “Lego set” of dozens of components, the framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.


longshoreman - Automated deployment with Docker.

  •    Javascript

Longshoreman automates application deployment using Docker. Just create a Docker repository (or use a hosted service), configure the cluster on AWS or DigitalOcean (or whatever you like), and deploy applications using a Heroku-like CLI tool. We created Longshoreman because we love using Docker but were frustrated with the lack of production-ready deployment options available at the time. We looked closely at Deis, Flynn, Dokku, and others, but they either did not meet our requirements or were explicitly marked as not ready for production. We were extremely impressed by Deis in particular and its use of bleeding-edge technologies like CoreOS, etcd, and systemd. The biggest shortcoming we found with Deis is that it rebuilds Dockerfiles from scratch for each deploy (as far as we know).

fleet - fleet ties together systemd and etcd into a distributed init system

  •    Go

fleet is no longer developed or maintained by CoreOS. After February 1, 2018, the fleet container image will continue to be available from the CoreOS Quay registry, but it will not be shipped as part of Container Linux; CoreOS instead recommends Kubernetes for all clustering needs. fleet ties together systemd and etcd into a simple distributed init system. Think of it as an extension of systemd that operates at the cluster level instead of the machine level.

Rancher - Complete container management platform

  •    Go

Rancher is an open source project that provides a complete platform for operating Docker in production. It provides infrastructure services such as multi-host networking, global and local load balancing, and volume snapshots. It integrates native Docker management capabilities such as Docker Machine and Docker Swarm. It offers a rich user experience that enables devops admins to operate Docker in production at large scale.

infrakit - A toolkit for creating and managing declarative, self-healing infrastructure.

  •    Go

InfraKit is a toolkit for infrastructure orchestration. With an emphasis on immutable infrastructure, it breaks down infrastructure automation and management processes into small, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications. InfraKit therefore provides infrastructure support for higher-level container orchestration systems and can make your infrastructure self-managing and self-healing. In the project's demo video, InfraKit is used to build a custom Linux operating system (based on LinuxKit). A cluster of virtual machine instances is then deployed on a local Mac laptop using the Xhyve hypervisor (HyperKit), with three servers booting up in seconds. Later, after the custom OS image is updated with a new public key, InfraKit detects the change and orchestrates a rolling update of the nodes. The same OS image is then deployed to a bare-metal ARM server running on Packet.net, where the server uses custom iPXE boot directly from the localhost. The demo illustrates key concepts and components in InfraKit and shows how it can implement an integrated workflow from custom OS image creation to cluster deployment and Day N management. The entire demo is published as a playbook, and you can create your own playbooks too.
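
The idea of "actively ensuring the infrastructure state matches the user's specifications" is essentially a declare-and-reconcile loop. The Go sketch below is a conceptual illustration of that pattern with an in-memory stand-in for an instance plugin; none of these type or method names are InfraKit's real API.

```go
package main

import "fmt"

// Spec is a minimal stand-in for a declared infrastructure state.
// All names here are illustrative only, not InfraKit's actual interfaces.
type Spec struct {
	GroupID string
	Size    int // desired number of instances
}

// InstancePlugin abstracts how instances are observed, created, and destroyed,
// mirroring the "small, pluggable component" idea.
type InstancePlugin interface {
	Describe(groupID string) []string
	Provision(groupID string) string
	Destroy(id string)
}

// fakePlugin keeps instances in memory so the sketch runs without a real provider.
type fakePlugin struct {
	next      int
	instances map[string][]string
}

func (p *fakePlugin) Describe(g string) []string { return p.instances[g] }

func (p *fakePlugin) Provision(g string) string {
	p.next++
	id := fmt.Sprintf("node-%d", p.next)
	p.instances[g] = append(p.instances[g], id)
	return id
}

func (p *fakePlugin) Destroy(id string) {
	for g, ids := range p.instances {
		for i, v := range ids {
			if v == id {
				p.instances[g] = append(ids[:i], ids[i+1:]...)
				return
			}
		}
	}
}

// reconcile drives observed state toward the declared spec: this active
// convergence is what makes the infrastructure self-managing and self-healing.
func reconcile(spec Spec, p InstancePlugin) {
	have := p.Describe(spec.GroupID)
	for i := len(have); i < spec.Size; i++ {
		fmt.Println("provisioned", p.Provision(spec.GroupID))
	}
	if len(have) > spec.Size {
		for _, id := range have[spec.Size:] {
			fmt.Println("destroyed", id)
			p.Destroy(id)
		}
	}
}

func main() {
	spec := Spec{GroupID: "workers", Size: 3}
	plugin := &fakePlugin{instances: map[string][]string{}}

	reconcile(spec, plugin)      // converge to the spec: 3 instances
	plugin.Destroy("node-2")     // simulate an instance disappearing
	reconcile(spec, plugin)      // the next pass heals the group back to 3
}
```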

deploykit - A toolkit for creating and managing declarative, self-healing infrastructure.

  •    Go

deploykit is the InfraKit toolkit described above, published under the deploykit repository name: a toolkit for creating and managing declarative, self-healing infrastructure, built from small, pluggable components that actively reconcile infrastructure state against the user's specifications.

Apache Karaf - OSGi distribution for server-side applications

  •    Java

Karaf Container is a modern, polymorphic container: a lightweight, powerful, and enterprise-ready container powered by OSGi. Polymorphic means that Karaf can host any kind of application: OSGi, Spring, WAR, and much more. It uses either the Apache Felix or Eclipse Equinox OSGi framework, providing additional features on top of the framework.

Portainer - Simple management UI for Docker

  •    Javascript

Portainer is a lightweight management UI which allows you to easily manage your different Docker environments (Docker hosts or Swarm clusters). Portainer is meant to be as simple to deploy as it is to use. It consists of a single container that can run on any Docker engine (deployed as a Linux container or a Windows native container). It allows you to manage your Docker containers, images, volumes, networks, and more. It is compatible with the standalone Docker engine and with Docker Swarm mode.

gradle-tomcat-plugin - Gradle plugin supporting deployment of your web application to an embedded Tomcat web container

  •    Groovy

The plugin provides deployment of web applications to an embedded Tomcat web container in any given Gradle build. It extends the War plugin. At the moment, Tomcat versions 6.0.x, 7.0.x, 8.0.x, 8.5.x, and 9.0.x are supported. The typical use case for this plugin is to support deployment during development. The plugin allows for rapid web application development thanks to the container's fast startup times. Gradle starts the embedded container in the same JVM; currently, the container cannot be forked as a separate process. This plugin also can't deploy a WAR file to a remote container. If you are looking for that capability, please have a look at the Cargo plugin instead.

Harbor - An enterprise-class container registry server based on Docker Distribution

  •    Go

Project Harbor is an enterprise-class registry server that stores and distributes Docker images. It extends the open source Docker Distribution by adding the functionalities usually required by an enterprise, such as security, identity and management. As an enterprise private registry, Harbor offers better performance and security.

Docker - The Linux container engine

  •    Go

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. Common use cases for Docker include automating the packaging and deployment of applications; creating lightweight, private PaaS environments; automated testing and continuous integration/deployment; and deploying and scaling web apps, databases, and backend services.
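
Beyond the docker CLI, the engine can also be driven programmatically. The sketch below uses the Docker Engine SDK for Go to pull an image and then create and start a container; the exact option types vary between SDK versions, and the image and command are arbitrary examples rather than anything prescribed by Docker.

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker daemon using environment defaults (DOCKER_HOST, etc.).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Pull the image so ContainerCreate can find it locally.
	// (In newer SDK versions the pull/start option types live in other packages.)
	reader, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	io.Copy(os.Stdout, reader) // wait for the pull to finish
	reader.Close()

	// Create a throwaway container that prints a message, then start it.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "alpine:latest",
		Cmd:   []string{"echo", "hello from a container"},
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
}
```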

nscale - Deployment just got easy

  •    Javascript

nscale is an open toolkit supporting configuration, build and deployment of connected container sets. nscale is ideally used to support the development and operation of microservice based systems. Unlike other distributed container management systems, nscale aims to do the simplest thing that could possibly work. nscale aims to be simple to setup, configure and operate.

crfs - CRFS: Container Registry Filesystem

  •    Go

CRFS is a read-only FUSE filesystem that lets you mount a container image, served directly from a container registry (such as gcr.io), without pulling it all locally first. Go's continuous build system tests Go on many operating systems and architectures, using a mix of containers (mostly for Linux) and VMs (for other operating systems). We prioritize fast builds, targeting a 5-minute turnaround for pre-submit tests when testing new changes. For isolation and other reasons, we run all our containers in single-use fresh VMs. Generally our containers do start quickly, but some of them are very large and take a long time to start. To work around that, we've automated the creation of VM images in which our heavy containers are pre-pulled. This is all a silly workaround. It'd be much better if we could just read the bytes over the network from the right place, without all the hoops.

openedge - Extend cloud computing, data and service seamlessly to edge devices.

  •    Go

OpenEdge is an open edge computing framework that extends cloud computing, data, and services seamlessly to edge devices. It can provide temporary offline and low-latency computing services, including device connectivity, message routing, remote synchronization, function computing, video access pre-processing, AI inference, and more. Combining OpenEdge with the Cloud Management Suite of BIE (Baidu IntelliEdge) enables cloud-based management and application distribution, allowing applications to run on edge devices and covering a wide range of edge computing scenarios. Architecturally, OpenEdge follows a modular and containerized design. It is split into multiple modules, each a separate, independent component, so users can deploy only what they need. OpenEdge also uses containerization to build its images; the cross-platform nature of Docker ensures that the runtime environment is consistent across operating systems. In addition, OpenEdge isolates and limits container resources, allocating the CPU, memory, and other resources of each running instance precisely to improve the efficiency of resource utilization.

humpback - Quickly build a lightweight Docker cloud for enterprise users.

  •    

Humpback lets you quickly build a lightweight Docker cloud for enterprise users. In single mode, it implements container management for a single group of hosts, providing container creation, container operations, container renaming, container upgrade and cloning, container monitoring, and container log output.

Distro Astro - Linux Distribution for Astronomers

  •    C

Distro Astro has features for almost every astronomy use case, from observatories, planetariums, and professional researchers to astrophotographers and amateur enthusiasts. The INDI Library built into Distro Astro provides telescope control for the most common telescopes from Meade, Celestron, Orion, and other major telescope brands. Nightshade is advanced planetarium software for fish-eye dome projections, developed by planetarium provider Digitalis Education.