humpback-center - Humpback Center provides cluster container scheduling services for the Humpback platform, acting as the cluster's central coordinator to manage container allocation across Groups.


Humpback Center provides cluster container scheduling services for the Humpback platform, acting as the cluster's central coordinator to manage container allocation across Groups.

https://github.com/humpback/humpback-center


Related Projects

humpback - Quickly build a lightweight Docker cloud for enterprise users.


Quickly build a lightweight Docker cloud for enterprise users. Single mode implements container management for a single group of hosts, providing container creation, container operations, renaming, upgrade and cloning, monitoring, and log output.

Nomad - Easily Deploy Applications at Any Scale

  •    Go

Nomad is a cluster manager designed for both long-lived services and short-lived batch processing workloads. Developers use a declarative job specification to submit work, and Nomad ensures constraints are satisfied and resource utilization is optimized by efficient task packing. Nomad supports all major operating systems and virtualized, containerized, or standalone applications.
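
As a hedged illustration of that declarative model, the sketch below submits a minimal Docker job to a Nomad agent over its HTTP API; the /v1/jobs endpoint is real, but the agent address and the pared-down job fields are simplified assumptions for the example.

```python
# Sketch: submit a minimal Docker job to a local Nomad agent.
# Assumes an agent on localhost:4646; the job below is a simplified
# example of a job spec, not a complete one.
import json
import urllib.request

job = {
    "Job": {
        "ID": "redis-example",
        "Name": "redis-example",
        "Datacenters": ["dc1"],
        "TaskGroups": [{
            "Name": "cache",
            "Count": 1,
            "Tasks": [{
                "Name": "redis",
                "Driver": "docker",
                "Config": {"image": "redis:7"},
                "Resources": {"CPU": 500, "MemoryMB": 256},
            }],
        }],
    }
}

req = urllib.request.Request(
    "http://localhost:4646/v1/jobs",
    data=json.dumps(job).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```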

kubernetes-vagrant-centos-cluster - Setting up a distributed Kubernetes cluster along with Istio service mesh locally with Vagrant and VirtualBox

  •    Shell

Sets up a Kubernetes cluster and Istio service mesh with a Vagrantfile, consisting of one master (which also serves as a node) and three nodes. You don't have to create complicated CA files or configuration, because etcd, the apiserver, the controller manager, and the scheduler are run directly on the hosts rather than in Docker containers.

Fenzo - Extensible Scheduler for Mesos Frameworks

  •    Java

Fenzo is a scheduler Java library for Apache Mesos frameworks that supports plugins for scheduling optimizations and facilitates cluster autoscaling. Apache Mesos frameworks match and assign resources to pending tasks. Fenzo is a plugin-based Java library that facilitates scheduling resources to tasks using a variety of possible scheduling objectives, such as bin packing, balancing across resource abstractions (such as AWS availability zones or data center racks), resource affinity, and task locality.
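
To illustrate the plugin-based objective idea in a language-neutral way (this is not Fenzo's actual Java API; all names here are hypothetical), a scheduler can score candidate hosts with an interchangeable fitness function such as a bin-packing objective:

```python
# Sketch of a pluggable scheduling objective, loosely modeled on the
# idea behind Fenzo's fitness calculators. All names are hypothetical.

def bin_packing_fitness(task_cpu, host_used, host_capacity):
    """Prefer hosts that end up most full, packing tasks tightly."""
    if host_used + task_cpu > host_capacity:
        return None  # host cannot fit the task
    return (host_used + task_cpu) / host_capacity

def schedule(task_cpu, hosts, fitness=bin_packing_fitness):
    """Pick the host scoring best under the plugged-in objective."""
    best_name, best_score = None, None
    for name, (used, cap) in hosts.items():
        score = fitness(task_cpu, used, cap)
        if score is not None and (best_score is None or score > best_score):
            best_name, best_score = name, score
    return best_name

# host name -> (CPU used, CPU capacity)
hosts = {"host-a": (6.0, 8.0), "host-b": (1.0, 8.0)}
print(schedule(1.5, hosts))  # bin packing prefers the fuller host-a
```

Swapping in a different fitness function (for example, one that spreads tasks across availability zones) changes the placement policy without touching the scheduling loop, which is the essence of the plugin approach.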

docker-redis-cluster - Dockerfile for Redis Cluster (redis 3.0+)

  •    Makefile

Docker image with Redis built and installed from source. The main use for this container is to test Redis Cluster code, for example in the https://github.com/Grokzen/redis-py-cluster repo.
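
As a hedged example of that testing workflow, a redis-py-cluster client can talk to the containerized cluster roughly as follows; the node port and the 2.x-style RedisCluster import follow that library's documented usage, but treat the specifics as assumptions.

```python
# Sketch: connect to a local Redis Cluster started from this image.
# Assumes nodes listen on 127.0.0.1:7000+ and that redis-py-cluster
# 2.x is installed (pip install redis-py-cluster).
from rediscluster import RedisCluster

startup_nodes = [{"host": "127.0.0.1", "port": "7000"}]
rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True)

rc.set("greeting", "hello from the cluster")
print(rc.get("greeting"))  # keys are routed to the right shard
```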

PiCluster - Manage Docker Containers

  •    Javascript

PiCluster is a simple way to manage Docker containers on multiple hosts. Docker Swarm wasn't a good fit, and Kubernetes was too difficult to install on ARM at the time. PiCluster only builds and runs images from Dockerfiles on the host specified in the config file. The software also works on regular x86 hardware and is not tied to ARM.

Redisson - Redis based In-Memory Data Grid for Java

  •    Java

Redisson - distributed Java objects and services (Set, Multimap, SortedSet, Map, List, Queue, BlockingQueue, Deque, BlockingDeque, Semaphore, Lock, AtomicLong, Map Reduce, Publish / Subscribe, Bloom filter, Spring Cache, Executor service, Tomcat Session Manager, Scheduler service, JCache API) on top of the Redis server. A rich Redis client.
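
Redisson's primitives are Java, but the underlying patterns are plain Redis. As a rough illustration of the kind of primitive its Lock provides, here is a minimal single-instance lock in Python with redis-py; this is a simplified sketch, not Redisson's actual implementation.

```python
# Sketch: a minimal Redis-backed lock (single instance, simplified).
# Redisson's lock adds reentrancy, watchdogs, and more; this only
# shows the core SET NX PX / check-and-delete idea.
import uuid
import redis

r = redis.Redis()

def acquire(name, ttl_ms=10_000):
    """Try to take the lock; return a token on success, else None."""
    token = str(uuid.uuid4())
    # SET key token NX PX ttl succeeds only if the key is absent.
    if r.set(name, token, nx=True, px=ttl_ms):
        return token
    return None

# Release atomically, and only if we still hold the lock.
RELEASE = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def release(name, token):
    return r.eval(RELEASE, 1, name, token)

tok = acquire("jobs:lock")
if tok:
    try:
        print("holding the lock")  # critical section
    finally:
        release("jobs:lock", tok)
```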

ofelia - A docker job scheduler (aka. crontab for docker)

  •    Go

Ofelia is a modern, low-footprint job scheduler for Docker environments, written in Go. Ofelia aims to be a replacement for the old-fashioned cron. It has been a long time since cron was released, actually more than 28 years. The world has changed a lot, especially since the Docker revolution. Vixie's cron works great, but it's not extensible and it's hard to debug when something goes wrong.

Spilo - Highly available elephant herd: HA PostgreSQL cluster using Docker

  •    Python

Spilo is a Docker image that provides PostgreSQL and Patroni bundled together. Patroni is a template for PostgreSQL HA. Multiple Spilos can form a resilient, highly available PostgreSQL cluster. For this, you'll need to start all participating Spilos with identical etcd addresses and cluster names.

sparrow - Sparrow scheduling platform (U.C. Berkeley).

  •    Python

Sparrow is a high-throughput, low-latency, fault-tolerant distributed cluster scheduler. Sparrow is designed for applications that require frequent resource allocations for very short jobs, such as analytics frameworks. Sparrow schedules from a distributed set of schedulers that maintain no shared state. Instead, to schedule a job, a scheduler obtains instantaneous load information by sending probes to a subset of worker machines. The scheduler places the job's tasks on the least loaded of the probed workers. This technique allows Sparrow to schedule in milliseconds, two orders of magnitude faster than existing approaches. Sparrow also handles failures: if a scheduler fails, a client simply directs scheduling requests to an alternate scheduler. To read more about Sparrow, check out our paper.
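
The probe-based approach can be sketched in a few lines of Python; this is a toy model of the sampling idea described above, not Sparrow's actual implementation (which adds late binding on top), and the two-probes-per-task ratio is an assumption for the example.

```python
# Toy sketch of Sparrow-style batch sampling: probe a random subset
# of workers for their queue lengths and place the job's tasks on
# the least loaded of the probed workers.
import random

def schedule_job(num_tasks, workers, probes_per_task=2):
    """workers maps worker name -> queue length (the probe response)."""
    sample_size = min(num_tasks * probes_per_task, len(workers))
    probed = random.sample(list(workers), sample_size)
    probed.sort(key=lambda w: workers[w])  # least loaded first
    return probed[:num_tasks]              # one task per chosen worker

workers = {f"w{i}": random.randint(0, 10) for i in range(100)}
print(schedule_job(num_tasks=4, workers=workers))
```

Because no scheduler holds shared state, any scheduler can run this placement independently, which is why failover is as simple as pointing the client at an alternate scheduler.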

ksync - Sync files between your local system and a kubernetes cluster.

  •    Go

ksync speeds up developers who build applications for Kubernetes. It transparently updates containers running on the cluster from your local checkout. This enables developers to use their favorite IDEs, such as Atom or Sublime Text, to work from inside a cluster instead of from outside it. There is no reason to wait minutes to test code changes when you can see the results in seconds. You can also download the latest release and install it yourself.

Cascading - Data Processing Workflows on Hadoop

  •    Java

Cascading is a Data Processing API, Process Planner, and Process Scheduler used for defining and executing complex, scale-free, and fault tolerant data processing workflows on an Apache Hadoop cluster. It is a thin Java library and API that sits on top of Hadoop's MapReduce layer and is executed from the command line like any other Hadoop application.

evQueue - Job scheduler and queueing engine

  •    C++

evQueue is an open source job scheduler and queueing engine. It features an event-driven C++ engine and a PHP / MySQL web control interface that provides task monitoring and creation. It provides both simple task execution and complex task chaining (workflows) using an easy-to-use drag & drop web interface. Workflow descriptions include output-to-input linking, conditions, loops... Queue management provides an easy way to parallelize tasks and control resources.
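
As a generic illustration of output-to-input chaining (evQueue itself describes workflows in XML and runs them in its C++ engine; this Python model is only conceptual), a linear chain where each task's output feeds the next task's input could look like this:

```python
# Toy sketch of output-to-input task chaining, the core idea behind
# workflow descriptions in engines like evQueue. Purely conceptual.

def fetch(_):
    return "raw data"

def transform(payload):
    return payload.upper()

def load(payload):
    print(f"loading: {payload}")
    return "done"

def run_workflow(tasks, initial=None):
    """Run tasks in order, piping each task's output into the next."""
    value = initial
    for task in tasks:
        value = task(value)
    return value

print(run_workflow([fetch, transform, load]))
```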

kubeadm-ha - Kubernetes high availability deploy based on kubeadm (for v1

  •    Smarty

kubeadm-ha deploys the standard control-plane components across three masters:

kube-apiserver: exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is designed to scale horizontally, that is, by deploying more instances.

etcd: Kubernetes' backing store; all cluster data is stored here. Always have a backup plan for etcd's data in your Kubernetes cluster.

kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.

kube-controller-manager: runs controllers, the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.

kubelet: the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file).

kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.

For high availability, keepalived configures a virtual IP address (192.168.20.10) that points to k8s-master01, k8s-master02, and k8s-master03, and an nginx service load-balances across the three masters' apiservers. The other nodes' Kubernetes services connect to the keepalived virtual IP (192.168.20.10) on the nginx-exposed port (16443) to communicate with the master cluster's apiservers.

Telepresence - Local development against a remote Kubernetes or OpenShift cluster

  •    Python

Telepresence substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.

postgres-operator - Postgres operator creates and manages PostgreSQL clusters running in Kubernetes

  •    Go

The operator watches additions, updates, and deletions of PostgreSQL cluster manifests and changes the running clusters accordingly. For example, when a user submits a new manifest, the operator fetches that manifest and spawns a new Postgres cluster along with all necessary entities such as Kubernetes StatefulSets and Postgres roles. See this Postgres cluster manifest for settings that a manifest may contain. The operator also watches updates to its own configuration and alters running Postgres clusters if necessary. For instance, if a pod's Docker image is changed, the operator carries out a rolling update, re-spawning the pods of each StatefulSet it manages one by one with the new Docker image.
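
The watch-and-reconcile loop at the heart of such an operator can be sketched with the official Kubernetes Python client; note the real operator is written in Go, and the CRD group and plural below are assumptions for illustration.

```python
# Sketch of an operator-style watch loop using the official Kubernetes
# Python client (pip install kubernetes). The group/plural names are
# assumed for illustration; the real operator is a Go program.
from kubernetes import client, config, watch

config.load_kube_config()  # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object,
                      group="acid.zalan.do",
                      version="v1",
                      plural="postgresqls"):
    manifest = event["object"]
    name = manifest["metadata"]["name"]
    if event["type"] == "ADDED":
        print(f"would spawn cluster {name}: StatefulSet, roles, ...")
    elif event["type"] == "MODIFIED":
        print(f"would reconcile {name} toward its updated manifest")
    elif event["type"] == "DELETED":
        print(f"would tear down cluster {name}")
```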

kubeadm-dind-cluster - A Kubernetes multi-node test cluster based on kubeadm

  •    Shell

A Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes. Based on kubeadm and DIND (Docker in Docker). Supports both local workflows and workflows utilizing powerful remote machines or cloud instances for building Kubernetes, starting test clusters, and running e2e tests.

docker-elasticsearch-kubernetes - Ready to use Elasticsearch + Kubernetes discovery plug-in Docker image


A lean, ready-to-use Elasticsearch Docker image for use within a Kubernetes cluster. See pires/kubernetes-elasticsearch-cluster for instructions on how to run, scale, and use Elasticsearch on Kubernetes.

adop-docker-compose - Talk to us on Gitter: https://gitter.im/Accenture/ADOP

  •    Shell

The DevOps Platform is a tools environment for continuously testing, releasing, and maintaining applications. Reference code, delivery pipelines, automated testing, and environments can be loaded in via the concept of Cartridges. The platform runs on a Docker container cluster, so it can be stood up for evaluation purposes on a single server using local storage, or stood up as a multi-data-centre cluster with distributed network storage. It will also run anywhere that Docker runs.

autodock - Autodock. The docker container automation tool.

  •    Python

All the deliciousness of a crazy KFC combination sandwich...none of the shame. Autodock is a Docker automation tool that helps you spin up Docker containers faster than ever. It automatically sorts the servers in your Docker cluster by lowest load, then distributes the containers you want to create among them. After bootstrapping the containers with Paramiko and Salt, it saves this information to the etcd cluster.