zero-to-jupyterhub-k8s - Resources for deploying JupyterHub to a Kubernetes Cluster

This project is under active development and subject to change. This repo contains resources, such as Helm charts and the Zero to JupyterHub Guide, that help you deploy JupyterHub on Kubernetes.

https://zero-to-jupyterhub.readthedocs.io
https://github.com/jupyterhub/zero-to-jupyterhub-k8s

Related Projects

kubespawner - Kubernetes spawner for JupyterHub

  •    Python

The kubespawner (also known as JupyterHub Kubernetes Spawner) enables JupyterHub to spawn single-user notebook servers on a Kubernetes cluster. You can read a list of all the spawner options available on ReadTheDocs.
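
A minimal sketch of what wiring KubeSpawner into JupyterHub looks like, assuming you manage your own jupyterhub_config.py; the namespace, image, and resource values below are illustrative placeholders, not project defaults:

    # jupyterhub_config.py -- minimal KubeSpawner sketch (illustrative values)
    c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"
    c.KubeSpawner.namespace = "jhub"                      # hypothetical namespace
    c.KubeSpawner.image = "jupyter/base-notebook:latest"  # single-user server image
    c.KubeSpawner.cpu_limit = 1                           # optional resource caps
    c.KubeSpawner.mem_limit = "1G"

In practice the Zero to JupyterHub Helm chart generates this kind of configuration for you; writing it by hand is mainly useful for custom deployments.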

jupyterhub-deploy-docker - Reference deployment of JupyterHub with docker

  •    Python

jupyterhub-deploy-docker provides a reference deployment of JupyterHub, a multi-user Jupyter Notebook environment, on a single host using Docker. This deployment is NOT intended for a production environment. It is a reference implementation that does not meet traditional requirements for availability or scalability.

binderhub - Deterministically build docker images from a git repository + commit

  •    Python

BinderHub allows you to BUILD and REGISTER a Docker image from a GitHub repository and then CONNECT it with JupyterHub, giving users a public IP address where they can interact with the code and environment inside a live JupyterHub instance. You can select a specific branch name, commit, or tag to serve. BinderHub is built with Python, Kubernetes, Tornado, and traitlets, so it should be a familiar technical foundation for Jupyter developers.

jupyterhub - Multi-user server for Jupyter notebooks

  •    Python

With JupyterHub you can create a multi-user Hub which spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server. Project Jupyter created JupyterHub to support many users. The Hub can offer notebook servers to a class of students, a corporate data science workgroup, a scientific research project, or a high performance computing group.
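
As a rough illustration of how the Hub is configured, here is a minimal jupyterhub_config.py sketch; the port and default URL are arbitrary choices, and the stock authenticator and spawner (local system users, locally spawned servers) are assumed:

    # jupyterhub_config.py -- minimal single-machine sketch (assumes the default
    # PAM authenticator and locally spawned notebook servers)
    c.JupyterHub.bind_url = "http://:8000"  # where the proxy listens (arbitrary port)
    c.Spawner.default_url = "/lab"          # optional: land users in JupyterLab

Started with jupyterhub -f jupyterhub_config.py, the Hub then spawns and proxies one notebook server per authenticated user.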


dockerspawner - Spawns JupyterHub single user servers in Docker containers

  •    Python

DockerSpawner enables JupyterHub to spawn single user notebook servers in Docker containers. JupyterHub 0.7 or above is required, which also means Python 3.3 or above.
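
A hedged configuration sketch, assuming Docker is reachable from the Hub host and that a docker-stacks image is an acceptable single-user server; the image choice is illustrative, and option names can vary slightly between DockerSpawner versions:

    # jupyterhub_config.py -- DockerSpawner sketch (image choice is illustrative)
    c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
    c.DockerSpawner.image = "jupyter/base-notebook"  # single-user server image
    c.DockerSpawner.remove = True                    # remove containers when servers stop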

docker-stacks - Ready-to-run Docker images containing Jupyter applications

  •    Dockerfile

Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter applications and interactive computing tools. The images may help you get started if you have Docker installed, know which Docker image you want to use, and want to launch a single Jupyter Notebook server in a container.
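
For instance, a hedged sketch of launching one of these images from Python with the Docker SDK (pip install docker); the scipy-notebook image and host port 8888 are illustrative choices, and the same thing is usually done directly with docker run -p 8888:8888 jupyter/scipy-notebook:

    # Launch a docker-stacks image from Python via the Docker SDK (illustrative)
    import time
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "jupyter/scipy-notebook",   # any docker-stacks image works here
        ports={"8888/tcp": 8888},   # publish the notebook server port
        detach=True,
    )
    time.sleep(5)  # give the server a moment to start before reading its log
    # The startup log prints the URL (with token) to open in a browser.
    print(container.logs().decode())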

kubeflow - Machine Learning Toolkit for Kubernetes

  •    Python

The Kubeflow project is dedicated to making machine learning on Kubernetes simple, portable, and scalable. Our goal is not to recreate other services, but to provide a straightforward way to train, test, and deploy best-of-breed open-source predictive models to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow. This document details the steps needed to run the Kubeflow project in any environment in which Kubernetes runs.

awesome-jupyter - A curated list of awesome Jupyter projects, libraries and resources

  •    

A curated list of awesome Jupyter projects, libraries and resources. Jupyter is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Your contributions are always welcome! Please take a look at the contribution guidelines first.

kubeflow - Machine Learning Toolkit for Kubernetes

  •    Go

The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Kubeflow is a platform for data scientists who want to build and experiment with ML pipelines. Kubeflow is also for ML engineers and operational teams who want to deploy ML systems to various environments for development, testing, and production-level serving.

oauthenticator - OAuth + JupyterHub Authenticator = OAuthenticator

  •    Python

OAuthenticator provides OAuth-based authentication for JupyterHub. A generic implementation, which you can use with any OAuth provider, is also available. For an example Docker image using OAuthenticator, see the examples directory.
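
A sketch of how the generic authenticator is typically wired into jupyterhub_config.py; the client credentials and callback URL below are placeholders you would obtain from your OAuth provider:

    # jupyterhub_config.py -- GenericOAuthenticator sketch (placeholder credentials)
    from oauthenticator.generic import GenericOAuthenticator

    c.JupyterHub.authenticator_class = GenericOAuthenticator
    c.GenericOAuthenticator.client_id = "my-client-id"          # placeholder
    c.GenericOAuthenticator.client_secret = "my-client-secret"  # placeholder
    c.GenericOAuthenticator.oauth_callback_url = "https://hub.example.com/hub/oauth_callback"

Provider-specific authenticators (for GitHub, Google, GitLab, and others) follow the same pattern.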

nbgrader - A system for assigning and grading notebooks

  •    HTML

A system for assigning and grading Jupyter notebooks. Documentation can be found on Read the Docs.
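
As a rough sketch, course-level settings live in an nbgrader_config.py alongside the course files; the course id below is a placeholder:

    # nbgrader_config.py -- minimal sketch (course id is a placeholder)
    c = get_config()
    c.CourseDirectory.course_id = "example_course"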

repo2docker - Turn git repositories into Jupyter enabled Docker Images

  •    Python

jupyter-repo2docker takes as input a repository source, such as a GitHub repository. It then builds, runs, and/or pushes Docker images built from that source. See the repo2docker documentation for more information.
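
The usual entry point is the jupyter-repo2docker command line tool; as a hedged sketch, it can also be driven from Python via subprocess (the repository URL is a placeholder):

    # Build and run an image from a repository via the repo2docker CLI (placeholder URL)
    import subprocess

    subprocess.run(
        ["jupyter-repo2docker", "https://github.com/example/some-repo"],
        check=True,  # raise if the build or launch fails
    )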

repo2docker - Turn repositories into Jupyter-enabled Docker images

  •    Python

repo2docker fetches a git repository and builds a container image based on the configuration files found in the repository. See the repo2docker documentation for more information on using repo2docker.

cf-for-k8s - The open source deployment manifest for Cloud Foundry on Kubernetes

  •    Shell

Cloud Foundry For Kubernetes (cf-for-k8s) blends the popular CF developer API with Kubernetes, Istio, and other open source technologies. The project aims to improve developer productivity for organizations using Kubernetes. cf-for-k8s can be installed atop any conformant environment in minutes. If you're new to Kubernetes, we recommend this Getting Started Guide, which walks you through deploying cf-for-k8s on your machine using a local kind (Kubernetes in Docker) cluster. The guide configures your cf-for-k8s deployment as a developer edition that runs on your laptop and can handle approximately 10 small applications.

kubeadm-ha - Kubernetes high availability deployment based on kubeadm (for v1

  •    Smarty

kube-apiserver: exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is designed to scale horizontally, that is, by deploying more instances.
etcd: the backing store for all Kubernetes cluster data. Always have a backup plan for etcd's data in your cluster.
kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.
kube-controller-manager: runs controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
kubelet: the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file).
kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
The keepalived cluster configures a virtual IP address (192.168.20.10) that points to k8s-master01, k8s-master02, and k8s-master03. An nginx service acts as the load balancer in front of the apiservers on k8s-master01, k8s-master02, and k8s-master03. The Kubernetes services on the other nodes connect to the keepalived virtual IP address (192.168.20.10) and the nginx exposed port (16443) to communicate with the master cluster's apiservers.

aws-eks-base - This boilerplate contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud

  •    HCL

This repository contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. The main development and delivery tool is terraform. In our company’s work, we have tried many infrastructure solutions and services and traveled the path from on-premise hardware to serverless. As of today, Kubernetes has become our standard platform for deploying applications, and AWS has become the main cloud.

ml-workspace - 🛠 All-in-one web-based IDE specialized for machine learning and data science.

  •    Jupyter

The ML workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and lets you start productively building ML solutions on your own machines within minutes. The workspace comes preloaded with a variety of popular data science libraries (e.g., TensorFlow, PyTorch, Keras, scikit-learn) and dev tools (e.g., Jupyter, VS Code, TensorBoard), perfectly configured, optimized, and integrated. The workspace requires Docker to be installed on your machine (📖 Installation Guide).

kubeadm-dind-cluster - A Kubernetes multi-node test cluster based on kubeadm

  •    Shell

A Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes. Based on kubeadm and DIND (Docker in Docker). Supports both local workflows and workflows utilizing powerful remote machines/cloud instances for building Kubernetes, starting test clusters, and running e2e tests.

aws-iam-authenticator - A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster

  •    Go

A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster. The initial work on this tool was driven by Heptio. The project receives contributions from multiple community engineers and is currently maintained by Heptio and Amazon EKS OSS Engineers. If you are an administrator running a Kubernetes cluster on AWS, you already need to manage AWS IAM credentials to provision and update the cluster. By using AWS IAM Authenticator for Kubernetes, you avoid having to manage a separate credential for Kubernetes access. AWS IAM also provides a number of nice properties such as an out-of-band audit trail (via CloudTrail) and 2FA/MFA enforcement.