OpenCDN - Content Delivery Network


OpenCDN aims to develop an application-level Content Delivery Network suitable for replicating and splitting live and recorded multimedia content. It uses relay technology: each node splits (duplicates) incoming media packets onto each of its downstream flows. Media distribution is hierarchically arranged among the participating nodes and coordinated by a centralized control unit.

It can deliver live streaming content to millions of viewers. Its development is based on the Apple Darwin Streaming Server.
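
The relay step described above amounts to copying each incoming media packet onto every downstream flow. A minimal Go sketch of that fan-out (the Relay type and connection handling are hypothetical, not taken from the OpenCDN sources):

```go
package relay

import "net"

// Relay is a hypothetical node in the distribution hierarchy: it reads
// from one upstream flow and duplicates packets to its child nodes.
// In OpenCDN the set of downstream flows would be managed by the
// centralized control unit.
type Relay struct {
	upstream    *net.UDPConn
	downstreams []*net.UDPConn
}

// Run reads media packets from the upstream flow and duplicates each
// one onto every downstream flow (the "splitting" step).
func (r *Relay) Run() error {
	buf := make([]byte, 1500) // one MTU-sized packet at a time
	for {
		n, err := r.upstream.Read(buf)
		if err != nil {
			return err
		}
		for _, d := range r.downstreams {
			// Best-effort relay: a failed write to one child should not
			// stop delivery to the others.
			_, _ = d.Write(buf[:n])
		}
	}
}
```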

Source code location: http://sourceforge.net/projects/opencdn/

http://labtel.ing.uniroma1.it/opencdn/


Related Projects

Coral CDN - Content Distribution Network


Coral is a peer-to-peer content distribution network. Sites that run Coral automatically replicate content. Using modern peer-to-peer indexing techniques, CoralCDN will efficiently find a cached object if it exists anywhere in the network.
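
Content is usually fetched through CoralCDN by "Coralizing" a URL, i.e. appending .nyud.net to its hostname so the request is answered by a nearby Coral proxy. A small Go helper illustrating the rewrite (an illustration only, not part of Coral itself):

```go
package main

import (
	"fmt"
	"net/url"
)

// coralize rewrites a URL so that it is served through CoralCDN by
// appending the ".nyud.net" suffix to the hostname.
func coralize(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	u.Host = u.Hostname() + ".nyud.net"
	return u.String(), nil
}

func main() {
	c, _ := coralize("http://example.com/logo.png")
	fmt.Println(c) // http://example.com.nyud.net/logo.png
}
```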

edgedns - A high performance DNS cache designed for Content Delivery Networks


A high-performance DNS cache designed for Content Delivery Networks, with built-in security mechanisms to protect origins, clients, and itself. On Linux, you can use the provided sample systemd service to start it.

NSQ - A realtime distributed messaging platform in Go


NSQ is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day. It promotes distributed and decentralized topologies without single points of failure, enabling fault tolerance and high availability coupled with a reliable message delivery guarantee. It scales horizontally, without any centralized brokers. Built-in discovery simplifies the addition of nodes to the cluster.
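
As a rough sketch, a consumer built with the official Go client, go-nsq, looks like the following; the topic, channel, and nsqlookupd address are placeholders:

```go
package main

import (
	"log"

	nsq "github.com/nsqio/go-nsq"
)

func main() {
	cfg := nsq.NewConfig()

	// A consumer subscribes to a topic via a named channel; every channel
	// receives a copy of the topic's messages, and messages within a
	// channel are load-balanced across its consumers.
	consumer, err := nsq.NewConsumer("events", "archive", cfg)
	if err != nil {
		log.Fatal(err)
	}

	consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
		log.Printf("got message: %s", m.Body)
		return nil // nil finishes the message; an error causes a requeue
	}))

	// Built-in discovery: consumers ask nsqlookupd where the topic's
	// producers live, so nodes can be added without reconfiguration.
	if err := consumer.ConnectToNSQLookupd("127.0.0.1:4161"); err != nil {
		log.Fatal(err)
	}

	select {} // block forever; a real program would handle shutdown signals
}
```

Publishing is symmetric: a Producer connects directly to an nsqd instance (TCP port 4150 by default) and calls Publish with a topic and a message body.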

phxqueue - A high-availability, high-throughput and highly reliable distributed queue based on the Paxos algorithm


PhxQueue is a high-availability, high-throughput and highly reliable distributed queue based on the Paxos protocol. It guarantees At-Least-Once Delivery. It is widely used in WeChat for WeChat Pay, WeChat Media Platform, and many other important businesses.
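
At-least-once delivery means a message can be handed to a consumer more than once, for example after a timeout or failover, so consumers are expected to deduplicate or be idempotent. The sketch below shows the general idea in Go; it is a conceptual illustration, not PhxQueue's actual C++ API:

```go
package main

import (
	"fmt"
	"sync"
)

// Dedup tracks message IDs that have already been processed so that
// redeliveries under an at-least-once guarantee are handled exactly once
// by the application. A real system would persist or expire this state.
type Dedup struct {
	mu   sync.Mutex
	seen map[string]bool
}

func NewDedup() *Dedup { return &Dedup{seen: make(map[string]bool)} }

// Handle runs process(msg) only the first time a given ID is seen.
func (d *Dedup) Handle(id string, msg []byte, process func([]byte) error) error {
	d.mu.Lock()
	already := d.seen[id]
	if !already {
		d.seen[id] = true
	}
	d.mu.Unlock()
	if already {
		return nil // duplicate delivery: ignore
	}
	return process(msg)
}

func main() {
	d := NewDedup()
	process := func(b []byte) error { fmt.Printf("handled %s\n", b); return nil }
	// The same message delivered twice is only processed once.
	d.Handle("msg-1", []byte("pay order 42"), process)
	d.Handle("msg-1", []byte("pay order 42"), process)
}
```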

OpenLiteSpeed - High-performance, lightweight HTTP server


OpenLiteSpeed is a high-performance, lightweight, open-source HTTP server developed and copyrighted by LiteSpeed Technologies. It is event-driven and can handle hundreds of thousands of concurrent connections without load spikes.


API Blueprint - A powerful high-level API description language for web APIs


API Blueprint is a powerful high-level API design language for web APIs. It is simple and accessible to everybody involved in the API design lifecycle, and its syntax is concise yet expressive. With API Blueprint you can quickly prototype and model APIs yet to be created, or describe already deployed mission-critical APIs, from a car to the largest Content Distribution Network (CDN) in the world.
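
Blueprints are plain Markdown-based documents; a minimal, made-up example describing one resource might look like this:

```apib
FORMAT: 1A

# Polls API

## Question Collection [/questions]

### List All Questions [GET]

+ Response 200 (application/json)

        [
            {
                "question": "Favourite programming language?",
                "choices": ["Go", "Rust", "Python"]
            }
        ]
```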

go-oryx - SRS++, focused on real-time live streaming clusters


Oryx is a next-generation media streaming server that extracts services into separate processes communicating with each other over HTTP, yielding a more flexible, low-latency, programmable, and maintainable server. Oryx will implement most features of SRS, an industrial-strength live streaming cluster, with the best conceptual integrity and the simplest implementation. In other words, Oryx is the next-generation SRS, the srs-ng.

RocketMQ - Distributed messaging and streaming data platform


Apache RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.

Traffic Squeezer - WAN Network Traffic Acceleration solution


Traffic Squeezer is an open-source project that provides WAN traffic acceleration, Internet optimization, and generic network data communications optimization and acceleration through a set of procedures on a Linux-based network device.

memcached - A fully featured Memcached client built on top of Node


memcached is a fully featured Memcached client for Node.js, built with scaling, high availability, and exceptional performance in mind. It uses consistent hashing to store the data across different nodes. Consistent hashing is a scheme that provides hash-table functionality in such a way that adding or removing a server node does not significantly change the mapping of keys to server nodes; the algorithm used is the same as libketama. There are different ways to handle errors: for example, when a server becomes unavailable, you can configure the client to treat all requests to that server as cache misses until it comes back up. It is also possible to automatically remove the affected server from the consistent hashing algorithm, or to provide memcached with a failover server that can take the place of the unresponsive one.
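
The hash-ring idea described above can be sketched in a few lines. The following Go code is a conceptual illustration only, not the Node.js client's implementation (which uses the libketama algorithm with weighted points):

```go
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"sort"
)

// Ring is a toy consistent-hash ring: each server is hashed to several
// points on a circle, and a key belongs to the server whose point follows
// the key's hash. Removing a server only remaps keys that hashed to that
// server's points.
type Ring struct {
	points []uint32
	owner  map[uint32]string
}

func hash(s string) uint32 {
	h := sha1.Sum([]byte(s))
	return binary.BigEndian.Uint32(h[:4])
}

func NewRing(servers []string, replicas int) *Ring {
	r := &Ring{owner: make(map[uint32]string)}
	for _, s := range servers {
		for i := 0; i < replicas; i++ {
			p := hash(fmt.Sprintf("%s#%d", s, i))
			r.points = append(r.points, p)
			r.owner[p] = s
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Locate returns the server responsible for key.
func (r *Ring) Locate(key string) string {
	h := hash(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the circle
	}
	return r.owner[r.points[i]]
}

func main() {
	ring := NewRing([]string{"10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"}, 40)
	fmt.Println(ring.Locate("session:42")) // e.g. 10.0.0.2:11211
}
```

Removing a failed server from such a ring, as the failover options above describe, only remaps the keys whose points landed on that server; every other key keeps its assignment.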

contentful.js - JavaScript library for Contentful's Delivery API (node & browser)


JavaScript SDK for the Contentful Content Delivery API and Content Preview API. It helps you easily access the content stored in Contentful from your JavaScript applications. Contentful provides content infrastructure for digital teams to power websites, apps, and devices. Unlike a CMS, Contentful was built to integrate with the modern software stack. It offers a central hub for structured content, powerful management and delivery APIs, and a customizable web app that enable developers and content creators to ship their products faster.

patroni - A template for PostgreSQL High Availability with ZooKeeper, etcd, or Consul


You can find a version of this documentation that is searchable and also easier to navigate at patroni.readthedocs.io. There are many ways to run high availability with PostgreSQL; for a list, see the PostgreSQL documentation.

governor - Runners to orchestrate a high-availability PostgreSQL


Compose is no longer maintaining Governor as an active project. We are pleased to say that Governor seeded the Patroni project, which Compose has now adopted as their HA solution; we recommend it to anyone seeking similar functionality to Governor. We have archived the project on GitHub; you are free to use it and fork it, but we will not be accepting issues or pull requests. There are many ways to run high availability with PostgreSQL; here we present a template for you to create your own custom-fit high-availability solution using etcd and Python for maximum accessibility.

skuld - Distributed task tracking system.


Skuld is (or aims to become) a hybrid AP/CP distributed task queue, targeting linear scaling with nodes, robustness to N/2-1 failures, extremely high availability for enqueues, guaranteed at-least-once delivery, approximate priority+FIFO ordering, and reasonable bounds on task execution mutexing. Each run of a task can log status updates to Skuld, checkpointing its progress and allowing users to check how far along their tasks have gone. Skuld combines techniques from many distributed systems: Dynamo-style consistent hashing, quorums over vnodes, and anti-entropy provide the highly-available basis for Skuld's immutable dataset, including enqueues, updates, and completions. All AP operations are represented by Convergent Replicated Data Types (CRDTs), ensuring convergence in the absence of strong consistency. CP operations (e.g. claims) are supported by a leader election/quorum protocol similar to Viewstamped Replication or Raft, supported by additional constraints on handoff transitions between disparate cohorts.
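
CRDTs make convergence a property of the data itself: replicas merge with a commutative, associative, idempotent operation, so they reach the same state regardless of message order. A minimal example of the idea, a grow-only set in Go, unrelated to Skuld's actual Clojure data types:

```go
package main

import "fmt"

// GSet is a grow-only set, one of the simplest CRDTs: the only operation
// is Add, and Merge is set union, which is commutative, associative and
// idempotent, so any two replicas converge once they exchange state.
type GSet map[string]struct{}

func (s GSet) Add(v string) { s[v] = struct{}{} }

func (s GSet) Merge(other GSet) {
	for v := range other {
		s[v] = struct{}{}
	}
}

func main() {
	a, b := GSet{}, GSet{}
	a.Add("task-1") // applied on replica A
	b.Add("task-2") // applied concurrently on replica B

	a.Merge(b)
	b.Merge(a)
	fmt.Println(len(a) == len(b)) // true: both replicas converged
}
```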

libchan - Like Go channels over the network


Libchan lets network services communicate the way goroutines communicate over channels. This provides great flexibility in scaling an application by breaking it down into loosely coupled concurrent services. The same application could be composed of goroutines communicating over in-memory channels; then transition to separate Unix processes, each assigned to a processor core, communicating over high-performance IPC; then to a cluster of machines communicating over authenticated TLS sessions. All along it benefits from the concurrency model that has made Go so popular. Not all transports have the same semantics, however: in-memory Go channels guarantee exactly-once delivery, while TCP, TLS, and the various HTTP socket families do not guarantee delivery at all. Messages arrive in order but may be arbitrarily delayed or lost, and there are no ordering invariants across channels.
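
The in-memory model libchan generalizes is the usual Go pattern of goroutines exchanging messages over channels; libchan keeps the same shape when the two ends live in different processes or machines. A plain standard-library sketch of the local case (this is not the libchan API):

```go
package main

import "fmt"

// Request/response over in-memory channels: the same shape libchan
// preserves when the channel is backed by IPC or a TLS connection,
// except that remote transports may drop or delay messages.
type request struct {
	payload string
	reply   chan string // nested channel used to send the response back
}

func worker(in <-chan request) {
	for req := range in {
		req.reply <- "processed: " + req.payload
	}
}

func main() {
	in := make(chan request)
	go worker(in)

	reply := make(chan string)
	in <- request{payload: "hello", reply: reply}
	fmt.Println(<-reply)
	close(in)
}
```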

Squid - HTTP reverse proxy that optimizes web delivery


Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages. Squid has extensive access controls and makes a great server accelerator. Because cached content is served locally, users see faster download speeds for frequently used content.
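
Used as a server accelerator, Squid sits in front of an origin and answers repeat requests from its cache. Reduced to a toy Go reverse proxy with a naive in-memory cache (purely illustrative and unrelated to Squid's implementation; the origin address is hypothetical):

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"sync"
)

// origin is the hypothetical backend being accelerated.
const origin = "http://127.0.0.1:8080"

var (
	mu    sync.Mutex
	cache = map[string][]byte{} // URL path -> cached body (no expiry, toy only)
)

func handler(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	body, ok := cache[r.URL.Path]
	mu.Unlock()
	if !ok {
		resp, err := http.Get(origin + r.URL.Path) // cache miss: fetch from origin
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		body, _ = io.ReadAll(resp.Body)
		mu.Lock()
		cache[r.URL.Path] = body
		mu.Unlock()
	}
	io.Copy(w, bytes.NewReader(body)) // hit or freshly filled: serve locally
}

func main() {
	// 3128 is the traditional Squid port, used here only for flavor.
	log.Fatal(http.ListenAndServe(":3128", http.HandlerFunc(handler)))
}
```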

kubeadm-ha - Kubernetes high availability deployment based on kubeadm (for v1


kube-apiserver: exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is designed to scale horizontally, that is, by deploying more instances.
etcd: used as Kubernetes' backing store; all cluster data is stored here. Always have a backup plan for etcd's data in your Kubernetes cluster.
kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.
kube-controller-manager: runs controllers, the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
kubelet: the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file).
kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
The keepalived cluster configures a virtual IP address (192.168.20.10) that points to k8s-master01, k8s-master02, and k8s-master03, and an nginx service acts as the load balancer for their apiservers. The other nodes' Kubernetes services connect to the keepalived virtual IP address (192.168.20.10) and the nginx exposed port (16443) to communicate with the master cluster's apiservers.

HAProxy - The Reliable, High Performance TCP/HTTP Load Balancer


HAProxy is a fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware.

yoke - Postgres high-availability cluster with auto-failover and automated cluster recovery.


Yoke is a Postgres redundancy/auto-failover solution that provides a high-availability PostgreSQL cluster that's simple to manage. Note: the config ini file can be named anything and reside anywhere; all Yoke needs is the /path/to/config.ini on startup.