Channel Manager framework for distributed applications


The framework implements the dataflow programming approach. The general idea of the paradigm is to build a distributed application from components connected by channels and ports. Each component is an executable (running as a separate process)...
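The paradigm above is only outlined, so here is a minimal, hypothetical sketch of the component/channel/port idea in Go (not the framework's actual API): each component is modeled as a goroutine, each port as a channel end, and channels wire ports together.

```go
package main

import "fmt"

// doubler is a component: it reads ints from its input port,
// doubles them, and writes results to its output port.
func doubler(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 2
	}
	close(out)
}

// summer is a component: it sums everything arriving on its input port.
func summer(in <-chan int, done chan<- int) {
	total := 0
	for v := range in {
		total += v
	}
	done <- total
}

// runPipeline wires the two components together; the channels play
// the role of the channels-between-ports in the paradigm.
func runPipeline(values []int) int {
	a := make(chan int)
	b := make(chan int)
	done := make(chan int)
	go doubler(a, b)
	go summer(b, done)
	for _, v := range values {
		a <- v
	}
	close(a)
	return <-done
}

func main() {
	fmt.Println(runPipeline([]int{1, 2, 3})) // 12
}
```

In the framework described above the components are separate processes connected by inter-process channels; the goroutines here merely stand in for them to show the wiring.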



Related Projects

MPAPI - Parallel and Distributed Applications Framework

Message Passing API (MPAPI) is a framework that enables programmers to easily write parallel as well as distributed software systems without having to use standard thread synchronization techniques like locks, monitors, semaphores, mutexes and volatile memory. It is written in...

flowgraph - Ready-send coordination layer on top of goroutines.

Go (Golang) offers direct support for concurrent programming with goroutines, channels, and the select statement. Used together, they offer all the building blocks necessary for programming across many cores and many Unix boxes. But so much is possible with goroutines that constructing scalable and reliable systems (ones that won't deadlock or be throttled by bottlenecks) requires the application or invention of additional concepts.

Flowgraphs are a distinct model of concurrent programming that augments channels with ready-send handshake mechanisms to ensure that no data is sent before the receiver is ready. MPI (a framework for supercomputer computation) directly supports flowgraph computation, but doesn't address flow-based computation within a single Unix process. Go with its goroutines (more efficient than threads, according to Rob Pike) facilitates taking the MPI model down to whatever granularity the concurrent programmer wants.
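As a rough illustration of the ready-send idea (hypothetical names, not the flowgraph package's real API), the handshake can be modeled with a second channel that the receiver signals before each value is sent:

```go
package main

import "fmt"

// producer only puts a value on the data channel after the receiver
// has announced readiness on the ready channel. (Illustrative sketch
// of the ready-send handshake, not flowgraph's actual API.)
func producer(ready <-chan struct{}, data chan<- int, values []int) {
	for _, v := range values {
		<-ready   // wait until the receiver says it is ready
		data <- v // only now is the value sent
	}
	close(data)
}

// consumer announces readiness before each receive, then sums values.
func consumer(ready chan<- struct{}, data <-chan int, out chan<- int) {
	sum := 0
	ready <- struct{}{} // ready for the first value
	for v := range data {
		sum += v
		ready <- struct{}{} // ready for the next one
	}
	out <- sum
}

func runHandshake(values []int) int {
	ready := make(chan struct{}, 1) // buffer absorbs the final ready signal
	data := make(chan int)
	out := make(chan int)
	go producer(ready, data, values)
	go consumer(ready, data, out)
	return <-out
}

func main() {
	fmt.Println(runHandshake([]int{1, 2, 3})) // 6
}
```

The one-slot buffer on `ready` matters: the consumer's last readiness signal arrives after the producer has already closed `data`, so without the buffer that send would block forever.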

Atomix - Scalable, fault-tolerant distributed systems protocols and primitives for the JVM

Atomix is an event-driven framework for coordinating fault-tolerant distributed systems built on the Raft consensus algorithm. It provides the building blocks that solve many common distributed systems problems including group membership, leader election, distributed concurrency control, partitioning, and replication.

DataflowSDK-examples - Google Cloud Dataflow provides a simple, powerful model for building both batch and streaming parallel data processing pipelines

Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines. This repository hosts example pipelines that use the Cloud Dataflow SDK and demonstrate the basic functionality of Google Cloud Dataflow.

A good starting point for new users is our set of word count (Java, Python) examples, which compute word frequencies. This series of four successively more detailed pipelines is described in detail in the accompanying walkthrough.

Apache Beam - Unified model for defining both batch and streaming data-parallel processing pipelines

Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.


The intention of the Lite framework is to support actor-based programming in .NET languages. The actor model differs from conventional object-oriented programming in that objects communicate via asynchronous message passing instead of method calls.
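That distinction can be sketched in a few lines (illustrative only, in Go for brevity, and not the Lite framework's API): the actor owns its state exclusively and is driven purely by messages arriving in a mailbox, never by direct method calls.

```go
package main

import "fmt"

// msg asks the actor to add an amount; a non-nil reply channel
// requests the running total back (a query message).
type msg struct {
	add   int
	reply chan int // nil for fire-and-forget sends
}

// counterActor owns its state exclusively; the total is safe from
// data races because only this goroutine ever touches it.
func counterActor(mailbox <-chan msg) {
	total := 0
	for m := range mailbox {
		total += m.add
		if m.reply != nil {
			m.reply <- total
		}
	}
}

func runCounter(adds []int) int {
	mailbox := make(chan msg, 8)
	go counterActor(mailbox)

	// asynchronous sends: the caller does not wait for processing
	for _, a := range adds {
		mailbox <- msg{add: a}
	}

	// a query message carrying a reply channel
	reply := make(chan int)
	mailbox <- msg{reply: reply}
	total := <-reply
	close(mailbox)
	return total
}

func main() {
	fmt.Println(runCounter([]int{2, 3})) // 5
}
```

Because the mailbox preserves order, the query message is only handled after the earlier sends, so the reply reflects all of them.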


dcmpi (DataCutter with MPI support): A parallel/distributed dataflow computing framework using filters and streams. Can use MPI or raw sockets for communication.

FastFlow - programming multi-core

Pattern-based multi/many-core parallel programming framework

paco - Small utility library for coroutine-driven asynchronous generic programming in Python 3.4+

Small and idiomatic utility library for coroutine-driven asynchronous generic programming in Python 3.4+.

Built on top of asyncio, paco provides capabilities missing from the Python stdlib in order to write asynchronous cooperative multitasking in a nice-ish way. Also, paco aims to port some of the functools and itertools standard functions to the asynchronous world.

Genetic Programming Framework

The Distributed Genetic Programming Framework is a scalable Java genetic programming environment. It comes with an optional specialization for evolving assembler-syntax algorithms. The evolution can be performed in parallel in any computer network.

GPars - Groovy parallel systems

GPars is a framework which provides straightforward Java- or Groovy-based APIs to declare which parts of the code should be performed in parallel. Collections can have their elements processed concurrently, closures can be turned into composable asynchronous functions and run in the background on your behalf, and mutable data can be protected by agents or software transactional memory.
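GPars itself is Groovy/Java, but the concurrent-collection idea it describes translates directly; here is a hypothetical sketch in Go of processing a collection's elements in parallel:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelMap applies f to every element concurrently - the same idea
// as GPars' concurrent collection processing, sketched in Go.
func parallelMap(xs []int, f func(int) int) []int {
	out := make([]int, len(xs))
	var wg sync.WaitGroup
	for i, x := range xs {
		wg.Add(1)
		go func(i, x int) {
			defer wg.Done()
			out[i] = f(x) // each goroutine writes a distinct index: no race
		}(i, x)
	}
	wg.Wait() // block until every element has been processed
	return out
}

func main() {
	fmt.Println(parallelMap([]int{1, 2, 3}, func(x int) int { return x * x }))
}
```

Results land at their original indices, so the output order matches the input even though execution order is nondeterministic.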

edflow - Erlang Dataflow programming framework


py-aqueduct - A Python dataflow/pipe framework for event-based programming (prototype)


status - status of various projects hosted on GitHub

Personal
----
These are released, but are geared towards personal use; as such, they may be buggy or be of very limited use.

* [exceptionpp]( -- personal C++ exception library
* [django_iin_lookup]( -- look up credit card brand and stuff, given the first six digits of a credit card

In Progress
----
Very buggy, currently in active development.

* [spp]( -- C++ scheduling engine
* [courses](

bistro - Bistro is a flexible distributed scheduler, a high-performance framework supporting multiple paradigms while retaining ease of configuration, management, and monitoring

This README is a very abbreviated introduction to Bistro. Visit for a more structured introduction, and for the docs.

Bistro is a toolkit for making distributed computation systems. It can schedule and run distributed tasks, including data-parallel jobs. It enforces resource constraints for worker hosts and data-access bottlenecks. It supports remote worker pools, low-latency batch scheduling, dynamic shards, and a variety of other possibilities. It has command-line and web UIs.

failure-detectors - Agreement in Asynchronous Distributed Systems


Pydusa - Parallel Programming in Python

Pydusa is a package for parallel programming using Python. It contains a module for doing MPI programming in Python. We have added parallel solver packages such as Parallel SuperLU for solving sparse linear systems.

Amarok Framework Library

This framework library is an attempt to take advantage of the actor/agent programming model for standalone desktop applications. Most of the concepts are inspired by the actor model, Microsoft Robotics CCR and the TPL Dataflow library.

DataflowJavaSDK - Google Cloud Dataflow provides a simple, powerful model for building both batch and streaming parallel data processing pipelines

Google Cloud Dataflow SDK for Java is a distribution of Apache Beam designed to simplify usage of Apache Beam on Google Cloud Dataflow service. This artifact includes the parent POM for other Dataflow SDK artifacts.