JPPF - Parallelize computationally intensive tasks and execute them on a Grid


JPPF enables applications with large processing power requirements to be run on any number of computers, in order to dramatically reduce their processing time. This is done by splitting an application into smaller parts that can be executed simultaneously on different machines.

http://jppf.org/
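JPPF itself is a Java grid platform, but the split-and-distribute model it describes can be illustrated with a minimal stdlib Python sketch (the `task`/`run_job` names are illustrative, not part of JPPF's API): a large job is cut into smaller parts, the parts run concurrently, and the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def task(chunk):
    """One independent unit of work: summing squares over a slice of the input."""
    return sum(n * n for n in chunk)

def run_job(data, workers=4):
    # Split the job into smaller parts that can be executed simultaneously...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run them concurrently, then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(task, chunks))

print(run_job(list(range(1000))))  # same value as the serial computation
```

On a grid, each chunk would be shipped to a different machine instead of a local worker thread; the decomposition pattern is the same.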

Tags
Implementation
License
Platform

   




Related Projects

Hypertable - A high performance, scalable, distributed storage and processing system for structured data


Hypertable is based on Google's Bigtable design, a proven, scalable design that powers hundreds of Google services. Many current scalable NoSQL database offerings are based on a hash-table design, which means the data they manage is not kept physically ordered. Hypertable keeps data physically sorted by a primary key, which makes it well suited to analytics.
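The practical payoff of keeping data physically sorted by primary key is cheap range scans. A small stdlib sketch (the keys and `range_scan` helper are made up for illustration; Hypertable's actual API differs) shows why: in a sorted store, all rows in a key range sit in one contiguous slice, whereas a hash-based store would scatter them.

```python
import bisect

# Rows kept physically sorted by primary key (here, "date#row" keys).
rows = sorted([("2021-03-01#a", 1), ("2021-03-01#b", 2),
               ("2021-03-02#a", 3), ("2021-03-03#a", 4)])
keys = [k for k, _ in rows]

def range_scan(start, stop):
    """Return all rows whose primary key falls in [start, stop) --
    a single contiguous slice, found with two binary searches."""
    lo = bisect.bisect_left(keys, start)
    hi = bisect.bisect_left(keys, stop)
    return rows[lo:hi]

print(range_scan("2021-03-01", "2021-03-02"))  # the two rows for March 1st
```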

UACluster2


UACluster2 is a set of manuals and tools for creating and managing a high-performance computing cluster based on Microsoft Hyper-V virtual machines. It requires Microsoft HPC Server 2008 (or Microsoft HPC Server 2008 R2) as the basis for cluster creation.

Apache Beam - Unified model for defining both batch and streaming data-parallel processing pipelines


Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.
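Beam's key idea is that the pipeline definition is separate from the runner that executes it. This stdlib sketch only mimics that separation (the `pipeline` structure and `direct_runner` are invented for illustration, not Beam's actual API, which uses `Pipeline`, `PCollection`, and transforms like `Map`/`Filter`):

```python
# A pipeline is a declared chain of transforms; a "runner" decides how
# to execute it (locally, on a cluster, as a streaming job, ...).
pipeline = [
    ("Map",    lambda x: x * x),
    ("Filter", lambda x: x % 2 == 0),
]

def direct_runner(pipeline, source):
    """A trivial local runner: applies each transform element-wise."""
    out = list(source)
    for kind, fn in pipeline:
        if kind == "Map":
            out = [fn(x) for x in out]
        elif kind == "Filter":
            out = [x for x in out if fn(x)]
    return out

print(direct_runner(pipeline, range(5)))  # [0, 4, 16]
```

In Beam proper, the same pipeline object could be handed to the Flink, Spark, or Dataflow runner without changing the pipeline code.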

doAzureParallel - A R package that allows users to submit parallel workloads in Azure


The doAzureParallel package is a parallel backend for the widely popular foreach package, which supports parallel execution of loop iterations across a pluggable backend. With doAzureParallel, each iteration of a foreach loop runs in parallel on an Azure Virtual Machine (VM), allowing users to scale up their R jobs to tens or hundreds of machines. With just a few lines of code, doAzureParallel creates a cluster in Azure, registers it as a parallel backend, and connects seamlessly to the foreach package.

StratoSphere - Cloud Computing Framework for Big Data Analytics


The Stratosphere system is an open-source cluster/cloud computing framework for Big Data analytics. It comprises an extensible higher-level language (Meteor) for quickly composing queries for common and recurring use cases; a parallel programming model (PACT, an extension of MapReduce) for running user-defined operations; and an efficient, massively parallel runtime (Nephele) for fault-tolerant execution of acyclic data flows.
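PACT generalizes the MapReduce model that several projects above build on. As a reference point, here is a minimal stdlib sketch of the two base operations PACT extends (the `map_phase`/`reduce_phase` names are illustrative, not Stratosphere's API), applied to the classic word-count example:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply a user-defined map operation, emitting (key, value) pairs."""
    for rec in records:
        yield from mapper(rec)

def reduce_phase(pairs, reducer):
    """Group pairs by key, then apply a user-defined reduce per group."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: reducer(vs) for k, vs in groups.items()}

# Word count: map each line to (word, 1) pairs, then sum per word.
lines = ["big data", "big cluster"]
pairs = map_phase(lines, lambda line: ((w, 1) for w in line.split()))
print(reduce_phase(pairs, sum))  # {'big': 2, 'data': 1, 'cluster': 1}
```

PACT adds further second-order operators (e.g. joins and crosses) beyond map and reduce, which is what makes it an extension of this model.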



hari2981-pydra


Pydra is a distributed and parallel computing framework for Python. Pydra aims to provide an easy-to-use framework for developers writing and running distributed programs, and an easy-to-manage cluster for administrators.

guerilla-cluster - A set of bash tools for doing distributed computing on a guerilla cluster


A set of bash tools for doing distributed computing on a guerilla cluster

Rings Parallel Processing System


The system stores files in a distributed database environment and can also easily execute Java processes in parallel across a decentralized network of computing agents.

Flare - Decentralized Processing using Spark and Ethereum


Flare is the (uncontested) first implementation of decentralized computing with Ethereum. The goal is to take Apache Spark and Cassandra, two technologies designed for cluster computation, and integrate a connection with Ethereum to provide trusted, verifiably correct computation on untrusted systems. This will allow anyone with internet access to run code on a distributed, decentralized processing network.

Hazelcast Jet - Distributed data processing engine, built on top of Hazelcast


Hazelcast Jet is a distributed computing platform built for high-performance stream processing and fast batch processing. It embeds the Hazelcast In-Memory Data Grid (IMDG) to provide a lightweight package of a processor and scalable in-memory storage. It provides a distributed java.util.stream API for Hazelcast data structures such as IMap and IList, along with distributed implementations of the java.util.{Queue, Set, List, Map} data structures, highly optimized for processing.

status - status of various projects hosted on GitHub


Personal
----
These are released, but are geared towards personal use; as such, they may be buggy or be of very limited use.

* [exceptionpp](https://github.com/cripplet/exceptionpp) -- personal C++ exception library
* [django_iin_lookup](https://github.com/cripplet/django_iin_lookup) -- look up credit card brand and stuff, given the first six digits of a credit card

In Progress
----
Very buggy, currently in active development.

* [spp](https://github.com/cripplet/spp) -- C++ scheduling engine
* [courses](

firestarter - an easy way to start an ipython parallel cluster distributed over a network


an easy way to start an ipython parallel cluster distributed over a network

BigImp - Adventures in Cluster Computing and Image Processing


Adventures in Cluster Computing and Image Processing

PDSProgram4 - Program Assignment #4 for my Parallel and Distributed Computing course


Program Assignment #4 for my Parallel and Distributed Computing course

PDSProgram3 - Program Assignment #3 for my Parallel and Distributed Computing course


Program Assignment #3 for my Parallel and Distributed Computing course

PDSProgram2 - Program Assignment #2 for my Parallel and Distributed Computing course


Program Assignment #2 for my Parallel and Distributed Computing course

PDSProgram1 - Program Assignment #1 for my Parallel and Distributed Computing course


Program Assignment #1 for my Parallel and Distributed Computing course

ParLib - Library for parallel and distributed computing.


Library for parallel and distributed computing.

dispy - Python framework for distributed, parallel computing


Python framework for distributed, parallel computing

PDC-Project - Course Project: Parallel and Distributed Computing (EMDC@IST)


Course Project: Parallel and Distributed Computing (EMDC@IST)