
bashful - Use a yaml file to stitch together commands and bash snippets and run them with a bit of style

  •    Go

This is beta quality! Use at your own risk. Use a yaml file to stitch together commands and bash snippets and run them with a bit of style. Why? Because your bash script should be quiet and shy-like (...and not such a loud mouth).

JPPF - Parallelize computationally intensive tasks and execute them on a Grid

  •    Java

JPPF enables applications with large processing power requirements to be run on any number of computers, in order to dramatically reduce their processing time. This is done by splitting an application into smaller parts that can be executed simultaneously on different machines.

future - :rocket: R package: future: Unified Parallel and Distributed Processing in R for Everyone

  •    R

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user. In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a sequential strategy, which means it is resolved in the current R session. Other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions in parallel on the current machine or concurrently on a compute cluster.
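The blocking and polling semantics described here are not R-specific. As a rough, language-neutral illustration (Python's standard concurrent.futures module, not the future package's own API), querying an unresolved future blocks while polling does not:

    # Illustration only: concurrent.futures mirrors the unresolved/resolved
    # semantics described above; this is not the R future API.
    from concurrent.futures import ProcessPoolExecutor
    import time

    def slow_square(x):
        time.sleep(1)               # simulate an expensive computation
        return x * x

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:    # a "parallel on the current machine" strategy
            fut = pool.submit(slow_square, 7)  # evaluation starts asynchronously
            print(fut.done())                  # check resolution without blocking (likely False)
            print(fut.result())                # blocks until resolved, then the value is available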

parallel - Parallel Processing for Amp.

  •    PHP

True parallel processing using native threading and multiprocessing for parallelizing code, without blocking. This library is a component for Amp that provides native threading, multiprocessing, process synchronization, shared memory, and task workers. Like other Amp components, this library uses Coroutines built from Promises and Generators to make writing asynchronous code more like writing synchronous code.
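The point about asynchronous code reading like synchronous code is language-agnostic; as a loose analogy only (Python's asyncio, not Amp's PHP API), a coroutine awaits results in a straight line rather than nesting callbacks:

    # Loose analogy: coroutines let async code read like sequential code.
    import asyncio

    async def fetch(n):
        await asyncio.sleep(0.1)    # stand-in for work that would otherwise block
        return n * 2

    async def main():
        a = await fetch(1)          # reads top-to-bottom like synchronous code
        b = await fetch(2)
        print(a + b)                # 6

    asyncio.run(main())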

pssh - Parallel SSH Tools

  •    Python

PSSH is supported on Python 2.5 and later (including Python 3.1 and later). It was originally written and maintained by Brent N. Chun. Due to his busy schedule, Brent handed over maintenance to Andrew McNabb in October 2009. The project was originally hosted on Google Code; since Google Code has shut down and the project has not reappeared elsewhere, I (lilydjwg) have exported it to GitHub.

infantry - Run MapReduce in client's browser.

  •    Javascript

Run MapReduce in the client's browser. An example application can be found in the example/ directory of the source code. The example generates chunks of data consisting of person names from an NLTK corpus. The map/reduce builds a dictionary whose keys are initial letters and whose values are the number of names starting with each letter, as sketched below.
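For orientation, the counting that the example performs amounts to the following plain-Python sketch (infantry itself runs JavaScript in the browser; the name list here is made up, standing in for the NLTK corpus chunks):

    # Sketch of the example's map/reduce: count names by first letter.
    from collections import Counter
    from functools import reduce

    name_chunks = [["Alice", "Adam"], ["Bob", "Carol"], ["Charlie"]]  # stand-in for corpus chunks

    def map_chunk(chunk):
        # map: turn one chunk of names into per-letter counts
        return Counter(name[0].upper() for name in chunk)

    def merge(acc, counts):
        # reduce: fold per-chunk counts into a single dictionary
        acc.update(counts)
        return acc

    totals = reduce(merge, map(map_chunk, name_chunks), Counter())
    print(dict(totals))  # {'A': 2, 'B': 1, 'C': 2}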

massiv - Efficient Haskell Arrays featuring Parallel computation

  •    Haskell

massiv is a Haskell library for array manipulation. Performance is one of its main goals, so almost all operations can be run effortlessly in parallel as well as sequentially. The name of the library comes from the Russian word Massiv (Масси́в), which means array.

PP-MM-A03 - Parallel Processing - Matrix Multiplication (Cannon, DNS, LUdecomp)

  •    TeX

This repository started out as a class project for the Parallel Processing course at UIC with Professor Kshemkalyani. Several matrix multiplication algorithms are implemented in C; timings were recorded and a report was written.

pagmo2 - A C++ / Python platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model

  •    C++

pagmo (C++) or pygmo (Python) is a scientific library for massively parallel optimization. It is built around the idea of providing a unified interface to optimization algorithms and problems, and of making their deployment in massively parallel environments easy. If you are using pagmo/pygmo as part of your research, teaching, or other activities, we would be grateful if you could star the repository and/or cite our work. The DOI of the latest version and other citation resources are available at this link.
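A minimal sketch of the island model through the pygmo bindings might look like this (assuming pygmo is installed; the names follow the project's quick-start documentation, but treat the exact signatures as approximate rather than authoritative):

    # Hedged sketch: evolve a built-in test problem on several islands in parallel.
    import pygmo as pg

    prob = pg.problem(pg.rosenbrock(dim=10))   # 10-dimensional Rosenbrock test problem
    algo = pg.algorithm(pg.sade(gen=100))      # self-adaptive differential evolution
    archi = pg.archipelago(n=8, algo=algo, prob=prob, pop_size=20)  # 8 islands evolved in parallel

    archi.evolve()                             # launch asynchronous evolution on all islands
    archi.wait()                               # block until every island has finished
    print(min(f[0] for f in archi.get_champions_f()))  # best objective value across islands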

future

  •    R

Reproducibility is part of the core design, which means that perfect, parallel random number generation (RNG) is supported regardless of the amount of chunking, type of load balancing, and future backend being used. To enable parallel RNG, use argument future.seed = TRUE. Note that, except for the built-in parallel package, none of these higher-level APIs implement their own parallel backends, but they rather enhance existing ones. The foreach framework leverages backends such as doParallel, doMC and doFuture, and the future.apply framework leverages the future ecosystem and therefore backends such as built-in parallel, future.callr, and future.batchtools.
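The idea of reproducible parallel RNG regardless of chunking can be illustrated outside R as well. A rough analogy (Python with NumPy, not the future package's mechanism; the seeding scheme here is just one common approach) is to hand each task its own derived seed:

    # Analogy only: per-task seed streams keep results reproducible
    # no matter how the work is scheduled across workers.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def draw(seed):
        rng = np.random.default_rng(seed)             # independent, reproducible stream per task
        return rng.normal(size=3).round(3).tolist()

    if __name__ == "__main__":
        seeds = np.random.SeedSequence(42).spawn(4)   # derive 4 independent child seeds
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(draw, seeds))
        print(results)                                # identical across runs and worker counts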

future.callr - :rocket: R package future.callr: A Future API for Parallel Processing using 'callr'

  •    R

The future package provides a generic API for using futures in R. A future is a simple yet powerful mechanism to evaluate an R expression and retrieve its value at some point in time. Futures can be resolved in many different ways depending on which strategy is used. There are various types of synchronous and asynchronous futures to choose from in the future package. This package, future.callr, provides a type of futures that utilizes the callr package.

mlbgameday - Multi-core processing of 'Gameday' data from Major League Baseball Advanced Media

  •    R

The package is designed to facilitate extract, transform, and load (ETL) for MLBAM “Gameday” data. It is optimized for parallel processing of data that may be larger than memory. There are other packages in the R universe that were built to perform statistics and visualizations on these data, but mlbgameday is concerned primarily with data collection. More uses of these data can be found in the pitchRx, openWAR, and baseballr packages. Although the package is optimized for parallel processing, it will also work without registering a parallel backend. When only querying a single day's data, a parallel backend may not provide much additional performance. However, parallel backends are suggested for larger data sets, as the process will be faster by several orders of magnitude.

elixir-paratize - Elixir library providing some handy parallel processing facilities that support configuring the number of workers and the timeout

  •    Elixir

Elixir library providing some handy parallel processing facilities that support configuring the number of workers and the timeout. This library is inspired by Parex.
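As a loose cross-language analogy of the configurable worker count and per-task timeout idea (Python's standard library, not this Elixir API):

    # Analogy: cap the number of workers and bound each task with a timeout.
    from concurrent.futures import ThreadPoolExecutor

    def work(x):
        return x * x

    with ThreadPoolExecutor(max_workers=4) as pool:          # configurable number of workers
        futures = [pool.submit(work, i) for i in range(10)]
        results = [f.result(timeout=5) for f in futures]     # per-task timeout in seconds
    print(results)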