MPJ Express - Parallel Programming in Java

  •    Java

MPJ Express is an open source Java message passing library that allows application developers to write and execute parallel applications for multicore processors and compute clusters/clouds. It allows writing parallel Java applications using an MPI-like API.

http://mpj-express.org/index.html
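A minimal sketch of what the MPI-like API looks like in practice, assuming the mpiJava-style bindings that MPJ Express documents (mpi.MPI, MPI.COMM_WORLD, and capitalised method names such as Init, Rank, Size and Finalize follow that convention):

    import mpi.MPI;

    // Hedged sketch of an MPJ Express "hello world": each process reports
    // its rank within the communicator and the total number of processes.
    public class HelloMPJ {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);                       // start the MPJ runtime
            int rank = MPI.COMM_WORLD.Rank();     // this process's id
            int size = MPI.COMM_WORLD.Size();     // total number of processes
            System.out.println("Hello from process " + rank + " of " + size);
            MPI.Finalize();                       // shut down cleanly
        }
    }

Such a program is typically compiled against the library's mpj.jar and launched with the bundled mpjrun script (for example with -np 4 to start four processes); consult the MPJ Express documentation for the exact invocation on your platform.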

Related Projects

UACluster2

  •    

UACluster2 is a set of manuals and tools for creating and managing a high-performance computing cluster based on Microsoft Hyper-V virtual machines. It requires Microsoft HPC Server 2008 (or Microsoft HPC Server 2008 R2) as the basis for cluster creation.

ArrayFire - Parallel Computing Library

  •    C++

ArrayFire is a high-performance software library for parallel computing with an easy-to-use API. Its array-based function set makes parallel programming simple. ArrayFire's multiple backends (CUDA, OpenCL and native CPU) make it platform independent and highly portable. A few lines of ArrayFire code can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.

future - :rocket: R package: future: Unified Parallel and Distributed Processing in R for Everyone

  •    R

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user. In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a sequential strategy, which means it is resolved in the current R session. Other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions in parallel on the current machine or concurrently on a compute cluster.
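The same future abstraction exists in many languages. As a rough conceptual analogue only (Java's CompletableFuture, not the R API), the sketch below shows the unresolved/resolved states, the non-blocking check, and the blocking query described above:

    import java.util.concurrent.CompletableFuture;

    public class FutureSketch {
        public static void main(String[] args) throws Exception {
            // Evaluate an expression asynchronously; the future starts out unresolved.
            CompletableFuture<Long> f = CompletableFuture.supplyAsync(() -> {
                long sum = 0;
                for (long i = 1; i <= 100_000_000L; i++) sum += i;  // some slow work
                return sum;
            });

            // Non-blocking check: is the future resolved yet?
            System.out.println("resolved already? " + f.isDone());

            // Blocking query: waits until the future is resolved, then returns its value.
            System.out.println("value = " + f.get());
        }
    }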

cpp-taskflow - Fast C++ Parallel Programming with Task Dependency Graphs

  •    C++

A fast C++ header-only library that helps you quickly build parallel programs with complex task dependencies. Cpp-Taskflow lets you express parallel dependency graphs in modern C++17, supports both static and dynamic tasking, and is faster, more expressive, and easier to drop into existing code than comparable libraries.

accelerate - Embedded language for high-performance array computations

  •    Haskell

Data.Array.Accelerate defines an embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations). These computations are online-compiled and executed on a range of architectures. Chapter 6 of Simon Marlow's book Parallel and Concurrent Programming in Haskell contains a tutorial introduction to Accelerate.


Parallel Runtime Library

  •    

Parallel Runtime Library is an optimized library that provides easy-to-use, high-performance parallel computing. It provides an effective parallel runtime, concurrent data structures, task and data parallelism, and producer/consumer and agent models.
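The producer/consumer model named here is the standard pattern of a bounded queue shared between threads. A conceptual Java sketch of that pattern (not this library's API) looks like:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumer {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) queue.put(i);  // blocks while the queue is full
                    queue.put(-1);                               // sentinel: no more items
                } catch (InterruptedException ignored) {}
            });

            Thread consumer = new Thread(() -> {
                try {
                    int item;
                    while ((item = queue.take()) != -1)          // blocks while the queue is empty
                        System.out.println("consumed " + item);
                } catch (InterruptedException ignored) {}
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }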

futhark - :boom::computer::boom: A data-parallel functional programming language

  •    Haskell

Futhark is a purely functional data-parallel programming language. Its optimising compiler generates GPU code that is typically very performant. The language and compiler are developed at DIKU at the University of Copenhagen, originally as part of the HIPERFIT centre. Although still under heavy development, Futhark is already useful for practical high-performance programming. For more information, see the website.

Julia - Language for Technical Computing

  •    Julia

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Parallel computations can be distributed automatically across all available compute nodes, with the result reduced (for example, by summation with +) and returned at the calling node.

OrangeFS - Scale-out Network File System

  •    C

OrangeFS is a scale-out network file system designed for use on high-end computing (HEC) systems that provides very high-performance access to multi-server-based disk storage, in parallel. The OrangeFS server and client are user-level code, making them very easy to install and manage. OrangeFS has optimized MPI-IO support for parallel and distributed applications, and it is leveraged in production installations and used as a research platform for distributed and parallel storage.

Co-dfns - High-performance, Reliable, and Parallel APL

  •    APL

The Co-dfns project aims to provide a high-performance, high-reliability compiler for a parallel extension of the Dyalog dfns programming language. The dfns language is a functionally oriented, lexically scoped dialect of APL. The Co-dfns language extends the dfns language to include explicit task parallelism with implicit structures for synchronization and determinism. The language is designed to enable rigorous formal analysis of programs to aid in compiler optimization and programmer productivity, as well as in the general reliability of the code itself. Our mission is to deliver scalable APL programming to information and domain experts across many fields, expanding the scope and capabilities of what you can effectively accomplish with APL.

Arraymancer - A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU, OpenCL and embedded devices

  •    Nim

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, Cuda and OpenCL ndarray library on which to build a scientific computing and in particular a deep learning ecosystem. The library is inspired by Numpy and PyTorch. The library provides ergonomics very similar to Numpy, Julia and Matlab but is fully parallel and significantly faster than those libraries. It is also faster than C-based Torch.

Practical Parallel and Concurrent Programming

  •    

Eight two-week units of courseware (slides, lecture notes, samples, tools) for teaching how to program parallel/concurrent applications at a high-level using Microsoft’s Parallel Extensions to the .NET Framework.

clustershell - Scalable cluster administration Python framework — Manage node sets, node groups and execute commands on cluster nodes in parallel

  •    Python

ClusterShell is an event-driven open source Python library, designed to run local or distant commands in parallel on server farms or on large Linux clusters. It takes care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands with optimized execution algorithms, gathering results, merging identical outputs, and retrieving return codes. ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. ClusterShell's primary goal is to improve the administration of high-performance clusters by providing a lightweight but scalable Python API for developers. It also provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library features.

lwjgl3 - LWJGL is a Java library that enables cross-platform access to popular native APIs useful in the development of graphics (OpenGL), audio (OpenAL) and parallel computing (OpenCL) applications

  •    Kotlin

LWJGL (https://www.lwjgl.org) is a Java library that enables cross-platform access to popular native APIs useful in the development of graphics (OpenGL/Vulkan), audio (OpenAL) and parallel computing (OpenCL) applications. This access is direct and high-performance, yet also wrapped in a type-safe and user-friendly layer, appropriate for the Java ecosystem. LWJGL is an enabling technology and provides low-level access. It is not a framework and does not provide higher-level utilities than what the native libraries expose. As such, novice programmers are encouraged to try one of the frameworks or game engines that make use of LWJGL, before working directly with the library.
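A small sketch of the direct, low-level binding style this refers to, assuming LWJGL 3's GLFW and OpenGL bindings (the native libraries must be on the library path):

    import org.lwjgl.glfw.GLFW;
    import org.lwjgl.opengl.GL;
    import org.lwjgl.opengl.GL11;

    public class ClearWindow {
        public static void main(String[] args) {
            if (!GLFW.glfwInit()) throw new IllegalStateException("GLFW init failed");
            long window = GLFW.glfwCreateWindow(640, 480, "LWJGL", 0L, 0L);
            GLFW.glfwMakeContextCurrent(window);
            GL.createCapabilities();                     // bind the OpenGL context to LWJGL
            while (!GLFW.glfwWindowShouldClose(window)) {
                GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);  // direct call into the native API
                GLFW.glfwSwapBuffers(window);
                GLFW.glfwPollEvents();
            }
            GLFW.glfwTerminate();
        }
    }

The calls map one-to-one onto the underlying C functions, which is what the description means by direct, low-level access wrapped in a type-safe Java layer.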

hpat - A compiler-based big data framework in Python

  •    Python

High Performance Analytics Toolkit (HPAT) scales analytics/ML codes in Python to bare-metal cluster/cloud performance automatically. It compiles a subset of Python (Pandas/Numpy) to efficient parallel binaries with MPI, requiring only minimal code changes. HPAT is orders of magnitude faster than alternatives like Apache Spark. HPAT's documentation can be found here.

StratoSphere - Cloud Computing Framework for Big Data Analytics

  •    Java

The Stratosphere System is an open-source cluster/cloud computing framework for Big Data analytics. It comprises an extensible higher-level language (Meteor) to quickly compose queries for common and recurring use cases, a parallel programming model (PACT, an extension of MapReduce) to run user-defined operations, and an efficient massively parallel runtime (Nephele) for fault-tolerant execution of acyclic data flows.

compute - A C++ GPU Computing Library for OpenCL

  •    C++

Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL. The core library is a thin C++ wrapper over the OpenCL API and provides access to compute devices, contexts, command queues and memory buffers.

chapel - a Productive Parallel Programming Language

  •    Chapel

Chapel is a modern programming language designed for productive parallel computing at scale. Chapel's design and implementation have been undertaken with portability in mind, permitting Chapel to run on multicore desktops and laptops, commodity clusters, and the cloud, in addition to the high-end supercomputers for which it was originally designed. Chapel is developed and released under the terms of the Apache 2.0 license, though it also makes use of third-party packages under their own licensing terms. See the LICENSE file in this directory for details.

Transactional Entity Framework

  •    C++

Unleash the power of parallel computing with automatic transactional memory.





