
spack - A flexible package manager that supports multiple versions, configurations, platforms, and compilers

  •    Python

Spack is a multi-platform package manager that builds and installs multiple versions and configurations of software. It works on Linux, macOS, and many supercomputers. Spack is non-destructive: installing a new version of a package does not break existing installations, so many configurations of the same package can coexist. Spack offers a simple "spec" syntax that allows users to specify versions and configuration options. Package files are written in pure Python, and specs allow package authors to write a single script for many different builds of the same package. With Spack, you can build your software all the ways you want to.
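
As a hedged illustration of the spec syntax and Python package files mentioned above, a minimal package recipe might look roughly like the sketch below. The package name, version, URL and variant are invented for illustration, and the exact import line varies between Spack releases.

    # package.py -- illustrative sketch only, not a real Spack recipe
    from spack.package import *   # older Spack releases use "from spack import *"

    class Mylib(Package):
        """Hypothetical library used to show the package file format."""

        homepage = "https://example.com/mylib"
        url = "https://example.com/mylib-1.2.0.tar.gz"

        version("1.2.0", sha256="...")   # checksum elided in this sketch
        variant("mpi", default=True, description="Build with MPI support")
        depends_on("mpi", when="+mpi")

        def install(self, spec, prefix):
            # one recipe covers many builds; the spec decides the details
            configure("--prefix={0}".format(prefix),
                      "--with-mpi" if "+mpi" in spec else "--without-mpi")
            make()
            make("install")

On the command line, a spec for this hypothetical package could then be written as, for example, spack install mylib@1.2.0 +mpi %gcc, selecting a version, a variant and a compiler in a single expression.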

compute - A C++ GPU Computing Library for OpenCL

  •    C++

Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL. The core library is a thin C++ wrapper over the OpenCL API and provides access to compute devices, contexts, command queues and memory buffers.
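
A minimal sketch of that wrapper in use, closely following the pattern shown in the project's documentation (error handling omitted), transfers a vector to the default device and computes square roots there:

    #include <vector>
    #include <algorithm>
    #include <cstdlib>
    #include <boost/compute.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // get the default compute device and set up a context and queue on it
        compute::device gpu = compute::system::default_device();
        compute::context ctx(gpu);
        compute::command_queue queue(ctx, gpu);

        // generate some data on the host
        std::vector<float> host_vector(10000);
        std::generate(host_vector.begin(), host_vector.end(), rand);

        // create a memory buffer (vector) on the device
        compute::vector<float> device_vector(host_vector.size(), ctx);

        // transfer the data to the device
        compute::copy(host_vector.begin(), host_vector.end(),
                      device_vector.begin(), queue);

        // compute the square root of each element in place on the device
        compute::transform(device_vector.begin(), device_vector.end(),
                           device_vector.begin(), compute::sqrt<float>(), queue);

        // copy the results back to the host
        compute::copy(device_vector.begin(), device_vector.end(),
                      host_vector.begin(), queue);

        return 0;
    }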

singularity - Singularity: Application containers for Linux

  •    Go

Singularity is an open source container platform designed to be simple, fast, and secure. Singularity is optimized for EPC and HPC workloads, allowing untrusted users to run untrusted containers in a trusted way. Check out who is using Singularity and some of its use cases on the project website.

ompi - Open MPI main development repository

  •    C

Open MPI main development repository

MPJ Express - Parallel Programming in Java

  •    Java

MPJ Express is an open source Java message passing library that allows application developers to write and execute parallel applications for multicore processors and compute clusters/clouds. It allows writing parallel Java applications using an MPI-like API.
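
A minimal sketch of what that MPI-like API looks like in practice is shown below; the class name and message contents are illustrative, and programs are normally started with MPJ Express's mpjrun launcher.

    import mpi.*;

    public class HelloMPJ {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);                       // initialise the MPJ runtime

            int rank = MPI.COMM_WORLD.Rank();     // this process's id
            int size = MPI.COMM_WORLD.Size();     // total number of processes

            int[] msg = new int[1];
            if (rank == 0) {
                msg[0] = 42;
                for (int dest = 1; dest < size; dest++) {
                    // Send(buffer, offset, count, datatype, destination, tag)
                    MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, dest, 99);
                }
            } else {
                MPI.COMM_WORLD.Recv(msg, 0, 1, MPI.INT, 0, 99);
                System.out.println("process " + rank + " received " + msg[0]);
            }

            MPI.Finalize();
        }
    }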

nextflow - A DSL for data-driven computational pipelines

  •    Groovy

With the rise of big data, techniques for analysing and running experiments on large datasets are increasingly necessary. Parallelization and distributed computing are the best ways to tackle this kind of problem, but the tools commonly available to the bioinformatics community traditionally lack good support for these techniques or provide a model that fits poorly with the specific requirements of the bioinformatics domain, and most of the time they require knowledge of complex tools or low-level APIs.

learn-julia-the-hard-way - Learn Julia the hard way!

  •    Makefile

The Julia base package is pretty big, although at the same time there are plenty of other packages available to expand it with. The result is that, on the whole, it is impossible to give a thorough overview of everything Julia can do in just a few brief exercises. Therefore, I had to adopt a little 'bias', or 'slant' if you please, in deciding what to focus on and what to ignore. Julia is a technical computing language, although it has the capabilities of any general-purpose language and you'd be hard-pressed to find tasks it's completely unsuitable for (although that does not mean it's the best or easiest choice for any of them). Julia was developed with occasional reference to R, and with an avowed intent to improve upon R's clunkiness. R is a great language, but relatively slow, to the point that most people use it for rapid prototyping and then implement the algorithm for production in Python or Java. Julia seeks to be as approachable as R but without the speed penalty.

futhark - :boom::computer::boom: A data-parallel functional programming language

  •    Haskell

Futhark is a purely functional data-parallel programming language. Its optimising compiler typically generates very performant GPU code. The language and compiler are developed at DIKU at the University of Copenhagen, originally as part of the HIPERFIT centre. Although still under heavy development, Futhark is already useful for practical high-performance programming. For more information, see the website.

future - :rocket: R package: future: Unified Parallel and Distributed Processing in R for Everyone

  •    R

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user. In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a sequential strategy, which means it is resolved in the current R session. Other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions in parallel on the current machine or concurrently on a compute cluster.
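
A minimal sketch of the API described above follows; the computation inside the future is a stand-in for real work.

    library(future)

    plan(multisession)    # resolve futures in parallel on the current machine;
                          # plan(sequential) or plan(cluster, ...) also work

    f <- future({         # starts evaluating asynchronously
      Sys.sleep(2)        # stand-in for an expensive computation
      sum(rnorm(1e6))
    })

    resolved(f)           # check whether it is resolved, without blocking
    value(f)              # blocks until resolved, then returns the value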

CuBLAS.Net

  •    

A wrapper for NVIDIA's cuBLAS (CUDA Basic Linear Algebra Subprograms) library for the CLR.

LMS

  •    

Activity blog of the Microsoft Innovation Center - Interop, also known as LMS, at Unicamp.

UACluster2

  •    

UACluster2 is a set of manuals and tools for creating and managing a high-performance computing cluster based on Microsoft Hyper-V virtual machines. It requires Microsoft HPC Server 2008 (or Microsoft HPC Server 2008 R2) as the basis for cluster creation.

Parallel Dwarfs

  •    CSharp

The Parallel Dwarfs project is a suite of 13 kernels (as VS projects in C++/C#/F#) parallelized using various technologies such as MPI, OpenMP, TPL, MPI.Net, etc. It also has a driver to run them, collect traces, and visualize the results using Vampir, Jumpshot, Xperf, and Excel.

Shared Genomics Project MPI Codebase

  •    

The Shared Genomics project has developed parallelised statistical applications (MPI/OpenMP) that can analyse large genomic datasets containing thousands of single nucleotide polymorphisms (SNPs). The code is based on the popular PLINK SNP-analysis program.

Transactional Entity Framework

  •    C++

Unleash the power of parallel computing with automatic transactional memory.

Interop Router

  •    

This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.

udocker - A basic user tool to execute simple docker containers in batch or interactive systems without root privileges

  •    Python

A basic user tool to execute simple Docker containers in user space without requiring root privileges. It enables download and execution of Docker containers by non-privileged users on Linux systems where Docker is not available, and it can be used to pull and execute Docker containers in Linux batch systems and interactive clusters that are managed by other entities such as grid infrastructures or externally managed batch or interactive systems. The INDIGO udocker does not require any privileges nor the deployment of services by system administrators; it can be downloaded and executed entirely by the end user.
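
A rough sketch of typical non-privileged usage follows; the image and container names are illustrative.

    udocker pull ubuntu:22.04                     # fetch an image without a docker daemon
    udocker create --name=myubuntu ubuntu:22.04   # create a container from the image
    udocker run myubuntu /bin/bash                # execute it without root privileges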

skale - High performance distributed data processing engine

  •    Javascript

High performance distributed data processing and machine learning. Skale provides a high-level API in JavaScript and an optimized parallel execution engine on top of Node.js.




