nextflow - A DSL for data-driven computational pipelines

  •    Groovy

With the rise of big data, techniques to analyse and run experiments on large datasets are increasingly necessary. Parallelization and distributed computing are the best ways to tackle this kind of problem, but the tools commonly available to the bioinformatics community traditionally lack good support for these techniques, provide a model that fits badly with the specific requirements of the bioinformatics domain, or, most of the time, require knowledge of complex tools and low-level APIs.

toil - A scalable, efficient, cross-platform and easy-to-use workflow engine in pure Python

  •    Python

ATTENTION: Toil has moved from https://github.com/BD2KGenomics/toil to https://github.com/DataBiosphere/toil as of July 5th, 2018. Toil is a scalable, efficient, cross-platform pipeline management system, written entirely in Python, and designed around the principles of functional programming.
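
Toil workflows are ordinary Python programs built from job functions. A minimal sketch following the quickstart pattern from the Toil documentation; the job store locator passed on the command line (e.g. file:my-jobstore) depends on your deployment:

```python
from toil.common import Toil
from toil.job import Job

def hello(job, message):
    # A Toil job function: Toil schedules it on a worker and makes its
    # return value available to the rest of the workflow.
    return "Hello, %s!" % message

if __name__ == "__main__":
    # The default parser expects a job store locator, e.g. "file:my-jobstore".
    parser = Job.Runner.getDefaultArgumentParser()
    options = parser.parse_args()
    with Toil(options) as workflow:
        print(workflow.start(Job.wrapJobFn(hello, "world")))
```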

hpc-in-a-day - a temporary fork of softwarecarpentry/hpc-novice

  •    Python

A novice introduction to high-performance computing. This material was conceived as a sandbox project for swcarpentry/hpc-novice, and parts of it will be contributed to swcarpentry/hpc-novice in due course. The material targets future users of an HPC infrastructure from any discipline. Learners are expected to have introductory-level programming skills and to know their way around the UNIX command line at a beginner level.

batchtools - Tools for computation on batch systems

  •    R

As the successor of the packages BatchJobs and BatchExperiments, batchtools provides a parallel implementation of Map for high-performance computing systems managed by schedulers such as Slurm, Sun Grid Engine, OpenLava, TORQUE/OpenPBS, Load Sharing Facility (LSF) or Docker Swarm (see the setup section in the vignette). You then need to set up batchtools for your HPC system (it will run sequentially otherwise); see the vignette for instructions.

clustermq - R package to send function calls as jobs on LSF, SGE, Slurm, PBS/Torque, or each via SSH

  •    R

Computations are done entirely over the network and without any temporary files on network-mounted storage, so there is no strain on the file system apart from starting up R once per job; this also makes sending data and results around a lot quicker. All calculations are load-balanced, i.e. workers that get their jobs done faster will also receive more function calls to work on. This is especially useful if not all calls return in the same amount of time, or if one worker is under high load.

slurm-https - A simple HTTPS API for Slurm

  •    Go

A simple HTTPS API for Slurm. By default, the server listens on :8443.
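
The description only tells us the default listen address, so the route below is a hypothetical placeholder, not the project's documented API. A minimal sketch of calling such an HTTPS endpoint from Python:

```python
import requests

BASE = "https://localhost:8443"  # default listen address per the description

# "/jobs" is a made-up route for illustration only; consult the project's
# documentation for the real endpoints and authentication scheme.
resp = requests.get(BASE + "/jobs", verify="server.crt")  # trust the server cert
resp.raise_for_status()
print(resp.json())
```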

future - A generic API for using futures in R

  •    R

The future package provides a generic API for using futures in R. A future is a simple yet powerful mechanism to evaluate an R expression and retrieve its value at some point in time. Futures can be resolved in many different ways depending on which strategy is used, and the future package offers a variety of synchronous and asynchronous strategies to choose from. The companion package, future.batchtools, provides a type of future that utilizes the batchtools package, which means that any backend supported by batchtools can be used as a future. More specifically, future.batchtools allows you, or the users of your package, to leverage the compute power of high-performance computing (HPC) clusters via a simple switch in settings, without having to change any code at all.


funnel - Funnel is a toolkit for distributed task execution via a simple, standard API.

  •    Go

Funnel is a toolkit for distributed, batch task execution, including a server, worker, and a set of compute, storage, and database backends. Given a task description, Funnel will find a worker to execute the task, download inputs, run a series of (Docker) containers, upload outputs, capture logs, and track the whole process. Funnel is an implementation of the GA4GH Task Execution Schemas, an effort to standardize the APIs used for task execution across many platforms.
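
Because a task description is just structured data sent to a standard API, submitting one is a plain HTTP POST. A minimal sketch in Python; the port (8000) and route (/v1/tasks) are assumptions based on common TES setups, so verify them against your Funnel deployment:

```python
import requests

# A GA4GH TES-style task: run one container and capture its output.
task = {
    "name": "hello-world",
    "executors": [{
        "image": "alpine",             # container image to run
        "command": ["echo", "hello"],  # command executed inside the container
    }],
}

# Port and route are assumptions; adjust to your Funnel server.
resp = requests.post("http://localhost:8000/v1/tasks", json=task)
resp.raise_for_status()
print(resp.json())  # typically a task ID you can poll for status and logs
```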

sparkhpc - launching and controlling spark on hpc clusters

  •    Python

This package tries to greatly simplify deploying and managing Apache Spark clusters on HPC resources. Installing it places the Python package in your default package directory and installs the sparkcluster and hpcnotebook command-line scripts.
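
A rough sketch of the intended workflow from Python; the names used here (sparkjob, ncores, wait_to_start, master_url) are recalled from the project's README and should be treated as assumptions rather than a verified API:

```python
import pyspark
from sparkhpc import sparkjob

# Ask the scheduler for a standalone Spark cluster as a batch job.
# All names and signatures below are assumptions -- check the README.
sj = sparkjob.sparkjob(ncores=8)
sj.wait_to_start()  # block until the Spark master is up

# Attach a SparkContext to the cluster the job brought up.
sc = pyspark.SparkContext(master=sj.master_url())
print(sc.parallelize(range(100)).sum())
```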

fyrd - Submit functions and shell scripts to torque and slurm clusters or local machines using python

  •    Python

Note: Development is currently primarily on the 0.6.2 branch. The master branch reflects the latest 0.6.1 version, which is technically more stable than 0.6.2 but contains several bugs that 0.6.2 fixes, as well as a more restricted feature set. Installing via pip will install the 0.6.2 version. fyrd allows simple job submission with dependency tracking and queue waiting, either on torque or slurm or locally with the multiprocessing module. It uses simple techniques to avoid overwhelming the queue and to catch bugs on the fly.
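
A sketch of that submission model; fyrd.submit, the depends keyword, and Job.get follow the project's documented usage as best recalled here, so treat the exact signatures as assumptions:

```python
import fyrd

def square(x):
    return x * x

# Submit a function as a cluster job (or via local multiprocessing when no
# scheduler is available); fyrd detects torque/slurm/local automatically.
job = fyrd.submit(square, (11,))

# Dependency tracking: this job is held until the first one finishes.
job2 = fyrd.submit(square, (12,), depends=job)

print(job.get(), job2.get())  # .get() waits for completion and fetches results
```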

HPC - A collection of various resources, examples, and executables for the general NREL HPC user community's benefit

  •    Jupyter

This repository serves as a collection of walkthroughs, utilities, and other resources to improve the quality of life of NREL HPC users, both novice and veteran. We are here to help: if you need help with a specific issue or would like to see a topic covered, please open an issue. If you have materials that could be useful to the NREL community, please see our contributing guidelines and open a pull request.

wlm-operator - Singularity implementation of k8s operator for interacting with SLURM.

  •    Go

The singularity-cri and wlm-operator projects were created by Sylabs to explore the interaction between the Kubernetes and HPC worlds. In 2020, rather than dilute our efforts over a large number of projects, we focused on Singularity itself and our supporting services, and we look forward to introducing new features and technologies in 2021. At this point we have archived the repositories to indicate that they are not under active development or maintenance. We recognize there is still interest in singularity-cri and wlm-operator, and we would like these projects to find a home within a community that can develop and maintain them further. The code is open source under the Apache License 2.0, to be compatible with other projects in the k8s ecosystem.
