jstorm - Enterprise Stream Processing Engine

  •    Java

Alibaba JStorm is a fast and stable enterprise stream processing engine. It runs programs up to 4x faster than Apache Storm, and it is easy to switch between record mode and mini-batch mode. JStorm is not just a stream processing engine; it aims to be a single solution for real-time requirements, an entire real-time ecosystem.
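JStorm is designed to be compatible with the Apache Storm programming model: spouts emit tuples and bolts process them, wired together by a TopologyBuilder. The sketch below is a minimal, self-contained illustration using the classic backtype.storm API; the component names, logic, and parallelism hints are arbitrary placeholders rather than anything prescribed by JStorm.

```java
import java.util.Map;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class SentenceCountTopology {

    // Emits a constant sentence; a real spout would read from a queue or log.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values("the quick brown fox"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Counts how many sentences it has seen; emits nothing downstream.
    public static class CountBolt extends BaseRichBolt {
        private long count;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.count = 0;
        }

        @Override
        public void execute(Tuple input) {
            count++;
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // No output fields for this terminal bolt.
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 2);
        builder.setBolt("counter", new CountBolt(), 4).shuffleGrouping("sentences");

        Config conf = new Config();
        conf.setNumWorkers(2);
        StormSubmitter.submitTopology("sentence-count", conf, builder.createTopology());
    }
}
```

In principle the same topology code can be submitted to either engine, since JStorm keeps a Storm-compatible API; only the client libraries and cluster configuration differ.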

Apache Tez - A Framework for YARN-based, Data Processing Applications In Hadoop

  •    Java

Apache Tez is an extensible framework for building high performance batch and interactive data processing applications, coordinated by YARN in Apache Hadoop. Tez improves the MapReduce paradigm by dramatically improving its speed, while maintaining MapReduce’s ability to scale to petabytes of data. Important Hadoop ecosystem projects like Apache Hive and Apache Pig use Apache Tez, as do a growing number of third party data access applications developed for the broader Hadoop ecosystem.
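To give a feel for the Tez DAG API that such applications build on, here is a rough sketch of a two-vertex DAG joined by a shuffle edge. It is illustrative only: the processor and input/output class names are hypothetical placeholders, and a real job would normally configure the edge with the helper classes from tez-runtime-library.

```java
import org.apache.tez.dag.api.DAG;
import org.apache.tez.dag.api.Edge;
import org.apache.tez.dag.api.EdgeProperty;
import org.apache.tez.dag.api.EdgeProperty.DataMovementType;
import org.apache.tez.dag.api.EdgeProperty.DataSourceType;
import org.apache.tez.dag.api.EdgeProperty.SchedulingType;
import org.apache.tez.dag.api.InputDescriptor;
import org.apache.tez.dag.api.OutputDescriptor;
import org.apache.tez.dag.api.ProcessorDescriptor;
import org.apache.tez.dag.api.Vertex;

public class TwoStageDag {
    public static DAG build() {
        // Each vertex runs a user-supplied processor; the class names are placeholders.
        Vertex mapper = Vertex.create("mapper",
                ProcessorDescriptor.create("com.example.MyMapProcessor"), 4);
        Vertex reducer = Vertex.create("reducer",
                ProcessorDescriptor.create("com.example.MyReduceProcessor"), 2);

        // A scatter-gather edge shuffles mapper output to the reducer, much like MapReduce.
        EdgeProperty shuffle = EdgeProperty.create(
                DataMovementType.SCATTER_GATHER,
                DataSourceType.PERSISTED,
                SchedulingType.SEQUENTIAL,
                OutputDescriptor.create("com.example.MyShuffleOutput"),
                InputDescriptor.create("com.example.MyShuffleInput"));

        return DAG.create("two-stage-dag")
                .addVertex(mapper)
                .addVertex(reducer)
                .addEdge(Edge.create(mapper, reducer, shuffle));
    }
}
```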

Fluo - Make incremental updates to large data sets stored in Apache Accumulo

  •    Java

Apache Fluo (incubating) is an open source implementation of Percolator (which populates Google's search index) for Apache Accumulo. Fluo makes it possible to update the results of a large-scale computation, index, or analytic as new data is discovered. When combining new data with existing data, Fluo offers reduced latency when compared to batch processing frameworks (e.g. Spark, MapReduce).

Hazelcast Jet - Distributed data processing engine, built on top of Hazelcast

  •    Java

Hazelcast Jet is a distributed computing platform built for high-performance stream processing and fast batch processing. It embeds the Hazelcast In-Memory Data Grid (IMDG) to provide a lightweight package of a processor and scalable in-memory storage. It provides a distributed java.util.stream API for Hazelcast data structures such as IMap and IList, as well as distributed implementations of the java.util.{Queue, Set, List, Map} data structures, highly optimized for processing.

Apache Beam - Unified model for defining both batch and streaming data-parallel processing pipelines

  •    Java

Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.
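As a quick illustration of the Beam Java SDK, here is a minimal word-count pipeline modeled on Beam's canonical example; the input and output paths are placeholders, and the runner is selected through the pipeline options supplied at launch.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        pipeline
            // Placeholder input path; any TextIO-readable location works.
            .apply(TextIO.read().from("gs://my-bucket/input/*.txt"))
            // Split each line into words.
            .apply(FlatMapElements.into(TypeDescriptors.strings())
                    .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
            // Count occurrences of each word.
            .apply(Count.perElement())
            // Format the counts as text.
            .apply(MapElements.into(TypeDescriptors.strings())
                    .via((KV<String, Long> wordCount) ->
                            wordCount.getKey() + ": " + wordCount.getValue()))
            // Placeholder output path.
            .apply(TextIO.write().to("gs://my-bucket/output/wordcounts"));

        pipeline.run().waitUntilFinish();
    }
}
```

The same pipeline code runs on any supported back-end; switching runners is a matter of changing the runner dependency and the options passed on the command line.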

spring-cloud-dataflow - Spring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines

  •    Java

Spring Cloud Data Flow is a toolkit for building data integration and real-time data processing pipelines. Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks.
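Each pipeline step is typically an ordinary Spring Cloud Stream application. As a rough sketch (using the functional programming model; the application and function names here are invented), a processor app that upper-cases message payloads could look like the following, and would then be registered with Data Flow and composed into a stream definition such as http | uppercase | log.

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// A minimal Spring Cloud Stream processor. With the spring-cloud-stream dependency and a
// binder (e.g. Kafka or RabbitMQ) on the classpath, the Function bean is bound to an input
// and an output destination automatically.
@SpringBootApplication
public class UppercaseProcessorApplication {

    @Bean
    public Function<String, String> uppercase() {
        return payload -> payload.toUpperCase();
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessorApplication.class, args);
    }
}
```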

batch-shipyard - Execute batch and HPC Dockerized workloads on Azure Batch with shared file system provisioning and linking support

  •    Python

Batch Shipyard executes batch and HPC Dockerized workloads on Azure Batch, with shared file system provisioning and linking support. Additionally, it can provision and manage entire standalone remote file systems (storage clusters) in Azure, independent of any integrated Azure Batch functionality. Batch Shipyard is also integrated directly into Azure Cloud Shell, so you can execute any Batch Shipyard workload from your web browser or from the Microsoft Azure Android and iOS apps.

pssh - Parallel SSH Tools

  •    Python

PSSH is supported on Python 2.5 and later (including Python 3.1 and later). It was originally written and maintained by Brent N. Chun. Due to his busy schedule, Brent handed over maintenance to Andrew McNabb in October 2009. The project was originally hosted on Google Code; since Google Code has shut down and the project has not reappeared elsewhere, I (lilydjwg) have exported it to GitHub.

monad-batcher - An applicative monad that batches commands for later more efficient execution

  •    Haskell

The monad-batcher package provides the Batcher applicative monad, which batches commands so that they can later be executed more efficiently. See the example.

launcher - A simple utility for executing multiple sequential or multi-threaded applications in a single multi-node batch job

  •    Shell

Launcher is a utility for performing simple, data parallel, high throughput computing (HTC) workflows on clusters, massively parallel processor (MPP) systems, workgroups of computers, and personal machines. Launcher does not need to be compiled. Unpack the tarball or clone the repository in the desired directory. Then, set LAUNCHER_DIR to point to that location. Python 2.7 or greater and hwloc are required for full functionality. See INSTALL for more information.

euphoria - Euphoria is an open source Java API for creating unified big-data processing flows

  •    Java

Euphoria is a Java API for creating unified big-data processing flows. It provides an engine-independent programming model that can express both batch and stream transformations.

corb2 - MarkLogic tool for processing and reporting on content, enhanced from the original CoRB

  •    Java

CORB is a Java tool designed for bulk content-reprocessing of documents stored in MarkLogic. CORB stands for COntent Reprocessing in Bulk and is a multi-threaded workhorse tool at your disposal. In a nutshell, CORB works off a list of documents in a database and performs operations against those documents. CORB operations can include generating a report across all documents, manipulating the individual documents, or a combination thereof. This document provides a comprehensive overview of CORB and the options available to customize the execution of a CORB job, as well as the ModuleExecutor Tool, which can be used to execute a single (XQuery or JavaScript) module in MarkLogic.

batchman - This library for Android will take any set of events and batch them up before sending it to the server

  •    Java

BatchMan (short for batch manager) is an Android library that batches events according to the configuration supplied by the client and hands each completed batch back to the client. The library is written to be flexible, so the client can plug in their own batching implementations.
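To illustrate the general pattern (this is a conceptual sketch only, not BatchMan's actual API; every name here is invented), an event batcher buffers events until a size threshold is reached and then hands the whole batch to a listener in one call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Conceptual sketch only: a size-based event batcher, not BatchMan's real API.
public class SimpleEventBatcher<E> {
    private final int maxBatchSize;
    private final Consumer<List<E>> batchListener;
    private final List<E> buffer = new ArrayList<>();

    public SimpleEventBatcher(int maxBatchSize, Consumer<List<E>> batchListener) {
        this.maxBatchSize = maxBatchSize;
        this.batchListener = batchListener;
    }

    // Adds an event; when the buffer reaches the configured size, the batch is flushed.
    public synchronized void add(E event) {
        buffer.add(event);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Hands the current batch back to the listener (e.g. code that sends it to a server).
    public synchronized void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        batchListener.accept(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```

A production library layers persistence and time-based flush policies on top of this basic size-based pattern.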

oomact - Object Oriented Modular Abstract Calibration Toolbox

  •    C++

Modular toolbox to build online/offline, incremental/batch calibration modules. This work is supported in part by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant #610603 (EUROPA2).

replace - Generic file search & replace tool, written in Python 3

  •    Python

Generic file search & replace tool, written in Python 3. The command lines in its README are executed and verified by clitest, using the clitest README.md command.