NPipeline


NPipeline is a .NET port of the Apache Commons Pipeline components. It is a lightweight set of utilities that make it simple to implement parallelized data processing systems.

http://npipeline.codeplex.com/
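The NPipeline API itself is not documented on this page, but the pattern behind it (and behind Apache Commons Pipeline) is a chain of stages, each running on its own worker and handing records to the next stage through a queue. Below is a minimal, purely illustrative Java sketch of that stage-and-queue pattern; the class and variable names are invented for the example and are not the NPipeline or Commons Pipeline API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative only: a two-stage pipeline where each stage runs on its own
// thread and hands results to the next stage through a bounded queue.
// These names are hypothetical and are NOT the NPipeline API.
public class PipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> raw = new ArrayBlockingQueue<>(16);
        BlockingQueue<Integer> lengths = new ArrayBlockingQueue<>(16);
        final String POISON = "__END__";

        // Stage 1: normalize incoming records and emit derived values.
        Thread normalize = new Thread(() -> {
            try {
                for (String s; !(s = raw.take()).equals(POISON); ) {
                    lengths.put(s.trim().length());
                }
                lengths.put(-1); // propagate the shutdown marker downstream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Stage 2: consume the derived values until the shutdown marker arrives.
        Thread report = new Thread(() -> {
            try {
                for (int n; (n = lengths.take()) >= 0; ) {
                    System.out.println("processed record of length " + n);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        normalize.start();
        report.start();

        for (String record : new String[] {"  alpha ", "beta", " gamma  "}) {
            raw.put(record);
        }
        raw.put(POISON);

        normalize.join();
        report.join();
    }
}
```

Bounded queues provide back-pressure between stages, which is what lets each stage be scaled or parallelized independently.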

Related Projects

Modular toolkit for Data Processing (MDP)


The Modular toolkit for Data Processing (MDP) is a Python data processing framework. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. From the scientific developer's perspective, MDP is a modular framework which can easily be expanded; the implementation of new algorithms is easy and intuitive.

NIPO Data Processing Component Framework


NIPO is a general-purpose component framework for data processing applications that follow the IPO (input-process-output) principle. Its plugin-based architecture makes it scalable and flexible, and enables a broad range of usage scenarios.

de.cau.dataprocessing - A multi-purpose data processing framework using data parallelism mechanisms.


A multi-purpose data processing framework using data parallelism mechanisms.

SiteWhere - The Open Platform for Internet of Things (IoT)


SiteWhere is an open source platform for capturing, storing, integrating, and analyzing data from IoT devices. SiteWhere is a multi-tenant application enablement platform for the Internet of Things (IoT), providing device management, complex event processing (CEP), and integration through a modern, scalable architecture. SiteWhere provides REST APIs for all system functionality.

Samza - Distributed Stream Processing Framework


Apache Samza is a distributed stream processing framework. It uses Apache Kafka for messaging and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management. It provides a very simple callback-based process-message API that should be familiar to anyone who has used MapReduce. Samza was originally developed at LinkedIn, where it is currently used to process tracking data and service log data, and for data ingestion pipelines for real-time services.
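As a rough illustration of that callback style, here is a minimal task written against Samza's low-level StreamTask interface; the system and stream names ("kafka", "uppercased-output") are made-up examples.

```java
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

// Samza calls process() once per incoming message; this task forwards an
// uppercased copy of each message to an output Kafka stream.
public class UppercaseTask implements StreamTask {
    private static final SystemStream OUTPUT =
            new SystemStream("kafka", "uppercased-output"); // example names

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String message = (String) envelope.getMessage();
        collector.send(new OutgoingMessageEnvelope(OUTPUT, message.toUpperCase()));
    }
}
```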



data-mill - A framework to organize automated data collection-and-processing pipelines.


A framework to organize automated data collection-and-processing pipelines.

geometry-api-java


The Esri Geometry API for Java can be used to enable spatial data processing in 3rd-party data-processing solutions. Developers of custom MapReduce-based applications for Hadoop can use this API for spatial processing of data in the Hadoop system. The API is also used by the [Hive UDFs](https://github.com/Esri/spatial-framework-for-hadoop) and could be used by developers building geometry functions for 3rd-party applications such as [Cassandra](https://cassandra.apache.org/) and HBase.
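As a sketch of the kind of per-record predicate such a Hive UDF or MapReduce job would evaluate, the following assumes the com.esri.core.geometry classes; the coordinates are arbitrary.

```java
import com.esri.core.geometry.GeometryEngine;
import com.esri.core.geometry.Point;
import com.esri.core.geometry.Polygon;
import com.esri.core.geometry.SpatialReference;

// Point-in-polygon test: the kind of spatial predicate applied to each record.
public class ContainsExample {
    public static void main(String[] args) {
        SpatialReference wgs84 = SpatialReference.create(4326);

        // A small triangle built from longitude/latitude pairs; polygon paths
        // are treated as closed rings, so no explicit closing segment is added.
        Polygon triangle = new Polygon();
        triangle.startPath(0, 0);
        triangle.lineTo(10, 0);
        triangle.lineTo(5, 10);

        Point p = new Point(5, 3); // test point inside the triangle

        System.out.println("contains: " + GeometryEngine.contains(triangle, p, wgs84));
    }
}
```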

Performance Co-Pilot - System Performance and Analysis Framework.


Performance Co-Pilot (PCP) provides a framework and services to support system-level performance monitoring and management. It presents a unifying abstraction for all of the performance data in a system, and many tools for interrogating, retrieving and processing that data. The distributed PCP architecture makes it especially useful for those seeking centralized monitoring of distributed processing.

sumpter - Data processing pipeline building software framework


Data processing pipeline building software framework

elastic.js - Data processing framework based on node.js


Data processing framework based on node.js

gigantron - Framework for Data Processing


Framework for Data Processing

NWIO - Cocoa framework for processing binary data in small chunks


Cocoa framework for processing binary data in small chunks

MaPy - Light-weight Framework for large-scale data processing and analysis written in Python.


Light-weight Framework for large-scale data processing and analysis written in Python.

BioJava - Java Framework for Processing Biological Data


BioJava is an open-source project dedicated to providing a Java framework for processing biological data. It provides analytical and statistical routines, parsers for common file formats, and allows the manipulation of sequences and 3D structures. The goal of the BioJava project is to facilitate rapid application development for bioinformatics.
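A minimal sketch of the sequence-manipulation side, assuming the BioJava 4+ core sequence API (org.biojava.nbio.core.sequence); the input DNA string is an arbitrary example.

```java
import org.biojava.nbio.core.sequence.DNASequence;
import org.biojava.nbio.core.sequence.ProteinSequence;
import org.biojava.nbio.core.sequence.RNASequence;

// Transcribe an example DNA sequence to RNA and translate it to protein.
public class TranslateExample {
    public static void main(String[] args) throws Exception {
        DNASequence dna = new DNASequence("ATGGCGTGA"); // arbitrary example sequence
        RNASequence rna = dna.getRNASequence();
        ProteinSequence protein = rna.getProteinSequence();
        System.out.println(protein.getSequenceAsString()); // print the translated peptide
    }
}
```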

inv1 - PHP framework - Easily create web services and data processing


A PHP framework for easily creating web services and data processing.

tangle - A python framework for distributed data processing


A python framework for distributed data processing

GPSTool - GPSTool provides a framework for processing GPS data.


GPSTool provides a framework for processing GPS data.

hadoop-binary-analysis - Framework that makes processing arbitrary binary data in Hadoop easier


Framework that makes processing arbitrary binary data in Hadoop easier

ceplog - Log data analysis using Esper (complex event processing framework)


Log data analysis using Esper (complex event processing framework)

Apache Beam - Unified model for defining both batch and streaming data-parallel processing pipelines


Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.
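A minimal word-count sketch with the Beam Java SDK; run without arguments it executes on the local DirectRunner (if that runner is on the classpath), and the same code can target the back-ends above through pipeline options. The input strings are arbitrary.

```java
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

// A tiny word-count pipeline: the pipeline definition is runner-agnostic,
// and the execution back-end is selected through the pipeline options.
public class WordCountSketch {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);

        p.apply(Create.of("beam unifies batch and streaming",
                          "one model many runners"))
         // Split each line into words.
         .apply(FlatMapElements.into(TypeDescriptors.strings())
                 .via((String line) -> Arrays.asList(line.split(" "))))
         // Count occurrences of each word.
         .apply(Count.perElement())
         // Format and print each (word, count) pair.
         .apply(MapElements.into(TypeDescriptors.strings())
                 .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
         .apply(MapElements.into(TypeDescriptors.strings())
                 .via((String line) -> { System.out.println(line); return line; }));

        p.run().waitUntilFinish();
    }
}
```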