A fast, reliable asset pipeline, supporting constant-time rebuilds and compact build definitions. Comparable to the Rails asset pipeline in scope, though it runs on Node and is backend-agnostic. For background and architecture, see the introductory blog post. For the command line interface, see broccoli-cli.
builder build frontend browser asset pipeline

Stream.js is a lightweight (2.6 KB minified, gzipped), heavily tested (700+ assertions, 97% coverage) functional programming library for operating upon collections of in-memory data. It requires ECMAScript 5+, has built-in support for ES6 features and works in all current browsers, Node.js and Java 8 Nashorn. Before explaining how Stream.js works in detail, here are a few real-world code samples.
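To illustrate the lazy pipeline style such a library encourages, here is a minimal stdlib Python sketch of the concept. The class and method names are illustrative only, not Stream.js's actual JavaScript API:

```python
# Conceptual sketch of a lazy filter/map pipeline, similar in spirit to
# Stream.js chains. Illustrative names, not the library's real API.
class Stream:
    def __init__(self, iterable):
        self._it = iter(iterable)

    def filter(self, pred):
        # Wrap in a generator: nothing is evaluated yet.
        self._it = (x for x in self._it if pred(x))
        return self

    def map(self, fn):
        self._it = (fn(x) for x in self._it)
        return self

    def to_list(self):
        # Terminal operation: only here is the pipeline actually run.
        return list(self._it)

result = Stream(range(10)).filter(lambda x: x % 2 == 0).map(lambda x: x * x).to_list()
print(result)  # [0, 4, 16, 36, 64]
```

The key design point is laziness: intermediate operations only stack generators, and no element is touched until a terminal operation consumes the stream.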
stream streaming-api stream-pipeline functional collection pipeline lazy utils array

A crawler for vertical communities, written in Go. Latest stable release: version 1.2 (Sep 23, 2014).
spider crawler schedule pipeline

Data integration pipelines as code: pipelines, tasks and commands are created using declarative Python code, with PostgreSQL as the data processing engine.
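A minimal stdlib sketch of the "pipelines as declarative Python code" idea. The class and parameter names here are hypothetical, chosen only to illustrate the pattern, and are not this project's real API:

```python
# Illustrative sketch: declaring a pipeline of tasks in plain Python.
# Class names and fields are hypothetical, not the library's API.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    commands: list  # e.g. SQL strings to run against PostgreSQL

@dataclass
class Pipeline:
    id: str
    tasks: list = field(default_factory=list)

    def add(self, task, upstreams=()):
        # Record the task together with its upstream dependencies.
        self.tasks.append((task, list(upstreams)))
        return self

pipeline = Pipeline(id='load_customers')
pipeline.add(Task(id='extract',
                  commands=["COPY customers FROM '/data/customers.csv'"]))
pipeline.add(Task(id='transform',
                  commands=["UPDATE customers SET name = trim(name)"]),
             upstreams=['extract'])

print([t.id for t, _ in pipeline.tasks])  # ['extract', 'transform']
```

Because the pipeline is ordinary Python data, it can be generated, diffed, and version-controlled like any other code.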
etl data-integration postgresql pipeline data

This file will give you a taste of what gulp does. Node already supports a lot of ES2015; to avoid compatibility problems, we suggest installing Babel and renaming your gulpfile.js to gulpfile.babel.js.
build stream system make tool asset pipeline series parallel streaming

Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala runtime libraries for training or prediction. Use the same Docker image from local laptop to production to avoid dependency surprises.
machine-learning artificial-intelligence tensorflow kubernetes elasticsearch cassandra ipython spark kafka netflixoss presto airflow pipeline jupyter-notebook zeppelin docker redis neural-network gpu microservices

Papermill is a tool for parameterizing, executing, and analyzing Jupyter notebooks. To parameterize your notebook, designate a cell with the tag parameters.
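To show what parameterization amounts to, here is a stdlib sketch of injecting values after a notebook's tagged parameters cell. This is a simplified illustration of the idea, not Papermill's implementation:

```python
# Sketch: find the cell tagged "parameters" in a notebook (plain JSON)
# and insert a cell after it that overrides the defaults -- roughly the
# idea behind parameterized execution. Simplified, not Papermill's code.
import json

notebook = {
    "cells": [
        {"cell_type": "code", "metadata": {"tags": ["parameters"]},
         "source": ["alpha = 0.1\n"]},
        {"cell_type": "code", "metadata": {},
         "source": ["print(alpha)\n"]},
    ]
}

def inject_parameters(nb, params):
    for i, cell in enumerate(nb["cells"]):
        if "parameters" in cell.get("metadata", {}).get("tags", []):
            injected = {
                "cell_type": "code",
                "metadata": {"tags": ["injected-parameters"]},
                "source": [f"{k} = {json.dumps(v)}\n" for k, v in params.items()],
            }
            # Placing it after the defaults means it wins at execution time.
            nb["cells"].insert(i + 1, injected)
            break
    return nb

nb = inject_parameters(notebook, {"alpha": 0.5})
print(nb["cells"][1]["source"])  # ['alpha = 0.5\n']
```

The tagged cell keeps sensible defaults for interactive use, while batch runs override them without editing the notebook by hand.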
jupyter notebooks notebook-generator nteract publishing pipeline

Develop powerful pipelines with the help of SDKs and simply check your code into a git repository. Gaia automatically clones your code repository, compiles your code to a binary and executes it on demand. All results are streamed back and formatted as user-friendly graphical output. Automation engineer, DevOps, SRE, cloud engineer, platform engineer: they all have one thing in common. Most tech people are not motivated to take up this work, and they are hard to recruit.
pipeline automation cplusplus build deployment kubernetes continuous-integration continuous-delivery continuous-testing devops devops-tools

A fault-tolerant, multi-worker, multi-stage job pipeline built on the Firebase Realtime Database. There may continue to be specific use cases for firebase-queue; however, if you are looking for a general-purpose, scalable queueing system for Firebase, building on top of Google Cloud Functions for Firebase is likely the ideal route.
job task queue worker firebase realtime pipeline

Vector is a high-performance, end-to-end (agent and aggregator) observability data pipeline that puts you in control of your observability data. Collect, transform, and route all your logs, metrics, and traces to any vendors you want today and any other vendors you may want tomorrow. Vector enables dramatic cost reduction, novel data enrichment, and data security where you need it, not where it is most convenient for your vendors. Additionally, it is open source and up to 10x faster than every alternative in the space. To get started, follow our quickstart guide or install Vector.
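The collect, transform, and route flow can be pictured with a small stdlib sketch. This is purely conceptual; Vector itself is configured declaratively rather than programmed like this:

```python
# Illustrative sketch of an observability pipeline: take collected events,
# enrich them in a transform step, and route them to per-sink buffers.
# Conceptual only, not Vector's API or configuration model.
events = [
    {"level": "error", "msg": "disk full", "host": "db1"},
    {"level": "info", "msg": "ok", "host": "web1"},
]

def transform(event):
    # Enrichment: attach a static field (e.g. the environment) to every event.
    return {**event, "env": "prod"}

sinks = {"alerts": [], "archive": []}

for event in map(transform, events):
    # Routing: errors additionally go to an alerting sink;
    # everything lands in the archive.
    if event["level"] == "error":
        sinks["alerts"].append(event)
    sinks["archive"].append(event)

print(len(sinks["alerts"]), len(sinks["archive"]))  # 1 2
```

The point of the pattern is that routing decisions live in the pipeline, not in each downstream vendor's agent.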
parser events router pipeline metrics vector logs stream-processing forwarder observability

httpx is a fast, multi-purpose HTTP toolkit that allows running multiple probers using the retryablehttp library; it is designed to maintain result reliability with an increased number of threads. This displays help for the tool, listing all the switches it supports.
http osint pipeline cybersecurity ssl-certificate bugbounty pentest-tool

Always know what to expect from your data. Great Expectations helps data teams eliminate pipeline debt through data testing, documentation, and profiling.
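A stdlib sketch of what a data "expectation" looks like when run against tabular rows. This is a conceptual illustration; the function name and result shape are invented here and Great Expectations' real API is far richer:

```python
# Conceptual sketch of data testing: declare a check, run it over rows,
# and report a success flag plus failure count. Illustrative only,
# not Great Expectations' API.
rows = [
    {"id": 1, "age": 34},
    {"id": 2, "age": -5},   # violates the range expectation
    {"id": 3, "age": 51},
]

def expect_values_between(rows, column, low, high):
    failures = [r for r in rows if not (low <= r[column] <= high)]
    return {"success": not failures, "unexpected_count": len(failures)}

result = expect_values_between(rows, "age", 0, 120)
print(result)  # {'success': False, 'unexpected_count': 1}
```

Running such checks on every pipeline stage is what turns silent data drift into a failing test, which is the sense in which "pipeline debt" gets eliminated.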
data-science pipeline exploratory-data-analysis eda data-engineering data-quality data-profiling datacleaner exploratory-analysis cleandata dataquality datacleaning mlops pipeline-tests pipeline-testing dataunittest data-unit-tests exploratorydataanalysis pipeline-debt data-profilers

New to MLJ? Start here. Want to integrate an existing machine learning model into the MLJ framework? Start here.
data-science machine-learning statistics pipeline clustering julia pipelines regression tuning classification ensemble-learning predictive-modeling tuning-parameters stacking

BlueKing Intelligent Cloud Standard OPS (SOPS).
flow pipeline devops-tools bpmn-engine bpmn2 blueking flowengine

A JavaScript application framework for machine learning and its engineering, with the mission of enabling JavaScript engineers to use the power of machine learning without any prerequisites, and the vision of leading the front-end field toward intelligent applications. Pipcook aims to become the JavaScript application framework for the cross-cutting area of machine learning and front-end interaction.
machine-learning js pipeline tensorflow

Kedro is an open-source Python framework for creating reproducible, maintainable and modular data science code. It borrows concepts from software engineering and applies them to machine-learning code; applied concepts include modularity, separation of concerns and versioning. Our Get Started guide contains full installation instructions, including how to set up Python virtual environments.
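The modularity idea can be sketched in a few lines of stdlib Python: pure functions wired together by named inputs and outputs, resolved against a data catalog. This is a simplified illustration of the pattern, not Kedro's actual implementation or API:

```python
# Stdlib sketch of the node/pipeline pattern: each node is a pure
# function with named inputs and one named output; a naive scheduler
# runs nodes as their inputs become available in the catalog.
# Simplified illustration, not Kedro's real implementation.
def node(func, inputs, output):
    return {"func": func, "inputs": inputs, "output": output}

def run(nodes, catalog):
    pending = list(nodes)
    while pending:
        for n in list(pending):
            if all(i in catalog for i in n["inputs"]):
                args = [catalog[i] for i in n["inputs"]]
                catalog[n["output"]] = n["func"](*args)
                pending.remove(n)
    return catalog

nodes = [
    node(lambda xs: [x * 2 for x in xs], ["raw"], "clean"),
    node(lambda xs: sum(xs), ["clean"], "total"),
]
catalog = run(nodes, {"raw": [1, 2, 3]})
print(catalog["total"])  # 12
```

Because nodes only name their inputs and outputs, the same functions can be reused across pipelines and tested in isolation, which is the separation-of-concerns point the description makes.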
pipeline pipelines-as-code hacktoberfest data-versioning data-abstraction mlops kedro cookiecutter-data-science

The Tekton Pipelines project provides Kubernetes-style resources for declaring CI/CD-style pipelines. Note that starting from the 0.27 release of Tekton, you need a cluster with Kubernetes version 1.19 or later.
kubernetes pipeline cdf hacktoberfest tekton

Gollum is an n:m multiplexer that gathers messages from different sources and broadcasts them to a set of destinations. Gollum originally started as a tool to MUL-tiplex LOG-files (read it backwards to get the name). It quickly evolved into a one-way router for all kinds of messages, not limited to just logs. Gollum is written in Go to make it scalable and easy to extend without the need for a scripting language.
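The n:m fan-in/fan-out idea fits in a few lines of stdlib Python. This is a conceptual sketch only; Gollum itself is written in Go and the source and sink names below are made up:

```python
# Conceptual n:m multiplexer: gather messages from several sources and
# broadcast every message to every destination. Source and destination
# names are hypothetical; this is not Gollum's code.
from itertools import chain

sources = {
    "syslog": ["boot ok"],
    "app": ["request handled", "cache miss"],
}
destinations = {"elasticsearch": [], "s3_archive": []}

# Fan-in: merge all sources into one stream; fan-out: every
# destination receives every message.
for message in chain.from_iterable(sources.values()):
    for sink in destinations.values():
        sink.append(message)

print(len(destinations["elasticsearch"]), len(destinations["s3_archive"]))  # 3 3
```

A real multiplexer adds per-stream filtering and formatting between fan-in and fan-out, so each destination can receive its own view of the merged stream.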
gollum logging logger stream log logs message-bus multiplexer pipeline messaging pub-sub message-queue