kafka-rest - Confluent REST Proxy for Kafka

The Kafka REST Proxy provides a RESTful interface to a Kafka cluster, making it easy to produce and consume messages, view the state of the cluster, and perform administrative actions without using the native Kafka protocol or clients.
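
A minimal sketch of producing one JSON record through the REST Proxy from Python, assuming a proxy listening on localhost:8082 and a topic named "test" (both placeholders); the v2 content type and request shape follow the proxy's documented produce API.

    import requests

    # Produce one JSON record to topic "test" via the REST Proxy (proxy URL is a placeholder).
    resp = requests.post(
        "http://localhost:8082/topics/test",
        headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
        json={"records": [{"value": {"greeting": "hello"}}]},
    )
    print(resp.status_code, resp.json())  # partition/offset details on success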

kafkacat - Generic command line non-JVM Apache Kafka producer and consumer

kafkacat is a generic non-JVM producer and consumer for Apache Kafka >=0.8; think of it as a netcat for Kafka. In producer mode, kafkacat reads messages from stdin, delimited with a configurable delimiter (-D, defaults to newline), and produces them to the provided Kafka cluster (-b), topic (-t) and partition (-p).
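
A minimal sketch of driving that producer mode from a Python script, assuming a kafkacat binary on the PATH, a broker at localhost:9092, and a topic named "test" (all placeholders); -P selects producer mode, and the remaining flags are the ones described above.

    import subprocess

    # Newline-delimited messages (the default -D delimiter) fed to kafkacat's stdin.
    messages = "first message\nsecond message\n"
    subprocess.run(
        ["kafkacat", "-P", "-b", "localhost:9092", "-t", "test"],
        input=messages.encode(),
        check=True,  # raise if kafkacat exits with an error
    )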

librdkafka - The Apache Kafka C/C++ library

Copyright (c) 2012-2016, Magnus Edenhill. librdkafka is a C library implementation of the Apache Kafka protocol, containing both Producer and Consumer support. It was designed with message delivery reliability and high performance in mind; current figures exceed 1 million msgs/second for the producer and 3 million msgs/second for the consumer.

Sarama - Go library for Apache Kafka 0.8, 0.9, and 0.10.

Package sarama is a pure Go client library for dealing with Apache Kafka (versions 0.8 and later). It includes a high-level API for easily producing and consuming messages, and a low-level API for controlling bytes on the wire when the high-level API is insufficient.

CAP - A .NET Standard library for distributed transactions, with EventBus functionality

CAP is a library based on .NET Standard that provides a solution for distributed transactions and also functions as an EventBus; it is lightweight, easy to use, and efficient. When building an SOA or microservice system, we usually need events to integrate the services, and the simple use of a message queue does not guarantee reliability. CAP adopts a local message table integrated with the current database to handle exceptions that may occur when the parts of a distributed system call each other, ensuring that event messages are not lost in any case.

Gizmo - A Microservice Toolkit from The New York Times

The Gizmo Microservice Toolkit provides packages for putting together server and pubsub daemons, with features including standardized configuration and logging, health check endpoints with configurable strategies, configuration for managing pprof endpoints and log levels, structured logging containing basic request information, useful metrics for endpoints, graceful shutdowns, basic interfaces to define expectations and vocabulary, and more.

Oryx 2 - Lambda architecture on Apache Spark and Apache Kafka for real-time, large-scale machine learning

The Oryx open source project provides infrastructure for lambda-architecture applications on top of Spark, Spark Streaming, and Kafka. On top of this, it provides further support for real-time, large-scale machine learning, as well as end-to-end applications of this support for common machine learning use cases such as recommendations, clustering, classification, and regression.

pykafka - Apache Kafka client for Python; high-level & low-level consumer/producer, with great performance

PyKafka is a cluster-aware Kafka>=0.8.2 client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka, and runs under Python 2.7+, Python 3.4+, and PyPy. PyKafka's primary goal is to provide a similar level of abstraction to the JVM Kafka client using idioms familiar to Python programmers and exposing the most Pythonic API possible.
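
A minimal sketch of producing and then reading back a message with PyKafka, assuming a broker at 127.0.0.1:9092 and a topic named "test" (both placeholders), following the library's high-level producer/consumer API.

    from pykafka import KafkaClient

    # Connect to the cluster (broker address is a placeholder).
    client = KafkaClient(hosts="127.0.0.1:9092")
    topic = client.topics[b"test"]

    # Produce one message synchronously.
    with topic.get_sync_producer() as producer:
        producer.produce(b"hello from pykafka")

    # Read messages back with the simple consumer.
    consumer = topic.get_simple_consumer()
    for message in consumer:
        if message is not None:
            print(message.offset, message.value)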

schema-registry - Schema registry for Kafka

Schema Registry provides a RESTful interface for storing and retrieving versioned Avro schemas for use with Kafka.
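
A minimal sketch of registering an Avro value schema via the registry's REST interface, assuming a registry at localhost:8081 and a subject named "test-value" (both placeholders); the content type is the registry's documented v1 media type.

    import json
    import requests

    # Avro schema for the value of a hypothetical "test" topic.
    schema = {
        "type": "record",
        "name": "Greeting",
        "fields": [{"name": "message", "type": "string"}],
    }

    # Register the schema under the "test-value" subject (registry URL is a placeholder).
    resp = requests.post(
        "http://localhost:8081/subjects/test-value/versions",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        json={"schema": json.dumps(schema)},
    )
    print(resp.json())  # e.g. {"id": 1} for the registered schema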

ruby-kafka - A Ruby client library for Apache Kafka

A Ruby client library for Apache Kafka, a distributed log and message bus. The focus of this library will be operational simplicity, with good logging and metrics that can make debugging issues easier. Although parts of this library work with Kafka 0.8 – specifically, the Producer API – it's being tested and developed against Kafka 0.9. The Consumer API is Kafka 0.9+ only.

kafka-net - Native C# client for Kafka queue servers.

Native C# client for Apache Kafka. Copyright 2014, James Roland, under Apache License, V2.0. See LICENSE file.

Nakadi - A distributed event bus that implements a RESTful API abstraction on top of Kafka-like queues

Nakadi is a distributed event bus broker that implements a RESTful API abstraction on top of Kafka-like queues. It provides abstract event delivery via a secured RESTful API, enables convenient development of event-driven applications and asynchronous microservices, and offers efficient low-latency event delivery.
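
A rough sketch of publishing a batch of events through such a RESTful API, assuming a Nakadi instance at localhost:8080, an already-created event type named "order.received", and a bearer token in the NAKADI_TOKEN environment variable (all assumptions); check the endpoint path and event schema against your deployment.

    import os
    import requests

    # Publish a batch (JSON array) of events to an existing event type; names and URL are placeholders.
    resp = requests.post(
        "http://localhost:8080/event-types/order.received/events",
        headers={"Authorization": f"Bearer {os.environ['NAKADI_TOKEN']}"},
        json=[{"order_number": "24873243241"}],
    )
    print(resp.status_code)  # 200 indicates the whole batch was submitted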

confluent-kafka-python - Confluent's Apache Kafka Python client

confluent-kafka-python is Confluent's Python client for Apache Kafka and the Confluent Platform. It offers high performance as a lightweight wrapper around librdkafka, a finely tuned C client.
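
A minimal sketch of producing a message with confluent-kafka-python, assuming a broker at localhost:9092 and a topic named "test" (both placeholders).

    from confluent_kafka import Producer

    # Configuration is passed straight through to librdkafka (broker address is a placeholder).
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def delivery_report(err, msg):
        # Invoked once per message to report delivery success or failure.
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")

    producer.produce("test", value=b"hello", callback=delivery_report)
    producer.flush()  # wait for outstanding deliveries before exiting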

php-rdkafka - Kafka client for PHP

PHP-rdkafka is a thin librdkafka binding providing a working PHP 5 / PHP 7 Kafka 0.8 / 0.9 / 0.10 client. It supports the high-level and low-level consumers, the producer, and the metadata APIs.

dnpipes - Distributed Named Pipes

Distributed Named Pipes (or: dnpipes) are essentially a distributed version of Unix named pipes, comparable to, for example, SQS in AWS or the Service Bus in Azure. Conceptually, we're dealing with a number of distributed processes (dpN). These distributed processes may be long-running (such as dp0 or dp5) or batch-oriented ones, for example dp3 or dp6. There are a number of situations where you want these distributed processes to communicate, very similar to what IPC enables you to do on a single machine. dnpipes are a simple mechanism to facilitate IPC between distributed processes; the project provides an interface specification as well as a reference implementation.

sarama-cluster - Cluster extensions for Sarama, the Go client library for Apache Kafka 0.9

Cluster extensions for Sarama, the Go client library for Apache Kafka 0.9 (and later). You need to install Ginkgo & Gomega to run tests. Please see http://onsi.github.io/ginkgo for more details.