kafka-node - Node.js client for Apache Kafka 0.8 and later.

  •    JavaScript

Kafka-node is a Node.js client with Zookeeper integration for Apache Kafka 0.8.1 and later. Follow the instructions on the Kafka wiki to build Kafka 0.8 and get a test broker up and running.

https://github.com/SOHU-Co/kafka-node

Dependencies:

async : ^2.5.0
binary : ~0.3.0
bl : ^1.2.0
buffer-crc32 : ~0.2.5
buffermaker : ~1.2.0
debug : ^2.1.3
denque : ^1.3.0
lodash : ^4.17.4
minimatch : ^3.0.2
nested-error-stacks : ^2.0.0
node-zookeeper-client : ~0.2.2
optional : ^0.1.3
retry : ^0.10.1
uuid : ^3.0.0

Related Projects

kafka-python - Python client for Apache Kafka

  •    Python

Python client for the Apache Kafka distributed stream processing system. kafka-python is designed to function much like the official Java client, with a sprinkling of Pythonic interfaces (e.g., consumer iterators). kafka-python is best used with newer brokers (0.9+), but is backwards-compatible with older versions (to 0.8.0). Some features will only be enabled on newer brokers. For example, fully coordinated consumer groups -- i.e., dynamic partition assignment to multiple consumers in the same group -- require the use of 0.9+ Kafka brokers. Supporting this feature for earlier broker releases would require writing and maintaining custom leadership election and membership/health check code (perhaps using ZooKeeper or Consul). For older brokers, you can achieve something similar by manually assigning different partitions to each consumer instance with config management tools like Chef, Ansible, etc. This approach will work fine, though it does not support rebalancing on failures. See <https://kafka-python.readthedocs.io/en/master/compatibility.html> for more details.
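
A minimal sketch of the two consumption styles described above, assuming a local broker at localhost:9092 and a placeholder topic name (not taken from the kafka-python docs):

    from kafka import KafkaConsumer, TopicPartition

    # 0.9+ brokers: coordinated consumer group with dynamic partition assignment
    consumer = KafkaConsumer('my-topic',
                             group_id='my-group',
                             bootstrap_servers='localhost:9092')
    for message in consumer:  # Pythonic consumer iterator
        print(message.topic, message.partition, message.offset, message.value)

    # Pre-0.9 brokers: assign partitions to this instance explicitly instead
    static_consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
    static_consumer.assign([TopicPartition('my-topic', 0),
                            TopicPartition('my-topic', 1)])
    for message in static_consumer:
        print(message.value)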

librdkafka - The Apache Kafka C/C++ library

  •    C

Copyright (c) 2012-2016, Magnus Edenhill. librdkafka is a C library implementation of the Apache Kafka protocol, containing both Producer and Consumer support. It was designed with message delivery reliability and high performance in mind; current figures exceed 1 million msgs/second for the producer and 3 million msgs/second for the consumer.

kq - Kafka-based Job Queue for Python

  •    Python

KQ (Kafka Queue) is a lightweight Python library which lets you queue and execute jobs asynchronously using Apache Kafka. It uses kafka-python under the hood. Depending on your environment, installing it may require sudo.
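
A rough sketch of enqueueing and processing a job; the exact constructor arguments differ between KQ versions, and the broker address, topic, and add function here are placeholders:

    from kafka import KafkaConsumer, KafkaProducer
    from kq import Queue, Worker

    def add(a, b):          # placeholder job function
        return a + b

    # Producer side: serialize the call and publish it to a Kafka topic
    producer = KafkaProducer(bootstrap_servers='127.0.0.1:9092')
    queue = Queue(topic='my-jobs', producer=producer)
    queue.enqueue(add, 1, 2)

    # Worker side: poll the topic and execute the enqueued jobs
    consumer = KafkaConsumer(bootstrap_servers='127.0.0.1:9092',
                             group_id='my-workers',
                             auto_offset_reset='latest')
    worker = Worker(topic='my-jobs', consumer=consumer)
    worker.start()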

kafkacat - Generic command line non-JVM Apache Kafka producer and consumer

  •    C

kafkacat is a generic non-JVM producer and consumer for Apache Kafka >= 0.8; think of it as a netcat for Kafka. In producer mode, kafkacat reads messages from stdin, delimited by a configurable delimiter (-D, defaults to newline), and produces them to the provided Kafka cluster (-b), topic (-t) and partition (-p).
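
For illustration, a hypothetical invocation (broker address, topic, and input file are placeholders; -P and -C select producer and consumer mode):

    # produce one message per input line to partition 0
    kafkacat -P -b localhost:9092 -t my-topic -p 0 < messages.txt

    # consume the same topic back to stdout
    kafkacat -C -b localhost:9092 -t my-topic -p 0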


ruby-kafka - A Ruby client library for Apache Kafka

  •    Ruby

A Ruby client library for Apache Kafka, a distributed log and message bus. The focus of this library is operational simplicity, with good logging and metrics that can make debugging issues easier. Although parts of this library work with Kafka 0.8 – specifically, the Producer API – it is being tested and developed against Kafka 0.9. The Consumer API is Kafka 0.9+ only.

pykafka - Apache Kafka client for Python; high-level & low-level consumer/producer, with great performance

  •    Python

PyKafka is a cluster-aware Kafka >= 0.8.2 client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka, and runs under Python 2.7+, Python 3.4+, and PyPy. PyKafka's primary goal is to provide a similar level of abstraction to the JVM Kafka client using idioms familiar to Python programmers and exposing the most Pythonic API possible.
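
A minimal produce-and-consume sketch, assuming a local broker at 127.0.0.1:9092 and a placeholder topic name (adapted from the style of PyKafka's usage examples, not copied from them):

    from pykafka import KafkaClient

    client = KafkaClient(hosts="127.0.0.1:9092")
    topic = client.topics[b"my.topic"]

    # Produce a message synchronously
    with topic.get_sync_producer() as producer:
        producer.produce(b"hello from pykafka")

    # Read it back with the simple consumer
    consumer = topic.get_simple_consumer()
    for message in consumer:
        if message is not None:
            print(message.offset, message.value)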

confluent-kafka-go - Confluent's Apache Kafka Golang client

  •    Go

confluent-kafka-go is Confluent's Golang client for Apache Kafka and the Confluent Platform. It offers high performance as a lightweight wrapper around librdkafka, a finely tuned C client.

spring-integration-kafka

  •    Java

The Spring Integration Kafka extension project provides inbound and outbound channel adapters for Apache Kafka. Apache Kafka is a distributed publish-subscribe messaging system that is designed for high throughput (terabytes of data) and low latency (milliseconds). For more information on Kafka and its design goals, see the Kafka main page. Starting with version 2.0, this project is a complete rewrite based on the new spring-kafka project, which uses the pure Java Producer and Consumer clients provided by Kafka 0.9.x.x and 0.10.x.x.

kafka - Load-balancing, resuming Kafka consumer for Go, backed by ZooKeeper.

  •    Go

Kafka libraries, tools and example applications built on top of the sarama package. The following tools can be useful for discovery, testing, and benchmarking. They also serve as examples of how to use Sarama.

kafka-monitor - Monitor the availability of Kafka clusters with generated messages.

  •    Java

Kafka Monitor is a framework to implement and execute long-running Kafka system tests in a real cluster. It complements Kafka's existing system tests by capturing potential bugs or regressions that are only likely to occur after a prolonged period of time or with low probability. Moreover, it allows you to monitor a Kafka cluster using end-to-end pipelines to obtain a number of derived vital stats such as end-to-end latency, service availability and message loss rate. You can easily deploy Kafka Monitor to test and monitor your Kafka cluster without requiring any change to your application. Kafka Monitor can automatically create the monitor topic with the specified config and increase the partition count of the monitor topic to ensure partition# >= broker#. It can also reassign partitions and trigger preferred leader election to ensure that each broker acts as leader of at least one partition of the monitor topic. This allows Kafka Monitor to detect performance issues on every broker without requiring users to manually manage the partition assignment of the monitor topic.

kafka-stack-docker-compose - docker compose files to create a fully working kafka stack

  •    Shell

This replicates real deployment configurations as closely as possible, with the ZooKeeper servers and Kafka servers all distinct from each other. It solves the networking hurdles that come with Docker and docker-compose, and is compatible cross-platform. This configuration fits most development requirements.

kafka-websocket - Websocket server interface for Kafka distributed message broker

  •    Java

kafka-websocket is a simple websocket server interface to the Kafka distributed message broker. It supports clients subscribing to topics, including multiple topics at once, and sending messages to topics. Messages may be either text or binary; the format for each is described below. A client may produce and consume messages on the same connection.

cruise-control - Cruise Control is the first of its kind to fully automate dynamic workload rebalancing and self-healing of a Kafka cluster

  •    Java

Cruise Control is a product that helps run Apache Kafka clusters at large scale. Due to the popularity of Apache Kafka, many companies have bigger and bigger Kafka clusters. At LinkedIn, we have 1800+ Kafka brokers, which means broker deaths are an almost daily occurrence and balancing the workload of Kafka also becomes a big overhead. Cruise Control is designed to address this operational scalability issue.

php-rdkafka - Kafka client for PHP

  •    C

PHP-rdkafka is a thin librdkafka binding providing a working PHP 5 / PHP 7 Kafka 0.8 / 0.9 / 0.10 client. It supports the high-level and low-level consumers, the producer, and the metadata APIs.

docker-kafka - Kafka (and Zookeeper) in Docker

  •    Shell

This repository provides everything you need to run Kafka in Docker. For convenience, it also contains a packaged proxy that can be used to get data from a legacy Kafka 7 cluster into a dockerized Kafka 8.

hermes - Fast and reliable message broker built on top of Kafka.

  •    Java

Hermes is an asynchronous message broker built on top of Kafka. We provide a reliable, fault-tolerant REST interface for message publishing and adaptive push mechanisms for message sending. Questions? We are on Gitter.

Burrow - Kafka Consumer Lag Checking

  •    Go

Burrow is a monitoring companion for Apache Kafka that provides consumer lag checking as a service without the need for specifying thresholds. It monitors committed offsets for all consumers and calculates the status of those consumers on demand. An HTTP endpoint is provided to request status on demand, as well as provide other Kafka cluster information. There are also configurable notifiers that can send status out via email or HTTP calls to another service. Burrow is written in Go, so before you get started, you should install and set up Go.

Maxwell's daemon - A MySQL-to-JSON Kafka producer

  •    Java

This is Maxwell's daemon, an application that reads MySQL binlogs and writes row updates to Kafka as JSON. Maxwell has a low operational bar and produces a consistent, easy-to-ingest stream of updates. It allows you to easily "bolt on" some of the benefits of stream processing systems without going through your entire code base to add (unreliable) instrumentation points.
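
For illustration only (not taken from a real binlog), a single row insert typically yields a JSON record along these lines; the exact set of fields varies by Maxwell version and configuration:

    {
      "database": "shop",
      "table": "orders",
      "type": "insert",
      "ts": 1477053217,
      "data": { "id": 42, "status": "paid" }
    }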