go-kafka-example - Golang Kafka consumer and producer example


Contributions to go-kafka-example are welcome; check the LICENSE file for more info.

https://github.com/vsouza/go-kafka-example

Related Projects

goka - Goka is a compact yet powerful distributed stream processing library for Apache Kafka written in Go

  •    Go

Goka is a compact yet powerful distributed stream processing library for Apache Kafka written in Go. Goka aims to reduce the complexity of building highly scalable and highly available microservices. Goka extends the concept of Kafka consumer groups by binding a state table to them and persisting them in Kafka. Goka provides sane defaults and a pluggable architecture.
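
A minimal sketch of the state-table idea, assuming a broker at localhost:9092 and hypothetical topic and group names; it follows the DefineGroup/NewProcessor pattern from goka's documentation.

```go
package main

import (
	"context"
	"log"

	"github.com/lovoo/goka"
	"github.com/lovoo/goka/codec"
)

func main() {
	brokers := []string{"localhost:9092"} // assumed broker address

	// The group graph binds an input topic and a persisted state table
	// (itself backed by a Kafka topic) to a single consumer group.
	g := goka.DefineGroup("example-group",
		goka.Input("example-stream", new(codec.String), func(ctx goka.Context, msg interface{}) {
			var count int64
			if v := ctx.Value(); v != nil {
				count = v.(int64)
			}
			count++
			ctx.SetValue(count) // written to the group's state table in Kafka
		}),
		goka.Persist(new(codec.Int64)),
	)

	p, err := goka.NewProcessor(brokers, g)
	if err != nil {
		log.Fatal(err)
	}
	if err := p.Run(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```

Because the table lives in Kafka, a restarted or rescheduled instance can recover its state without external storage.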

librdkafka - The Apache Kafka C/C++ library

  •    C

Copyright (c) 2012-2016, Magnus Edenhill. librdkafka is a C library implementation of the Apache Kafka protocol, containing both Producer and Consumer support. It was designed with message delivery reliability and high performance in mind; current figures exceed 1 million msgs/second for the producer and 3 million msgs/second for the consumer.

kafka-python - Python client for Apache Kafka

  •    Python

Python client for the Apache Kafka distributed stream processing system. kafka-python is designed to function much like the official Java client, with a sprinkling of Pythonic interfaces (e.g., consumer iterators). kafka-python is best used with newer brokers (0.9+), but is backwards-compatible with older versions (to 0.8.0). Some features will only be enabled on newer brokers. For example, fully coordinated consumer groups -- i.e., dynamic partition assignment to multiple consumers in the same group -- require 0.9+ Kafka brokers. Supporting this feature for earlier broker releases would require writing and maintaining custom leadership election and membership/health check code (perhaps using ZooKeeper or Consul). For older brokers, you can achieve something similar by manually assigning different partitions to each consumer instance with configuration management tools like Chef or Ansible. This approach works fine, though it does not support rebalancing on failures. See <https://kafka-python.readthedocs.io/en/master/compatibility.html> for more details.

confluent-kafka-go - Confluent's Apache Kafka Golang client

  •    Go

confluent-kafka-go is Confluent's Golang client for Apache Kafka and the Confluent Platform. It offers high performance as a lightweight wrapper around librdkafka, a finely tuned C client.
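
A minimal produce-and-flush sketch of the wrapper in action, assuming a broker at localhost:9092 and a hypothetical topic name; the ConfigMap keys map onto librdkafka configuration properties.

```go
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Assumed broker address.
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	topic := "example-topic" // hypothetical topic
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("hello from confluent-kafka-go"),
	}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Flush waits for outstanding delivery reports and returns the
	// number of messages still queued after the timeout (in ms).
	remaining := p.Flush(15 * 1000)
	fmt.Printf("%d message(s) still in queue\n", remaining)
}
```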

kafkacat - Generic command line non-JVM Apache Kafka producer and consumer

  •    C

kafkacat is a generic non-JVM producer and consumer for Apache Kafka >= 0.8; think of it as a netcat for Kafka. In producer mode, kafkacat reads messages from stdin, delimited by a configurable delimiter (-D, defaults to newline), and produces them to the specified Kafka cluster (-b), topic (-t), and partition (-p).


kq - Kafka-based Job Queue for Python

  •    Python

KQ (Kafka Queue) is a lightweight Python library which lets you queue and execute jobs asynchronously using Apache Kafka. It uses kafka-python under the hood.

kafka - Load-balancing, resuming Kafka consumer for Go, backed by ZooKeeper.

  •    Go

Kafka libraries, tools, and example applications built on top of the sarama package. The following tools can be useful for discovery, testing, and benchmarking; they also serve as examples of how to use Sarama.
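
Since everything here sits on top of sarama, a minimal synchronous-producer sketch gives a feel for the underlying API; the broker address and topic name below are assumptions.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by SyncProducer

	// Assumed broker address.
	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "example-topic", // hypothetical topic
		Value: sarama.StringEncoder("hello from sarama"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("stored at partition %d, offset %d", partition, offset)
}
```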

ruby-kafka - A Ruby client library for Apache Kafka

  •    Ruby

A Ruby client library for Apache Kafka, a distributed log and message bus. The focus of this library is operational simplicity, with good logging and metrics that can make debugging issues easier. Although parts of this library work with Kafka 0.8 – specifically, the Producer API – it's being tested and developed against Kafka 0.9. The Consumer API is Kafka 0.9+ only.

Maxwell's daemon - A MySQL-to-JSON Kafka producer

  •    Java

This is Maxwell's daemon, an application that reads MySQL binlogs and writes row updates to Kafka as JSON. Maxwell has a low operational bar and produces a consistent, easy to ingest stream of updates. It allows you to easily "bolt on" some of the benefits of stream processing systems without going through your entire code base to add (unreliable) instrumentation points.

pykafka - Apache Kafka client for Python; high-level & low-level consumer/producer, with great performance

  •    Python

PyKafka is a cluster-aware Kafka >= 0.8.2 client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka, and runs under Python 2.7+, Python 3.4+, and PyPy. PyKafka's primary goal is to provide a similar level of abstraction to the JVM Kafka client, using idioms familiar to Python programmers and exposing the most Pythonic API possible.

spring-integration-kafka

  •    Java

The Spring Integration Kafka extension project provides inbound and outbound channel adapters for Apache Kafka. Apache Kafka is a distributed publish-subscribe messaging system that is designed for high throughput (terabytes of data) and low latency (milliseconds). For more information on Kafka and its design goals, see the Kafka main page. Starting from version 2.0, this project is a complete rewrite based on the new spring-kafka project, which uses the pure Java Producer and Consumer clients provided by Kafka 0.9.x.x and 0.10.x.x.

kafka-node - Node.js client for Apache Kafka 0.8 and later.

  •    JavaScript

Kafka-node is a Node.js client with Zookeeper integration for Apache Kafka 0.8.1 and later. Follow the instructions on the Kafka wiki to build Kafka 0.8 and get a test broker up and running.

winton-kafka-streams - A Python implementation of Apache Kafka Streams

  •    Python

Implementation of Apache Kafka's Streams API in Python. Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation, written in Scala and Java. Kafka's Streams API was added for building stream processing applications using Apache Kafka. Applications built with Kafka's Streams API do not require any setup beyond the provision of a Kafka cluster.

Debezium - Stream changes from your databases.

  •    Java

Debezium is a distributed platform that turns your existing databases into event streams, so applications can see and respond immediately to each row-level change in the databases. Debezium is built on top of Apache Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, from where your application consumes them. This makes it possible for your application to easily consume all of the events correctly and completely.

faust - Python Stream Processing

  •    Python

Faust is a stream processing library, porting the ideas from Kafka Streams to Python. It is used at Robinhood to build high performance distributed systems and real-time data pipelines that process billions of events every day.

Burrow - Kafka Consumer Lag Checking

  •    Go

Burrow is a monitoring companion for Apache Kafka that provides consumer lag checking as a service without the need for specifying thresholds. It monitors committed offsets for all consumers and calculates the status of those consumers on demand. An HTTP endpoint is provided to request status on demand, as well as provide other Kafka cluster information. There are also configurable notifiers that can send status out via email or HTTP calls to another service. Burrow is written in Go, so before you get started, you should install and set up Go.
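
Because status is served over HTTP, checking a group's lag can be a plain GET. The sketch below uses Burrow's default port with hypothetical cluster and group names, and the path follows the shape of Burrow's v3 HTTP API; verify both against your deployment.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical cluster ("local") and consumer group ("example-group");
	// the port and path are assumptions to check against your Burrow config.
	url := "http://localhost:8000/v3/kafka/local/consumer/example-group/lag"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // JSON status report for the consumer group
}
```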

Kafka - A high-throughput distributed messaging system

  •    Java

Kafka provides a publish-subscribe solution that can handle all activity stream data and processing on a consumer-scale web site. This kind of activity (page views, searches, and other user actions) is a key ingredient in many of the social features on the modern web. Due to the throughput requirements, this data is typically handled by "logging" and ad hoc log aggregation solutions; such ad hoc solutions are also a viable way of providing logging data to Hadoop.

watermill - Building event-driven applications the easy way in Go.

  •    Go

Watermill is a Go library for working efficiently with message streams. It is intended for building event-driven applications, enabling event sourcing, RPC over messages, sagas, and basically whatever else comes to mind. You can use conventional pub/sub implementations like Kafka or RabbitMQ, but also HTTP or a MySQL binlog if that fits your use case. Note: Watermill should run reliably in a production environment, but it is still under heavy development and the public API may change before the 1.0.0 release.
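
As a taste of the API, here is a minimal publish/subscribe sketch using Watermill's in-memory GoChannel Pub/Sub with a made-up topic name; Kafka-backed publishers and subscribers implement the same interfaces, so the surrounding code stays the same.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ThreeDotsLabs/watermill"
	"github.com/ThreeDotsLabs/watermill/message"
	"github.com/ThreeDotsLabs/watermill/pubsub/gochannel"
)

func main() {
	// In-memory Pub/Sub; a Kafka Pub/Sub could be substituted here.
	pubSub := gochannel.NewGoChannel(gochannel.Config{}, watermill.NewStdLogger(false, false))

	messages, err := pubSub.Subscribe(context.Background(), "example.topic")
	if err != nil {
		log.Fatal(err)
	}

	if err := pubSub.Publish("example.topic", message.NewMessage(watermill.NewUUID(), []byte("hello"))); err != nil {
		log.Fatal(err)
	}

	msg := <-messages
	fmt.Printf("received: %s\n", string(msg.Payload))
	msg.Ack()
}
```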




