Copyright (c) 2012-2016, Magnus Edenhill. librdkafka is a C library implementation of the Apache Kafka protocol, containing both Producer and Consumer support. It was designed with message delivery reliability and high performance in mind; current figures exceed 1 million msgs/second for the producer and 3 million msgs/second for the consumer.
Tags: kafka kafka-consumer apache-kafka high-performance librdkafka kafka-producer c-plus-plus consumer kafka-client kafka-library

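librdkafka itself is used from C or C++, but the produce-and-acknowledge flow it implements can be sketched from Python via the confluent-kafka binding, which is built on librdkafka. This is a minimal illustration only; the broker address and topic name are placeholders.

    from confluent_kafka import Producer

    # Assumes a broker reachable on localhost:9092; adjust for your cluster.
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def on_delivery(err, msg):
        # Invoked from poll()/flush() once the broker acknowledges (or rejects) the message.
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")

    producer.produce("test-topic", value=b"hello from librdkafka", callback=on_delivery)
    producer.poll(0)   # serve delivery callbacks
    producer.flush()   # block until all outstanding messages are delivered
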
Code examples that show how to integrate Apache Kafka 0.8+ with Apache Storm 0.9+ and Apache Spark 1.1+ while using Apache Avro as the data serialization format. Take a look at the Kafka Streams code examples at https://github.com/confluentinc/examples.
Tags: apache-kafka kafka apache-storm storm spark apache-spark integration avro apache-avro

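The project's examples target Storm and Spark, but the core idea of using Avro as the wire format for Kafka message values can be sketched in a few lines of Python with fastavro. The schema and field names below are made up for illustration; the project defines its own Avro schemas.

    import io

    from fastavro import parse_schema, schemaless_reader, schemaless_writer

    # A made-up record schema purely for illustration.
    schema = parse_schema({
        "type": "record",
        "name": "Tweet",
        "fields": [
            {"name": "username", "type": "string"},
            {"name": "text", "type": "string"},
        ],
    })

    def encode(record: dict) -> bytes:
        # Serialize a record to Avro binary, e.g. before producing it to a Kafka topic.
        buf = io.BytesIO()
        schemaless_writer(buf, schema, record)
        return buf.getvalue()

    def decode(payload: bytes) -> dict:
        # Deserialize an Avro-encoded Kafka message value back into a dict.
        return schemaless_reader(io.BytesIO(payload), schema)

    print(decode(encode({"username": "alice", "text": "hello kafka"})))
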
PyKafka is a cluster-aware Kafka>=0.8.2 client for Python. It includes Python implementations of Kafka producers and consumers, which are optionally backed by a C extension built on librdkafka, and runs under Python 2.7+, Python 3.4+, and PyPy. PyKafka's primary goal is to provide a similar level of abstraction to the JVM Kafka client using idioms familiar to Python programmers and exposing the most Pythonic API possible.
Tags: kafka c-extension apache-kafka kafka-client kafka-library

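A minimal produce/consume round trip with PyKafka might look like the sketch below; the local broker address and topic name are assumptions. PyKafka's producer and consumer factories also accept a use_rdkafka flag to opt into the librdkafka-backed C extension mentioned above.

    from pykafka import KafkaClient

    # Assumes a local broker; PyKafka addresses topics by bytes name.
    client = KafkaClient(hosts="127.0.0.1:9092")
    topic = client.topics[b"test-topic"]

    # Produce synchronously so delivery errors surface immediately.
    with topic.get_sync_producer() as producer:
        for i in range(3):
            producer.produce(f"message {i}".encode("utf-8"))

    # Read the messages back; the timeout stops iteration once the topic is drained.
    consumer = topic.get_simple_consumer(consumer_timeout_ms=1000)
    for message in consumer:
        if message is not None:
            print(message.offset, message.value)
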
Debezium is a distributed platform that turns your existing databases into event streams, so applications can see and respond immediately to each row-level change in the databases. Debezium is built on top of Apache Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, from where your application consumes them. This makes it possible for your application to easily consume all of the events correctly and completely.
Tags: change-data-capture kafka-connect apache-kafka debezium cdc database kafka kafka-producer database-migration events

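Applications typically read Debezium's change events straight off the Kafka topics it writes. The sketch below uses kafka-python; the topic name follows the usual server.schema.table pattern but is a placeholder, and the envelope fields shown (op, before, after) assume the default JSON converter.

    import json

    from kafka import KafkaConsumer

    # "dbserver1.inventory.customers" is a placeholder topic name; Debezium
    # writes one topic per captured table.
    consumer = KafkaConsumer(
        "dbserver1.inventory.customers",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
        auto_offset_reset="earliest",
    )

    for record in consumer:
        if record.value is None:        # tombstone message following a delete
            continue
        change = record.value.get("payload", record.value)
        # "op" is c(reate), u(pdate), d(elete) or r(ead/snapshot); "before" and
        # "after" hold the row state on either side of the change.
        print(change["op"], change.get("before"), change.get("after"))
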
Framework used to simplify the development of Apache Kafka based Ruby applications. It allows programmers to use an approach similar to standard HTTP conventions (params and params_batch) when working with asynchronous Kafka messages.
Tags: karafka-framework kafka-topic kafka kafka-client kafka-clients kafka-producer kafka-consumer apache-kafka kafka-message karafka-application sidekiq rails kafka-ruby ruby-on-rails rubygems rubygem ruby-library kafka-library

You can download kt via the Releases section.
Tags: kafka json cli apache-kafka

Wirbelsturm is a Vagrant- and Puppet-based tool to perform 1-click local and remote deployments, with a focus on big data related infrastructure. Wirbelsturm's goal is to make tasks such as "I want to deploy a multi-node Storm cluster" simple, easy, and fun.
Tags: vagrant puppet kafka apache-kafka storm apache-storm spark apache-spark

KQ (Kafka Queue) is a lightweight Python library which lets you queue and execute jobs asynchronously using Apache Kafka. It uses kafka-python under the hood. Depending on your environment, you may need to use sudo when installing it.
Tags: kafka job-queue async kafka-client kafka-consumer kafka-producer python3 python2 python-library python-3 python-2 asynchronous queueing jobqueue apache-kafka worker-queue producer-consumer serialization python-3-5

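In rough outline, KQ splits work between an enqueuer and a worker process, as in the sketch below. Constructor arguments have changed across KQ releases, so treat the exact signatures as assumptions; the topic and group names are placeholders.

    import math

    from kafka import KafkaConsumer, KafkaProducer
    from kq import Queue, Worker

    # Producer side: enqueue a function call as a job on a Kafka topic.
    producer = KafkaProducer(bootstrap_servers="127.0.0.1:9092")
    queue = Queue(topic="jobs", producer=producer)
    queue.enqueue(math.sqrt, 4)   # serialized and published to the "jobs" topic

    # Worker side (normally a separate process): consume and execute jobs.
    consumer = KafkaConsumer(
        bootstrap_servers="127.0.0.1:9092",
        group_id="job-workers",
        auto_offset_reset="latest",
    )
    worker = Worker(topic="jobs", consumer=consumer)
    worker.start()   # blocks, executing jobs as they arrive
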
Spring Boot Backend for Kafka Sprout
Tags: typescript kafka spring-boot apache developer-tools apache-kafka

Like my work? I am Principal Consultant at Data Syndrome, a consultancy offering assistance and training with building full-stack analytics products, applications and systems. Find us on the web at datasyndrome.com. There is now a video course using code from chapter 8, Realtime Predictive Analytics with Kafka, PySpark, Spark MLlib and Spark Streaming. Check it out now at datasyndrome.com/video.
Tags: data-syndrome data data-science analytics apache-spark apache-kafka kafka spark predictive-analytics machine-learning machine-learning-algorithms airflow python-3 python3 amazon-ec2 agile-data agile-data-science vagrant amazon-web-services

Gem used to send messages to Kafka in an easy way. The message that you want to send should be either binary or stringified (to_s, to_json, etc.).
Tags: waterdrop ruby-kafka karafka-framework karafka karafka-application kafka apache-kafka rubygems rubygem

kafka-connect-hdfs is a Kafka Connector for copying data between Kafka and Hadoop HDFS. Documentation for this connector can be found here.
Tags: confluent kafka apache-kafka kafka-connect-hdfs kafka-connector hadoop hdfs big-data streaming

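As a sink connector, kafka-connect-hdfs is configured through Kafka Connect rather than application code. The sketch below registers a hypothetical instance via Connect's REST API from Python; the worker URL, HDFS URL, topic, and flush size are placeholder values loosely modeled on the connector's quickstart settings.

    import json

    import requests

    # A minimal HDFS sink definition; adjust names and settings for your setup.
    connector = {
        "name": "hdfs-sink",
        "config": {
            "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
            "tasks.max": "1",
            "topics": "test-topic",
            "hdfs.url": "hdfs://localhost:9000",
            "flush.size": "3",
        },
    }

    # Submit the connector to a Kafka Connect worker (default REST port 8083).
    resp = requests.post(
        "http://localhost:8083/connectors",
        headers={"Content-Type": "application/json"},
        data=json.dumps(connector),
    )
    resp.raise_for_status()
    print(resp.json())
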
The methods emit and respond are themselves side-effect free. They only generate Emission values that are interpreted by the EventSourcing stage.
Tags: event-sourcing functional-programming reactive-programming apache-kafka akka-persistence akka-streams

This list is for anyone who wishes to learn about Apache Kafka but does not have a starting point. You can help by sending Pull Requests to add more information.
Tags: kafka streaming-data data-pipeline stream-processing apache-kafka apache-spark

A Kafka metric sink for Apache Spark
Tags: apache-spark apache-kafka metric-sink metrics-gathering metrics kafka kafka-producer spark

cppkafka allows C++ applications to consume and produce messages using the Apache Kafka protocol. The library is built on top of librdkafka and provides a high-level API that uses modern C++ features to make it easier to write code while keeping the wrapper's performance overhead to a minimum. cppkafka is a high-level C++ wrapper for rdkafka, aiming to allow using rdkafka in a simple, less error-prone way.
Tags: kafka rdkafka librdkafka apache-kafka

Docker image for the Kafka message broker (0.10.x, 0.11.x, and 1.0.x), including Zookeeper.
Tags: kafka zookeeper docker apache-kafka

This repository contains code examples for the Kafka Streams API.
Tags: knoldus sbt kafka apache-kafka kafka-streams examples

This will create a single-node Kafka broker (listening on localhost:9092) and a local ZooKeeper instance, and create the topic test-topic with a replication factor of 1 and 1 partition.
Tags: kafka docker apache-kafka database big-data data-science

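Once the container is up, a quick smoke test from Python using kafka-python (an assumption; any Kafka client works) can confirm that the broker and the pre-created test-topic are reachable.

    from kafka import KafkaConsumer, KafkaProducer

    # Produce one message against the broker's advertised listener.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("test-topic", b"hello from the single-node broker")
    producer.flush()

    # Read it back from the beginning of the topic.
    consumer = KafkaConsumer(
        "test-topic",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,     # stop iterating after 5s of inactivity
    )
    for record in consumer:
        print(record.partition, record.offset, record.value)
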