stream-reactor - Streaming reference architecture for ETL with Kafka and Kafka-Connect

  •    Scala

A collection of components to build a real-time ingestion pipeline. Lenses offers SQL (for data browsing and Kafka Streams), Kafka Connect connector management, cluster monitoring, and more.
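
Most of the stream-reactor connectors are configured through KCQL, the project's SQL-like configuration syntax. Below is a minimal sketch of a sink configuration, assuming the Cassandra sink; the connector class and the connect.cassandra.* property names are recalled from the project's documentation and may differ between releases:

    # Hedged sketch: property names may vary by connector and release.
    cat > cassandra-sink.properties <<'EOF'
    name=cassandra-sink
    connector.class=com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector
    topics=orders
    connect.cassandra.contact.points=cassandra
    connect.cassandra.key.space=demo
    connect.cassandra.kcql=INSERT INTO orders_table SELECT * FROM orders
    EOF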

Debezium - Stream changes from your databases.

  •    Java

Debezium is a distributed platform that turns your existing databases into event streams, so applications can see and respond immediately to each row-level change in the databases. Debezium is built on top of Apache Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, from where your application consumes them. This makes it possible for your application to easily consume all of the events correctly and completely.
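
A minimal sketch of registering a Debezium MySQL connector with a Kafka Connect worker's REST API; the host names, credentials, and "inventory" database are placeholders, and some property names (e.g. database.server.name, database.whitelist) changed in later Debezium versions:

    # Placeholders throughout; property names vary across Debezium versions.
    curl -X POST -H "Content-Type: application/json" \
      http://localhost:8083/connectors -d '{
        "name": "inventory-connector",
        "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "database.hostname": "mysql",
          "database.port": "3306",
          "database.user": "debezium",
          "database.password": "dbz",
          "database.server.id": "184054",
          "database.server.name": "dbserver1",
          "database.whitelist": "inventory"
        }
      }'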

strimzi-kafka-operator - Apache Kafka running on Kubernetes and OpenShift

  •    Java

Strimzi provides a way to run an Apache Kafka cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project. Documentation for the current master branch, as well as for all releases, can be found on our website.
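
Once the Strimzi operator is installed, a cluster is declared as a Kafka custom resource. A minimal sketch, assuming the v1beta2 API (field names vary between Strimzi releases):

    # Hedged sketch of a Kafka custom resource; assumes the Strimzi
    # operator is already running and watching this namespace.
    kubectl apply -f - <<'EOF'
    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    EOF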

cp-docker-images - Docker images for Confluent Platform.

  •    Python

Docker images for deploying and running the Confluent Platform. The images are currently available on Docker Hub, and only for Confluent Platform 3.0.1 and later. Full documentation for using the images can be found here.
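
A minimal sketch of starting a single-node ZooKeeper and Kafka from these images; the ports, hostnames, and tag are illustrative:

    # Illustrative single-node setup; the environment variable names
    # follow the cp-docker-images convention (KAFKA_* maps to broker
    # properties), but check the docs for your Confluent Platform version.
    docker run -d --name zookeeper -p 2181:2181 \
      -e ZOOKEEPER_CLIENT_PORT=2181 \
      confluentinc/cp-zookeeper:3.0.1
    docker run -d --name kafka -p 9092:9092 --link zookeeper \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
      confluentinc/cp-kafka:3.0.1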




kafka-connect-storage-cloud - Kafka Connect suite of connectors for Cloud storage (currently including Amazon S3)

  •    Java

Documentation for this connector can be found here. To build a development version, you'll need a recent version of Kafka. You can build kafka-connect-storage-cloud with Maven using the standard lifecycle phases.
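
A build sketch using the standard Maven lifecycle, assuming a local checkout (tests skipped for a quick development build):

    git clone https://github.com/confluentinc/kafka-connect-storage-cloud.git
    cd kafka-connect-storage-cloud
    mvn clean package -DskipTests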

kafka-connect-storage-common - Shared software among connectors that target distributed filesystems and cloud storage

  •    Java

Shared software modules among Kafka Connectors that target distributed filesystems and cloud storage. To build a development version, you'll need a recent version of Kafka. You can build kafka-connect-storage-common with Maven using the standard lifecycle phases.


kafka-connect-solr - Kafka Connect connector for writing to Solr.

  •    Java

A Kafka Connect connector copying data from Kafka to Solr.

kafka-connect-splunk - Kafka Connect connector for receiving data and writing data to Splunk.

  •    Java

This connector allows Kafka Connect to emulate a Splunk HTTP Event Collector. It supports both receiving data from and writing data to Splunk. The sink connector transforms data from a Kafka topic into a batch of JSON messages that are written via HTTP to a configured Splunk HTTP Event Collector.
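
For a sense of what the sink emulates and produces, this is roughly what a write to Splunk's documented HTTP Event Collector endpoint looks like; the host and token are placeholders:

    # The sink batches Kafka records into JSON events of this shape.
    curl -k https://splunk.example.com:8088/services/collector \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"event": {"message": "hello from kafka"}, "sourcetype": "kafka"}'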

kafka-connect-spooldir - Kafka Connect connector for reading CSV files into Kafka.

  •    Java

A Kafka Connect connector reading delimited files from the file system.

kafka-jdbc-connector - Simple way to copy data from relational databases into Kafka.

  •    Scala

A simple way to copy data from relational databases into Kafka. To copy data between Kafka and another system, users create a Connector for the system they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors, which import data from another system, and SinkConnectors, which export data from Kafka to other datasources. This project is an implementation of a SourceConnector that allows users to copy data from relational databases into Kafka topics. It provides a flexible way to keep the logic for selecting the next batch of records inside database stored procedures, which are invoked from the connector tasks in each poll cycle.
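
A hedged sketch of what registering such a source connector might look like; the connector class and the property names below (connection URL, stored procedure, target topic) are hypothetical illustrations, not the project's documented keys:

    # Hypothetical property names, for illustration only.
    curl -X POST -H "Content-Type: application/json" \
      http://localhost:8083/connectors -d '{
        "name": "jdbc-source",
        "config": {
          "connector.class": "com.example.kafka.connector.jdbc.JdbcSourceConnector",
          "tasks.max": "1",
          "connection.url": "jdbc:mysql://db:3306/app",
          "stored.procedure.name": "get_next_batch",
          "topic": "app-events"
        }
      }'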

fast-data-connect-cluster - Create Kafka-Connect clusters with docker

  •    Shell

A Docker image for setting up Kafka Connect clusters. This part of fast-data-dev is targeted at more advanced users and is a special case, since it doesn't set up a Kafka cluster; instead, it expects to find a Kafka cluster with a Schema Registry up and running.
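
A hedged launch sketch; the image name is taken from the project, while the environment variable names pointing the workers at the existing brokers, ZooKeeper, and Schema Registry are recalled from its README and may differ:

    # Env var names are best-effort recollections; consult the README.
    docker run -d --net=host \
      -e ID=01 \
      -e BS=broker1:9092 \
      -e ZK=zookeeper:2181 \
      -e SR=http://schema-registry:8081 \
      landoop/fast-data-dev-connect-cluster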

kafka-connect-kcql-smt - Kafka-Connect SMT (Single Message Transformations) with SQL syntax (Using Apache Calcite for the SQL parsing)

  •    Scala

Use SQL to drive the transformation of the Kafka message (key and/or value) when using Kafka Connect. Before SMTs (Single Message Transforms), you needed a Kafka Streams app to take the message from the source topic, apply the transformation, and write it to a new topic. We have developed a Kafka Streams library (you can find it on GitHub) to make it easy to express simple Kafka Streams transformations. With SMTs, however, the extra topic is no longer required for Kafka Connect.
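
A hedged sketch of attaching such a transform to a connector's configuration; the transforms.* framework keys are standard Kafka Connect, while the transform class and its query property name are illustrative guesses rather than the project's documented names:

    # Standard Connect SMT wiring; the class and query key are illustrative.
    cat >> my-sink.properties <<'EOF'
    transforms=kcql
    transforms.kcql.type=com.landoop.connect.sql.Sql
    transforms.kcql.query=SELECT temperature AS temp, location FROM sensors
    EOF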

kafka-connect-tools - Kafka Connect Tooling

  •    Scala

This is a tiny command line interface (CLI) around the Kafka Connect REST Interface to manage connectors. It is used in a git-like fashion, where the first program argument indicates the command: one of [ps|get|rm|create|run|status|plugins|describe|validate|restart|pause|resume]. The CLI is meant to behave as a good unix citizen: input from stdin, output to stdout, out-of-band info to stderr, and a non-zero exit status on error. Commands dealing with configuration expect or produce data in .properties style: key=value lines, with comments starting with #.
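
Typical usage might look like the following, assuming the binary is installed as connect-cli and the worker's REST endpoint is the default:

    connect-cli ps                                   # list running connectors
    connect-cli get my-sink                          # print a connector's config
    connect-cli create my-sink < my-sink.properties  # create from stdin
    connect-cli rm my-sink                           # remove the connector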

kafka-connect-ui - Web tool for Kafka Connect

  •    JavaScript

This is a web tool for setting up and managing Kafka Connect connectors across multiple Connect clusters.
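
A minimal run sketch; the image name and the CONNECT_URL variable pointing at a Connect worker follow the project's README as best remembered:

    docker run -d -p 8000:8000 \
      -e "CONNECT_URL=http://connect-worker:8083" \
      landoop/kafka-connect-ui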

kafka-connectors-tests - Test suite for Kafka Connect connectors based on Landoop's Coyote and docker

  •    Dockerfile

An independent set of tests for various Kafka Connect connectors. We set up and test each connector in a pragmatic environment: we spawn at least a broker, a ZooKeeper instance, a Schema Registry, a Connect distributed instance, and any other software needed (e.g. Elasticsearch, Redis, Cassandra) in Docker containers, and then perform tests using standard tools. This practice lets us verify that a connector works, and also provides a basic example of how to set it up and test it. Advanced tests verify how the connector performs in special cases.

kafka-helm-charts - Kubernetes Helm charts for Apache Kafka, Kafka Connect, and other components for data streaming and data integration

  •    Smarty

For Stream Reactor and the Kafka Connectors, any environment variable beginning with CONNECT is used to build the Kafka Connect properties file, and the Connect cluster is started with this file in distributed mode. Any environment variable starting with CONNECTOR is used to build the connector properties file, which is posted to the Connect cluster to start the connector. Documentation for the Lenses chart can be found here.
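
To illustrate the convention, assuming the usual mapping of upper-cased, underscore-separated variables to lower-cased, dot-separated property names:

    # CONNECT_*   -> Kafka Connect worker properties,
    # CONNECTOR_* -> connector properties (assumed mapping).
    export CONNECT_BOOTSTRAP_SERVERS=broker:9092   # -> bootstrap.servers
    export CONNECT_GROUP_ID=connect-cluster        # -> group.id
    export CONNECTOR_NAME=my-sink                  # -> name
    export CONNECTOR_TOPICS=orders                 # -> topics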