stream-reactor - Streaming reference architecture for ETL with Kafka and Kafka-Connect

  •    Scala

Lenses offers SQL (for data browsing and Kafka Streams), Kafka Connect connector management, cluster monitoring, and more. Stream-reactor is a collection of components for building a real-time ingestion pipeline.

Debezium - Stream changes from your databases.

  •    Java

Debezium is a distributed platform that turns your existing databases into event streams, so applications can see and respond immediately to each row-level change in the databases. Debezium is built on top of Apache Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, from where your application consumes them. This makes it possible for your application to easily consume all of the events correctly and completely.
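
As a rough illustration of that workflow, the hedged sketch below registers a Debezium MySQL connector against a Connect worker's REST API. Hostnames, credentials and topic names are placeholders, and the exact property set depends on the Debezium version in use.

    # Hedged sketch: register a Debezium MySQL connector via the Kafka Connect REST API.
    # All hostnames, credentials and names are placeholders.
    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.id": "184054",
        "database.server.name": "dbserver1",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
      }
    }'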

strimzi-kafka-operator - Apache Kafka running on Kubernetes and OpenShift

  •    Java

Strimzi provides a way to run an Apache Kafka cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project. Documentation for the current master branch, as well as for all releases, can be found on our website.
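
For orientation, a minimal deployment might look like the hedged sketch below; the manifest paths and the name of the example Kafka resource are assumptions and may differ between releases.

    # Hedged sketch: deploy the Strimzi Cluster Operator, then create a single-node Kafka
    # cluster from one of the bundled example custom resources (paths are assumptions).
    kubectl create namespace kafka
    kubectl apply -f install/cluster-operator -n kafka
    kubectl apply -f examples/kafka/kafka-persistent-single.yaml -n kafka
    kubectl get pods -n kafka -w    # watch the operator, ZooKeeper and Kafka pods come up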

cp-docker-images - Docker images for Confluent Platform.

  •    Python

Docker images for deploying and running the Confluent Platform. The images are currently available on DockerHub. They are only available for Confluent Platform 3.0.1 and later. Full documentation for using the images can be found here.
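
As a quick illustration, the images can be started directly with Docker; the hedged sketch below runs a single ZooKeeper node and broker, with ports, tags and listener settings shown only as examples.

    # Hedged sketch: single ZooKeeper + single Kafka broker from the Confluent images.
    docker run -d --name zookeeper -p 2181:2181 \
      -e ZOOKEEPER_CLIENT_PORT=2181 \
      confluentinc/cp-zookeeper:3.0.1

    docker run -d --name kafka -p 9092:9092 --link zookeeper \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
      -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
      confluentinc/cp-kafka:3.0.1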

kafka-connect-storage-cloud - Kafka Connect suite of connectors for Cloud storage (currently including Amazon S3)

  •    Java

Documentation for this connector can be found here. To build a development version you'll need a recent version of Kafka. You can build kafka-connect-storage-cloud with Maven using the standard lifecycle phases.
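
For context, a typical S3 sink configuration looks roughly like the hedged sketch below; the bucket, region and format values are placeholders, and the full set of options is in the connector documentation.

    # s3-sink.properties -- a hedged sketch; all values are placeholders
    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    topics=my-topic
    s3.bucket.name=my-bucket
    s3.region=us-east-1
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.json.JsonFormat
    flush.size=1000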

kafka-connect-storage-common - Shared software among connectors that target distributed filesystems and cloud storage

  •    Java

Shared software modules among Kafka Connectors that target distributed filesystems and cloud storage. To build a development version you'll need a recent version of Kafka. You can build kafka-connect-storage-common with Maven using the standard lifecycle phases.
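
The standard lifecycle phases mentioned above amount to an ordinary Maven build, roughly as in the hedged sketch below (the repository URL and flags are shown only as an example).

    # Hedged sketch: build and install the modules locally with Maven.
    git clone https://github.com/confluentinc/kafka-connect-storage-common.git
    cd kafka-connect-storage-common
    mvn clean install -DskipTests    # or 'mvn clean package' to only build the artifacts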

kafka-connect-kcql-smt - Kafka-Connect SMT (Single Message Transformations) with SQL syntax (Using Apache Calcite for the SQL parsing)

  •    Scala

Use SQL to drive the transformation of the Kafka message (key and/or value) when using Kafka Connect. Before SMTs, you needed a KStreams app to take the message from the source topic, apply the transformation, and write the result to a new topic. We have developed a KStreams library (which you can find on GitHub) to make it easy to express simple Kafka Streams transformations. With SMTs, however, the extra topic is no longer required for Kafka Connect.
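
A hedged sketch of how such an SMT is wired into a connector configuration is shown below; the transform class name and the KCQL statement are illustrative assumptions, so check the project README for the exact values.

    # Added to an existing connector's .properties file; the transform class and query
    # below are illustrative assumptions, not taken from the project documentation.
    transforms=kcql
    transforms.kcql.type=com.landoop.connect.SQL
    transforms.kcql.query=SELECT fieldA, fieldB FROM my-source-topic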

kafka-connect-tools - Kafka Connect Tooling

  •    Scala

This is a tiny command line interface (CLI) around the Kafka Connect REST Interface to manage connectors. It is used in a git-like fashion, where the first program argument indicates the command: it can be one of [ps|get|rm|create|run|status|plugins|describe|validate|restart|pause|resume]. The CLI is meant to behave as a good Unix citizen: input from stdin, output to stdout, out-of-band info to stderr, and a non-zero exit status on error. Commands dealing with configuration expect or produce data in .properties style: key=value lines, with comments starting with a #.
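
Day-to-day usage follows the pattern in the hedged sketch below; the binary name (connect-cli) and the cluster it talks to are assumptions about how the tool is built and deployed.

    # Hedged sketch (binary and connector names are illustrative).
    connect-cli ps                                   # list connectors on the cluster
    connect-cli get my-sink                          # print a connector's configuration
    connect-cli create my-sink < my-sink.properties  # create a connector from stdin
    connect-cli status my-sink                       # show connector and task status
    connect-cli rm my-sink                           # remove it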

kafka-connect-ui - Web tool for Kafka Connect

  •    Javascript

This is a web tool for Kafka Connect for setting up and managing connectors for multiple connect clusters.
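
A common way to try it is via the project's Docker image, roughly as in the hedged sketch below; the image name and the CONNECT_URL variable are assumptions, so consult the README for the exact settings.

    # Hedged sketch: run the UI in Docker and point it at one or more Connect clusters.
    docker run --rm -p 8000:8000 \
      -e "CONNECT_URL=http://connect-1:8083,http://connect-2:8083" \
      landoop/kafka-connect-ui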

kafka-connectors-tests - Test suite for Kafka Connect connectors based on Landoop's Coyote and docker

  •    Dockerfile

An independent set of tests for various Kafka Connect connectors. We set up and test various connectors in a pragmatic environment: we spawn at least a broker, a ZooKeeper instance, a schema registry, a Connect distributed instance, and any other software needed (e.g. Elasticsearch, Redis, Cassandra) in Docker containers, and then perform tests using standard tools. This practice lets us verify that a connector works, and also provides a basic example of how to set it up and test it. Advanced tests verify how the connector performs in special cases.

kafka-helm-charts - Kubernetes Helm charts for Apache Kafka and Kafka Connect and other components for data streaming and data integration

  •    Smarty

For stream-reactor and the Kafka Connectors, any environment variable beginning with CONNECT is used to build the Kafka Connect properties file, and the Connect cluster is started with this file in distributed mode. Any environment variable starting with CONNECTOR is used to build the connector properties file, which is posted to the Connect cluster to start the connector. Documentation for the Lenses chart can be found here.
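
A hedged illustration of that environment-variable convention follows; the variable names and the way they map onto property keys are assumptions rather than values taken from the charts.

    # Illustrative only: CONNECT_* variables feed the worker properties file,
    # CONNECTOR_* variables feed the connector properties posted to the cluster.
    export CONNECT_BOOTSTRAP_SERVERS=kafka:9092   # worker setting (e.g. bootstrap.servers)
    export CONNECT_GROUP_ID=connect-cluster       # worker setting (e.g. group.id)
    export CONNECTOR_NAME=my-sink                 # connector setting (e.g. name)
    export CONNECTOR_TOPICS=my-topic              # connector setting (e.g. topics)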

streamx - kafka-connect-s3: Ingest data from Kafka to Object Stores (S3)

  •    Java

StreamX is a Kafka Connect based connector that copies data from Kafka to object stores like Amazon S3, Google Cloud Storage and Azure Blob Store. It focuses on reliable and scalable data copying. It can write the data out in different formats (such as Parquet, so that it can readily be used by analytical tools) and with different partitioning schemes. StreamX inherits a rich set of features from kafka-connect-hdfs.
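
Since StreamX derives from kafka-connect-hdfs, its configuration follows the same sink-connector shape; the class and property names in the hedged sketch below are assumptions and should be checked against the project README.

    # streamx-s3.properties -- a hedged sketch; class and property names are assumptions
    name=streamx-s3-sink
    connector.class=com.qubole.streamx.s3.S3SinkConnector
    topics=my-topic
    s3.url=s3://my-bucket/streamx
    flush.size=1000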

kafka-connect-mongo - Kafka mongo connector (deeply inspired by https://github

  •    Kotlin

Simply change the connector class to MongoCronSourceConnector and add a schedule parameter to the source config, and this connector will export all the data from your collection to Kafka in the same way as the Mongo source connector. Tip: the connector uses _id as the offset for each bulk read, so all your messages should have an auto-increment field called _id.
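
A hedged sketch of the scheduled-export setup described above; apart from the connector class and the schedule parameter mentioned in the text, the property names (and the fully qualified class name) are assumptions.

    # mongo-cron-source.properties -- a hedged sketch; only the connector class and the
    # schedule parameter come from the description above, everything else is an assumption
    name=mongo-cron-source
    connector.class=MongoCronSourceConnector
    schedule=0 0 * * * ?
    mongo.uri=mongodb://mongo:27017
    databases=mydb.mycollection
    topic.prefix=mongo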

connect - CLI tool and Go client library for the Kafka Connect REST API

  •    Go

A fast, portable, self-documenting CLI tool to inspect and manage Kafka Connect connectors via the REST API. Because you don't want to be fumbling through runbooks of curl commands when something's going wrong, or ever really. This project also contains a Go library for the Kafka Connect API usable by other Go tools or applications. See Using the Go Library for details.
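
For comparison, these are the kinds of raw Kafka Connect REST calls the tool wraps (standard Connect endpoints; the connector name is a placeholder):

    curl http://localhost:8083/connectors                          # list connectors
    curl http://localhost:8083/connectors/my-sink/status           # connector and task status
    curl -X POST http://localhost:8083/connectors/my-sink/restart  # restart a connector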

cp-ansible - Ansible playbooks for the Confluent Platform

  •    Shell

You can find the documentation for running this playbook at https://docs.confluent.io/current/tutorials/cp-ansible/docs/index.html.
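
Running the playbook typically looks like the hedged sketch below; the inventory and playbook file names are assumptions, so follow the linked documentation for the exact invocation.

    # Hedged sketch: run the playbook against your own inventory (file names are assumptions).
    git clone https://github.com/confluentinc/cp-ansible.git
    cd cp-ansible
    ansible-playbook -i hosts.yml all.yml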