strimzi-kafka-operator - Apache Kafka running on Kubernetes and OpenShift


Strimzi provides a way to run an Apache Kafka cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project. Documentation for the current master branch, as well as for all releases, can be found on our website.

http://strimzi.io/
https://github.com/strimzi/strimzi-kafka-operator
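
A rough sketch of interacting with a Strimzi-managed cluster from Java, using the fabric8 kubernetes-client listed among the dependencies to list the Kafka pods the operator created. The "myproject" namespace and the "strimzi.io/kind=Kafka" label selector are illustrative assumptions and may not match your deployment.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class KafkaClusterCheck {
    public static void main(String[] args) {
        // Namespace and label selector are assumptions for illustration only.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            for (Pod pod : client.pods()
                                 .inNamespace("myproject")
                                 .withLabel("strimzi.io/kind", "Kafka")
                                 .list()
                                 .getItems()) {
                // Print each broker/zookeeper pod the operator manages and its phase.
                System.out.println(pod.getMetadata().getName() + " -> "
                        + pod.getStatus().getPhase());
            }
        }
    }
}
```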


Dependencies:

io.strimzi:api:null
io.strimzi:operator-common:null
io.fabric8:kubernetes-client:3.1.8
io.fabric8:kubernetes-model:2.0.4
io.fabric8:openshift-client:3.1.8
io.fabric8:zjsonpatch:0.3.0
com.squareup.okhttp3:okhttp:3.9.1
org.apache.logging.log4j:log4j-api:2.11.0
org.apache.logging.log4j:log4j-core:2.11.0
org.apache.logging.log4j:log4j-slf4j-impl:2.11.0
org.apache.kafka:kafka-clients:2.0.0
org.apache.kafka:kafka_2.12:2.0.0
org.apache.zookeeper:zookeeper:3.4.13
org.slf4j:slf4j-api:1.7.25
org.scala-lang:scala-library:2.12.6
com.fasterxml.jackson.core:jackson-core:2.9.5
com.fasterxml.jackson.core:jackson-annotations:2.9.5
com.fasterxml.jackson.core:jackson-databind:2.9.5
com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.9.5
io.vertx:vertx-core:3.5.1
io.strimzi:certificate-manager:null

Related Projects

stream-reactor - Streaming reference architecture for ETL with Kafka and Kafka-Connect

  •    Scala

Lenses offers SQL (for data browsing and Kafka Streams), Kafka Connect connector management, cluster monitoring, and more. Stream Reactor is a collection of components for building a real-time ingestion pipeline.

fast-data-dev - Kafka Docker for development

  •    Shell

Apache Kafka Docker image for developers, with Landoop Lenses (landoop/kafka-lenses-dev) or Landoop's open source UI tools (landoop/fast-data-dev). Have a full-fledged Kafka installation up and running in seconds and top it off with a modern streaming platform (only for kafka-lenses-dev), intuitive UIs and extra goodies. Also includes Kafka Connect, Schema Registry, Landoop's Stream Reactor with 25+ connectors and more.

Debezium - Stream changes from your databases.

  •    Java

Debezium is a distributed platform that turns your existing databases into event streams, so applications can see and respond immediately to each row-level change in the databases. Debezium is built on top of Apache Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, from where your application consumes them. This makes it possible for your application to easily consume all of the events correctly and completely.
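
As a hedged sketch of the consuming side, the snippet below reads Debezium change events from a topic with the plain kafka-clients KafkaConsumer. The topic name follows the common <server>.<schema>.<table> naming convention but is an illustrative assumption, as are the broker address and group id.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangeEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                  // assumed broker
        props.put("group.id", "change-event-readers");                     // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic name is an illustrative assumption following the usual per-table naming.
            consumer.subscribe(Collections.singletonList("inventory-server.inventory.customers"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a JSON change event describing a row-level insert/update/delete.
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```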

winton-kafka-streams - A Python implementation of Apache Kafka Streams

  •    Python

Implementation of Apache Kafka's Streams API in Python. Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. Kafka includes a Streams API for building stream processing applications on top of Kafka. Applications built with Kafka's Streams API do not require any setup beyond the provision of a Kafka cluster.
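
For reference, a minimal application against the original Java Streams API that this project ports to Python might look like the sketch below; the application id, broker address, and topic names are illustrative assumptions.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class CopyTopicApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "copy-topic-app");   // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        // Transform each record and write it out; no infrastructure beyond the Kafka cluster is needed.
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```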


alpakka-kafka - Alpakka Kafka connector - Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka

  •    Scala

Systems don't come alone. In the modern world of microservices and cloud deployment, new components must interact with legacy systems, making integration an important key to success. Reactive Streams give us a technology-independent tool to let these heterogeneous systems communicate without overwhelming each other. The Alpakka project is an open source initiative to implement stream-aware, reactive, integration pipelines for Java and Scala. It is built on top of Akka Streams, and has been designed from the ground up to understand streaming natively and provide a DSL for reactive and stream-oriented programming, with built-in support for backpressure. Akka Streams is a Reactive Streams and JDK 9+ java.util.concurrent.Flow-compliant implementation and therefore fully interoperable with other implementations.
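
A minimal sketch of the Alpakka Kafka Java DSL, assuming the akka-stream-kafka artifact: a backpressured consumer source drained into a println sink. The broker address, group id, and topic are illustrative.

```java
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Sink;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AlpakkaKafkaExample {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("alpakka-example");
        Materializer materializer = ActorMaterializer.create(system);

        // Consumer settings: bootstrap servers, group id and deserializers (values assumed).
        ConsumerSettings<String, String> settings =
                ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                        .withBootstrapServers("localhost:9092")
                        .withGroupId("alpakka-group");

        // A backpressured source of Kafka records, printed one by one.
        Consumer.plainSource(settings, Subscriptions.topics("events"))
                .runWith(Sink.foreach(record -> System.out.println(record.value())), materializer);
    }
}
```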

faust - Python Stream Processing

  •    Python

Faust is a stream processing library, porting the ideas from Kafka Streams to Python. It is used at Robinhood to build high performance distributed systems and real-time data pipelines that process billions of events every day.

cilium - HTTP, gRPC, and Kafka Aware Security and Networking for Containers with BPF and XDP

  •    Go

Cilium is open source software for providing and transparently securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services as well as Layer 7 to protect and secure use of modern application protocols such as HTTP, gRPC and Kafka. Cilium is integrated into common orchestration frameworks such as Kubernetes and Mesos. A new Linux kernel technology called BPF is at the foundation of Cilium. It supports dynamic insertion of BPF bytecode into the Linux kernel at various integration points such as network IO, application sockets, and tracepoints to implement security, networking and visibility logic. BPF is highly efficient and flexible. To learn more about BPF, read more in our extensive BPF and XDP Reference Guide.

liftbridge - Lightweight, fault-tolerant message streams.

  •    Go

Liftbridge provides lightweight, fault-tolerant message streams by implementing a durable stream augmentation for the NATS messaging system. It extends NATS with a Kafka-like publish-subscribe log API that is highly available and horizontally scalable. Use Liftbridge as a simpler and lighter alternative to systems like Kafka and Pulsar or use it to add streaming semantics to an existing NATS deployment. See the introduction post on Liftbridge and this post for more context and some of the inspiration behind it.

kafka-connect-jdbc - Kafka Connect connector for JDBC-compatible databases

  •    Java

A Kafka Connect JDBC connector for copying data between databases and Kafka.

kafka-storm-starter - Code examples that show how to integrate Apache Kafka 0.8+ with Apache Storm 0.9+ and Apache Spark 1.1+

  •    Scala

Code examples that show how to integrate Apache Kafka 0.8+ with Apache Storm 0.9+ and Apache Spark 1.1+ while using Apache Avro as the data serialization format. Take a look at the Kafka Streams code examples at https://github.com/confluentinc/examples.

kafka-topics-ui - Web Tool for Kafka Topics using Kafka REST

  •    JavaScript

Browse Kafka topics and understand what's happening on your cluster. Find topics / view topic metadata / browse topic data (Kafka messages) / view topic configuration / download data. This is a web tool for the confluentinc/kafka-rest proxy. Config: if you don't use our Docker image, keep in mind that Kafka REST Proxy CORS support can be a bit buggy, so if you have trouble setting it up, you may need to provide CORS headers through a proxy (e.g. nginx).
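
To make the dependency on the REST proxy concrete, the hedged sketch below queries the proxy's /topics endpoint directly, which is the same kind of request the UI issues; the proxy address and port are assumptions for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListTopicsViaRestProxy {
    public static void main(String[] args) throws Exception {
        // localhost:8082 is an assumed REST proxy address; adjust for your setup.
        URL url = new URL("http://localhost:8082/topics");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/vnd.kafka.v2+json");

        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            // The proxy returns a JSON array of topic names, which the UI renders as a topic list.
            reader.lines().forEach(System.out::println);
        }
    }
}
```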

spring-integration-kafka

  •    Java

The Spring Integration Kafka extension project provides inbound and outbound channel adapters for Apache Kafka. Apache Kafka is a distributed publish-subscribe messaging system that is designed for high throughput (terabytes of data) and low latency (milliseconds). For more information on Kafka and its design goals, see the Kafka main page. Starting with version 2.0, this project is a complete rewrite based on the new spring-kafka project, which uses the pure Java Producer and Consumer clients provided by Kafka 0.9.x.x and 0.10.x.x.
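
A minimal sketch of an outbound channel adapter in the post-2.0 style (built on spring-kafka's KafkaTemplate); the channel name, topic, and broker address are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.MessageHandler;

@Configuration
public class KafkaOutboundConfig {

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    }

    // Messages sent to the "toKafka" channel are published to the configured topic.
    @Bean
    @ServiceActivator(inputChannel = "toKafka")
    public MessageHandler kafkaOutboundAdapter(KafkaTemplate<String, String> template) {
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<>(template);
        handler.setTopicExpression(new LiteralExpression("test-topic")); // assumed topic
        return handler;
    }
}
```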

Kafka - A high-throughput distributed messaging system

  •    Java

Kafka provides a publish-subscribe solution that can handle all activity stream data and processing on a consumer-scale web site. This kind of activity (page views, searches, and other user actions) is a key ingredient in many of the social features on the modern web. This data is typically handled by "logging" and ad hoc log aggregation solutions due to the throughput requirements. This kind of ad hoc solution is a viable way of providing logging data to Hadoop.
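
For example, publishing a single activity event from Java with the standard kafka-clients producer takes only a few lines; the topic name, key, and broker address below are illustrative.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ActivityEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish a page-view event; any consumer group subscribed to the topic receives it.
            producer.send(new ProducerRecord<>("page-views", "user-42", "/products/123"));
            producer.flush();
        }
    }
}
```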

Agile_Data_Code_2 - Code for Agile Data Science 2.0, O'Reilly 2017, Second Edition

  •    Jupyter

Like my work? I am Principal Consultant at Data Syndrome, a consultancy offering assistance and training with building full-stack analytics products, applications and systems. Find us on the web at datasyndrome.com. There is now a video course using code from chapter 8, Realtime Predictive Analytics with Kafka, PySpark, Spark MLlib and Spark Streaming. Check it out now at datasyndrome.com/video.

docker-kafka - Kafka (and Zookeeper) in Docker

  •    Shell

This repository provides everything you need to run Kafka in Docker. For convenience, it also contains a packaged proxy that can be used to get data from a legacy Kafka 7 cluster into a dockerized Kafka 8.

postgres-operator - PostgreSQL Operator Creates/Configures/Manages PostgreSQL Clusters on Kubernetes

  •    Go

The postgres-operator is a controller that runs within a Kubernetes cluster that provides a means to deploy and manage PostgreSQL clusters. Please view the official Crunchy Data PostgreSQL Operator documentation here. If you are interested in contributing or making an update to the documentation, please view the Contributing Guidelines.

Samza - Distributed Stream Processing Framework

  •    Java

Apache Samza is a distributed stream processing framework. It uses Apache Kafka for messaging, and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management. It provides a very simple call-back based process message API that should be familiar to anyone who's used Map/Reduce. Samza was originally developed at LinkedIn. It's currently used to process tracking data, service log data, and for data ingestion pipelines for realtime services.
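
The call-back style is visible in a hedged sketch of Samza's low-level StreamTask interface; the class name and the counting logic are illustrative.

```java
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

public class PageViewCounterTask implements StreamTask {
    private long count = 0;

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        // Called once per incoming message, much like a map() call in MapReduce.
        count++;
        System.out.println("Seen " + count + " messages, latest: " + envelope.getMessage());
    }
}
```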




