kafka_exporter - Kafka exporter for Prometheus


Kafka exporter for Prometheus. For other metrics from Kafka, have a look at the JMX exporter. Supports Apache Kafka version 0.10.1.0 (and later).

https://github.com/danielqsj/kafka_exporter
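
For a quick look at what the exporter exposes, the sketch below fetches its /metrics page and prints the Kafka-related series. The listen port 9308 and the kafka_ metric prefix are assumptions based on the exporter's defaults; adjust them to your deployment.

    # Sketch: fetch the kafka_exporter metrics page and print the Kafka-related series.
    # Assumes the exporter listens on its default port 9308; adjust the URL as needed.
    from urllib.request import urlopen

    EXPORTER_URL = "http://localhost:9308/metrics"

    with urlopen(EXPORTER_URL) as resp:
        text = resp.read().decode("utf-8")

    for line in text.splitlines():
        if line.startswith("kafka_"):  # e.g. consumer group lag, topic partition counts
            print(line)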

Related Projects

node_exporter - Exporter for machine metrics

  •    Go

Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors. The WMI exporter is recommended for Windows users.

redis_exporter - Prometheus Exporter for Redis Metrics. Supports Redis 2.x, 3.x and 4.x

  •    Go

Prometheus exporter for Redis metrics, written in Go; point it at your Redis instance by adjusting the host name accordingly. Here is an example Kubernetes deployment configuration for deploying the redis_exporter as a sidecar with a Redis instance.

wmi_exporter - Prometheus exporter for Windows machines using WMI

  •    Go

Prometheus exporter for Windows machines, using WMI (Windows Management Instrumentation). See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.

elasticsearch_exporter - Elasticsearch stats exporter for Prometheus

  •    Go

Prometheus exporter for various metrics about Elasticsearch, written in Go. You can find a Helm chart in the stable charts repository at https://github.com/kubernetes/charts/tree/master/stable/elasticsearch-exporter.

blackbox_exporter - Blackbox prober exporter

  •    Go

The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP. Visiting http://localhost:9115/probe?target=google.com&module=http_2xx will return metrics for an HTTP probe against google.com. The probe_success metric indicates if the probe succeeded. Adding a debug=true parameter will return debug information for that probe.
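
As an illustration of the probe request described above, here is a minimal Python sketch that calls the exporter and checks the probe_success metric. The port 9115 and module name http_2xx come from the example URL; everything else is an assumption.

    # Sketch: ask a locally running blackbox exporter to probe google.com over HTTP
    # and report whether the probe succeeded (probe_success is 1 on success).
    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = urlencode({"target": "google.com", "module": "http_2xx"})
    with urlopen(f"http://localhost:9115/probe?{params}") as resp:
        metrics = resp.read().decode("utf-8")

    succeeded = any(
        line.startswith("probe_success") and line.strip().endswith(" 1")
        for line in metrics.splitlines()
    )
    print("probe succeeded" if succeeded else "probe failed")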


DCMonitor - Data Center monitor, including Zookeeper, Kafka, Druid

  •    Java

A simple, lightweight data center monitor, currently including Zookeeper, Kafka and Druid (in progress). Motivated by KafkaOffsetMonitor, but faster and more stable. It is written in Java and uses Prometheus as historical metrics storage.

ruby-kafka - A Ruby client library for Apache Kafka

  •    Ruby

A Ruby client library for Apache Kafka, a distributed log and message bus. The focus of this library will be operational simplicity, with good logging and metrics that can make debugging issues easier. Although parts of this library work with Kafka 0.8 – specifically, the Producer API – it's being tested and developed against Kafka 0.9. The Consumer API is Kafka 0.9+ only.

chaperone - A Kafka audit system

  •    Java

As a Kafka audit system, Chaperone monitors the completeness and latency of data streams. The audit metrics are persisted in a database so that Kafka users can quantify the loss of their topics, if any. Basically, Chaperone cuts the timeline into 10-minute buckets and assigns each message to the corresponding bucket according to its event time. The stats of the bucket, such as the total message count, are updated accordingly. Periodically, the stats are sent out to a dedicated Kafka topic, say 'chaperone-audit'. ChaperoneCollector consumes those stats from this topic and persists them into the database.
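
To make the bucketing scheme concrete, here is a minimal Python sketch of the idea: group messages into 10-minute buckets by event time and flush the per-bucket counts periodically. The names and structure are illustrative assumptions, not Chaperone's actual code.

    # Illustrative sketch of Chaperone-style auditing (not the real implementation):
    # assign each message to a 10-minute bucket by event time and count messages per bucket.
    from collections import defaultdict

    BUCKET_SECONDS = 10 * 60  # 10-minute buckets, as described above
    bucket_counts = defaultdict(int)  # bucket start timestamp -> message count

    def record(event_time_seconds: float) -> None:
        bucket = int(event_time_seconds // BUCKET_SECONDS) * BUCKET_SECONDS
        bucket_counts[bucket] += 1

    def flush() -> dict:
        # Chaperone would produce these stats to a dedicated Kafka topic
        # (e.g. 'chaperone-audit'); here they are simply returned and reset.
        stats = dict(bucket_counts)
        bucket_counts.clear()
        return stats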

secor - Secor is a service implementing Kafka log persistence

  •    Java

Exports Kafka logs to Amazon S3, Google Cloud Storage or OpenStack Swift.

kube-state-metrics - Add-on agent to generate and expose cluster-level metrics.

  •    Go

kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. (See examples in the Metrics section of the project's documentation.) It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside, such as deployments, nodes and pods. The metrics are exported through the Prometheus golang client on the HTTP endpoint /metrics on the listening port (default 8080). They are served either as plaintext or protobuf depending on the Accept header. They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint. You can also open /metrics in a browser to see the raw metrics.
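
As a small illustration of consuming that endpoint outside Prometheus, the sketch below fetches /metrics over HTTP and asks for the plaintext format via the Accept header. The address localhost:8080 is an assumption (for example, a kube-state-metrics instance exposed via kubectl port-forward).

    # Sketch: read kube-state-metrics' plaintext metrics directly over HTTP.
    # Assumes the service is reachable on localhost:8080 (e.g. via `kubectl port-forward`).
    from urllib.request import Request, urlopen

    req = Request(
        "http://localhost:8080/metrics",
        headers={"Accept": "text/plain"},  # request the plaintext exposition format
    )
    with urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines()[:20]:
            print(line)  # show the first few metric lines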

Maxwell's daemon - A mysql-to-json kafka producer

  •    Java

This is Maxwell's daemon, an application that reads MySQL binlogs and writes row updates to Kafka as JSON. Maxwell has a low operational bar and produces a consistent, easy-to-ingest stream of updates. It allows you to easily "bolt on" some of the benefits of stream processing systems without going through your entire code base to add (unreliable) instrumentation points.
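
To show what consuming Maxwell's output can look like, here is a small Python sketch that reads the JSON change events from Kafka with kafka-python. The topic name and the fields printed (database, table, type, data) are assumptions about a typical Maxwell setup; check your own configuration and Maxwell's documented event format.

    # Sketch: consume Maxwell's JSON row-update events from Kafka with kafka-python.
    # Topic name and field names are illustrative; Maxwell's output topic is configurable.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "maxwell",                          # assumed topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # A typical event carries the database, table, change type and row data.
        print(event.get("database"), event.get("table"), event.get("type"), event.get("data"))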

Swarmprom - Docker Swarm instrumentation with Prometheus, Grafana, cAdvisor, Node Exporter and Alert Manager

  •    Shell

Swarmprom is a starter kit for Docker Swarm monitoring with Prometheus, Grafana, cAdvisor, Node Exporter, Alert Manager and Unsee.

go-grpc-prometheus - Prometheus monitoring for your gRPC Go servers.

  •    Go

Prometheus monitoring for your gRPC Go servers and clients. A sister implementation for gRPC Java (same metrics, same semantics) is in grpc-ecosystem/java-grpc-prometheus.

pushgateway - push acceptor for ephemeral and batch jobs

  •    Go

The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus. The Pushgateway is explicitly not an aggregator or distributed counter but rather a metrics cache. It does not have statsd-like semantics. The metrics pushed are exactly the same as you would present for scraping in a permanently running program.
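
As an illustration of that push workflow, the sketch below uses the Python prometheus_client library (an assumption; any client with Pushgateway support works) to push a single gauge from a batch job to a Pushgateway on its default port 9091.

    # Sketch: push a metric from a short-lived batch job to a Pushgateway.
    # Assumes a Pushgateway at localhost:9091 and the `prometheus_client` package installed.
    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    g = Gauge(
        "job_last_success_unixtime",
        "Last time the batch job successfully finished",
        registry=registry,
    )
    g.set_to_current_time()

    # Push the metrics for this job grouping; Prometheus then scrapes them
    # from the Pushgateway instead of the ephemeral job itself.
    push_to_gateway("localhost:9091", job="example_batch_job", registry=registry)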

nginx-lua-prometheus - Prometheus metric library for Nginx written in Lua

  •    Lua

This is a Lua library that can be used with Nginx to keep track of metrics and expose them on a separate web page to be pulled by Prometheus. You need to install an nginx package with Lua support (libnginx-mod-http-lua on newer Debian versions, or nginx-extras on older ones). The library file, prometheus.lua, needs to be available in LUA_PATH. If this is the only Lua library you use, you can just point lua_package_path to the directory with this git repo checked out (see the example in the project's README).

confluent-kafka-dotnet - Confluent's Apache Kafka .NET client

  •    CSharp

confluent-kafka-dotnet is Confluent's .NET client for Apache Kafka and the Confluent Platform. High performance: confluent-kafka-dotnet is a lightweight wrapper around librdkafka, a finely tuned C client.

confluent-kafka-python - Confluent's Apache Kafka Python client

  •    Python

confluent-kafka-python is Confluent's Python client for Apache Kafka and the Confluent Platform. High performance: confluent-kafka-python is a lightweight wrapper around librdkafka, a finely tuned C client.
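
For a feel of the client, here is a minimal produce-and-flush sketch with confluent-kafka-python; the broker address and topic name are placeholders for your own cluster.

    # Sketch: produce a single message with confluent-kafka-python and wait for delivery.
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

    def on_delivery(err, msg):
        # Called once per message with the delivery result.
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

    producer.produce("example-topic", value=b"hello kafka", callback=on_delivery)
    producer.flush()  # block until outstanding messages are delivered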

kafka-rest - Confluent REST Proxy for Kafka

  •    Java

The Kafka REST Proxy provides a RESTful interface to a Kafka cluster, making it easy to produce and consume messages, view the state of the cluster, and perform administrative actions without using the native Kafka protocol or clients.
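
To illustrate the REST interface, here is a Python sketch that produces a JSON record through the proxy. The endpoint path, content type and default port 8082 follow the REST Proxy's v2 API as commonly documented; treat them as assumptions and verify against your proxy version.

    # Sketch: produce a JSON message to a topic through the Kafka REST Proxy (v2 API).
    import json
    from urllib.request import Request, urlopen

    payload = json.dumps({"records": [{"value": {"user": "alice", "action": "login"}}]})
    req = Request(
        "http://localhost:8082/topics/example-topic",   # assumed default proxy port
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
        method="POST",
    )
    with urlopen(req) as resp:
        print(json.loads(resp.read()))  # the proxy reports partition/offset per record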

kq - Kafka-based Job Queue for Python

  •    Python

KQ (Kafka Queue) is a lightweight Python library which lets you queue and execute jobs asynchronously using Apache Kafka. It uses kafka-python under the hood. Depending on your environment, you may need to use sudo when installing it.
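
Below is an assumption-laden sketch of enqueueing and processing a job with KQ. The Queue and Worker signatures follow the kq 2.x README as recalled here and may differ in other versions; the broker address, topic and group id are placeholders.

    # Sketch: enqueue a job with KQ (producer side) and run it in a worker (consumer side).
    # Signatures follow the kq 2.x docs as recalled; verify against your installed version.
    import math

    from kafka import KafkaConsumer, KafkaProducer
    from kq import Queue, Worker

    # Producer side: serialize a function call and send it to the 'jobs' topic.
    producer = KafkaProducer(bootstrap_servers="127.0.0.1:9092")
    queue = Queue(topic="jobs", producer=producer)
    job = queue.enqueue(math.sqrt, 4)

    # Worker side (normally a separate process): consume and execute queued jobs.
    consumer = KafkaConsumer(
        bootstrap_servers="127.0.0.1:9092",
        group_id="job-workers",
        auto_offset_reset="latest",
    )
    worker = Worker(topic="jobs", consumer=consumer)
    worker.start()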