kafka_exporter - Kafka exporter for Prometheus

Kafka exporter for Prometheus. For other metrics from Kafka, have a look at the JMX exporter. Supports Apache Kafka version 0.10.1.0 and later.

https://github.com/danielqsj/kafka_exporter
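
To give a feel for how the exported data is consumed, the following minimal Go sketch scrapes a locally running kafka_exporter and prints its consumer-group lag samples. The listen address (:9308) and the kafka_consumergroup_lag metric name are assumptions based on the exporter's defaults; adjust both to match your deployment.

    // Minimal sketch: scrape a local kafka_exporter and print consumer-group
    // lag samples. The :9308 address and the kafka_consumergroup_lag metric
    // name are assumed defaults; adjust them for your deployment.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net/http"
        "strings"
    )

    func main() {
        resp, err := http.Get("http://localhost:9308/metrics")
        if err != nil {
            log.Fatalf("scrape failed: %v", err)
        }
        defer resp.Body.Close()

        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := scanner.Text()
            // Keep only the lag samples; # HELP and # TYPE lines are skipped.
            if strings.HasPrefix(line, "kafka_consumergroup_lag") {
                fmt.Println(line)
            }
        }
        if err := scanner.Err(); err != nil {
            log.Fatalf("reading metrics: %v", err)
        }
    }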

Related Projects

Signoz - Open-source Observability platform and an alternative to DataDog, NewRelic

  •    JavaScript

SigNoz is an open-source observability platform. It uses distributed tracing to gain visibility into your systems and is powered by Kafka (to handle a high ingestion rate and backpressure) and Apache Druid (a high-performance real-time analytics database), both proven in the industry to handle scale.

node_exporter - Exporter for machine metrics

  •    Go

Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors. The WMI exporter is recommended for Windows users.

redis_exporter - Prometheus Exporter for Redis Metrics. Supports Redis 2.x, 3.x and 4.x

  •    Go

An example Kubernetes deployment configuration shows how to deploy the redis_exporter as a sidecar with a Redis instance; adjust the host name to match your setup.

elasticsearch-prometheus-exporter - Prometheus exporter plugin for Elasticsearch

  •    Java

This is an exporter plugin that makes Elasticsearch metrics available to Prometheus. It collects all relevant metrics and exposes them to Prometheus via the Elasticsearch REST API. Its settings can also be updated dynamically.

wmi_exporter - Prometheus exporter for Windows machines using WMI

  •    Go

Prometheus exporter for Windows machines, using WMI (Windows Management Instrumentation). See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.


elasticsearch_exporter - Elasticsearch stats exporter for Prometheus

  •    Go

Prometheus exporter for various metrics about Elasticsearch, written in Go. You can find a Helm chart in the stable charts repository at https://github.com/kubernetes/charts/tree/master/stable/elasticsearch-exporter.

blackbox_exporter - Blackbox prober exporter

  •    Go

The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP. Visiting http://localhost:9115/probe?target=google.com&module=http_2xx will return metrics for an HTTP probe against google.com. The probe_success metric indicates whether the probe succeeded. Adding a debug=true parameter will return debug information for that probe.
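
As a rough illustration of the probe API described above, here is a Go sketch that calls a locally running blackbox exporter's /probe endpoint and prints the probe_success sample. The target and module mirror the example URL; the listen address is the documented default and everything else is an assumption about your setup.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net/http"
        "net/url"
        "strings"
    )

    func main() {
        // Build the probe URL from the example above; substitute your own
        // target and module. The exporter is assumed to listen on :9115.
        params := url.Values{}
        params.Set("target", "google.com")
        params.Set("module", "http_2xx")

        resp, err := http.Get("http://localhost:9115/probe?" + params.Encode())
        if err != nil {
            log.Fatalf("probe request failed: %v", err)
        }
        defer resp.Body.Close()

        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := scanner.Text()
            // probe_success is 1 when the probe succeeded, 0 otherwise.
            if strings.HasPrefix(line, "probe_success") {
                fields := strings.Fields(line)
                fmt.Println("probe_success =", fields[len(fields)-1])
            }
        }
        if err := scanner.Err(); err != nil {
            log.Fatalf("reading probe output: %v", err)
        }
    }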

DCMonitor - Data center monitor, including Zookeeper, Kafka, Druid

  •    Java

A simple, lightweight data center monitor that currently covers Zookeeper, Kafka, and Druid (in progress). Motivated by KafkaOffsetMonitor, but faster and more stable. It is written in Java and uses Prometheus as historical metrics storage.

ruby-kafka - A Ruby client library for Apache Kafka

  •    Ruby

A Ruby client library for Apache Kafka, a distributed log and message bus. The focus of this library is operational simplicity, with good logging and metrics that make debugging issues easier. Although parts of this library work with Kafka 0.8 – specifically, the Producer API – it is being tested and developed against Kafka 0.9. The Consumer API is Kafka 0.9+ only.

pihole-exporter - A Prometheus exporter for Pi-hole's Raspberry Pi ad blocker

  •    Go

This is a Prometheus exporter for the Pi-hole Raspberry Pi ad blocker. A Grafana dashboard is available on the Grafana dashboards website and in the GitHub repository.

chaperone - A Kafka audit system

  •    Java

As a Kafka audit system, Chaperone monitors the completeness and latency of data streams. The audit metrics are persisted in a database so that Kafka users can quantify any loss in their topics. Chaperone cuts the timeline into 10-minute buckets and assigns each message to the corresponding bucket according to its event time. The bucket's stats, such as the total message count, are updated accordingly. Periodically, the stats are sent to a dedicated Kafka topic, say 'chaperone-audit'. ChaperoneCollector consumes those stats from this topic and persists them into a database.
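
The bucketing scheme described above can be sketched roughly as follows. Chaperone itself is written in Java; this Go sketch only illustrates the idea of 10-minute event-time buckets with per-bucket counts, and all names in it are hypothetical.

    package main

    import (
        "fmt"
        "time"
    )

    const bucketSize = 10 * time.Minute

    // counts maps the start of each 10-minute bucket to a message count.
    var counts = map[time.Time]int64{}

    // record assigns a message to its bucket based on event time and
    // updates that bucket's stats (here just a total message count).
    func record(eventTime time.Time) {
        bucket := eventTime.Truncate(bucketSize)
        counts[bucket]++
    }

    func main() {
        now := time.Now()
        record(now)
        record(now.Add(-3 * time.Minute))
        record(now.Add(-15 * time.Minute))

        // Periodically these stats would be published to a dedicated Kafka
        // topic (e.g. 'chaperone-audit') and persisted by a collector.
        for bucket, n := range counts {
            fmt.Printf("bucket starting %s: %d messages\n", bucket.Format(time.RFC3339), n)
        }
    }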

secor - Secor is a service implementing Kafka log persistence

  •    Java

Exports Kafka logs to Amazon S3, Google Cloud Storage, and OpenStack Swift.

tempo - Grafana Tempo is a high-volume, minimal-dependency distributed tracing backend.

  •    Go

Grafana Tempo is an open source, easy-to-use and high-scale distributed tracing backend. Tempo is cost-efficient, requiring only object storage to operate, and is deeply integrated with Grafana, Prometheus, and Loki. Tempo can be used with any of the open source tracing protocols, including Jaeger, Zipkin, OpenCensus, Kafka, and OpenTelemetry. It supports key/value lookup only and is designed to work in concert with logs and metrics (exemplars) for discovery. Check out the Integration Guides to see examples of OpenTelemetry instrumentation with Tempo.

kube-state-metrics - Add-on agent to generate and expose cluster-level metrics.

  •    Go

kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. (See examples in the Metrics section below.) It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside, such as deployments, nodes and pods. The metrics are exported through the Prometheus golang client on the HTTP endpoint /metrics on the listening port (default 8080). They are served either as plaintext or protobuf depending on the Accept header. They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint. You can also open /metrics in a browser to see the raw metrics.
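
A minimal Go sketch of consuming that endpoint, assuming the default port 8080 mentioned above and using the Prometheus text-format parser from github.com/prometheus/common/expfmt to list the exposed metric families:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/prometheus/common/expfmt"
    )

    func main() {
        // Scrape the plaintext metrics endpoint; port 8080 is the default
        // mentioned above, adjust it for your deployment.
        resp, err := http.Get("http://localhost:8080/metrics")
        if err != nil {
            log.Fatalf("scrape failed: %v", err)
        }
        defer resp.Body.Close()

        // Parse the exposition format and print each metric family with
        // the number of series it currently exposes.
        var parser expfmt.TextParser
        families, err := parser.TextToMetricFamilies(resp.Body)
        if err != nil {
            log.Fatalf("parse failed: %v", err)
        }
        for name, family := range families {
            fmt.Printf("%s: %d series\n", name, len(family.Metric))
        }
    }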

Maxwell's daemon - A MySQL-to-JSON Kafka producer

  •    Java

This is Maxwell's daemon, an application that reads MySQL binlogs and writes row updates to Kafka as JSON. Maxwell has a low operational bar and produces a consistent, easy to ingest stream of updates. It allows you to easily "bolt on" some of the benefits of stream processing systems without going through your entire code base to add (unreliable) instrumentation points.
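
To make the "row updates as JSON" idea concrete, here is a hedged Go sketch that decodes one such message. The field names used (database, table, type, ts, data) follow Maxwell's documented output format, but treat the exact schema as an assumption and check it against the Maxwell documentation for your version.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // rowUpdate models a Maxwell-style row update; field names are assumed
    // from Maxwell's documented JSON output and may need adjusting.
    type rowUpdate struct {
        Database string                 `json:"database"`
        Table    string                 `json:"table"`
        Type     string                 `json:"type"` // insert, update, delete
        TS       int64                  `json:"ts"`
        Data     map[string]interface{} `json:"data"`
    }

    func main() {
        // In practice this payload would be consumed from the Kafka topic
        // Maxwell writes to; a literal keeps the sketch self-contained.
        payload := []byte(`{"database":"shop","table":"orders","type":"insert","ts":1700000000,"data":{"id":42,"status":"paid"}}`)

        var u rowUpdate
        if err := json.Unmarshal(payload, &u); err != nil {
            log.Fatalf("decode failed: %v", err)
        }
        fmt.Printf("%s on %s.%s: %v\n", u.Type, u.Database, u.Table, u.Data)
    }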

sloth - 🦥 Easy and simple Prometheus SLO (service level objectives) generator

  •    Go

Meet the easiest way to generate SLOs for Prometheus. Sloth generates understandable, uniform and reliable Prometheus SLOs for any kind of service, using a simple SLO spec that results in multiple metrics and multi-window, multi-burn-rate alerts.

Swarmprom - Docker Swarm instrumentation with Prometheus, Grafana, cAdvisor, Node Exporter and Alert Manager

  •    Shell

Swarmprom is a starter kit for Docker Swarm monitoring with Prometheus, Grafana, cAdvisor, Node Exporter, Alert Manager and Unsee.

go-grpc-prometheus - Prometheus monitoring for your gRPC Go servers.

  •    Go

Prometheus monitoring for your gRPC Go servers and clients. A sister implementation for gRPC Java (same metrics, same semantics) is in grpc-ecosystem/java-grpc-prometheus.
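
A minimal Go sketch of wiring the library into a server, using the interceptors and the Register helper exported by github.com/grpc-ecosystem/go-grpc-prometheus; the ports and the elided service registration are placeholders for your own setup.

    package main

    import (
        "log"
        "net"
        "net/http"

        grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
        "google.golang.org/grpc"
    )

    func main() {
        // Attach the Prometheus interceptors to the gRPC server.
        server := grpc.NewServer(
            grpc.UnaryInterceptor(grpc_prometheus.UnaryServerInterceptor),
            grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
        )
        // Register your generated services on `server` here, then initialize
        // the per-method metrics.
        grpc_prometheus.Register(server)

        // Expose the collected metrics for Prometheus to scrape; the port
        // is an arbitrary placeholder.
        http.Handle("/metrics", promhttp.Handler())
        go func() {
            log.Fatal(http.ListenAndServe(":2112", nil))
        }()

        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen failed: %v", err)
        }
        log.Fatal(server.Serve(lis))
    }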





