Fluentd - Data collector, Log Everything in JSON

Fluentd is an event collector system. You can think of it as a generalized version of syslogd that handles JSON objects for its log messages. It collects logs from various data sources and writes them to files, databases, or other types of storage.

http://fluentd.org/
https://github.com/fluent/fluentd
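
To make the JSON-centric model concrete, here is a minimal sketch of emitting a structured event to a local Fluentd instance from Python. It assumes Fluentd's forward input is listening on its default port 24224 and that the fluent-logger package is installed; the tag and field names are only illustrative.

```python
# Minimal sketch: emit a structured JSON event to a local Fluentd instance.
# Assumes the forward input listens on localhost:24224 and the fluent-logger
# package (pip install fluent-logger) is available; tag and fields are
# illustrative only.
from fluent import sender

logger = sender.FluentSender('app', host='localhost', port=24224)

# Each event is a tag plus a JSON-serializable dict; Fluentd routes it by tag.
if not logger.emit('login', {'user': 'alice', 'status': 'ok'}):
    print(logger.last_error)      # e.g. Fluentd is not running
    logger.clear_last_error()

logger.close()
```

Fluentd then routes the event by its tag ("app.login" here) to whatever output is configured.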

Related Projects

nxlog - Multi-platform Log management


nxlog is a modular, multi-threaded, high-performance log management solution with multi-platform support. In concept it is similar to syslog-ng or rsyslog, but it is not limited to Unix and syslog only. It can collect logs from files in various formats and receive logs remotely over the network via UDP, TCP or TLS/SSL. It also supports platform-specific sources such as the Windows EventLog, Linux kernel logs, Android logs, and the local syslog.

Flume - Log management using HDFS


Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple, extensible data model that allows for online analytic applications.

Graylog2 - Open Source Log Management


Graylog2 is an open source log management solution that stores your logs in Elasticsearch. It consists of a server written in Java that accepts your syslog messages via TCP, UDP or AMQP and stores them in the database, and a web interface that lets you manage the log messages from your browser. Take a look at the screenshots or the latest release info page to get a feel for what you can do with Graylog2.
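
Since Graylog2 accepts plain syslog over UDP, a client can be as simple as the Python standard library's SysLogHandler; the host name and port below are placeholders for wherever the syslog input is actually configured.

```python
# Minimal sketch: send application logs to a Graylog2 syslog input over UDP
# using only the standard library. 'graylog.example.com' and port 514 are
# placeholders for wherever the input is actually listening.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=('graylog.example.com', 514))  # UDP by default
logger = logging.getLogger('myapp')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('user alice logged in')  # delivered as a single syslog datagram
```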

Epylog - a Syslog parser


Epylog is a syslog parser which runs periodically, looks at your logs, processes some of the entries in order to present them in a more comprehensible format, and then mails you the output. It is written specifically for large network clusters where a lot of machines (around 50 and upwards) log to the same loghost using syslog or syslog-ng.

Octopussy - Perl/XML Logs Analyzer, Alerter & Reporter


Octopussy is a log analysis tool: it parses logs, generates reports, and alerts the administrator. It supports LDAP for maintaining its user list, exports reports by email, FTP and SCP, can generate reports on a schedule, and uses RRDtool to produce graphs.



fluent-plugin-detect-exceptions - A fluentd plugin that scans line-oriented log streams and combines exception stack traces into a single log entry


fluent-plugin-detect-exceptions is an output plugin for fluentd that scans a log stream of text messages or JSON records for multi-line exception stack traces: if a consecutive sequence of log messages forms an exception stack trace, those messages are forwarded as a single, combined log message; otherwise, the input log data is forwarded as is. Text log messages are assumed to be single lines and are combined by concatenation.
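
As a rough illustration of the idea (not the plugin's actual, language-specific detection rules), the sketch below merges lines that look like stack-trace continuations into the preceding record.

```python
# Illustrative sketch only: merge lines that look like stack-trace
# continuations into the preceding record. The real plugin uses
# language-specific rules to detect exception starts and continuations.
def combine_stack_traces(lines):
    def is_continuation(line):
        return line.startswith((' ', '\t', 'Traceback', 'Caused by:', 'at '))

    combined, buf = [], []
    for line in lines:
        if buf and is_continuation(line):
            buf.append(line)                 # still inside the same trace
        else:
            if buf:
                combined.append('\n'.join(buf))
            buf = [line]                     # start a new logical record
    if buf:
        combined.append('\n'.join(buf))
    return combined
```

With this heuristic, a Java log line followed by its "at ..." frames comes out as one combined message, while unrelated single-line messages pass through unchanged.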

clolog - Log collector service (node.js, mongodb, fluentd)


A log collector service built with Node.js, MongoDB and Fluentd.

nxlog


A multi-platform universal log collector and forwarder

docker-fluentd - Docker container to collect other docker container logs with Fluentd


By default, the container logs are stored in /var/log/docker/yyyyMMdd.log inside this logging container. The data is buffered, so you may also see buffer files like /var/log/docker/20141114.b507c71e6fe540eab.log where "b507c71e6fe540eab" is a hash identifier. You can mount that container volume back to the host. Also, by modifying fluent.conf and rebuilding the Docker image, you can stream your logs to Elasticsearch, Amazon S3, MongoDB, Treasure Data, etc. docker-fluentd uses Fluentd inside to tail log files that are mounted on /var/lib/docker/containers/<CONTAINER_ID>/<CONTAINER_ID>-json.log. It uses the tail input plugin to tail the JSON-formatted log files that each Docker container emits.
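
For reference, each line in those json-log files is a single JSON object with "log", "stream" and "time" fields, which is what the tail plugin parses. A hedged Python sketch of reading one such file (the <CONTAINER_ID> placeholder stands for a real container ID):

```python
# Sketch: what the tail input sees in Docker's json-file logs. Each line is a
# JSON object with "log", "stream" and "time" fields. The path is illustrative;
# <CONTAINER_ID> stands for a real container ID.
import json

path = '/var/lib/docker/containers/<CONTAINER_ID>/<CONTAINER_ID>-json.log'

with open(path) as f:
    for line in f:
        record = json.loads(line)
        # e.g. {"log": "hello\n", "stream": "stdout", "time": "2014-11-14T..."}
        print(record['time'], record['stream'], record['log'].rstrip('\n'))
```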

Zenoss - Open Source IT Management


Zenoss Core is an open source IT monitoring product that delivers the functionality to effectively manage the configuration, health, and performance of networks, servers and applications through a single, integrated software package.

Logsandra - log management using Cassandra


Logsandra is a log management application written in Python that uses Cassandra as its back-end. It was written as a demo for Cassandra, but it is worth a look. It also provides support for writing your own parsers.

X-Itools: Enterprise Collaboration


Enterprise Collaboration modules and strong Log Analysis modules

ploddle - A syslog-compatible log collector, with built-in web interface for searching


A syslog-compatible log collector with a built-in web interface for searching.

liblogfaf - A library that logs messages using non-blocking UDP datagrams.


liblogfaf (faf stands for fire-and-forget) is a dynamic library that is designed to be LD_PRELOAD-ed when starting a process that uses the openlog() and syslog() functions to send syslog messages. It overrides the logging functions so that log messages are sent as UDP datagrams instead of being written to /dev/log (which can block). This is useful for processes that call syslog() as part of their main execution flow and can therefore be easily broken when the /dev/log buffer gets full, for example when the process that is expected to read from it (usually a system syslog daemon such as rsyslog or syslog-ng) stops doing so. Please note that liblogfaf should not be used in an environment where reliable log message delivery is required.

Clarity - Web interface for grep


Clarity is a Splunk-like web interface for your server log files. It supports searching (using grep) as well as tailing log files in real time. It is written using an event-based architecture on top of EventMachine, which allows real-time search of very large log files.

Chainsaw - log viewer and analysis tool


Chainsaw is a companion application to Log4j written by members of the Log4j development community. Chainsaw can read log files formatted in Log4j's XMLLayout, receive events from remote locations, read events from a database, and even work with JDK 1.4 logging events.

funnel - A minimalistic 12-factor log router written in Go


The 12-factor rule for logging says that an app "should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout." The execution environment should take care of capturing the logs and performing further processing on them. Funnel is this "execution environment". All you have to do from your app is print your log line to stdout and pipe it to funnel. You can still use any logging library inside your app to handle other things like log levels and structured logging, but don't worry about the log destination: let funnel take care of whether you want to just write to files or stream your output to Kafka. Think of it as a fluentd/logstash replacement (with minimal features!) that has only stdin as an input.
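
On the application side the contract is trivially small: write one event per line to stdout, unbuffered, and pipe the process into funnel. A minimal sketch (the field names and five-second interval are arbitrary):

```python
# Minimal sketch of the app side of the 12-factor contract: write one event
# per line to stdout, unbuffered, and let a router such as funnel decide the
# destination. Run it as, for example:  python app.py | funnel
import json
import sys
import time

while True:
    event = {'ts': time.time(), 'level': 'info', 'msg': 'heartbeat'}
    sys.stdout.write(json.dumps(event) + '\n')
    sys.stdout.flush()   # flush so the router sees each line immediately
    time.sleep(5)
```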

DAD - Log aggregation, analysis, alerting and correlation for Windows, Syslog and text-based logs.


Log aggregation, analysis, alerting and correlation for Windows, Syslog and text-based logs.

White-elephant - Hadoop log aggregator and dashboard


White Elephant is a Hadoop log aggregator and dashboard which enables visualization of Hadoop cluster utilization across users. The server is a JRuby web application. In a production environment it can be deployed to Tomcat and reads aggregated usage data directly from Hadoop. This data is stored in an in-memory database provided by HyperSQL. Charting is provided by Rickshaw. This project is developed by LinkedIn.