
MozDef - MozDef: The Mozilla Defense Platform

  •    Javascript

The inspiration for MozDef comes from the large arsenal of tools available to attackers. Suites like Metasploit, Armitage, Lair, Dradis and others are readily available to help attackers coordinate, share intelligence and finely tune their attacks in real time. Defenders are usually limited to wikis, ticketing systems and manual tracking databases attached to the end of a Security Information and Event Management (SIEM) system. The Mozilla Defense Platform (MozDef) seeks to automate the security incident handling process and facilitate the real-time activities of incident handlers.

elastiflow - Network flow Monitoring (Netflow, sFlow and IPFIX) with the Elastic Stack

  •    Shell

ElastiFlow™ provides network flow data collection and visualization using the Elastic Stack (Elasticsearch, Logstash and Kibana). It supports Netflow v5/v9, sFlow and IPFIX flow types (1.x versions support only Netflow v5/v9), and ships with a set of prebuilt Kibana dashboards.
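A flow pipeline of this kind can be sketched with Logstash's own netflow codec; this is a minimal illustration, not ElastiFlow's actual (much richer) pipeline, and the port and index name are assumptions:

```
# Minimal sketch: collect Netflow v5/v9 on UDP 2055 with Logstash's
# netflow codec and index the decoded flows into Elasticsearch.
input {
  udp {
    port  => 2055
    codec => netflow
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "elastiflow-%{+YYYY.MM.dd}"
  }
}
```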

HELK - The Incredible HELK

  •    Shell

A Hunting ELK (Elasticsearch, Logstash, Kibana) with advanced analytic capabilities. At the end of the HELK installation, you will see output with the information you need to access the primary HELK components. Remember that the default username and password for HELK are helk:hunting.

docker_offensive_elk - Elasticsearch for Offensive Security

  •    Python

Traditional “defensive” tools can be used effectively for offensive security data analysis, helping your team collaborate and triage scan results. In particular, Elasticsearch offers the chance to aggregate a multitude of disparate data sources and query them with a unified interface, with the aim of extracting actionable knowledge from a huge amount of unclassified data.
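The "unified interface" here is Elasticsearch's query DSL. As a hedged sketch, a triage question like "which ports are most often open across all scans?" becomes a single terms aggregation (the field names `status` and `port` are illustrative assumptions, not the repo's actual mapping):

```python
import json

# Hypothetical sketch: build an Elasticsearch aggregation query over indexed
# scan results. Field names ("status", "port") are assumptions for illustration.
def top_open_ports_query(size=10):
    return {
        "query": {"term": {"status": "open"}},
        "size": 0,  # we only want the aggregation buckets, not the raw hits
        "aggs": {
            "top_ports": {"terms": {"field": "port", "size": size}}
        },
    }

# Serialize the request body as it would be POSTed to /<index>/_search
body = json.dumps(top_open_ports_query(5))
```

The same body works against any index, which is the point: one query language over many disparate data sources.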

punt - Punt is a tiny and lightweight daemon which helps ship logs to Elasticsearch.

  •    Go

Punt is a lightweight and simple daemon that parses, transforms, mutates, and ships logs into Elasticsearch. Punt was built as a fast and reliable alternative to Logstash, which means its focus is to fit directly into existing ELK setups. Punt was built at Discord to manage the over 4 billion log lines we process per day. When Discord originally started logging, we used a standard ELK stack setup. Initially this worked well for a low volume of logs; however, as our log volume grew (~750m log lines a day), Logstash quickly began to fall behind. As we spent more and more time tweaking and scaling Logstash/JVM/JRuby, we quickly realised it was not a long-term solution. Punt spawned out of a frustrating weekend dealing with constant Logstash lockups and JVM struggles.

docs-appcloud-service-offerings - The documentation to the services in the Swisscom Application Cloud marketplace

  •    HTML

This repo is part of the documentation for the Swisscom Application Cloud. It is bound using the so-called Book. To contribute, please create a pull request against this repo.

lgrep - CLI for searching logstash and other elasticsearch based systems

  •    Go

Search Logstash and other Elasticsearch-based systems using Lucene queries to return line-based messages or custom-formatted results right on the command line. This repo contains both a library that may be used for searching the data source and a tool that utilizes the library and provides a CLI for searching.

k8s-elk - Kubernetes ELK - ElasticSearch, Kibana, Logstash, and all the trimmings


This repository currently includes the ElasticSearch and Kibana configurations. ElasticSearch is run in 3 forms. The first is the "master" type, which is the master type from the ElasticSearch documentation. The second type is the "ingest" type, which is the ingest type from the ElasticSearch documentation. The ingest nodes include a HorizontalPodAutoscaler based on CPU usage, and these nodes are connected to an internal service for Kibana, as well as an external service for HTTP input from outside. The third type is the "data" node. These are constructed using a StatefulSet with PersistentVolumeClaims, which will scale accordingly. You can only scale ordinally (+1, -1, affecting the most recent pod), and all general ElasticSearch rules apply (if you remove more nodes than the cluster can tolerate before it has had a chance to rebalance itself, you'll be in trouble).
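The "data" node arrangement above can be sketched as a StatefulSet whose volumeClaimTemplates give each ordinal pod its own PersistentVolumeClaim; names, image tag, and storage size here are illustrative assumptions, not the repo's actual manifests:

```yaml
# Hedged sketch of a "data" node StatefulSet. Scaling replicas +1/-1 adds or
# removes the highest-ordinal pod, each with its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
spec:
  serviceName: elasticsearch-data
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch-data
  template:
    metadata:
      labels:
        app: elasticsearch-data
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4  # assumed tag
          env:
            - name: node.data
              value: "true"
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```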

elk-stack - ELK Stack (Elasticsearch, Logstash & Kibana)

  •    Shell

Set up the main ELK Stack on a Linux server using the shell script. Once you've set up the ELK Stack, you should set up the Beats clients (e.g. Filebeat, Metricbeat) on the other servers.

NLog.StructuredLogging.Json - Structured logging for NLog using Json (formerly known as JsonFields)

  •    CSharp

Structured logging with NLog. Generates log entries as JSON, which can e.g. be sent to Kibana via NXLog. For each LogEventInfo message, it renders one JSON object with any parameters as properties.
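The "one JSON object per message, parameters as properties" shape can be modelled in a few lines; this is an illustrative sketch of the output format, not the NLog renderer itself, and the property names are assumptions:

```python
import json
from datetime import datetime, timezone

# Illustrative model of a structured log renderer: each log event becomes one
# JSON object, with the structured parameters merged in as top-level properties.
def render_log_event(level, message, **properties):
    event = {
        "TimeStamp": datetime.now(timezone.utc).isoformat(),
        "Level": level,
        "Message": message,
    }
    event.update(properties)  # parameters become properties of the JSON object
    return json.dumps(event)

# One log call -> one JSON line, ready for NXLog/Kibana-style shippers.
line = render_log_event("Info", "Order placed", OrderId=42, Customer="acme")
```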

eslog_tutorial - From Raw Logs to Real Insights - A tutorial for getting started with log analytics using Elastic Stack


I have a lot of passion for the Elastic Stack and the things it enables its users to achieve with their data. However, the path to getting to this point was longer for me than it needed to be. With this tutorial material I am hoping to help make that path shorter for others. So ... Back when I began my journey with the Elastic Stack, I quickly discovered that while the online documentation provides a wealth of reference material, there was little that described what those first few steps should be. Online I found very little that covered more than the most basic tasks. Eventually, as I stumbled upon more and more hints and tips, things slowly fell into place. Finally one day it really "clicked", and I have been enjoying the benefits of working with data in the Elastic Stack ever since. This tutorial follows very closely the exact path I traveled as I took my first steps. I hope you find it helpful.

logzio-nodejs - NodeJS logger for LogzIO

  •    Javascript

NodeJS logger for Logz.io. The logger stashes the log messages you send into an array which is sent as a bulk once it reaches its size limit (100 messages) or time limit (10 sec) in an async fashion. It contains a simple retry mechanism which upon connection reset (server side) or client timeout, wait a bit (default interval of 2 seconds), and try this bulk again. It does not block other messages from being accumulated and sent (async). The interval increases by a factor of 2 between each retry until it reaches the maximum allowed attempts (3). By default, any error is logged to the console. This can be changed by supplying a callback function.
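The flush and retry rules described above can be sketched as a small model (a paraphrase of the stated behaviour, not the library's actual code):

```python
# Model of the documented batching/retry policy:
#  - flush a bulk when 100 messages accumulate or 10 seconds pass;
#  - on failure, wait 2 s before retrying, doubling each time, up to 3 attempts.
BULK_SIZE = 100
FLUSH_INTERVAL_SEC = 10
BASE_RETRY_SEC = 2
MAX_ATTEMPTS = 3

def should_flush(buffered_count, seconds_since_last_flush):
    """True when either the size limit or the time limit is reached."""
    return (buffered_count >= BULK_SIZE
            or seconds_since_last_flush >= FLUSH_INTERVAL_SEC)

def retry_delays():
    """Backoff delays before each retry attempt: 2 s, 4 s, 8 s."""
    return [BASE_RETRY_SEC * 2 ** i for i in range(MAX_ATTEMPTS)]
```

Because flushing is async, new messages keep accumulating in a fresh buffer while a failed bulk works through its retry schedule.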