
Yearly archives "2017"


Graphite & Riemann

There are several ways of collecting runtime metrics out of your software. We've already discussed Munin and Datadog, and could mention Collectd as well, although these solutions mostly aim at system monitoring, as opposed to distributed systems.

Business Intelligence may require collecting metrics from a cluster of workers and aggregating them into comprehensive graphs, so that short-lived instances won't imply an ever-growing collection of distinct graphs.


Riemann is a JVM-based service (written in Clojure), collecting metrics over TCP or UDP, and serving a simple web interface generating dashboards that display your metrics as they're received. Configuring Riemann, you would be able to apply transformations and filtering to your input, … You may find a quickstart here, or something more exhaustive over there. A good starting point could be to keep it simple:

(logging/init {:file "/var/log/riemann/riemann.log"})
(let [host "0.0.0.0"]
  (tcp-server {:host host})
  (ws-server {:host host}))
(periodically-expire 5)
(let [index (index)]
  (streams
    (default :ttl 60
      index
      (expired (fn [event] (info "expired" event))))))

Riemann Sample Dashboard

Despite being usually suspicious of Java applications or Ruby web services, I tend to trust Riemann even under heavy workload (tens of collectd instances, forwarding hundreds of metrics per second).

Riemann's dashboard may look unappealing at first, although you should be able to build your own monitoring screen relatively easily. Then again, this requires a little practice, and some basic sense of aesthetics.


Graphite Composer

Graphite is a Python web service providing a minimalist yet pretty powerful browser-based client that allows you to render graphs. A basic Graphite setup usually involves an SQLite database storing your custom graphs and users, as well as another Python service, Carbon, storing metrics. Such a setup also commonly involves Statsd, a NodeJS service listening for metrics, although depending on what you intend to monitor, you might find your way into writing to Carbon directly.
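For the record, writing to Carbon directly boils down to its plaintext protocol, one metric per line over TCP (port 2003 by default). A minimal sketch, the metric name and hostname being placeholders:

# Carbon plaintext protocol: "<metric.path> <value> <unix-timestamp>"
echo "myapp.accounts.active 42 $(date +%s)" | nc -q1 graphite.example.com 2003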

Setting up Graphite on Debian Stretch may be problematic, due to some Python packages being deprecated, while the latest Graphite packages aren't available yet. After unsuccessfully trying to pull copies from pip instead of APT, I eventually ended up setting up my first production instance based on Devuan Jessie. The setup process would drastically vary based on distribution, versions, your web server, Graphite database, … Should you go there: consider all options carefully before starting.

Graphite could be used as is: the Graphite Composer would let you generate and save graphs, aggregating any collected metric, while the Dashboard view would let you aggregate several graphs into a single page. Note that you could also use Graphite as a data source for Grafana, among others.


From there, note Riemann can be reconfigured to forward everything to Graphite (or only what matches your own filters), adding to your riemann.config:

(def graph (graphite {:host "10.71.100.164"}))
(streams graph)
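Should you rather forward a subset of your streams, a minimal sketch filtering on service names (the myapp. prefix being a placeholder) could look like:

; only forward events whose service name matches a given prefix
(def graph (graphite {:host "10.71.100.164"}))
(streams
  (where (service #"^myapp\.")
    graph))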

This should allow you to run Graphite without Statsd, with Riemann collecting metrics from your software and forwarding them into Carbon.


The next step would be to configure your applications to forward data to Riemann (or Statsd, should you want to only use Graphite). Databases like Cassandra or Riak could forward some of their own internal metrics, using the right agent. And you may collect BI metrics from your own code.

Graphite Dashboard

Using NodeJS, you will find a riemannjs module that does the job, or node-statsd-client for Statsd.
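As a sketch, roughly following the riemannjs README – host, port and metric name below are placeholders, double-check against the module's own documentation:

// minimal sketch: send a single event to Riemann over TCP
var riemann = require('riemannjs');
var client = riemann.createClient({ host: 'riemann.example.com', port: 5555 });

client.on('connect', function() {
  client.send(client.Event({
    service: 'myapp.accounts.active',  // hypothetical metric name
    metric: 42,
    tags: ['bi']
  }), client.tcp);
  client.disconnect();
});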

Having added some scheduled tasks to our code, querying how many accounts we have, how many are closed, how many were active during the last day, week and month, … I was eventually able to create a dashboard based on saved graphs, aggregating metrics in some arguably-meaningful fashion.

m242/Maildrop

A couple days ago, a colleague of mine asked me to set up our own Maildrop service, as served by maildrop.cc. Some of you may also be familiar with Mailinator, which offers a similar service.

Not to be confused with Maildrop (the popular MDA, as distributed in Debian packages, among others), m242/Maildrop is based on Scala, Java and a Redis queue. The project is divided into two workers connecting to Redis.

maildrop

An SMTP worker processes inbound messages, eventually writing them to Redis. It listens on 127.0.0.1:25000 by default; nginx may be used to proxy traffic to it.
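A minimal nginx sketch, assuming a build including the stream module, could look like:

# top-level nginx.conf block: expose the SMTP worker on port 25
stream {
    server {
        listen 25;
        proxy_pass 127.0.0.1:25000;
    }
}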

Meanwhile, the HTTP worker serves clients with any mailbox – no authentication required. Developers may write tests checking some arbitrary mailbox through an HTTP API.

As you may guess, both workers and the database may be scaled horizontally. Although, this being a pretty specific implementation, you probably won't need to.

Their GitHub project isn't very active, sadly, with a few issues piling up, two of which I've been able to post pull requests for. Then again, getting it working isn't much complicated, and it may prove pretty useful testing for regressions.

Lying DNS

As I was looking at alternative web browsers with privacy in mind, I ended up installing Iridium (in two words: Chrome fork), tempted to try it out. I first went to YouTube, and after my first video was subjected to one of these advertisements sneaking in between videos, with that horrible “skip ad” button showing after a few seconds, … I was shocked enough to open Chromium back.

This is a perfect occasion to discuss lying DNS servers, and how having your own cache can help you clean up your web browsing experience from at least part of these unsolicited intrusions.

We would mainly hear about lying DNS servers (or caches, actually) alongside Internet censorship, whenever an operator either refuses to serve DNS records or diverts them somewhere else.
Yet some of the networks I manage rely on DNS resolvers that purposefully prevent some records from being resolved.

You may query Google for lists of domain names likely to host irrelevant content. These are traditionally formatted as a hosts file, and may be used as is on your computer, assuming you won't need to serve these records to other stations in your LAN.
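As a reminder, the hosts-file format boils down to a sinkhole address followed by the unwanted name, the domain below being a placeholder:

# /etc/hosts entry, resolving an unwanted domain to a sinkhole address
0.0.0.0 ads.example.com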

Otherwise, servers such as Dnsmasq, Unbound or even Bind (using RPZ, although I would recommend sticking to Unbound) may be configured to overwrite records for a given list of domains, as sketched below.
Currently, I trust a couple sites to provide me with a relatively exhaustive list of unwanted domain names, using a script to periodically re-download these, while merging them with my own blacklist (see my puppet modules set for further details on how I configure Unbound). I wouldn't recommend this to anyone: using dynamic lists is risky to begin with … Then again, I wouldn't recommend a beginner set up their own DNS server either: editing your own hosts file could be a start, though.
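With Unbound, overriding such records boils down to a pair of declarations per domain; a minimal sketch, with the same placeholder domain:

# unbound.conf snippet: answer 0.0.0.0 for a blacklisted zone and its subdomains
server:
  local-zone: "ads.example.com." redirect
  local-data: "ads.example.com. A 0.0.0.0"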

Instead of yet another conf dump, I'd rather point to a couple posts I used when setting it up: the first one from Calomel (an awesome source of sample configurations running on BSD, check it out, warmest recommendations …) and another one from a blog I didn't remember about – although I've probably found it at some point in the past, considering we pull domain names from the same sources…

Wazuh

As a follow-up to our previous OSSEC post, and to complete the one on Fail2ban & ELK, we'll review Wazuh today.

netstat alerts

As their documentation states, “Wazuh is a security detection, visibility, and compliance open source project. It was born as a fork of OSSEC HIDS, later was integrated with Elastic Stack and OpenSCAP evolving into a more comprehensive solution”. We may remark that OSSEC packages used to be distributed on some Wazuh repository, while Wazuh is still listed as OSSEC's official training, deployment and assistance services provider. You might still want to clean up some defaults, as you would soon end up receiving notifications for any connection being established or closed …

OSSEC is still maintained: the last commit to their GitHub project was a couple days ago as of writing this post, while other commits are being pushed to the Wazuh repository. Even though both products are still active, my last attempt configuring Kibana integration with OSSEC was a failure, due to Kibana 5 not being supported. Considering Wazuh offers enterprise support, we could assume their sample configuration & ruleset are at least as relevant as those you'd find with OSSEC.

wazuh manager status

Wazuh documentation is pretty straightforward: a new service, wazuh-api (NodeJS), is required on your managers, and is then used by Kibana querying Wazuh status. Debian packages were renamed from ossec-hids & ossec-hids-agent to wazuh-manager & wazuh-agent respectively. Configuration is somewhat similar, although you won't be able to re-use those you could have installed alongside OSSEC. Note the wazuh-agent package installs an empty key file: you would need to drop it prior to registering against your manager.
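As a sketch, assuming the registration service (authd) runs on your manager, enrolling an agent could look like this, the manager FQDN being a placeholder:

# on the agent: drop the empty key file shipped with the package,
# then request a key from the manager and restart the agent
rm -f /var/ossec/etc/client.keys
/var/ossec/bin/agent-auth -m manager.example.com
service wazuh-agent restart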


wazuh agents

Configuring the Kibana integration, note Wazuh documentation misses some important detail, as reported on GitHub. That's the single surprise I had reading through their documentation; the rest of their instructions work as expected: having installed and started the wazuh-api service on your manager, then installed the Kibana wazuh plugin on all your Kibana instances, you should find some Wazuh menu showing on the left. Make sure your wazuh-alerts index is registered in the Management section, then go to Wazuh.

If uninitialized, you are offered to enter your Wazuh backend URL, a port, a username and the corresponding password, connecting to wazuh-api. Note that this configuration is saved into some new .wazuh index. Once configured, you get some live view of your setup: which agents are connected, what alerts you're receiving, … and may eventually set up new dashboards.

Compared to OSSEC's PHP web interface, which has been marked as deprecated for years, … Wazuh takes the lead!

CIS compliance

OSSEC alerts

Wazuh Overview

PCI Compliance

HighWayToHell

Quick post promoting HighWayToHell, a project I posted to GitHub recently, aiming to provide a self-hosted Route53 alternative, including DNSSEC support.

Assuming you may not be familiar with Route53: the main idea is to generate DNS zone configurations based on conditionals.

edit health check

We would then try to provide a lightweight web service to manage DNS zones, their records, health checks and notifications. Contrarily to Route53, we implement DNSSEC support.

HighWayToHell distribution

HighWayToHell works with a Cassandra cluster storing persistent records, and at least one Redis server (pub/sub, job queues, ephemeral tokens). Operations are split across four workers: one in charge of running health checks; another one sending notifications based on health checks' last returned values and user-defined thresholds; a third one in charge of generating DNS (Bind or NSD) zone include files and zone configurations; the last one implementing an API gateway providing a lightweight web app.

Theoretically, it could all run on one server, although hosting a DNS setup, you’ll definitely want to involve at least a pair of name servers, and probably want to use separate instances running your web frontend, or dealing with health checks.

list records

Having created your first account and registered your first domain, you would be able to define your first health checks and DNS records.

add record

update record

You may grant third-party users specific roles, accessing resources from your domains. You may enable 2FA on your accounts using apps such as Authy. You may create and manage tokens – to be used alongside our curl-based shell client, …

delegate management

This is a couple-weeks-old project I didn't have much time to work on, yet it should be exhaustive and reliable enough to fulfill my original expectations. Eventually, I'll probably add an API-less management CLI: there still is at least one step, bootstrapping your database, that requires inserting records manually, …

Curious to know more? See our QuickStart docs!

Any remark, bug report or PR is most welcome. Especially CSS contributions – as this is one of the rare topics I can't bear having to deal with.

Ceph Luminous – 12

In the last few days, Ceph published Luminous 12.1.1 packages to their repositories, a release candidate of their future LTS. Having had bad experiences with their previous RC, I gave it a fresh look, dropping ceph-deploy and writing my own Ansible roles instead.

Small typo displaying Ceph Health, if you can notice it

Noticeable changes coming with Luminous include CephFS being -allegedly- stable. I didn't test it myself yet, although I've been hearing about that feature being unstable since my first days testing Ceph, years ago.

RadosGateway multisite sync status

Another considerable change that showed up, and is now considered stable, is BlueStore, a replacement implementation of Ceph FileStore (which relies on ext4, XFS or Btrfs partitions). The main change being that Ceph object storage processes no longer mount a large filesystem storing their data. Beware that recovery scripts reconstructing block devices by scanning for PG contents in OSD filesystems would no longer work – it is yet unclear how disaster recovery would work, recovering data from an offline cluster. No surprises otherwise, so far so good.

Also advertised: the RBD-mirror daemon (introduced lately) is now considered stable running in HA. From what I've seen, it is yet unclear how to configure HA starting several mirrors in a single cluster – I very much doubt this would work out of the box. We'll probably have to wait a little longer for Ceph documentation to reflect the latest changes introduced on that matter.

As I had two Ceph clusters at hand, I could confirm RadosGW multisite configuration works perfectly. Now that's not a new feature, still it's the first time I actually set it up. Buckets are eventually replicated to the remote cluster. Documentation is way more exhaustive and works as advertised: I'll stick to this, until we learn more about RBD mirroring.

Querying for the MON commands

Ceph RestAPI Gateway

Freshly introduced: the Ceph RestAPI Gateway. Again, we're missing some docs yet. On paper, this service should allow you to query your cluster as you would with Ceph CLI tools, via a RESTful API. Having set one up, it isn't much complicated – I would recommend not using their built-in webserver, and serving it through nginx and uwsgi instead. The basics on that matter can be found on GitHub.

Ceph health turns to warning, watch for un-scrubbed PGs

Even though Ceph Luminous shouldn't reach LTS before their 12.2.0 release, as of today I can confirm Debian Stretch packages are working relatively well on a 3-MON, 3-MGR, 3-OSD, 2-RGW setup with some haproxy balancer, serving s3-like buckets. Although note there is some weirdness regarding PG scrubbing: you may need to add a cron job, as sketched below … And if you consider running Ceph on commodity hardware, consider that their latest releases may be broken.
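A minimal sketch of such a job, assuming it runs from a node holding the admin keyring, and that periodically forcing deep-scrubs suits your workload:

#!/bin/sh
# hypothetical cron job: ask every OSD to deep-scrub its placement groups
for osd in $(ceph osd ls); do
    ceph osd deep-scrub "$osd"
done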

Ceph Dashboard

edit: LTS released as of late August: SSE4.2 support is still mandatory deploying your Luminous cluster, although a fix recently reached their master branch, …

As of late September, the Ceph 12.2.1 release can actually be installed on older, commodity hardware.
Meanwhile, a few screenshots of Ceph Dashboard were posted to the Ceph website, advertising that new feature.

KeenGreeper

Short post today, introducing KeenGreeper, a project I freshly released to GitHub, aiming to replace Greenkeeper 2.

For those of you who may not be familiar with Greenkeeper: they provide a public service that integrates with GitHub, detects your NodeJS projects' dependencies and issues a pull request whenever one of these may be updated. In the last couple weeks, they started advertising their original service being scheduled for shutdown, while Greenkeeper 2 was supposed to replace it. Until last week, I had only been using Greenkeeper's free plan, allowing me to process a single private repository. Migrating to Greenkeeper 2, I started registering all my NodeJS repositories, and only found out 48h later that it would cost me $15/month once they would have definitely closed the former service.

Considering that they advertise on their website that the new version offers some shrinkwrap support, while in practice outdated wrapped libraries are just left to rot, … I figured writing my own scripts collection would definitely be easier.

keengreeper lists repositories, ignored packages, cached versions

That first release addresses the basics: adding or removing repositories to some tracking list; adding or removing modules to some ignore list, preventing their being upgraded; resolving the latest version of a package from NPMJS repositories, and keeping these in a local cache. It works with GitHub as well as any ssh-capable git endpoint. Shrinkwrapped libraries are actually refreshed, if present. Whenever an update is available, a new branch is pushed to your repository, named after the dependency and corresponding version.

refresh logs

Having registered your first repositories, you may run the update job manually and confirm everything works. Eventually, set up a cron task executing our job script.

Note that right now, it is assumed the user running these scripts has access to an SSH key granting it write privileges to our repositories, pushing new branches. Now ideally, either you run this from your laptop and can assume using your own private key is no big deal, or I would recommend creating a third-party GitHub user with its own SSH key pair, and granting it read & write access to the repositories you intend to have it work with.
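For the record, a minimal sketch of that dedicated key setup, paths and names being illustrative:

# generate a dedicated key pair for the bot user
ssh-keygen -t ed25519 -f ~/.ssh/keengreeper -C "keengreeper bot"
# present that key when talking to GitHub
cat >> ~/.ssh/config <<'EOF'
Host github.com
    IdentityFile ~/.ssh/keengreeper
EOF
# then add ~/.ssh/keengreeper.pub to the third-party account,
# granting it read & write access to the tracked repositories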

repositories listing

Being a 1-day DIY project, any feedback is welcome; GitHub is a good place to start, …

PM2

systemctl integration

PM2 is a NodeJS process manager. Started in 2013, PM2 is currently the 82nd most popular JavaScript project on GitHub.


PM2 is portable, working on Mac or Windows as well as Unices, and integrates with Systemd, UpStart, Supervisord, … Once installed, PM2 works as any other service.


The first advantage of introducing PM2, whenever dealing with NodeJS services, is that you register your code as a child service of your main PM2 process. Eventually, you are able to reboot your system and recover your NodeJS services without adding any scripts of your own.
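A minimal sketch of that workflow, the application name being illustrative:

npm install -g pm2            # install PM2 globally
pm2 start app.js --name api   # register your service with PM2
pm2 save                      # persist the current process list
pm2 startup                   # print the init integration command for your system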

pm2 processes list

Another considerable advantage of PM2 is that it introduces clustering capabilities, most likely without requiring any change to your code. Depending on how your application is built, you would probably want at least a couple processes for each service that serves user requests, while you can see I'm only running one copy of my background or scheduled processes:


nodejs sockets all belong to a single process

Having enabled clustering while starting my foreground, inferno or filemgr processes, PM2 starts two instances of my code. Now one could suspect that, with express configured to listen on a static port, starting two processes of that code would fail with some EADDRINUSE error. And that's most likely what would happen when instantiating your code twice yourself. Yet when starting such a process with PM2, the latter takes over socket operations.

And since PM2 is processing network requests during runtime, it is able to balance traffic across your workers. Moreover, when deploying new revisions of your code, you may gracefully reload your processes, one by one, without disrupting service at any point.
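Again as a sketch, names being illustrative:

pm2 start app.js --name api -i 2   # -i 2: cluster mode, two instances sharing the socket
pm2 reload api                     # graceful, one-by-one reload of these instances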

pm2 process description

PM2 also allows tracking per-process resource usage.
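Two built-in commands cover that need; the process name below is illustrative:

pm2 monit      # live CPU & memory view of every managed process
pm2 show api   # detailed description of a single process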

Last but not least, PM2 provides a centralized dashboard. Full disclosure: I only enabled this feature on one worker, once, and wasn't much convinced, as I'm running my own supervision services, … Then again, you could be interested, if running a small fleet, or if ready to invest in Keymetrics premium features. It's worth mentioning that your PM2 processes may be configured to forward metrics to Keymetrics services, so you may keep an eye on memory or CPU usage, application versions, or even configure reporting, notifications, … In all fairness, I remember their dashboard looked pretty nice.

pm2 keymetrics dashboard, as advertised on their site

Devuan

devuan-logo

In many ways, Devuan is very similar to the OS it was forked from: Debian. Most of its packages come directly from Debian repositories; only a few (381 as of today) were rebuilt for Devuan, wiping systemd from their dependencies.

The project started back in late 2014, as systemd screwed its way into Debian – and most common Linux distros. Systemd is accused of being more than a replacement for init, as it takes on parts of the boot process and runtime operations, contradicting the UNIX philosophy, and aggressively integrating itself into Linux core components. Systemd implies more reboots, targets desktop users, left out non-Linux kernel users (Debian had to drop their kfreebsd architecture), … and is known to be buggy, when not described as broken by design, or identified as a trojan.

On the other hand, Poettering is backed by his employer, Red Hat. Systemd is not the first pile of crap we could find in most package managers, after PulseAudio or Avahi (the ones you should usually blacklist). No surprise then, that Poettering is immune to criticism, and can pretty much follow his ideas, ignoring complaints from the community he's dismantling.

In this context, a group of Debian users and contributors, identifying themselves as “Veteran Unix Admins”, organized and eventually came up with Devuan. So far, their objective is to drop systemd dependencies from Debian Jessie, and re-build a community around what made Debian's strengths, while abiding by the UNIX philosophy.

The first time I read about Devuan, their web page was pretty ugly and minimalist. Last week-end, I checked back and noticed they had released their second Beta: time to give it a look.

PXE install works exactly as we've been used to with Debian. So far, I've been setting up a KVM server, an OpenLDAP, an Apache-based reverse proxy, a DHCP, a Squid proxy, a TFTP server, … there's virtually no difference from Debian. The few things I could note being the devuan-baseconf and devuan-keyring packages, or the /etc/devuan_release file.

My only complaint so far is that some packages still ship with systemd service configurations (acpid, apache2, apt-cacher-ng, cron, munin-node, nginx, rsync, rsyslog, ssh, udev or unattended-upgrades files in /lib/systemd/system). This being a Beta release, I'm satisfied enough confirming that all these processes work perfectly using scripts from /etc/init.d or /usr/sbin/service. Although for a first “stable” release, I would very much like to see a “clean” copy of Debian.

Fail2ban & ELK

Following up on a previous post regarding Kibana and the recent ELK5 release, today we'll configure some map visualizing hosts as Fail2ban blocks them.

Having installed Fail2ban and configured the few jails that are relevant for your system, look for the Fail2ban log file path (the logtarget variable, defined in /etc/fail2ban/fail2ban.conf, defaults to /var/log/fail2ban.log on Debian).
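For reference, the relevant line reads:

# /etc/fail2ban/fail2ban.conf
logtarget = /var/log/fail2ban.log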

The first thing we need is to have Rsyslog processing our Fail2ban logs. Here, we would use Rsyslog's file input module (imfile), and force using FQDNs instead of short hostnames.

$PreserveFQDN on
module(load="imfile" mode="polling" PollingInterval="10")
input(type="imfile"
  File="/var/log/fail2ban.log"
  statefile="/var/log/.fail2ban.log"
  Tag="fail2ban: "
  Severity="info"
  Facility="local5")

Next, we'll configure Rsyslog to forward messages to some remote syslog proxy (which will, in turn, forward its messages to Logstash). Here, I usually rely on Rsyslog's RELP protocol (which may require installing the rsyslog-relp package), as it addresses some UDP flaws, without shipping with traditional TCP syslog limitations.

Relaying syslog messages to our syslog proxy: load the RELP output module, then make sure your Fail2ban logs will be relayed.

$ModLoad omrelp
local5.* :omrelp:rsyslog.example.com:6969

Before restarting Rsyslog, if it's not already the case, make sure the remote system accepts your logs: you would need to load Rsyslog's RELP input module. Make sure rsyslog-relp is installed, then add to your rsyslog configuration:

$ModLoad imrelp
$InputRELPServerRun 6969

You should be able to restart Rsyslog on both your Fail2ban instance and syslog proxy.

Relaying messages to Logstash, we would use JSON messages instead of traditional syslog formatting. To configure Rsyslog to forward JSON messages to Logstash, we would use the following:

template(name="jsonfmt" type="list" option.json="on") {
    constant(value="{")
    constant(value="\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"@version\":\"1")
    constant(value="\",\"message\":\"") property(name="msg")
    constant(value="\",\"@fields.host\":\"") property(name="hostname")
    constant(value="\",\"@fields.severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"@fields.facility\":\"") property(name="syslogfacility-text")
    constant(value="\",\"@fields.programname\":\"") property(name="programname")
    constant(value="\",\"@fields.procid\":\"") property(name="procid")
    constant(value="\"}\n")
  }
local5.* @logstash.example.com:6968;jsonfmt

Configuring Logstash to process Fail2ban logs, we first need to define a few patterns. Create some /etc/logstash/patterns directory. In there, create a file fail2ban.conf with the following content:

F2B_DATE %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[ ]%{HOUR}:%{MINUTE}:%{SECOND}
F2B_ACTION (\w+)\.(\w+)\[%{POSINT:pid}\]:
F2B_JAIL \[(?<jail>\w+\-?\w+?)\]
F2B_LEVEL (?<level>\w+)\s+

Then, configuring Logstash to process these messages, we would define an input dedicated to Fail2ban. Having tagged Fail2ban events, we will apply a Grok filter identifying blocked IPs and adding geo-location data. We'll also include a sample output configuration, writing to ElasticSearch.

input {
  udp {
    codec => json
    port => 6968
    tags => [ "firewall" ]
  }
}

filter {
  if "firewall" in [tags] {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => {
        "message" => [
          "%{F2B_DATE:date} %{F2B_ACTION} %{F2B_LEVEL:level} %{F2B_JAIL:jail} %{WORD:action} %{IP:ip} %{GREEDYDATA:msg}?",
          "%{F2B_DATE:date} %{F2B_ACTION} %{WORD:level} %{F2B_JAIL:jail} %{WORD:action} %{IP:ip}"
        ]
      }
    }
    geoip {
      source => "ip"
      target => "geoip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  if "_check_logstash" not in [tags] and "_grokparsefailure" not in [tags] {
    elasticsearch {
      hosts => [ "elasticsearch1.example.com", "elasticsearch2.example.com", "elasticsearch3.example.com" ]
      index => "rsyslog-%{+YYYY.MM.dd}"
    }
  }
  if "_grokparsefailure" in [tags] {
    file { path => "/var/log/logstash/failed-%{+YYYY-MM-dd}.log" }
  }
}

Note that while I would have recommended using RELP inputs last year, running Logstash 2.3, as of Logstash 5 this plugin is no longer available. Hence my recommendation to set up some Rsyslog proxy on your Logstash instance, in charge of relaying RELP messages as UDP ones to Logstash, through your loopback.

Moreover, assuming you need to forward messages over a public or un-trusted network, the Rsyslog RELP modules could be used with Stunnel encapsulation. Whereas running Debian, RELP output with TLS certificates does not seem to work as of today.

That being said, before restarting Logstash, if you didn't already, make sure to define a template setting the geoip type to geo_point (otherwise it shows up as a string, and won't be usable for defining maps). Create some index.json file with the following:

{
  "mappings": {
    "_default_": {
      "_all": { "enabled": true, "norms": { "enabled": false } },
      "dynamic_templates": [
        { "template1": { "mapping": { "doc_values": true, "ignore_above": 1024, "index": "not_analyzed", "type": "{dynamic_type}" }, "match": "*" } }
      ],
      "properties": {
        "@timestamp": { "type": "date" },
        "message": { "type": "string", "index": "analyzed" },
        "offset": { "type": "long", "doc_values": "true" },
        "geoip": { "type": "object", "dynamic": true, "properties": { "location": { "type": "geo_point" } } }
      }
    }
  },
  "settings": { "index.refresh_interval": "5s" },
  "template": "rsyslog-*"
}

kibana search

Post your template to ElasticSearch

root@logstash# curl -XPUT http://elasticsearch.example.com:9200/_template/rsyslog?pretty -d@index.json

You may now restart Logstash. Check logs for potential errors, …

kibana map

Now to configure Kibana: start by searching for Fail2ban logs. Save your search, so it can be re-used later on.

Then, in the Visualize panel, create a new Tile Map visualization, and pick the search query you just saved. You're being asked to select a bucket type: click on Geo Coordinates. Insert a Label, and click the Play icon to refresh the sample map on your right. Save your visualization once you're satisfied: these may be re-used defining dashboards.