No, we are not talking about the strange noises you can hear every day in our office. We are talking about the acronym that stands for Elasticsearch-Logstash-Kibana. This stack is well known among DevOps engineers around the world, and we also want to use it to keep track of what is happening on our servers.
We considered what we want to do and what we need, and decided to go with an available ELK Docker image and deploy it on our main system. Since we don’t have our own intranet, we want to secure the communication between the Logstash input and our logging satellites via SSL. We want to use the lumberjack protocol for this, and our Logstash server should reject any connection that does not present the certificate we distribute. This is not yet in the latest release of the lumberjack server and lumberjack input, so we use a patched Logstash 1.4.2. We applied the following patches with some small adjustments:
The first requirement is to tell the Logstash server that it has to reject invalid client certificates. For that, we need to extend the configuration of the lumberjack input, which is done with this pull request:
All credit goes to Patrick Flaherty!
Now we can configure it like this:

```
input {
  lumberjack {
    host => "0.0.0.0"
    port => 8001
    ssl_certificate => "path to .crt"
    ssl_key => "path to .key"
    ssl_cacert => "path to .crt"
    ssl_client_cert_check => true
  }
}
```
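The certificate paths in the configuration are placeholders. For a setup like ours, where the server and the satellites share one certificate, a self-signed certificate can be generated with OpenSSL; the filenames and the CN below are our own choices, not prescribed by Logstash:

```shell
# Generate a self-signed certificate plus private key, valid for one year.
# The same .crt file is used as ssl_certificate and ssl_cacert on the
# server, and is distributed to the logging satellites.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt \
  -days 365 \
  -subj "/CN=logstash.example.com"
```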
Perfect, now our configuration is passed to the server, but it doesn’t yet know what to do with the additional information. So let’s apply the second patch.
Thanks again to Patrick Flaherty; we can apply the following patch to the server instance:
Be careful: there are two small syntax errors, which are fixed by the comments provided at the corresponding lines. We applied this patch to
vendor/bundle/jruby/1.9/gems/jls-lumberjack-0.0.20/lib/lumberjack/server.rb
vendor/bundle/jruby/2.1/gems/jls-lumberjack-0.0.20/lib/lumberjack/server.rb
Now we just need to restart Logstash with the new config and lumberjack input. And voilà: connections will only be accepted if the client presents the certificate configured in our config.
Now that we can establish secure connections, we can deploy the Docker image on our machine. We don’t plan to include the authentication for Kibana in the image itself; instead, we just front it with nginx and a configured basic authentication.
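A minimal sketch of such an nginx front; the hostname, port, and file paths are assumptions for our setup, and the password file would be created beforehand with `htpasswd`:

```nginx
server {
    listen 80;
    server_name kibana.example.com;

    # Basic authentication in front of Kibana
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        # Kibana 4 listens on port 5601 inside the ELK container
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```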
There is also a plugin for Kibana which enables authentication and user-specific dashboards, but it was written for Kibana 3 and seems to have some issues with Kibana 4 (which we use).
You can follow the progress of the authentication plugin here.
node.js and hapi.js
Since we build our applications on the amazing hapi.js, we need a reporter for good, its logging framework, to push our logs directly to the Logstash input. We started our own small implementation, based on the good-reporter interface and the lumberjack protocol, here. If you find it useful, give us some feedback, or open a pull request if a feature you need is missing.
Now, not only are our logs sent to Logstash, but good also collects server information from which we can create Kibana reports to gain insight into the performance of our applications.
Our neighbours might be concerned about our ELK’ing, but after setting this up, it’s totally reasonable to scream out loud how awesome this setup is!