Sensu event handlers take action on Sensu events (produced by check results), such as sending an email alert or storing metrics in InfluxDB. There are several types of handlers: pipe, tcp, udp, transport, and set.

TCP and UDP handlers send the event data to a remote socket. A UDP handler is advisable when you have several dozen hosts to monitor: with the UDP protocol, the load on the Sensu server is practically non-existent. This is what we are going to use in this article.

So we are going to see how to send metrics from Sensu into InfluxDB. To do that, the Sensu handler will send all the data collected by Sensu into the InfluxDB database.


InfluxDB Prerequisite

For those who haven’t heard about it yet, InfluxDB is a time series, metrics, and analytics database. Time series databases are designed to address the problem of storing data resulting from successive measurements made over a period of time. It’s designed to support horizontal as well as vertical scaling and, best of all, it’s not written in Java — it’s written in Go.

To manage InfluxDB, we have two choices:
– the Web Admin Interface (port 8083)
– the Command Line Interface (CLI), which talks to the HTTP API (port 8086)

Like any good DevOps engineer, we are going to do the job via the CLI :), but you can also connect to the web UI to see how it looks: http://influxdb-host-name:8083
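
Note that the CLI itself just talks to the HTTP API, which you can also query directly, for example with curl (a quick sketch, assuming the default port and the host name used above):

curl -G 'http://influxdb-host-name:8086/query' --data-urlencode "q=SHOW DATABASES"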

To start the CLI, just execute the influx command on the InfluxDB server. It should look like this:

$ influx
Connected to http://localhost:8086 version 0.9
InfluxDB shell 0.9
>

For interacting with InfluxDB, the query language used is InfluxQL. It's a SQL-like language, so those who are familiar with SQL should find it easy to pick up.
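
For example, pulling a few points out of a hypothetical cpu_load measurement reads almost exactly like SQL:

> SELECT * FROM cpu_load LIMIT 5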

When InfluxDB is first set up, it does not contain any databases besides the internal one, so we need to create the database that will ultimately store our metrics.

To do that, use the CREATE DATABASE <db-name> InfluxQL statement, where <db-name> is the name of the database you wish to create.

> CREATE DATABASE sensu
>
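
You can check that the database was created with SHOW DATABASES; the output should look something like this:

> SHOW DATABASES
name: databases
---------------
name
_internal
sensu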

Then we need to enable and configure the UDP section in InfluxDB's configuration file. In order to get better performance and avoid dropped metrics, there are also a few lines to uncomment; this enables InfluxDB's batching buffer. See the UDP section below:

[[udp]]
  enabled = true
  bind-address = ":8090"
  database = "sensu"
  # retention-policy = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  batch-size = 1000 # will flush if this many points get buffered
  batch-pending = 5 # number of batches that may be pending in memory
  batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
  read-buffer = 0 # UDP read buffer size, 0 means OS default. UDP listener will fail if set above OS max.

Finally, restart the InfluxDB service and we should be ready to receive metrics.
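
How you restart it depends on your init system; on most distributions one of the following should work:

sudo systemctl restart influxdb
# or, with older init systems:
sudo service influxdb restart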
So now let’s configure the Sensu handler.

Sensu Event Configuration

Requirements

The influxdb rubygem needs to be installed in Sensu's Ruby environment in order to handle writing the data. Note that Sensu ships its own embedded Ruby, so having the gem available only in the system Ruby won't work.
E.g. /opt/sensu/embedded/bin/gem install influxdb
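
You can verify that the gem landed in the embedded Ruby rather than the system one:

/opt/sensu/embedded/bin/gem list influxdb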


There is only one new file to create in your Sensu configuration directory, and that is the mutator extension script:

/etc/sensu/extensions/influxdb_line_protocol.rb

#!/usr/bin/env ruby

require "sensu/extension"

module Sensu
  module Extension
    class InfluxDBLineProtocol < Mutator
      def name
        "influxdb_line_protocol"
      end

      def description
        "returns check output formatted for InfluxDB's line protocol"
      end

      def run(event)
        host = event[:client][:name]
        ip = event[:client][:address]
        metric = event[:check][:name]
        output = event[:check][:output]

        data = []
        # Each line of Graphite-formatted output is "key value timestamp"
        output.split("\n").each do |result|
          m = result.split
          next unless m.count == 3
          # Drop the scheme prefix (the client name) from the key,
          # then flatten the remaining dots into underscores
          key = m[0].split('.', 2)[1]
          next if key.nil?
          key.gsub!('.', '_')
          value = m[1].to_f
          # Right-pad the epoch-seconds timestamp to 19 digits, turning it
          # into the nanosecond precision InfluxDB expects
          time = m[2].ljust(19, '0')
          data << "#{key},host=#{host},ip=#{ip},metric=#{metric} value=#{value} #{time}"
        end

        # Hand the reformatted event data back to Sensu (0 = success)
        yield data.join("\n"), 0
      end
    end
  end
end

What this script actually does is format: it parses each line of the check output and rewrites it in InfluxDB's line protocol. The UDP handler we define next is what sends the result to the InfluxDB database.
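
To make that concrete, here is a hypothetical Graphite-formatted line from a check's output, and the line-protocol string the mutator would produce for it (host name, IP address, and values are made up):

# input line in the check output:
web-01.load_avg.one 0.89 1434055562
# resulting line sent to InfluxDB:
load_avg_one,host=web-01,ip=10.0.0.5,metric=load_metrics value=0.89 1434055562000000000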

Next, we have to define the handler that uses this mutator by editing the following JSON file:

/etc/sensu/conf.d/handlers.json

{
    "handlers": {
        "default": {
            "type": "set",
            "handlers": ["logstash", "influxdb"]
        },
        "logstash": {
            "type": "pipe",
            "command": "/etc/sensu/handlers/notification/logstash.rb"
        },
        "influxdb": {
            "type": "udp",
            "mutator": "influxdb_line_protocol",
            "socket": {
                "host": "dockercompose_influxdb_1",
                "port": 8090
            }
        }
    }
}

Note: to fill in this file, you can refer to the Sensu documentation

As you can see, I have another handler already defined (logstash). Grouping them in a set handler is a way to send the same event data to multiple backends.

And finally, configure your checks with the InfluxDB handler. In my case, all my checks are defined with the 'default' handler, so all my check results are sent to both Logstash and InfluxDB.

{
    "checks": {
        "load_metrics": {
            "type": "metric",
            "command": "/opt/sensu/embedded/bin/ruby /etc/sensu/plugins/linux/load-metrics.rb --scheme `grep name /etc/sensu/conf.d/client.json | cut -d\\\" -f4`",
            "interval": 60,
            "subscribers": ["linux"],
            "handlers": ["default"]
        },
...

Now restart sensu-server and you should see several metrics in the sensu database. To see the data, run the following query on the sensu database via the web admin interface:

select * from /.*/ order by time desc limit 3

[Screenshot: query results in the InfluxDB web admin]

Normally, nothing should appear in the Sensu event log. To watch the raw events being sent to InfluxDB, you can run this command on the InfluxDB server (note that only one process can bind the UDP port at a time, so stop InfluxDB first or point the handler at a spare port):

nc -u -l 8090

Display Data on Grafana

Set InfluxDB as Datasource

In this step, we will add our InfluxDB database as a data source in Grafana.

To add the data source, open the side menu by clicking the Grafana icon in the top header. In the side menu, click Data Sources. Click on the Add New link in the top header to bring up the data source definition screen.

[Screenshot: Grafana data source definition screen]

Populate this screen using your settings and test the connection to your database.
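
If you prefer to script this step, Grafana also exposes an HTTP API for managing data sources. A minimal sketch, assuming default admin credentials and the placeholder host names used in this article:

curl -X POST http://admin:admin@grafana-host:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name": "sensu", "type": "influxdb", "url": "http://influxdb-host-name:8086", "access": "proxy", "database": "sensu"}'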

Build a Dashboard

Finally, you are ready to build your own dashboard with beautiful graphs.

[Screenshot: Grafana dashboard graph]

[Screenshot: Grafana query editor for the graph panel]
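
As a starting point, a typical graph panel query against the sensu database might look like the following, where load_avg_one is the illustrative measurement name from earlier, and $timeFilter / $interval are template variables that Grafana fills in for you:

SELECT mean(value) FROM "load_avg_one" WHERE $timeFilter GROUP BY time($interval)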

Conclusion

In this article, we have seen how to take the data collected by the Sensu engine, inject it into an InfluxDB database via a Sensu event handler, and display it in pretty graphs in Grafana.

We have shown that the stack of these three systems (Sensu, InfluxDB, and Grafana) works together correctly and is ready to be put into a production environment.

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.