Monitor the health of your application infrastructure with Elasticsearch & Kibana


Elasticsearch is an open-source, distributed search and analytics engine commonly used for log analytics, full-text search, and operational intelligence. Kibana is a free, open-source data visualization tool that integrates tightly with Elasticsearch and is the default choice for visualizing data stored in it. 

How They Work Together  

Data is sent to Elasticsearch in the form of JSON documents, either via the Elasticsearch API or through ingestion tools such as Logstash or Amazon Kinesis Data Firehose. Elasticsearch stores each document and adds a searchable reference to it in the cluster's index, from which it can be retrieved using the Elasticsearch API. The data stored in Elasticsearch can then be used to easily set up dashboards and reports with Kibana, providing access to analytics and additional operational insights. 
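As a sketch of this flow, a single JSON document can be indexed directly through the Elasticsearch API with curl. The host is the same placeholder used later in this tutorial, and the index name, document type, and field values below are illustrative assumptions, not ElephantDrive's actual data (the `doc` type path assumes Elasticsearch 6.x, matching the error log shown further down):

```
curl -X POST "http://ec2-XXX-XX-X-XX.compute-1.amazonaws.com:9200/logstash-2020.03.04/doc" \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "operation": "upload",
          "apiKey": "example-key",
          "request":  { "method": "POST" },
          "response": { "totalTime": 120 }
        }
      }'
```

Elasticsearch responds with the generated document `_id`, and the document becomes searchable after the next index refresh.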

"The ability to make sense out of data is no longer simply a competitive advantage for enterprises, it has become an absolute necessity for any company in an increasingly complex and statistics-driven world. The visualizations provided by Kibana on Elasticsearch data can quickly provide deep business insight." 

— Brad Winett, TrackIt President 

Helping ElephantDrive Take Advantage of Kibana Dashboards to Better Monitor their APIs 

ElephantDrive is a leading service provider that offers individuals and businesses simple but powerful tools for protecting and accessing their data. With ElephantDrive, ordinary people enjoy the peace of mind that comes from the type of enterprise-class backup, storage, and data management that has historically only been available to big corporations. 

ElephantDrive wanted to improve its ability to store, analyze, and visualize log information, so they set up a basic ELK (Elasticsearch, Logstash, Kibana) stack. The initial Kibana implementation was in place, but without any of the dashboards that make it such a valuable tool. ElephantDrive therefore approached the TrackIt team and asked us to analyze their current Elasticsearch logs and recommend dashboards that could be set up to allow for better log monitoring. Two were created for this specific purpose: 

  1. A โ€˜data.operationโ€™ dashboard that displays the distribution of requests by operation in a pie chart 
  2. A โ€˜data.apiKeyโ€™ dashboard that displays the average response time per API key 

"We were able to get the basic stack up quickly, but wanted to turn the data into actionable information — the TrackIt team not only helped us leverage the power of Kibana's visualizations, but also provided the education, documentation, and tools for us to take the next steps on our own." 

— Michael Fisher, ElephantDrive CEO and Co-Founder 

The following is a thorough tutorial that will first walk the reader through the general process of setting up dashboards using Elasticsearch and Kibana before illustrating the steps we took to set up these two dashboards for ElephantDrive. 

Accessing Elasticsearch & Kibana 

Communication with Elasticsearch is done via HTTP requests. We used Postman in this example, which provides a more graphical interface for making requests. You can also query Elasticsearch using curl in a shell script: 

curl -v "http://ec2-XXX-XX-X-XX.compute-1.amazonaws.com:9200/_cat/indices?v" 

To access Kibana, load this URL in your browser: 

http://ec2-XXX-XX-X-XX.compute-1.amazonaws.com:5601 

Logstash Ingestion Issue & How To Fix It 

ElephantDrive had an issue with their Logstash ingestion. Under some rare circumstances, ingestion failed and the following error message was logged: 

[2020-03-04T22:34:52,349][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2020.03.04", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x16a5ee83>], :response=>{"index"=>{"_index"=>"logstash-2020.03.04", "_type"=>"doc", "_id"=>"AXCnr_f9Ski653_WeeEo", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [data]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:171"}}}}}

This error was thought to be caused by a malformed log entry arriving at the exact moment a new Elasticsearch index is created. Since Logstash creates a new index each day, this happens when a malformed log entry is the first one sent to Logstash on a new day. 

Because the Elasticsearch mapping is created dynamically from the messages parsed by Logstash, a malformed first message writes an incorrect mapping into the index, which in turn prevents correctly formed messages from being ingested. 
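One hedged way to guard against this class of problem is to pin the mapping of the contested field in an index template, so that the first document of the day can no longer dictate it; malformed entries are then rejected individually instead of poisoning the whole index. The sketch below assumes Elasticsearch 6.x (matching the `_type` of "doc" in the error above) and assumes `data` should always be an object; the exact schema would need to be confirmed against the real log format:

```
PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "data": { "type": "object" }
      }
    }
  }
}
```

With this template in place, each new daily `logstash-*` index starts from the declared mapping rather than inferring it from the first event received.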

Fixing the Logstash Ingestion Issue 

If you are facing a similar issue, the first step is to shut down Logstash. Once Logstash is stopped, you need to delete the incriminated index. The index name can be found in the Logstash log (and is typically "logstash-YYYY.MM.DD"). 

Open Kibana and go to "Dev Tools". 


Delete the incriminated index from the Dev Tools console, using the index name found in the Logstash log. 
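In the Dev Tools console, the delete request looks like this; the date shown matches the index from the error log above, so substitute the name of your own incriminated index:

```
DELETE /logstash-2020.03.04
```

Elasticsearch answers with `{"acknowledged": true}` when the index has been removed.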

You can then restart Logstash and let it ingest new log entries. 

Once Logstash has recreated the index, you will also need to refresh the field list to pick up the correct mapping (under Management > Index Patterns > logstash-*): 


Getting Started With Kibana 

How To Create A Dashboard with Kibana โ€” Pie Chart Example 

In this initial example, we will walk you through the process of creating a pie chart dashboard that shows the most frequently performed operations. 

  1. Go to the "Visualize" tab and click the "Create new visualization" button 
  2. Select "Compare parts of a whole" (Pie chart) 
  3. Select the "logstash" index pattern, which contains the data required for this dashboard 
  4. Select "Split Slices" 
  5. Under "Aggregation", choose "Terms" 
  6. Under "Field", select "data.operation.keyword" 
  7. Choose how you would like to order the data, and set the number of slices with "Size" 
  8. Click "Apply changes" 

This is what the result looks like: 

Results after completion of steps 1-8
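Under the hood, this pie chart runs a terms aggregation on `data.operation.keyword`. The same numbers can be fetched from the Dev Tools console with a query along these lines (a sketch; the size of 5 is an arbitrary choice for the number of slices):

```
GET logstash-*/_search
{
  "size": 0,
  "aggs": {
    "operations": {
      "terms": { "field": "data.operation.keyword", "size": 5 }
    }
  }
}
```

Each bucket in the response carries a `key` (the operation name) and a `doc_count`, which is exactly what the pie chart slices represent.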

Creating the โ€˜Average response time per API Keyโ€™ Dashboard 

The first dashboard we created for ElephantDrive is a data table that displays the average response time per API key. The steps to implement this dashboard are as follows: 

  1. Go to the "Visualize" tab and click the "Create new visualization" button 
  2. For this type of data, "Data Table" is a relevant choice 
  3. Select the "logstash" index pattern, which contains the data required for this dashboard 
  4. Add a row for the API keys: under "Select buckets type", choose "Split Rows" 
  5. To group by API key, choose "Terms" under "Aggregation" and "data.apiKey.keyword" under "Field" 
  6. To add the average response time per API key to the data table, click "Add metrics" under "Metrics" 
  7. Under "Aggregation" choose "Average", and under "Field" choose "data.response.totalTime" 
  8. Click "Apply changes". We now have a dashboard that shows us the average response time per API key 
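The equivalent query for this data table, sketched for the Dev Tools console, nests an average aggregation inside a terms aggregation (the aggregation names `api_keys` and `avg_response_time` are arbitrary labels):

```
GET logstash-*/_search
{
  "size": 0,
  "aggs": {
    "api_keys": {
      "terms": { "field": "data.apiKey.keyword" },
      "aggs": {
        "avg_response_time": {
          "avg": { "field": "data.response.totalTime" }
        }
      }
    }
  }
}
```

Each `api_keys` bucket in the response contains an `avg_response_time.value`, one row of the table.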

Creating the โ€˜Average time per operationโ€™ Visualization 

The second dashboard we created for ElephantDrive displays the average time elapsed (in ms) by operation type. The steps to implement this dashboard are as follows: 

  1. Go to the "Visualize" tab and click the "Create new visualization" button 
  2. For this type of data, "Vertical Bar" is a relevant choice 
  3. Select the "logstash" index pattern, which contains the data required for this dashboard 
  4. We first want to see the average time, so in the "Metrics" section choose "Average" under "Aggregation" and "data.response.totalTime" under "Field" 
  5. Then add a second metric 
  6. Select "Dot Size" 
  7. Select "Count" as its "Aggregation", to see how many times each query has been used 
  8. To see which method is used (GET, POST, PUT, etc.), go to the "Buckets" section and choose "Split Series" 
  9. Choose "Terms" as "Aggregation", "data.request.method.keyword" as "Field", and order by the "Average" metric created in step 4 
  10. Finally, to see where the operation has been made, add a sub-bucket 
  11. Create a "Terms" sub-aggregation with "data.operation.keyword" as "Field" 
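The steps above can be sketched as a single Dev Tools query: a terms aggregation on the request method, ordered by an average sub-aggregation, with a nested terms sub-aggregation on the operation (the labels `methods`, `avg_time`, and `operations` are arbitrary names chosen for this sketch; the count metric comes for free as each bucket's `doc_count`):

```
GET logstash-*/_search
{
  "size": 0,
  "aggs": {
    "methods": {
      "terms": {
        "field": "data.request.method.keyword",
        "order": { "avg_time": "desc" }
      },
      "aggs": {
        "avg_time": {
          "avg": { "field": "data.response.totalTime" }
        },
        "operations": {
          "terms": { "field": "data.operation.keyword" }
        }
      }
    }
  }
}
```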

There you have it! 


You can see more details by hovering over a section of the chart with your mouse. 


Better Visibility & Enhanced Productivity In Log Monitoring 

Cloud-based infrastructure can sometimes feel like a black box, with only limited visibility into its efficiency. Kibana and Elasticsearch provide ElephantDrive with informative dashboards that offer insight into their compute environment, enhancing the efficacy of their log monitoring efforts. 

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.