Configure Logstash to push MySQL data into Elasticsearch

by Anurag Srivastava, Feb 9, 2019, 12:06:18 PM | 5 minutes

I am taking the example of the bqstack website, which is built on a MySQL database. What I am going to do is configure Logstash with the JDBC input plugin to connect to the MySQL database. After connecting to the database, I will run a query to fetch the records and push them into an Elasticsearch index.

Once the data from the MySQL database is flowing into Elasticsearch, we can create dashboards in Kibana as per our requirements. We need to create the Logstash configuration file inside the /etc/logstash/conf.d/ directory. So let us create a file named blog.conf and write the configuration as follows:


# file: blog.conf
input {
  jdbc {
    # path of the JDBC driver jar (the JDBC input plugin does not bundle the driver)
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.23-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # MySQL JDBC connection string to our database
    jdbc_connection_string => "jdbc:mysql://url-of-db:3306/db_name?zeroDateTimeBehavior=convertToNull"
    # the user we wish to execute our statement as
    jdbc_user => "username"
    jdbc_password => "password"
    # run the query every minute
    schedule => "* * * * *"
    # our query to fetch blog details
    statement => "SELECT blg.*, concat(au.first_name, ' ', au.last_name) as name, au.email as email, cc.category_name, cc.category_image FROM `blog_blogs` as blg left join auth_user as au on au.id = blg.author_id left join category_category as cc on cc.id = blg.category_id where blg.id > :sql_last_value order by blg.create_date"
    # track the id column so that only new rows are fetched on each run
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "bqstack"
    document_type => "blogs"
  }
}


In the above Logstash configuration file we have an input and an output section: under input we connect to the MySQL database to fetch the data, and under output we send that data to the Elasticsearch cluster. Inside the input section we have a jdbc block. The first setting is jdbc_driver_library, which points to the JDBC driver library path; the JDBC input plugin does not ship with the driver, so we need to download it and provide its path in this parameter. Next is jdbc_driver_class, where we provide the driver class, followed by the JDBC connection string. The connection string has a fixed syntax in which we provide the database type, the database URL, the port, and the database name; after that we set the username and password parameters for the database.
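The Connector/J jar therefore has to be downloaded from the MySQL website and placed at the path given in jdbc_driver_library. A minimal sketch, assuming the jar has already been downloaded to the current directory and using the same path as the configuration above:

# the connector version and destination path are assumptions matching the config above
sudo cp mysql-connector-java-5.1.23-bin.jar /usr/share/logstash/
# make sure the jar is readable by the user running Logstash
sudo chmod 644 /usr/share/logstash/mysql-connector-java-5.1.23-bin.jar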

Once these database connection parameters are set, we can configure the scheduler through the schedule parameter, followed by the actual query that will be executed against the connected database. There is a specific syntax for setting the schedule frequency; the JDBC input plugin's syntax is quite similar to cron. For example:

"* * * * * " => runs every second
"30 2 * * *" => runs as 2:30AM
"10 22 * * *" => runs at 10:10PM


The statement parameter holds the query. In the above query I have compared the blog id with :sql_last_value, which is dynamic and refers to the tracking column of the table. After each query execution the value of sql_last_value is updated and written to a file; the next time the query is executed, this value is read back from that file. We also provide the data type of the tracking column along with the tracking column name: as id is a numeric value, I have set tracking_column_type to numeric.
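By default the plugin stores this value in a metadata file in the home directory of the user running Logstash (the location can be changed with the last_run_metadata_path option). Assuming the default location, we can check which value will be used on the next run like this:

# assumes the default last_run_metadata_path
cat ~/.logstash_jdbc_last_run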

After making these configuration changes, we need to execute Logstash using the following command on Ubuntu:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/blog.conf --path.data=/tmp/bq


This is a one-time command; after running it the scheduler starts working and, as per our schedule entry, the query is executed every minute.
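To confirm that documents are actually reaching Elasticsearch, we can query the index directly. The sketch below assumes Elasticsearch is running locally on the default port, as in the output section of the configuration above:

# count the documents indexed from MySQL
curl -X GET "http://127.0.0.1:9200/bqstack/_count?pretty"
# fetch a couple of sample documents to verify the fields
curl -X GET "http://127.0.0.1:9200/bqstack/_search?size=2&pretty"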


We can also add a cron entry to automatically start the Logstash configuration once the system restarts. We need to run the following command to open the crontab in Linux:

crontab -e

The above command opens the crontab file, where we can write the following entry to ensure that after a machine restart the Logstash configuration is executed and the JDBC input plugin scheduler starts running the queries to fetch the data from the database.

@reboot /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/blog.conf --path.data=/tmp/bq

The above crontab entry starts with @reboot, which runs the given command after each machine restart; after that we have given the expression to execute the Logstash configuration.
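We can confirm that the entry has been saved by listing the current user's crontab:

crontab -l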

In this way, we can configure Logstash with the JDBC input plugin to read RDBMS data and push it into Elasticsearch. Once the data is in Elasticsearch, we can create visualizations and dashboards using Kibana.

Other Blogs on Elastic Stack:

Introduction to Elasticsearch
Elasticsearch Installation and Configuration on Ubuntu 14.04
Log analysis with Elastic stack
Elasticsearch Rest API
Basics of Data Search in Elasticsearch
Wildcard and Boolean Search in Elasticsearch
Configure Logstash to push MySQL data into Elasticsearch
Metrics Aggregation in Elasticsearch
Bucket Aggregation in Elasticsearch
How to create Elasticsearch Cluster

If you found this article interesting, you can explore "Mastering Kibana 6.0" and "Kibana 7 Quick Start Guide" to get more insight into Kibana and how we can configure ELK to create dashboards for key performance indicators.

About Author

Anurag Srivastava

Author | Blogger | Tech Lead | Elastic Stack | Innovator |


Comments (4)

  • Amitav Swain
    May 31, 2019, 4:54:54 AM

    Can I pass more than two columns into the tracking_column, because I want to track the change in two column values?

  • jitender yadav
    Jun 5, 2019, 11:41:42 AM

    Same question, can you please help us?

  • Jitu Singh
    Jun 5, 2019, 12:08:20 PM

    Hi Anurag

  • Anurag Srivastava
    Jun 5, 2019, 12:11:33 PM

    @Amitav, @Jitender: No, you cannot use more than one column here; the reason is that we want to track the change in the table, which can easily be done using an auto-increment field or a timestamp field.

