ELK log analysis system construction

1. Introduction to ELK Log Analysis System

(1) Log server

  • Improve security
  • Centralized storage of logs
  • Disadvantage: Difficulty in analyzing logs

(2) Composition of ELK log analysis system

  1. Elasticsearch
  2. Logstash
  3. Kibana
  • Log processing steps
  1. Centralized management of logs
  2. Format logs (Logstash) and output to Elasticsearch
  3. Index and store formatted data (Elasticsearch)
  4. Display of front-end data (Kibana)
  • Overview of Elasticsearch

    Provides a full-text search engine with distributed multi-user capabilities

  • Elasticsearch core concepts

  1. near real time

  2. cluster

  3. node

  4. index

    index (database) → type (table) → document (record); see the example after this list

  5. Shards and Replicas
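
    To make the index → type → document hierarchy concrete, here is a minimal sketch; the index name "logs", type "syslog", and the document fields are made up for illustration, and it assumes an es instance answering on localhost:9200:

    curl -XPUT 'localhost:9200/logs/syslog/1?pretty' -H 'Content-Type: application/json' -d '{"host":"node1","message":"hello"}'     ## document 1 of type syslog in index logs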

  • Introduction to Logstash
  1. A powerful data processing tool
  2. Handles data transport, format processing, and formatted output
  3. Pipeline stages: data input, data processing (filtering, rewriting, etc.), and data output
  • Logstash main components
  1. Shipper
  2. Indexer
  3. Broker
  4. Search and Storage
  5. Web Interface
  • Introduction to Kibana
  1. An open source analytics and visualization platform for Elasticsearch
  2. Search and view data stored in Elasticsearch indexes
  3. Advanced data analysis and presentation through various charts
  • Main features of Kibana
  1. Seamless integration with Elasticsearch
  2. Integrates data and supports complex data analysis
  3. Lets more team members benefit from the data
  4. Flexible interface, easy to share
  5. Simple configuration, visualizes multiple data sources
  6. Simple data export

2. Building the ELK Log Analysis System in Practice

  • Experimental environment: VMware Workstation 15.5, Xshell 6, CentOS 7.6
  • Package versions: elasticsearch-5.5.0, logstash-5.5.1, kibana-5.5.1, elasticsearch-head.tar, node-v8.2.1, phantomjs-2.1.1
  • Experimental virtual machine IP assignment:

Device  Function                           IP
node1   elasticsearch, elasticsearch-head  192.168.50.133
node2   elasticsearch, logstash, kibana    192.168.50.134
  • Experimental steps:

1. Set the hostname on each device and add hostname resolution to the hosts file

hostnamectl set-hostname node1    ## Node 1
hostnamectl set-hostname node2    ## Node 2

Edit the hosts file: vim /etc/hosts
 Add:
192.168.50.133 node1
192.168.50.134 node2
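
To confirm that name resolution works, you can ping each node by hostname (a quick check, not part of the original steps):

ping -c 2 node2     ## run on node1; on node2, use ping -c 2 node1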

2. Turn off the firewall and SELinux enforcement on both servers

systemctl stop firewalld && setenforce 0
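
Note that stop and setenforce only last until the next reboot; to make this permanent (not covered in the original steps), you could also run:

systemctl disable firewalld     ## keep firewalld off after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     ## disable SELinux permanently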

3. Install es on node1

rpm -ivh elasticsearch-5.5.0.rpm    ## Install
systemctl daemon-reload    ## Reload service configuration
systemctl enable elasticsearch.service    ## Set up to start automatically

4. Edit the es configuration file and modify it

vim /etc/elasticsearch/elasticsearch.yml

Modify the following:

17 cluster.name: my-elk-cluster              ## cluster name
23 node.name: node1                          ## node name
33 path.data: /data/elk_data                 ## data storage path
37 path.logs: /var/log/elasticsearch         ## log storage path
43 bootstrap.memory_lock: false              ## do not lock memory at startup
55 network.host: 0.0.0.0                     ## bind address; listen on all interfaces
59 http.port: 9200                           ## listen on port 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]     ## discover cluster members via unicast
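
After saving, you can list just the active (non-comment) settings to double-check the edits; this is a quick sanity check, not one of the original steps:

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml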

5. Create a data storage path

mkdir -p /data/elk_data
chown elasticsearch:elasticsearch /data/elk_data/     ## Give the elasticsearch user ownership of the directory

6. Start the es service

systemctl start elasticsearch.service

7. Check whether the service port is open

netstat -natp | grep 9200     ## The port may not show up right after starting; wait about 10 seconds and check again
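
Instead of re-running netstat by hand, a small shell loop can wait until the service answers (a sketch, assuming curl is available):

until curl -s http://localhost:9200/ > /dev/null; do sleep 2; done; echo "es is up"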

8. Open a browser and visit each of the two nodes

http://192.168.50.133:9200/
http://192.168.50.134:9200/

## Node 1:
{
  "name" : "node1",
  "cluster_name" : "my-elk-cluster",
  "cluster_uuid" : "Tl4HiPhqSLmvuCmK8slYtA",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

## Node 2:
{
  "name" : "node2",
  "cluster_name" : "my-elk_cluster",
  "cluster_uuid" : "VTnP4Wo2R3i4_3PQ-dtyDg",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

9. Check the cluster health status

http://192.168.50.133:9200/_cluster/health?pretty
http://192.168.50.134:9200/_cluster/health?pretty

{
  "cluster_name" : "my-elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

{
  "cluster_name" : "my-elk_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

10. The raw JSON above is not very readable and does not let us monitor the cluster or manage indexes conveniently, so we will install the elasticsearch-head data visualization tool. Before installing it, we must first install the node component dependency package and the phantomjs front-end framework.

## Install node component dependencies
1. Install the build environment: yum -y install gcc gcc-c++ make
2. Unpack: tar zxvf /opt/node-v8.2.1.tar.gz -C /opt
3. Enter the source directory and configure:
cd /opt/node-v8.2.1/
./configure
4. Compile: make -j3   ## This takes a long time; be patient
5. Install: make install
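
A quick way to confirm the build succeeded (not one of the original steps) is to print the versions:

node -v     ## should print v8.2.1
npm -v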

11. Install the phantomjs front-end framework

Unpack: tar jxvf /opt/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/

Copy the binary so the system can find the command: cp /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/
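
A quick check that the command is now on the PATH:

phantomjs --version     ## should print 2.1.1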

12. Install the elasticsearch-head data visualization tool

unzip: tar zxvf /opt/elasticsearch-head.tar.gz -C /usr/local/src/
Enter the directory: cd /usr/local/src/elasticsearch-head/
Install: npm install

Edit configuration file: vim /etc/elasticsearch/elasticsearch.yml
 Add the following two lines:
http.cors.enabled: true     ## Enable cross-domain access support, the default is false
http.cors.allow-origin: "*"   ## Cross-domain access to allowed domain addresses
PS: these two lines must be added, otherwise the head tool page will not be able to access elasticsearch

Restart the es service: systemctl restart elasticsearch

13. Start the elasticsearch-head service

cd /usr/local/src/elasticsearch-head/
Start it: npm run start &

Check that it is listening:
[root@node1 elasticsearch-head]# netstat -natp | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      10654/grunt 

Now that es is up, we can create a test index

On node1, create an index named index-demo with type test; the response below shows that creation succeeded [Note: you can also create the index from the head web page first, then run the command to insert a document]
curl -XPUT 'localhost:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'

Return content:
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}
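
To confirm the document was stored, you can read it back with a GET request (a quick check, not part of the original steps); the response repeats the _index/_type/_id fields above and returns the document under "_source":

curl -XGET 'localhost:9200/index-demo/test/1?pretty'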

Open a browser and go to the server where the elasticsearch-head tool is installed, on the head port shown above: http://192.168.50.133:9100/

14. Install logstash on node2 and configure it

1. Install the rpm package: rpm -ivh logstash-5.5.1.rpm
2. Start logstash: systemctl start logstash.service
3. Enable it at boot: systemctl enable logstash.service
4. Create a soft link so the logstash command is on the PATH: ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

★ logstash command options:
-f: specify a logstash configuration file and configure logstash from it
-e: takes a string that is treated as the logstash configuration (if it is "", stdin is used as input and stdout as output by default); see the quick test after this list
-t: test that the configuration file is correct, then exit
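
For example, the -e option lets you test a minimal pipeline interactively; type a line and logstash echoes it back as an event, Ctrl-C to quit:

logstash -e 'input { stdin{} } output { stdout{} }'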

15. Ship system logs into elasticsearch with logstash

Make the system log file readable by others: chmod o+r /var/log/messages

Edit a logstash configuration file: vim /etc/logstash/conf.d/system.conf
 Write the following:

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.50.133:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
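
Optionally, the file can be validated first with the -t option explained earlier (not one of the original steps):

logstash -f /etc/logstash/conf.d/system.conf -t     ## reports whether the configuration is valid, then exits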
       
Restart the service: systemctl restart logstash

16. Install Kibana on node2

1. Install: rpm -ivh kibana-5.5.1-x86_64.rpm
2. Modify the configuration file: vim /etc/kibana/kibana.yml

2 server.port: 5601                                   ## port to listen on
7 server.host: "0.0.0.0"                              ## listening address (all interfaces)
21 elasticsearch.url: "http://192.168.50.133:9200"    ## the elasticsearch instance kibana connects to
30 kibana.index: ".kibana"                            ## kibana stores its own data in the .kibana index in elasticsearch

3. Start the service: systemctl start kibana
   Enable it at boot: systemctl enable kibana
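
As with the other services, you can confirm kibana is listening (not one of the original steps) and then open http://192.168.50.134:5601 in a browser:

netstat -natp | grep 5601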

17. Create an index pattern in Kibana to view the logs collected into es

① Click Management

② Create an index pattern: enter the index name (e.g. system-*, matching the index written by the logstash output above) and click the Create button

③ Click the Discover button and select the "system" index pattern in the upper left corner to view the log information on the right
