ELK Enterprise Logging Case Study (version 5.3)

1. Analyzing nginx logs with the shell "three musketeers" (awk, sed, grep):
1) What are logs mainly used for in an enterprise production environment? Log content is used primarily by operations staff, developers, and DBAs to troubleshoot software and service faults: logs reveal the anomaly or root cause of a failure at the earliest moment, so the problem can be resolved quickly and the loss to the business minimized.
 
2) In the enterprise, logs are not only used for troubleshooting and locating problems. Operations staff and developers can also analyze, count, and filter log content to evaluate website traffic: PV (page views), UV (unique visitors), independent IPs, access behavior, and so on.
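As a quick illustration, these metrics can be computed with the same shell tools. A minimal sketch, assuming the default nginx combined log format with the client IP in the first field (independent IPs are often used as an approximation of UV):

wc -l < access_20200228.log                           # PV: one log line per request
awk '{print $1}' access_20200228.log|sort -u|wc -l    # independent IPs: distinct client addresses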
 
3) Using the shell trio awk, sed, and grep, analyze the online nginx log and count the total number of requests in the whole day's access log. Each of the following commands returns the same total line count:
wc -l access_20200228.log|cut -d" " -f1          # line count, stripped of the file name
awk '{print $0}' access_20200228.log|wc -l       # print every line, then count them
awk '{print NR}' access_20200228.log|tail -1     # NR of the last line is the total
sed = access_20200228.log|tail -2|head -1        # sed = prints line numbers; take the last one
grep -aic "" access_20200228.log                 # every line matches the empty string

4) Using awk, sed, and grep, analyze the online nginx log and count the total number of requests in the 09:00-11:00 window of the access log. The operation instructions are as follows:

grep "2020:09:00" access_20200228.log|wc -l|more
grep "2020:11:00" access_20200228.log|wc -l
sed -n '/2020:09:00/'p access_20200228.log
awk "/2020:09:00/,/2020:11:00/" access_20200228.log|wc -l
sed -n '/2020:09:00/,/2020:11:00/'p access_20200228.log|wc -l
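Note that the range patterns above only work if at least one request was actually logged at exactly 09:00 and at exactly 11:00. A more robust sketch (assuming the default nginx log format, where the timestamp is field 4, e.g. [28/Feb/2020:09:05:32) compares the hour-minute substring directly:

awk '{t = substr($4, 14, 5)} t >= "09:00" && t < "11:00"' access_20200228.log|wc -l    # t holds the HH:MM part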

5) Using awk, sed, and grep, count the requests in the 09:00-11:00 window of the nginx access log, print the IPs of the visiting users, and print the top 20 IPs by request count. The operation instructions are as follows:
Print the visiting user IPs:

sed -n '/2020:09:00/,/2020:11:00/p' access_20200228.log|cut -d" " -f1
sed -n '/2020:09:00/,/2020:11:00/p' access_20200228.log|awk '{print $1}'
sed -n '/2020:09:00/,/2020:11:00/p' access_20200228.log|grep -oE "([0-9]{1,3}\.){3}[0-9]{1,3}"

Print out the top 20 IP addresses:

sed -n '/2020:09:00/,/2020:11:00/p' access_20200228.log|grep -oE "([0-9]{1,3}\.){3}[0-9]{1,3}"|sort|uniq -c
sed -n '/2020:09:00/,/2020:11:00/p' access_20200228.log|grep -oE "([0-9]{1,3}\.){3}[0-9]{1,3}"|sort|uniq -c|sort -nr|head -20
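Equivalently, awk alone can aggregate the counts in a single pass over the file; a sketch assuming the client IP is the first field of each line:

awk '{count[$1]++} END {for (ip in count) print count[ip], ip}' access_20200228.log|sort -rn|head -20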

 

2. ELK enterprise logging concepts:
1) From the awk/sed/grep statistics above, it is clear that the execution efficiency of this approach is very low and results come back slowly, especially when the volume of logs is very large. Enterprises need real-time statistics and analysis, for which awk, sed, and grep are plainly inadequate.
2) Common log types in an enterprise production environment:
System logs;
Kernel logs;
Audit logs;
Security logs;
Application logs.
3) ELK is not a single piece of software but a suite of three: Elasticsearch, Logstash, and Kibana. Elasticsearch and Logstash are developed in Java, so they depend on the Java JDK toolkit at the bottom layer.
 
Elasticsearch
Elasticsearch is an open-source, distributed search and storage engine written in Java. It is mainly used for persistent storage of log content (on disk) and provides real-time retrieval, analysis, and statistics, similar in function to the Baidu search engine.
 
Logstash
Logstash is a free, open-source log collection tool written in Java. It is mainly used to collect log content from clients (system, kernel, security, and application logs); it can also filter the content and finally store it in the Elasticsearch server. Each client host needs the Logstash collection agent installed.
 
Kibana
Kibana is a web application (UI: a web front-end framework) developed with Node.js. It provides a web interface over Elasticsearch and Logstash, making it easier for operations staff and developers to configure the ELK platform and perform log analysis and statistics intuitively.
 
4) How the ELK distributed log platform works:
Each client installs the Logstash collection tool; Logstash collects the client applications' log content, filters it, and stores it in the Elasticsearch search engine. Kibana then presents the data in a web front-end, where users can query the log content held in the specified ES engine.
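For intuition, stored logs can also be queried directly over Elasticsearch's REST API; Kibana issues similar queries on the user's behalf. A sketch assuming the ES address used later in this article and the default logstash-* index naming:

curl 'http://192.168.1.11:9200/logstash-*/_search?q=message:error&pretty'    # full-text search for "error"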
3. Elasticsearch configuration practice (version 5.3):
Deploying and configuring ES requires a JDK environment. The JDK (Java Development Kit) is the software development kit of the Java language.
Download the three ELK packages:

wget   https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz
wget   https://artifacts.elastic.co/downloads/logstash/logstash-5.3.0.tar.gz
wget   https://artifacts.elastic.co/downloads/kibana/kibana-5.3.0-linux-x86_64.tar.gz

1) ELK installation environment information (Elasticsearch and Kibana can be installed on one machine):

192.168.1.11  Elasticsearch
192.168.1.13  Kibana
192.168.1.14  Logstash

2) Install ES on 192.168.1.11 (2 GB of memory or more is recommended for the virtual machine):
Install the JDK:

mkdir -p /usr/java
tar xf jdk1.8.0_131.tar.gz -C /usr/java
Configure environment variables: vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
Make the environment variables take effect immediately and check the Java version; if the version information is displayed, the installation succeeded:
source /etc/profile
java -version
Download ES and configure:
tar xf elasticsearch-5.3.0.tar.gz
mv elasticsearch-5.3.0 /usr/local/elasticsearch

Modify the /usr/local/elasticsearch/config/jvm.options file.
-Xms is the minimum heap memory and -Xmx the maximum; the two must be set to the same value, or an error is reported at startup:

-Xms1g
-Xmx1g

Modify the /usr/local/elasticsearch/config/elasticsearch.yml file:
Set the listening address to the whole network with network.host: 0.0.0.0:
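A minimal sketch of the relevant elasticsearch.yml entries (the cluster and node names are illustrative; only network.host is required by the step above):

cluster.name: elk-cluster      # illustrative cluster name
node.name: es-node-1           # illustrative node name
network.host: 0.0.0.0          # listen on all interfaces
http.port: 9200                # default HTTP port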
Create an ordinary elk user to run the ES service (for security, ES does not allow root to start it by default):

useradd elk
chown -R elk. /usr/local/elasticsearch/
su - elk
Start the ES service (as a daemon):
/usr/local/elasticsearch/bin/elasticsearch -d
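Once started, a quick sanity check against the ES HTTP port (using this article's ES address) should return a JSON document whose version.number is 5.3.0:

curl http://192.168.1.11:9200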

4. Elasticsearch configuration failure drill:
View log:

tailf /usr/local/elasticsearch/logs/elasticsearch.log
Errors may be reported after startup; the following kernel parameters and settings then need to be adjusted:
1) The SecComp feature is not supported; the error message is as follows:
ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your
configuration or disable system call filters at your own risk;

CentOS 6 does not support SecComp, while ES 5.3.0 sets bootstrap.system_call_filter to true by default and performs this check, so the check fails and ES cannot start.
Seccomp (full name: secure computing mode) is a security mechanism supported by the Linux kernel since version 2.6.23.
In Linux, a large number of system calls are exposed directly to ordinary programs, but not all of them are needed, and unsafe code that abuses system calls poses a security threat. Seccomp restricts the system calls a program may use, which reduces the system's exposure and puts the program into a "secure" state.

Solution:

In the elasticsearch.yml file, set bootstrap.system_call_filter to false; note that it goes below the Memory settings:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

2) Kernel parameter problems:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Solution: add at the end of the /etc/security/limits.conf file:
* soft nofile 65536
* hard nofile 65536

max number of threads [1024] for user [elk] is too low, increase to at least [2048]

Solution: in /etc/security/limits.d/20-nproc.conf:
* soft nproc 2048

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution: add to /etc/sysctl.conf, then run sysctl -p to make it take effect:
vm.max_map_count=262144

initial heap size [536870912] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap

Solution: in /usr/local/elasticsearch/config/jvm.options:
-Xms1g
-Xmx1g

The max_map_count file contains a limit on the number of VMAs (virtual memory areas) a process may have. A virtual memory area is a contiguous region of virtual address space; over the life of a process, these areas are created whenever the program maps a file into memory, links to shared memory, or allocates heap space.
Tuning this value limits the number of VMAs a process can own. Capping the total number of VMAs can lead to application errors: when a process reaches its VMA limit but can free only a small amount of memory for other kernel processes, the operating system throws an out-of-memory error.
If your operating system uses only a small amount of memory in the NORMAL zone, lowering this value can help free memory for the kernel.
Run sysctl -p (or re-open the terminal) for the change to take effect.
At this point the ES configuration is complete. Configuring ES in cluster mode is also very simple: copy the ES directory to the other nodes and modify the corresponding parameters.
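As a minimal sketch, the per-node elasticsearch.yml changes for a small 5.x cluster look like the following (IPs and names here are illustrative):

cluster.name: elk-cluster                             # must be identical on every node
node.name: es-node-2                                  # must be unique per node
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["192.168.1.11"]    # seed list of known cluster nodes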

 

5. Kibana web installation and configuration:
Deploying Kibana does not require the Java JDK environment. Download the package and unpack it directly:

tar xzf kibana-5.3.0-linux-x86_64.tar.gz
mv kibana-5.3.0-linux-x86_64 /usr/local/kibana/
Modify the Kibana configuration file and set the ES address:
vim /usr/local/kibana/config/kibana.yml
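A minimal sketch of the relevant kibana.yml entries (values follow this article's addressing):

server.port: 5601                                # default Kibana port
server.host: "0.0.0.0"                           # listen on all interfaces
elasticsearch.url: "http://192.168.1.11:9200"    # address of the ES server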
Start Kibana service:

Background start:
cd /usr/local/kibana/bin/
nohup ./kibana &
View listening:
netstat -nutlp|grep -E "5601"

 

Browse to http://<Kibana-IP>:5601 in a web browser.
6. Logstash client configuration practice:

Because Logstash is developed in Java, deploying the agent requires the JDK runtime environment:
mkdir -p /usr/java/
tar xf jdk1.8.0_131.tar.gz -C /usr/java/
Add the following to /etc/profile (vim /etc/profile):
export JAVA_HOME=/usr/java/jdk1.8.0_131
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
Unpack the Logstash software:
tar xf logstash-5.3.0.tar.gz
mv logstash-5.3.0 /usr/local/logstash
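Before wiring up any configuration files, a quick smoke test can confirm the installation works; the -e flag passes a pipeline on the command line, and each line typed should be echoed back as a structured event:

/usr/local/logstash/bin/logstash -e 'input { stdin {} } output { stdout {} }'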

 

7. ELK collection of standard input logs:
# Create the log-collection configuration directory:
mkdir -p /usr/local/logstash/config/etc 
cd /usr/local/logstash/config/etc

 

Create the ELK integration configuration file (vim elk.conf); the contents are as follows:

input {
  stdin { }
}
output {
  stdout {
    codec => rubydebug { }
  }
  elasticsearch {
    hosts => ["192.168.1.11:9200"]
  }
}

 

Start logstash service:

/usr/local/logstash/bin/logstash  -f  elk.conf
 Background start:
nohup /usr/local/logstash/bin/logstash -f elk.conf &
ps -ef|grep java
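After sending a few events, it can be verified that Logstash has created its daily index in ES (assuming the ES address above); an index named logstash-YYYY.MM.DD should appear in the listing:

curl 'http://192.168.1.11:9200/_cat/indices?v'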

8. ELK web log data charts:
Type any text into the Logstash startup window, and the corresponding formatted log event is printed automatically:
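For illustration, the rubydebug codec prints each event as a Ruby-style hash; the output looks roughly like the following (host and timestamp will differ):

{
    "@timestamp" => 2020-02-28T09:00:00.000Z,
      "@version" => "1",
          "host" => "logstash-client",
       "message" => "hello elk"
}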
In a browser, open: http://<Kibana-IP>:5601/

To use Kibana, you must configure at least one index pattern. Index patterns are used to identify the Elasticsearch indices to run searches and analytics against, and to configure their fields:
Index contains time-based events;
Use event times to create index names [DEPRECATED];
Index name or pattern: the pattern allows you to define dynamic index names, using * as a wildcard, for example the default:
logstash-*
Then select the Time field name.
Click Discover to search and browse the data in Elasticsearch. The default search covers the last 15 minutes; the time range can be customized.