Nginx reverse proxy, load balancing and building high availability clusters

Prerequisites

First, prepare a Linux environment (the local machine here runs Windows; you could use VMware, but you would need to configure the network connection and so on, and a virtual-machine demonstration will not be shown here. Instead, a personal Alibaba Cloud server is used, with Xftp to upload files and Xshell to run commands).

Note: the following commands are for CentOS 7.

Now install the required dependencies:

<div align = "center"><img src= "http://maycope.cn/images/image-20200807092205021.png"></div>

gcc installation:

yum -y install gcc automake autoconf libtool make
yum install gcc gcc-c++

pcre installation

cd /usr/local/src
wget https://netix.dl.sourceforge.net/project/pcre/pcre/8.40/pcre-8.40.tar.gz
tar -zxvf pcre-8.40.tar.gz
cd pcre-8.40
./configure
make && make install

zlib installation

cd /usr/local/src
wget http://zlib.net/zlib-1.2.11.tar.gz
tar -zxvf zlib-1.2.11.tar.gz
cd zlib-1.2.11
./configure
make && make install
yum install -y zlib zlib-devel

openssl installation

cd /usr/local/src
wget https://www.openssl.org/source/openssl-1.0.1t.tar.gz
tar -zxvf openssl-1.0.1t.tar.gz
cd openssl-1.0.1t
./config
make && make install

nginx installation

cd /usr/local/src
wget http://nginx.org/download/nginx-1.1.10.tar.gz
tar zxvf nginx-1.1.10.tar.gz
cd nginx-1.1.10
./configure
make && make install
Start nginx:
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf

After completion, you can check which ports are open on your server:

firewall-cmd --list-all

If the port is not open, you can open it with the following command:

firewall-cmd --zone=public --add-port=80/tcp --permanent
# 80 can be changed to whichever port you want to open; of course, this requires the firewall to be running.
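
Rules added with --permanent only take effect after the firewall reloads its configuration, so a reload step (shown below) is usually needed as well:

firewall-cmd --reload       # apply the permanent rules
firewall-cmd --list-ports   # verify the port is now open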

Firewall settings

systemctl status firewalld.service # View firewall status
systemctl stop firewalld.service   # Turn off the firewall
systemctl start firewalld.service  # Open firewall

nginx basic commands

  1. After completing the above preparation, nginx is already running. Check its current status:
ps -ef | grep nginx
  2. Start, stop and restart nginx:
cd /usr/local/nginx/sbin   # These commands must be run from the sbin directory of the installed nginx.
./nginx            # Start nginx
./nginx -s stop    # Stop nginx
./nginx -s reload  # Reload; generally used after the configuration file is modified
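
Before reloading, it is worth validating the configuration file first; nginx provides a -t flag for exactly this:

./nginx -t   # test the configuration file for syntax errors before reloading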

After startup, you can visit the server's IP address in a browser (nginx listens on port 80 by default, so the IP address alone is enough).

Explanation of configuration file:

First, the configuration file path: /usr/local/nginx/conf/nginx.conf.

The nginx configuration file is divided into three parts:

Global block

From the beginning of the configuration file to the events block, this part sets directives that affect the overall operation of the nginx server, mainly including the user (group) that runs the server, the number of worker processes allowed, the PID file path, the log path and format, and the import of other configuration files.

For example, the configuration in the first line below:

worker_processes  1;

This is a key directive for concurrent processing: the larger the worker_processes value, the more concurrent requests nginx can handle, though it is ultimately constrained by the hardware and software available.
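
A common rule of thumb (an assumption here, not stated in the original) is one worker process per CPU core; on a 4-core machine that would be:

worker_processes  4;  # e.g. one worker process per CPU core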

events block

events {
    worker_connections  1024;
}

The directives in the events block mainly affect the network connection between the Nginx server and its users. Common settings include whether to serialize accepting connections across multiple worker processes, whether a worker may accept multiple new connections at once, which event-driven model to use for handling connection requests, and the maximum number of connections each worker process can hold simultaneously.

The example above says each worker process supports at most 1024 connections. This part of the configuration has a large impact on Nginx performance, so it should be tuned flexibly in practice.
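
As a rough capacity estimate (a simplification; the real limit also depends on OS file-descriptor limits), the maximum number of simultaneous clients is about worker_processes multiplied by worker_connections:

# max_clients ≈ worker_processes * worker_connections
# e.g. 4 workers * 1024 connections ≈ 4096 clients
# (roughly halved when reverse proxying, since each client request
#  also consumes a connection to the upstream server)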

http block

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }
..............................

This is the most frequently modified part of the Nginx configuration. Most features, such as proxying, caching, log definitions, and third-party module configuration, live here.

Note that an http block can in turn contain an http global block and server blocks.

http global block

http global block directives include file imports, MIME type definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.

server block

This part is closely related to virtual hosts. From the user's perspective, a virtual host behaves exactly like an independent physical host; the technology exists to save the hardware cost of Internet servers.

Each http block can include multiple server blocks, and each server block is equivalent to a virtual host.

Each server block in turn consists of a global server block and may contain multiple location blocks.

1. Global server block

The most common configurations here are the listening port of the virtual host and its name or IP.

2. location block

A server block can be configured with multiple location blocks.

Based on the request string received by the Nginx server (e.g. server_name/uri-string), this part matches the portion after the virtual host name (the /uri-string part) and processes the specific request. Address redirection, data caching, response control and the configuration of many third-party modules are also done here.
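
As a brief illustration (the paths here are hypothetical), location supports several match modifiers with different priorities:

location = /exact       { ... }  # exact match, highest priority
location ^~ /static/    { ... }  # prefix match that skips regex checks
location ~ \.(jpg|png)$ { ... }  # case-sensitive regular expression match
location /              { ... }  # generic prefix match, lowest priority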

Reverse proxy

What is a reverse proxy

Before looking at the reverse proxy, first understand the forward proxy:

Forward proxy: if we imagine the Internet outside the LAN as a huge resource pool, clients in the LAN access the Internet through a proxy server; that proxy service is called a forward proxy. In short, it helps the client reach the target network.

Reverse proxy: with a reverse proxy, the client is unaware of the proxy and needs no configuration. The client simply sends its request to the reverse proxy server, which selects a target server, fetches the data, and returns it to the client. Externally, the reverse proxy and the target servers appear as a single server: the proxy's address is exposed and the real servers' IP addresses are hidden.

For example, we install a Tomcat server locally. Tomcat listens on port 8080 by default, but we do not want users to type port 8080; we want them to access port 80 directly, reach the nginx server, and have the configuration forward the request to our Tomcat server.

We can configure as follows:

Configure the http module as follows:

server {
    listen       80;
    # server_name  localhost;
    server_name  121.*.*.34;  # the server name to match; here, the server's own address
    location / {
        root   html;
        proxy_pass http://127.0.0.1:8080;  # forwarding address; install Tomcat locally first (download it from the official Tomcat website)
        index  index.html index.htm;
    }
}

Remember to reload the service after changing the configuration. Visiting port 80 now forwards the request to port 8080, so the Tomcat page appears.
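
A quick way to check the forwarding (a sketch; substitute your own server address) is to reload nginx and request the site:

/usr/local/nginx/sbin/nginx -s reload   # apply the new configuration
curl http://121.*.*.34/                 # the response should now be Tomcat's default page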

Example 2

In the case above we still access the default port 80. What if we want to use a different port, or route based on the request path? We can add another server block, since each server block listens on its own port: create a second server, choose the port to listen on, and configure the matching in a location block.

server {
    listen      3308;  # listen on a different port
    server_name  localhost;
    location ~ /edu/ {
        # Create an edu directory under Tomcat's webapps and put an index.html page in it.
        proxy_pass http://127.0.0.1:8080;
    }
}

Now we can visit the page via port 3308 (open the port in advance when testing): a request on port 3308 whose path matches /edu/ and ends in index.html is forwarded to Tomcat, and the corresponding page under our Tomcat server is returned.
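
Extending the same pattern (a hypothetical addition, mirroring the two-Tomcat setup used in the load-balancing example below), a second path could be routed to another Tomcat instance on a different port:

location ~ /vod/ {
    proxy_pass http://127.0.0.1:8081;  # a second local Tomcat, assumed to listen on 8081
}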

load balancing

concept

Load balancing distributes load across different service units, which both keeps the service available and keeps responses fast enough for a good user experience. Rapidly growing traffic and data volumes have spawned many load-balancing products. Professional load-balancing hardware offers good functionality but is expensive, which has made software load balancers very popular; nginx is one of them. Under Linux, nginx, LVS, HAProxy and other services can provide load balancing, and nginx supports several distribution strategies:

  1. Round robin (the default)

    Each request is assigned to the back-end servers one by one in order; if a back-end server goes down, it is removed automatically.

  2. weight

    weight means weight; the default is 1. The higher a server's weight, the more client requests it is assigned.

    It specifies the polling probability: the weight is proportional to the share of requests, and it is used when back-end servers have uneven performance. For example:

    upstream server_pool {
        server 121.111.2.34 weight=10;
        server 121.111.2.35 weight=10;
    }
  3. ip_hash

    Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server; this solves the session problem.

    upstream server_pool {
        ip_hash;
        server 121.111.2.34 weight=10;
        server 121.111.2.35 weight=10;
    }
  4. Fair (third party)

Requests are distributed according to the response time of the back-end servers; servers with shorter response times are preferred.

upstream server_pool {
    fair;
    server 121.111.2.34 weight=10;
    server 121.111.2.35 weight=10;
}

example

Add an upstream block inside http:

http {
......
    upstream myserver {
        fair;  # distribute each request to the server with the shortest response time
               # (requires the third-party fair module to be compiled into nginx)
        # The two back-end servers:
        server 121.111.2.34:8080;
        server 121.111.2.35:8080;
        # Note: without two real servers, you can instead run two Tomcat
        # instances on one machine on different ports, e.g.:
        # server 121.111.2.34:8080;
        # server 121.111.2.34:8081;
    }

    server {
        listen 80;
        server_name 121.111.2.34;
        ......
        location / {
            proxy_pass http://myserver;  # myserver matches the name of the upstream block above
            ......
        }
    }
}

In this way, requests arriving on port 80 of this machine are forwarded, and a specific host is selected to handle each request according to the chosen strategy.
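
Upstream server entries also accept failure-handling parameters (a sketch reusing the example addresses above); max_fails and fail_timeout control when nginx temporarily marks a server as unavailable:

upstream myserver {
    server 121.111.2.34:8080 max_fails=3 fail_timeout=30s;  # marked down for 30s after 3 failures
    server 121.111.2.35:8080 backup;  # only receives requests when the others are down
}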

High availability

brief introduction

I touched on this part earlier while working on a project on load balancing with a MySQL and Redis cluster, where I used Docker to build nginx load balancing at the back end. The details of that work are also reflected in the project repository on my personal GitHub account.

Docker makes both building an nginx cluster and setting up load balancing much more convenient. In that project, though, the Docker configuration was mixed with other settings and may take some time to understand, so here we analyse and learn this part on its own.

example

As for the specific logic, in the MySQL/Redis cluster load-balancing project mentioned above the idea is: to avoid the single point of failure of having only one nginx, we add a second nginx. The switching between them does not need to be implemented by ourselves; the third-party tool Keepalived does the work, as shown in the figure below. It uses heartbeat detection to spot a failed machine and exposes a virtual IP, so clients only need to remember one IP address and Keepalived handles the rest.

<img src="https://img-blog.csdnimg.cn/20200501081514734.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDAxNTA0Mw==,size_16,color_FFFFFF,t_70">

  1. First, prepare two servers, and install keepalived on both of them:

    yum install keepalived -y # install
    rpm -q -a keepalived # View the installation
    cd /etc/keepalived
    vi keepalived.conf
  2. Then modify the keepalived configuration file on both machines:

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 121.111.2.34
    smtp_connect_timeout 30
    router_id LVS_DEVEL  # unique identifier for this host
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2    # interval between script runs: execute once every 2 seconds
    weight 2      # added to the priority when the script succeeds
}

vrrp_instance VI_1 {
    state BACKUP    # MASTER on the primary server, BACKUP on the backup server
    interface ens33    # network interface
    virtual_router_id 51    # the virtual_router_id of master and backup must be the same
    priority 100    # priorities differ: higher on the master, lower on the backup
    advert_int 1    # heartbeat: send an advertisement every second to check liveness
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        121.111.2.20    # virtual address
    }
}
  3. Next comes the check script. Its writing differs from the Docker-based setup (the configuration files differ as well); the following is the version for this setup.

The following script checks whether nginx is still alive; it is stored in /usr/local/src/.

#!/bin/bash
# Count running nginx processes
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    # nginx is down: try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    # If nginx still is not running, kill keepalived so the backup machine takes over the virtual IP
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
  4. After completion, start nginx on both machines, then start keepalived with: systemctl start keepalived
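
Two small steps are easy to miss here (my own addition, not spelled out in the original): the check script must be executable, and keepalived can be enabled to start on boot:

chmod +x /usr/local/src/nginx_check.sh  # make the check script executable
systemctl start keepalived              # start keepalived
systemctl enable keepalived             # optionally start it on boot as well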

Now we access the virtual address 121.111.2.20. The two keepalived instances negotiate ownership of the virtual IP and pass requests on to their respective nginx servers, so that when one server fails, the other takes over and continues to work normally.
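
To verify the failover (a sketch; the interface name ens33 comes from the configuration above), you can stop keepalived on the master and watch the virtual IP move. Note that merely stopping nginx is not enough here, since the check script would simply restart it:

ip addr show ens33          # on the master: the VIP 121.111.2.20 is listed
systemctl stop keepalived   # simulate a master failure
# a few seconds later, on the backup machine:
ip addr show ens33          # the VIP should now appear here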
