Cluster LVS-NAT mode

LVS-NAT mode

DR mode covers the conventional case of load balancing inside an intranet; NAT mode is needed whenever address translation itself is part of the job, for example:

  • The scheduler holds both a public address and an intranet address, and the load-balanced service must be reachable from the public network
  • In virtualization scenarios the guests use purely internal IP addresses, and the host's address plays the role of the "public" side, so NAT is needed to load-balance the internal network

In other words, the networks on the two sides of the scheduler cannot talk to each other directly; NAT mode is what bridges them.

NAT mode is much simpler than DR mode. The principle is the familiar firewall setup of SNAT plus a gateway: one machine forwards the traffic of several servers, provided those servers point their gateway at the forwarding machine, while clients on the public network reach the service through DNAT. In short, traffic from the public network enters via DNAT, and replies from the intranet leave via SNAT.
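For intuition, here is roughly what that firewall setup looks like with plain iptables rules. This is a sketch only: LVS-NAT performs the translation inside the kernel via ipvsadm, and the interface names eth0/eth1 below are assumptions, not part of this lab.

# Hypothetical: eth0 = public interface, eth1 = intranet interface
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.26.141:80   # public traffic enters via DNAT
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.26.0/24 -j SNAT --to-source 192.168.32.128          # intranet replies leave via SNAT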


Take a look at traffic forwarding:

  • The public network reaches the real servers through DNAT
  • The intranet reaches the public network through SNAT

So NAT mode is really just NAT mapping. The obvious question: can the scheduler and the real servers sit in different network segments?

No. The real server must send its reply traffic back through the scheduler, and it does so by using the scheduler as its gateway. If the two are in different network segments, the real server hands its traffic to whatever gateway it is configured with instead of the scheduler, the reply never passes back through the scheduler, and the connection with the public network can never be established. In NAT mode, traffic that is not forwarded back through the scheduler simply cannot be answered.
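A quick way to check this on a real server (a sketch; the persistent gateway setting is done later in the real host configuration):

ip route show default                          # the default route must point at the scheduler's DIP
ip route replace default via 192.168.26.140    # temporary fix if it does not (lost on reboot)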


ip_forward

  • Traffic is forwarded between two network cards, so the scheduler's kernel routing/forwarding function must be switched on
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1        # enable packet forwarding between the scheduler's NICs
sysctl -p                      # reload sysctl settings so the change takes effect
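To confirm the switch is really on before continuing, either of the following should report 1 (both read the same kernel setting):

sysctl net.ipv4.ip_forward          # should print: net.ipv4.ip_forward = 1
cat /proc/sys/net/ipv4/ip_forward   # should print: 1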

summary

  • The scheduler and the real servers must be in the same broadcast domain (network segment)
  • Each real server must use the scheduler (DIP) as its gateway
  • Port mapping is supported: the VIP port and the real-server port can differ
  • Both inbound and outbound traffic pass through the scheduler, so it carries a heavy load

example:

lvs_vip   192.168.32.128   VIP, on a different network segment (the "public" side)
lvs_dip   192.168.26.140   DIP, on the same network segment as the real servers (RIP)
server1   192.168.26.141   nginx real server
server2   192.168.26.142   nginx real server

LVS scheduler configuration
[root@LVS1 ~]# ls /usr/lib/modules/3.10.0-957.el7.x86_64/kernel/net/netfilter/ipvs/ |grep  -e ip_vs                
ip_vs_dh.ko.xz
ip_vs_ftp.ko.xz
ip_vs.ko.xz
ip_vs_lblc.ko.xz
ip_vs_lblcr.ko.xz
ip_vs_lc.ko.xz
ip_vs_nq.ko.xz
ip_vs_pe_sip.ko.xz
ip_vs_rr.ko.xz
ip_vs_sed.ko.xz
ip_vs_sh.ko.xz
ip_vs_wlc.ko.xz
ip_vs_wrr.ko.xz
# The listing above confirms the kernel ships the LVS (ip_vs) modules
lsmod |grep ip_vs
# lsmod may show nothing yet: the modules are loaded on demand when ipvsadm creates a service; kernel support is what matters
yum install -y ipvsadm
# Install the user-space management tool for LVS
ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl start ipvsadm
systemctl enable ipvsadm
# The save file must exist for the service to start; with the service enabled, the rules are reloaded at boot and saved on shutdown

vi /etc/sysctl.conf
net.ipv4.ip_forward=1          # routing/forwarding must be on for NAT mode
sysctl -p

LVS virtual service configuration

ipvsadm -A -t 192.168.32.128:80 -s rr                       # add a virtual service on the VIP, round-robin scheduling
ipvsadm -a -t 192.168.32.128:80 -r 192.168.26.141:80 -m     # add real server 1, -m = masquerading (NAT forwarding)
ipvsadm -a -t 192.168.32.128:80 -r 192.168.26.142:80 -m     # add real server 2
ipvsadm --save > /etc/sysconfig/ipvsadm
# Save the configuration so the ipvsadm service can reload it
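A quick check of the resulting table (numeric output):

ipvsadm -Ln
# Expect one TCP entry for 192.168.32.128:80 with scheduler rr, and both real servers listed with forwarding method Masq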

Real host configuration

GATEWAY=192.168.26.140
# Set the default gateway of each real server to lvs_dip, then restart the network so it takes effect

# Install the nginx service on each real server for testing
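A minimal sketch of both steps on a real server, assuming the NIC config file is ifcfg-ens33 (adjust to the actual interface name) and that an nginx package is available (e.g. from EPEL):

echo 'GATEWAY=192.168.26.140' >> /etc/sysconfig/network-scripts/ifcfg-ens33   # point the gateway at lvs_dip
systemctl restart network                                                     # apply the new gateway
ip route show default                                                         # should now show: default via 192.168.26.140
yum install -y nginx
echo "server1 192.168.26.141" > /usr/share/nginx/html/index.html              # a page that identifies this host (use each server's own name/IP)
systemctl enable --now nginx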


Verify that NAT-mode traffic passes through the scheduler in both directions
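One way to verify, assuming a client sits on the 192.168.32.0/24 side:

curl http://192.168.32.128/     # run several times from the client; responses should alternate between server1 and server2 (rr)
tcpdump -ni any port 80         # on the scheduler: both request and reply packets should be visible
ipvsadm -L -n -c                # on the scheduler: active connection entries confirm it sits in the path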

Changing server2's nginx port to 8080 is also supported: NAT mode can map ports, so the real-server port does not have to match the VIP port. The real-server entry just has to be updated to point at the new port, as shown below.
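A sketch of the change (assuming nginx on server2 has been reconfigured to listen on 8080): remove the old real-server entry and add one for the new port.

ipvsadm -d -t 192.168.32.128:80 -r 192.168.26.142:80        # drop the entry that pointed at port 80
ipvsadm -a -t 192.168.32.128:80 -r 192.168.26.142:8080 -m   # re-add it pointing at 8080, still NAT (-m)
ipvsadm --save > /etc/sysconfig/ipvsadm                     # persist the updated table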
