Kubernetes 1.18.8 high-availability installation

1. Cluster planning

Role       IP address
k8s-vip    192.168.109.150
master1    192.168.109.151
master2    192.168.109.152
master3    192.168.109.153
node1      192.168.109.154

2. Installation requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following conditions:

  1. One or more machines running CentOS 7 (x86_64)
  2. Hardware: at least 2 GB of RAM, 2 CPUs and 30 GB of disk (a quick check of these requirements is sketched below the list)
  3. Internet access is needed to pull images; if the servers cannot reach the Internet, download the images in advance and import them on each node
  4. Swap must be disabled
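
A quick way to verify these requirements on each machine (a sketch using standard CentOS 7 tools, not part of the original steps):

# Check OS release, memory, CPU count, disk space and swap status
cat /etc/redhat-release
free -m            # total memory should be at least 2048 MB
nproc              # should report 2 or more CPUs
df -h /            # the root filesystem should have at least 30 GB
swapon --summary   # should print nothing once swap is disabled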

3. Prepare the environment

Turn off the firewall
systemctl stop firewalld.service && systemctl disable firewalld.service

Set SELinux to disabled
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap partition
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab

Permanently set the hostname on each machine (replace xxx with the names from the cluster plan)
hostnamectl set-hostname xxx


# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # take effect
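
If the bridge settings above do not take effect, the br_netfilter kernel module may not be loaded yet; loading it first is a common fix (a sketch; the persistence path assumes systemd):

# Load the bridge netfilter module and make it load at boot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system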

Install ntp and synchronize the clock:
yum install -y ntp
ntpdate time.windows.com && hwclock -w   # synchronize the time and write it to the hardware clock
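
A one-off ntpdate drifts over time; an optional cron entry (an illustrative sketch, not part of the original steps) keeps the clocks in sync:

# Re-sync against time.windows.com every 30 minutes and update the hardware clock
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com && /usr/sbin/hwclock -w") | crontab -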

Add host entries on all nodes
cat >> /etc/hosts << EOF
192.168.109.150 k8s-vip
192.168.109.151 master1
192.168.109.152 master2
192.168.109.153 master3
192.168.109.154 node1
EOF

4. Deploy keepalived on all master nodes

4.1 Install dependencies and keepalived

yum install -y conntrack-tools libseccomp libtool-ltdl

yum install -y keepalived

4.2 Configure the master nodes

Configuration for master1, master2 and master3 (see the notes after the file for the fields that differ per node):

cat > /etc/keepalived/keepalived.conf <<EOF 
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER 
    interface ens33 
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.109.150
    }
    track_script {
        check_haproxy
    }

}
EOF

Note:
1. virtual_ipaddress is the VIP address
2. interface is the network interface name; check it with ip a or ifconfig
3. priority: the backup nodes should use a slightly lower value than the primary (see the sketch below)
4. state: MASTER on the primary node, BACKUP on the backups
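
For example, on the two backup masters the lines that differ from the primary would look roughly like this (illustrative values; only state and priority change):

    state BACKUP
    priority 90     # lower than the 100 used on the MASTER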

4.3 startup and inspection

# Start keepalived and enable it at boot
systemctl start keepalived.service && systemctl enable keepalived.service

# View startup status
systemctl status keepalived.service

# restart
systemctl restart keepalived.service

Check the network interfaces on each master after startup

# Query the interface information (replace ens33 with your own interface name)
ip a s ens33
[root@master1 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:28:58:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.109.151/24 brd 192.168.109.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::e119:2c13:fa0a:3953/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@master2 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3e:cd:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.109.152/24 brd 192.168.109.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::e119:2c13:fa0a:3953/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::9f0e:5697:e453:e0b4/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::4faf:a02d:4291:70ae/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever

[root@master3 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4b:35:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.109.153/24 brd 192.168.109.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.109.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::e119:2c13:fa0a:3953/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::9f0e:5697:e453:e0b4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Note: the output on the three masters differs. The VIP 192.168.109.150 is bound only on master3; it will drift to master1 or master2 only if master3 fails.
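
A simple way to verify failover (assuming master3 currently holds the VIP, as above):

# On master3: stop keepalived so the VIP is released
systemctl stop keepalived.service

# On master1 and master2: the VIP 192.168.109.150 should now appear on one of them
ip a s ens33

# On master3: start keepalived again when finished
systemctl start keepalived.service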

5. Deploy haproxy to all master nodes

5.1 installation

yum install -y haproxy

5.2 configuration

Note: the configuration is identical on all three master nodes. It declares the three master API servers as back ends, and haproxy listens on port 16443, so port 16443 becomes the cluster entry point.
Adjust the server lines under backend kubernetes-apiserver to your own IP addresses.

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon 
       
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------  
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#--------------------------------------------------------------------- 
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver    
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master1   192.168.109.151:6443 check
    server      master2   192.168.109.152:6443 check
    server      master3   192.168.109.153:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF

5.3 startup and inspection

Start haproxy on all three masters

# Start haproxy and enable it at boot
systemctl start haproxy && systemctl enable haproxy

# View startup status
systemctl status haproxy

Check port

netstat -lntup|grep haproxy
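
The stats page configured above can also serve as a quick health check; the credentials come from the listen stats block (the back ends will show DOWN until the apiservers are running):

curl -u admin:awesomePassword "http://127.0.0.1:1080/admin?stats"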

6. Install docker / kubeadm / kubelet on all nodes

Kubernetes uses Docker as its default container runtime, so install Docker first.

6.1 installing docker

Uninstall old version
yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

Step 1
yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3
# Version 19 is recommended
yum install -y docker-ce-19.03.9-3.el7


Configure image acceleration; data-root sets the Docker storage location

mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["xxxxx"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
    },
    "insecure-registries":["127.0.0.1"],
    "data-root":"/home/docker-data"
}
EOF

# Alternatively, modify the docker service file and specify the storage location with the --graph (-g) parameter
# (it is safer to set the storage location in only one place, either here or via data-root in daemon.json)
vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph /home/docker-data


Start Docker and enable it at boot
systemctl start docker && systemctl enable docker

registry-mirrors: replace xxxxx with your own Aliyun image accelerator address
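
After Docker is running, it is worth confirming that the daemon.json settings took effect (a quick check; the two lines quoted below come from docker info output):

# Expect "Cgroup Driver: systemd" and "Docker Root Dir: /home/docker-data"
docker info | grep -iE "cgroup driver|docker root dir"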

6.2 Add the Alibaba Cloud YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

6.3 install kubeadm, kubelet and kubectl

yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8
systemctl enable kubelet
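
A quick check that the expected 1.18.8 versions were installed:

kubeadm version -o short          # expect v1.18.8
kubelet --version                 # expect Kubernetes v1.18.8
kubectl version --client --short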

7. Deploy the Kubernetes master

7.1 Create the kubeadm configuration file

As seen in section 4.3 above, the VIP is currently bound on master3, so perform the following operations on master3.

$ mkdir /usr/local/kubernetes/manifests -p

$ cd /usr/local/kubernetes/manifests/

$ vi kubeadm-config.yaml

The contents are as follows
apiServer:
  certSANs:
    - master1
    - master2
    - master3
    - k8s-vip
    - 192.168.109.150
    - 192.168.109.151
    - 192.168.109.152
    - 192.168.109.153
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "k8s-vip:16443"
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:    
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.8
networking: 
  dnsDomain: cluster.local  
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}

Note: certSANs lists the hostnames and IP addresses of all master nodes, plus the VIP hostname and address, and 127.0.0.1.

7.2 Execute on the master3 node

cd /usr/local/kubernetes/manifests/

View the list of required images
kubeadm config images list --config kubeadm-config.yaml

# Pull image
kubeadm config images pull --config kubeadm-config.yaml

# kubeadm initialization
kubeadm init --config kubeadm-config.yaml

Following the prompts, configure kubectl for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get nodes
$ kubectl get pods -n kube-system

Note: the nodes show NotReady in kubectl get nodes; this is normal because the network plugin has not been installed yet

Save the join commands printed by kubeadm init for later use:

kubeadm join k8s-vip:16443 --token fytj36.nxlv38msqco9t853 \
    --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d \
    --control-plane

kubeadm join k8s-vip:16443 --token fytj36.nxlv38msqco9t853 \
    --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d

View cluster status

kubectl get cs

kubectl get pods -n kube-system

8. Install cluster network

8.1 installing flannel (not used here; calico below is recommended)

Obtain the yaml of flannel from the official address and execute it on master3

cd /usr/local/kubernetes/manifests/
mkdir flannel
cd flannel
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install flannel network

cd /usr/local/kubernetes/manifests/flannel
kubectl apply -f kube-flannel.yml 

inspect

kubectl get pods -n kube-system

8.2 installing calico (recommended)

download

cd /usr/local/kubernetes/manifests/
mkdir calico
cd calico
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml

Install calico network

cd /usr/local/kubernetes/manifests/calico
kubectl apply -f calico-3.13.1.yaml

inspect

kubectl get pods -n kube-system
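
Once the calico pods reach Running, the nodes should change from NotReady to Ready (a quick check):

kubectl get pods -n kube-system | grep -E "calico|coredns"
kubectl get nodes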

9. SSH password-free login

Execute on master3

ssh-keygen -t rsa
# Press Enter at every prompt to accept the defaults

# $IPs stands for the hostnames of the other master nodes
ssh-copy-id master1
ssh-copy-id master2
# For each host, answer yes at the prompt and enter the root password

10. Join master1 and master2 to the cluster

10.1 copy key

Copy the key and related files from master3 to master1

ssh root@master1 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master1:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master1:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master1:/etc/kubernetes/pki/etcd

Copy the key and related files from master3 to master2

ssh root@master2 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master2:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master2:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master2:/etc/kubernetes/pki/etcd
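
Since the commands for master1 and master2 differ only in the hostname, the copy can also be done as a loop (a sketch, relying on the SSH keys set up in section 9):

for host in master1 master2; do
  ssh root@${host} mkdir -p /etc/kubernetes/pki/etcd
  scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes
  scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki
  scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd
done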

10.2 master1 joining the cluster

kubeadm join k8s-vip:16443 --token fytj36.nxlv38msqco9t853 \
    --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d \
    --control-plane

10.3 master2 joining the cluster

kubeadm join k8s-vip:16443 --token fytj36.nxlv38msqco9t853 \
    --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d \
    --control-plane

Check status

kubectl get node

kubectl get pods --all-namespaces

11. Join a Kubernetes worker node
Execute on node1
To add a new node to the cluster, run the kubeadm join command that was output by kubeadm init:

kubeadm join k8s-vip:16443 --token fytj36.nxlv38msqco9t853 \
    --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d

Check status

kubectl get node

kubectl get pods --all-namespaces

12. Test the cluster

Create a pod in the Kubernetes cluster and verify whether it works normally:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access address: http://NodeIP:Port
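
The NodePort assigned by the expose command can be read back with kubectl and tested with curl (node1's address from the cluster plan is used here as an example):

# Look up the NodePort assigned to the nginx service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo ${NODE_PORT}

# Any node IP works; here node1
curl http://192.168.109.154:${NODE_PORT}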

13. Get the join command parameters

kubeadm token create --print-join-command

Get results
[root@master3 ~]# kubeadm token create --print-join-command
W1015 17:26:12.916625  117107 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s-vip:16443 --token vqqtvv.we08sbuxqjk63uk3     --discovery-token-ca-cert-hash sha256:f6c2a0bcf1bd27c1633e77469e211a4acade487c4aadf6b98b23d116aef5695d

Validity period
The token is valid for 2 hours; within that time it can be used to join any number of worker nodes.
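
Existing tokens and their expiry can be listed, and a longer-lived token created, with the standard kubeadm token subcommands:

# List tokens and their expiration times
kubeadm token list

# Create a join command whose token is valid for 24 hours
kubeadm token create --ttl 24h0m0s --print-join-command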

14. Add a node

# Execute only on the worker node
# MASTER_IP is the cluster entry address (here the VIP)
export MASTER_IP=192.168.109.150
# APISERVER_NAME is the apiserver name used when initializing the masters
export APISERVER_NAME=k8s-vip
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

Obtain the join command parameters
kubeadm token create --print-join-command

Execute the obtained command


Check the result

# Execute only on a master node
kubectl get nodes -o wide

15. Remove a node

WARNING
Normally, you do not need to remove a worker node

Execute on the worker node that is to be removed
kubeadm reset

    
Execute on the first master node (in this guide, master3)
kubectl delete node node1


Replace node1 with the name of the worker node to remove.
The worker node names can be obtained by running kubectl get nodes on the first master node.
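
A more graceful removal usually drains the node before deleting it (a sketch; the flag names below are those of kubectl 1.18):

# On a master node: evict workloads, then remove the node object
kubectl drain node1 --ignore-daemonsets --delete-local-data
kubectl delete node node1

# On the removed worker: wipe the local kubeadm state
kubeadm reset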
