Preface
Reference blog: https://blog.csdn.net/qq_41632602/article/details/115366909
Reference blog: https://blog.csdn.net/mshxuyi/article/details/108425487
Based on these two articles, I made a series of changes to suit my own environment and needs.
Cluster planning
- Overall planning
host name | IP address | role
---|---|---
master | 192.168.56.3 | master
node1 | 192.168.56.4 | node 1
node2 | 192.168.56.5 | node 2
- Components used: docker, kubelet, kubeadm, kubectl
- Note: the IP addresses need to be changed to match your own virtual machines' IP addresses
Installation of kubernetes
Set the host names (this article uses the root user; non-root users can prefix the commands with sudo)
All three hosts need this; run the corresponding command on each host:
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
Modify the /etc/hosts file and add the mapping between host names and IP addresses:
vi /etc/hosts
192.168.56.3 master
192.168.56.4 node1
192.168.56.5 node2
Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
Turn off the swap partition and keep it disabled across reboots
If the swap partition is enabled, kubelet will fail to start, so disable swap on every machine:
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Disable SELinux and modify its configuration file so the change is permanent
Otherwise, a Permission denied error may be reported later when Kubernetes mounts directories:
setenforce 0
vi /etc/selinux/config   # set SELINUX=disabled
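If you prefer a non-interactive edit, a sed one-liner along these lines should achieve the same result (a sketch; it assumes the file currently contains SELINUX=enforcing):
# disable SELinux permanently without opening an editor
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config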
Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
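If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a possible fix (the /etc/modules-load.d path assumes a systemd-based CentOS 7 setup):
# load the bridge netfilter module now
modprobe br_netfilter
# and load it automatically on every boot
echo "br_netfilter" > /etc/modules-load.d/k8s.conf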
Install docker
Remove any existing Docker installation
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
If the removal succeeds, or the packages are not installed at all, you can continue.
Install some necessary system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
Add software source information
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Update and install docker CE
yum makecache fast
Start installing docker
yum -y install docker-ce
Start docker
systemctl start docker
View Docker's status
systemctl status docker
If the output shows Active: active (running), Docker is running normally.
If there is a problem, run journalctl -xe or check the system log (vim /var/log/messages) to see the reason.
Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://s2q9fn53.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
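To confirm Docker picked up the mirror, docker info should list it under Registry Mirrors; a quick check:
# the configured mirror should appear in the output
docker info | grep -A1 "Registry Mirrors"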
Add the Kubernetes Aliyun (alicloud) yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl
yum install -y kubectl-1.22.9 && yum install -y kubelet-1.22.9 && yum install -y kubeadm-1.22.9
systemctl enable kubelet && systemctl start kubelet
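Optionally, verify that the installed versions all match 1.22.9 before initializing (a quick sanity check):
# all three should report version 1.22.9
kubeadm version
kubelet --version
kubectl version --client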
Initialize k8s cluster
kubeadm init --kubernetes-version=1.22.9 --apiserver-advertise-address=192.168.56.3 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
If initialization reports an error, check the kubelet status:
systemctl status kubelet
If it fails, the solution is as follows:
Run the following command to check Docker's cgroup driver:
docker info |grep Cgroup
Modify Docker's daemon.json configuration:
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://lhx5mss7.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Update systemd
yum update systemd
Restart docker
systemctl daemon-reload && systemctl restart docker
Reset kubeadm
kubeadm reset -f
Reinitialize
kubeadm init --kubernetes-version=1.22.9 --apiserver-advertise-address=192.168.56.3 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
When you see "Your Kubernetes control-plane has initialized successfully!", the installation succeeded.
In addition, note down the following output; it will be needed later:
kubeadm join 192.168.137.3:6443 --token ys4kum.voz9oqs2048ljfdp --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0
Then run the following commands as instructed by the output:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
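Since this article works as the root user, the init output usually also offers an alternative of pointing KUBECONFIG at the admin config directly; either approach should work:
# make kubectl use the admin kubeconfig for this shell session
export KUBECONFIG=/etc/kubernetes/admin.conf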
Once the other two nodes have also completed the setup steps above (through installing kubeadm, kubelet, and kubectl), run the following on both of them:
vi /etc/docker/daemon.json   # enter the following:
{
  "registry-mirrors": ["https://lhx5mss7.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Execute the following commands:
yum update systemd
systemctl daemon-reload
systemctl restart docker
kubeadm reset -f
Then you can join the cluster:
kubeadm reset
kubeadm join 192.168.137.3:6443 --token ys4kum.voz9oqs2048ljfdp --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0
After the other two nodes join the cluster successfully, you can see the following:
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Command to view the token: [the following commands are executed on the master node]
kubeadm token list
If the token expires in the future, you can create a new permanent token:
kubeadm token create --ttl 0
# Get the sha256-encoded hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
The following value is obtained: 35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0
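Alternatively, kubeadm can print a complete, ready-to-copy join command (token plus CA hash) in one step; a convenient shortcut:
# prints a full "kubeadm join ..." command with a fresh token
kubeadm token create --print-join-command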
To remove a node from the cluster (eviction), run the following commands on the master node. First mark the nodes as unschedulable (cordon):
kubectl cordon node1
kubectl cordon node2
Then evict (drain) the pods on the nodes:
kubectl drain node1
kubectl drain node2
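In practice, drain often refuses to proceed when a node runs DaemonSet-managed pods or pods using emptyDir volumes; depending on your kubectl version, extra flags along these lines may be needed (a sketch):
# evict everything, ignoring DaemonSet pods and discarding emptyDir data
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data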
If the node has problems and the drain command cannot complete, you can forcibly delete a running pod instead, for example:
kubectl delete pods -n kube-system nginx-6qz6s
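If a pod refuses to terminate (for instance because its node is unreachable), kubectl supports forced deletion; a sketch reusing the example pod name above:
# skip the graceful termination period and delete immediately
kubectl delete pod nginx-6qz6s -n kube-system --force --grace-period=0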
Finally, remove the nodes from the cluster:
kubectl delete node node1
kubectl delete node node2
To rejoin the cluster, run the following on the worker nodes node1 and node2:
kubeadm reset
kubeadm join 192.168.137.3:6443 --token d4offl.d3mufkukeb0b6y27 --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0
Check whether the nodes have joined [run the following command on the master node]:
kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   56m   v1.22.9
node1    NotReady   <none>                 62s   v1.22.9
node2    NotReady   <none>                 20s   v1.22.9
Install the network plugin: flannel
Obtain the yml file content of flannel at the following address:
https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
Save the contents to a file named kube-flannel.yml in the current root user's home directory (~), then apply it:
kubectl apply -f kube-flannel.yml
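As an aside, if the master node can reach GitHub directly, the file could also be downloaded instead of copied by hand (a sketch; the raw URL below is assumed to be the raw form of the page linked above):
# fetch the flannel manifest straight into the home directory
wget -O ~/kube-flannel.yml https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml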
Wait a moment, then check with the following command:
kubectl get pods -n kube-system
Results obtained:
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-jcgsx         1/1     Running   0          17m
coredns-7f6cbbb7b8-wgknx         1/1     Running   0          17m
etcd-master                      1/1     Running   3          18m
kube-apiserver-master            1/1     Running   0          18m
kube-controller-manager-master   1/1     Running   0          18m
kube-flannel-ds-kmkqx            1/1     Running   0          3m26s
kube-flannel-ds-qp48p            1/1     Running   0          3m26s
kube-flannel-ds-zp2zl            1/1     Running   0          3m26s
kube-proxy-2ftbn                 1/1     Running   0          17m
kube-proxy-btckv                 1/1     Running   0          11m
kube-proxy-sz9cf                 1/1     Running   0          11m
kube-scheduler-master            1/1     Running   3          18m
Add a ROLES label to the other nodes:
kubectl label node node1 node-role.kubernetes.io/slave=
kubectl label node node2 node-role.kubernetes.io/slave=
As shown above, the kube-flannel pods are running. Re-run the node listing command:
kubectl get nodes
The results are as follows:
master   Ready   control-plane,master   36m   v1.22.9
node1    Ready   slave                  30m   v1.22.9
node2    Ready   slave                  29m   v1.22.9
The Kubernetes cluster setup is now complete.
Installation of the Dashboard visual UI
Check the corresponding versions at:
https://github.com/kubernetes/dashboard/releases
Open the following URL (a VPN may be required):
https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
Copy the page contents.
cd /home
vi recommended.yaml   # paste the copied page contents into this file
# Create the pods
kubectl apply -f recommended.yaml
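Alternatively, if the master node can reach raw.githubusercontent.com directly, the same file could simply be downloaded rather than pasted by hand:
cd /home
# download the dashboard manifest linked above, then apply it
wget -O recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml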
Check that everything was created successfully:
[root@master1 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                nginx-5578584966-ch9x4                       1/1     Running   1          8h
kube-system            coredns-9d85f5447-qghnb                      1/1     Running   38         6d13h
kube-system            coredns-9d85f5447-xqsl2                      1/1     Running   37         6d13h
kube-system            etcd-master1                                 1/1     Running   8          6d13h
kube-system            kube-apiserver-master1                       1/1     Running   9          6d13h
kube-system            kube-controller-manager-master1              1/1     Running   8          6d13h
kube-system            kube-flannel-ds-amd64-h2f4w                  1/1     Running   5          6d10h
kube-system            kube-flannel-ds-amd64-z57qk                  1/1     Running   1          10h
kube-system            kube-proxy-4j8pj                             1/1     Running   1          10h
kube-system            kube-proxy-xk7gq                             1/1     Running   7          6d13h
kube-system            kube-scheduler-master1                       1/1     Running   9          6d13h
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-5r22j   1/1     Running   0          15m
kubernetes-dashboard   kubernetes-dashboard-866f987876-gv2qw        1/1     Running   0          15m
Next, delete the existing dashboard Service. It lives in the kubernetes-dashboard namespace, but its type is ClusterIP, which is not convenient to access from a browser, so it needs to be changed to NodePort by deleting the existing Service and recreating it.
# View existing services
[root@master1 ~]# kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  6d13h
default                nginx                       NodePort    10.102.220.172   <none>        80:31863/TCP             8h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   6d13h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.246.255   <none>        8000/TCP                 61s
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   10.109.210.35    <none>        443/TCP                  61s
Delete it:
kubectl delete service kubernetes-dashboard --namespace=kubernetes-dashboard
Create the configuration file:
vi dashboard-svc.yaml
# content:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Apply it:
kubectl apply -f dashboard-svc.yaml
Check the services again; the dashboard is now exposed through a NodePort:
kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  6d13h
default                nginx                       NodePort    10.102.220.172   <none>        80:31863/TCP             8h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   6d13h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.246.255   <none>        8000/TCP                 4m32s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.110.91.255    <none>        443:30432/TCP            10s
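As an alternative to deleting and recreating the Service, the existing one could also be patched to NodePort in place (a sketch; this article follows the delete-and-recreate approach above):
# switch the existing dashboard Service to NodePort in one command
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'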
To access the dashboard service you need access rights, so create a dashboard administrator account and bind it to the cluster-admin role:
vi dashboard-svc-account.yaml
# content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
# apply it
kubectl apply -f dashboard-svc-account.yaml
Get token
kubectl get secret -n kube-system | grep admin | awk '{print $1}'
dashboard-admin-token-bwgjv
# The command above prints the secret name; copy it and use it below
kubectl describe secret dashboard-admin-token-bwgjv -n kube-system | grep '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJOVUhyRElPQzJzU2t6VDNVdWpTdzhNZmZPZjV0U2s1UXBFTzctNE9uOFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tYndnanYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTE5NGY5YWYtZDZlNC00ZDFmLTg4OWEtMDY4ODIyMDFlOGNmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.kEK3XvUXJGzQlBI4LIOp-puYzBBhhXSkD20vFp9ET-rGErxmMHjUuCqWxg0iawbuOndMARrpeGJKNTlD2vL81bXMaPpKb4Y2qoB6bH5ETQPUU0HPpWYmfoHl4krEXy7S95h0mWehiHLcFkrUhyKGa39cEBq0B0HRo49tjM5QzkE6PNJ5nmEYHIJMb4U62E8wKeqY9vt60AlRa_Re7IDAO9qfb5_dGEmUaIdr3tu22sa3POBsm2bhr-R3aC8vQzNuafM35s3ed8KofOTQFk8fXu4p7lquJnji4yfC77yS3yo5Jo3VPyHi3p5np_9AuSNYfI8fo1EpSeMsXOBH45hu2w
Visit the page in a browser; the IP is the master node's IP.
The port is the NodePort shown by kubectl get svc --all-namespaces.
https://192.168.56.3:30142
Paste the token obtained above into the Token field, then you can enter the dashboard.