Building a Kubernetes cluster with kubeadm
Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for rapid deployment of Kubernetes clusters.
Tool functions:
- kubeadm init: initialize a Master node
- kubeadm join: join a worker node to the cluster
- kubeadm upgrade: upgrade the K8s version
- kubeadm token: manage the tokens used by kubeadm join
- kubeadm reset: revert any changes made to the host by kubeadm init or kubeadm join
- kubeadm version: print the kubeadm version
- kubeadm alpha: preview available new features
2.1 Server requirements:
- Recommended minimum hardware configuration: 2-core CPU, 2 GB RAM, 20 GB disk
- The servers should preferably be able to reach the Internet, since images need to be pulled from public registries. If a server has no Internet access, download the required images in advance and import them on that node.
Software Environment:

| Software | Version |
| --- | --- |
| Operating system | CentOS 7.9_x64 (mini) |
| Docker | 20-ce |
| Kubernetes | 1.25 |
Server planning:

| Role | IP |
| --- | --- |
| k8s-master | 192.168.40.130 |
| k8s-node1 | 192.168.40.131 |
| k8s-node2 | 192.168.40.132 |
Architecture diagram:
# turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
# disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
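The fstab sed above simply prefixes any line mentioning swap with a comment marker. A quick demo on a stand-in fragment in /tmp (the device names are illustrative); on the host, the same command runs against /etc/fstab:

```shell
# Stand-in fstab fragment for demonstration (device names are examples).
cat > /tmp/fstab-frag << 'EOF'
/dev/mapper/centos-root /        xfs   defaults 0 0
/dev/mapper/centos-swap swap     swap  defaults 0 0
EOF
# Comment out every line that mentions swap, leaving other mounts intact.
sed -ri 's/.*swap.*/#&/' /tmp/fstab-frag
cat /tmp/fstab-frag
# → /dev/mapper/centos-root /        xfs   defaults 0 0
# → #/dev/mapper/centos-swap swap     swap  defaults 0 0
```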
# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.40.130 k8s-master
192.168.40.131 k8s-node1
192.168.40.132 k8s-node2
EOF
# Pass bridged IPv4 traffic to the iptables chains
# (the net.bridge.* keys only exist once the br_netfilter module is loaded)
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # take effect
# time synchronization
yum install ntpdate -y
ntpdate time.windows.com
4. Install Docker/kubeadm/kubelet [all nodes]
Install Docker
#Download docker yum source file
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
#Install docker
yum -y install docker-ce
#Enable Docker on boot and start it
systemctl enable docker && systemctl start docker
Configure the image download accelerator:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# restart Docker and check that the settings took effect
systemctl restart docker
docker info
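A syntax error in daemon.json will keep the Docker daemon from starting at all, so it is worth validating the JSON before restarting. A small sketch, run here against a copy in /tmp so it works anywhere; on the host, point json.tool at /etc/docker/daemon.json instead:

```shell
# Write a sample config to /tmp for demonstration; on the real host,
# validate /etc/docker/daemon.json instead.
cat > /tmp/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero and prints the error position on invalid JSON
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```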
Install cri-dockerd
Kubernetes v1.24 removed dockershim, and Docker Engine does not implement the CRI (Container Runtime Interface) natively, so the two can no longer be integrated directly by default. To bridge this gap, Mirantis and Docker jointly created the cri-dockerd project, which exposes a CRI-compliant interface on top of Docker Engine so that Docker can still be used as a Kubernetes container runtime.
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.2.5-3.el7.x86_64.rpm
Specify the image address for the dependent pause container:
vi /usr/lib/systemd/system/cri-docker.service
# change the ExecStart line to:
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
systemctl daemon-reload  # reload the daemon configuration
# enable cri-docker on boot and start it
systemctl enable cri-docker && systemctl start cri-docker
Add Alibaba Cloud's k8s YUM software source
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet and kubectl
Install the specified versions and enable kubelet on boot:
yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
systemctl enable kubelet
Deploy Kubernetes Master
Execute on 192.168.40.130 (the Master).
kubeadm init \
  --apiserver-advertise-address=192.168.40.130 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
Detailed parameter explanation
- --apiserver-advertise-address: the cluster advertisement address (the Master's IP)
- --image-repository: the default registry k8s.gcr.io is not reachable from China, so the Alibaba Cloud image repository is specified here
- --kubernetes-version: the K8s version, consistent with the packages installed above
- --service-cidr: the cluster-internal virtual Service network, the unified entry point for accessing Pods
- --pod-network-cidr: the Pod network, which must match the CNI network component YAML deployed below
- --cri-socket: the cri-dockerd socket; for containerd, use --cri-socket unix:///run/containerd/containerd.sock
When initialization finishes, a kubeadm join command is printed at the end. Save it; it is used below when joining the worker nodes.
Copy the kubeconfig authentication file used by kubectl to its default path:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
View the nodes:
kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   20s   v1.25.0
Note: the node shows NotReady because the network plug-in has not been deployed yet; continue on.
Join Kubernetes Node
Execute on the worker nodes.
To add a new node to the cluster, run the kubeadm join command that kubeadm init printed, manually appending --cri-socket=unix:///var/run/cri-dockerd.sock:
kubeadm join 192.168.40.130:6443 --token xn4k4x.5ewu3lpjl6n13dh7 --discovery-token-ca-cert-hash sha256:0e0d907c485894ce2c07b7bdc1a2c939fb550f6d06ba9760cbdc7ebf127bf1b6 --cri-socket=unix:///var/run/cri-dockerd.sock
The default token is valid for 24 hours; once it expires it can no longer be used and a new token must be created.
Regenerate a token (execute on the master node):
kubeadm token create --print-join-command
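The discovery-token-ca-cert-hash part of the join command can also be recomputed by hand: it is just the SHA-256 digest of the cluster CA's DER-encoded public key (a variant of the pipeline shown in the kubeadm join reference, using openssl pkey instead of openssl rsa). A sketch; on a real master the input is /etc/kubernetes/pki/ca.crt, while here a throwaway self-signed certificate stands in so the pipeline can be run anywhere:

```shell
# Generate a stand-in CA certificate for the demo (on the master, skip
# this step and use /etc/kubernetes/pki/ca.crt directly).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest --
# the value that goes after "sha256:" in --discovery-token-ca-cert-hash.
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${HASH}"
```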
References: kubeadm join | Kubernetes
Deploy Container Networking (CNI)
Calico is a pure Layer 3 data center networking solution and is currently the mainstream network choice for Kubernetes.
Download the YAML:
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
# if wget complains about the certificate, add the --no-check-certificate option
After downloading, modify the Pod network definition (CALICO_IPV4POOL_CIDR) so that it matches the --pod-network-cidr value passed to kubeadm init.
vim calico.yaml
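Hand-editing is error-prone, so the change can also be scripted. A sketch, assuming the stock manifest ships the setting commented out with the default 192.168.0.0/16 value (check your copy first; the exact text varies between Calico versions). The demo runs against a fragment in /tmp; on the real file, run the same sed against calico.yaml:

```shell
# Fragment mimicking the relevant lines of calico.yaml (indentation matters).
cat > /tmp/calico-frag.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the setting and point it at kubeadm's --pod-network-cidr value.
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/calico-frag.yaml
cat /tmp/calico-frag.yaml
# → - name: CALICO_IPV4POOL_CIDR
# →   value: "10.244.0.0/16"
```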
After modifying the file, deploy:
kubectl apply -f calico.yaml
# check status (this step creates containers and takes a while; wait five to ten minutes)
kubectl get pods -n kube-system
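The check "is everything Running yet" amounts to scanning the STATUS column of that output. A sketch of the filter, demonstrated on captured sample text so it can be run anywhere; on the cluster, replace the cat with kubectl get pods -n kube-system --no-headers (the pod names and states below are illustrative):

```shell
# Sample of what `kubectl get pods -n kube-system --no-headers` might print.
cat > /tmp/pods.txt << 'EOF'
calico-node-x2lbv     1/1   Running             0   5m
coredns-7f8cbcb969    0/1   ContainerCreating   0   5m
etcd-k8s-master       1/1   Running             0   6m
EOF
# Print every pod whose status is not yet Running; no output means all good.
awk '$3 != "Running" {print $1 " -> " $3}' /tmp/pods.txt
# → coredns-7f8cbcb969 -> ContainerCreating
```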
Check again after five to ten minutes; everything should show as working. If the pods still have not been created successfully after a long time, the images most likely failed to pull. There are two remedies:
Method 1: import the images offline
Upload the offline package to each node and unpack it.
Import the image archives by executing:
ls *.tar | xargs -i docker load -i {}
Then re-apply the calico.yaml file.
check status again
kubectl get pods -n kube-system
The Calico Pods are Running and the nodes are Ready.
Method 2: pull the images manually
First check which images are required:
grep image calico.yaml
Then pull each listed image down with docker pull.
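The image list can be extracted and pulled in one pass. A sketch, demonstrated on a manifest fragment in /tmp (the image names and tags are illustrative; the real list depends on the calico.yaml version). On a connected host, run the same awk against the real calico.yaml and pipe the result into xargs -n1 docker pull:

```shell
# Fragment mimicking the image: lines of calico.yaml (tags are examples).
cat > /tmp/calico-images.yaml << 'EOF'
          image: docker.io/calico/cni:v3.24.1
          image: docker.io/calico/node:v3.24.1
          image: docker.io/calico/cni:v3.24.1
EOF
# List the unique image references; append "| xargs -n1 docker pull" to fetch them.
awk '/image:/ {print $2}' /tmp/calico-images.yaml | sort -u
# → docker.io/calico/cni:v3.24.1
# → docker.io/calico/node:v3.24.1
```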
Then deploy again.
Wait five to ten minutes and check the status again; all pods should now be Running.
Note: from here on, all yaml files are applied only on the Master node.
Installation directory: /etc/kubernetes/
Component configuration file directory: /etc/kubernetes/manifests/
References: Creating a cluster with kubeadm | Kubernetes
At this point, the k8s cluster is built.
Test the kubernetes cluster
Create a pod in the cluster and verify that it runs normally.
#Create an nginx deployment
kubectl create deployment nginx --image=nginx
#Check the creation status
kubectl get pod
# If it still shows as being created, wait a few minutes and check again until it is complete
#Expose nginx port
kubectl expose deployment nginx --port=80 --type=NodePort
#View external ports
kubectl get pod,svc
# Then access the created nginx page using any node's IP and the NodePort shown above
If the page loads, the test succeeded.
To graphically manage k8s resources
Dashboard is an official UI that can be used for basic management of K8s resources.
YAML download address:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# if this link cannot be reached, download the file elsewhere and upload it to the server (I use kubernetes-dashboard.yaml here)
By default, the Dashboard can only be accessed within the cluster. Modify the Service to NodePort type and expose it to the outside:
vi recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
...
kubectl apply -f recommended.yaml  # note: generating the containers takes some time
# check status
kubectl get pods -n kubernetes-dashboard
Once the containers are Running, access: https://NodeIP:30001
Create a service account and bind the default cluster-admin administrator cluster role:
# create the user
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# authorize the user
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# get a token for the user
kubectl create token dashboard-admin -n kubernetes-dashboard
Use the output token to log in to Dashboard.
Pods can now also be created graphically from the Dashboard.
At this point, the k8s installation and cluster expansion are complete.