1, Deploying a k8s cluster with kubeadm
1. Environment preparation
CentOS 7.6, 2 CPU cores, 2 GB memory, 3 nodes; the clocks of all nodes must be synchronized.
| master | node1 | node2 |
| --- | --- | --- |
| 192.168.229.187 | 192.168.229.40 | 192.168.229.50 |
The k8s version installed here is 1.15.0, and Docker is pinned to version 18.09.0.
Set host name
```bash
[root@localhost ~]# hostnamectl set-hostname master   # run on the master node
[root@localhost ~]# hostnamectl set-hostname node1    # run on node1
[root@localhost ~]# hostnamectl set-hostname node2    # run on node2
```
Turn off the firewall
```bash
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
```
Flush the iptables rules
```bash
[root@master ~]# iptables -F
[root@master ~]# iptables-save
```
Disable selinux
```bash
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled
```
Disable swap
```bash
[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap    swap    defaults    0 0
[root@master ~]# free -h
```
Add hostname resolution entries
```bash
[root@master ~]# vim /etc/hosts
192.168.229.187 master
192.168.229.40  node1
192.168.229.50  node2
```
Set up passwordless SSH from the master to the nodes
```bash
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id root@node1
[root@master ~]# ssh-copy-id root@node2
```
Turn on iptables bridging
```bash
[root@master ~]# vim /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
If `sysctl -p` complains that the file or directory does not exist, load the bridge module first:
```bash
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
The two worker nodes need the same settings.
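The `modprobe br_netfilter` above does not survive a reboot. A minimal sketch for loading the module at boot, assuming the systemd-modules-load mechanism (the file name `k8s.conf` is an assumption; any name under /etc/modules-load.d/ works):
```bash
# Persist the bridge module across reboots (file name is an assumption)
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# verify after a reboot
lsmod | grep br_netfilter
```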
Now that the basic environment is ready, prepare the docker and kubernetes yum repositories on each node. Alibaba Cloud's yum mirrors are recommended; run the following on the master node first.
2. Add yum source for docker
```bash
# Step 1: install some necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start the Docker service
sudo service docker start
```
3. Add yum source for kubernetes
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
4. Copy the yum repo files to the other two nodes
```bash
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node2:/etc/yum.repos.d/
```
5. Configure the Docker registry mirror (accelerator)
```bash
[root@master ~]# vim /etc/docker/daemon.json
{"registry-mirrors": ["https://1dmptu91.mirror.aliyuncs.com"]}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
```
Docker 18.09.0 is used because it is the highest Docker version recommended for Kubernetes 1.15.0.
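Step 3 above installs the latest docker-ce by default. A sketch of pinning the 18.09.0 packages instead; the exact version strings depend on the repository, so the ones below are an assumption and should be checked first:
```bash
# list the versions available in the repo, then install a pinned one
yum list docker-ce --showduplicates | sort -r
# "18.09.0" is an assumed version string; copy the exact one printed above
sudo yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
```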
6. The master node installs kubectl, kubelet and kubeadm; the other nodes only need kubelet and kubeadm
```bash
[root@master ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
[root@node1 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0
[root@node2 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl enable kubelet
```
The preparation work is now complete and we can start initialization. However, because of domestic network restrictions, we cannot pull the images directly from Google's registry. We therefore have to pull the images from a Docker registry mirror and retag them, which is done here with a script. Note that if you download the images yourself, the versions pulled by the following commands are not necessarily the ones that k8s 1.15 needs.
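A minimal sketch of such a script, assuming the images are pulled from the Alibaba Cloud mirror registry `registry.aliyuncs.com/google_containers` (an assumption) and retagged to the `k8s.gcr.io` names kubeadm expects; `kubeadm config images list` prints the exact tags for your version, so treat the loop below as an illustration:
```bash
#!/bin/bash
# Pull the control-plane images from a mirror registry and retag them (sketch; registry name is an assumption)
MIRROR=registry.aliyuncs.com/google_containers
images=$(kubeadm config images list --kubernetes-version=v1.15.0 | awk -F/ '{print $NF}')
for img in $images; do
    docker pull $MIRROR/$img
    docker tag  $MIRROR/$img k8s.gcr.io/$img
    docker rmi  $MIRROR/$img
done
```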
7. Image import
The images have already been downloaded here; you only need to import them.
```bash
[root@master ~]# cat images.sh
#!/bin/bash
for i in /root/images/*
do
    docker load < $i
done
echo -e "\e[1;31m Import complete\e[0m"
```
Or you can use the command directly
[root@master ~]# for i in /root/*;do docker load < $i;done
node1 and node2 nodes only need to import three images.
```bash
[root@node1 images]# ls
kube-proxy-1-15.tar  flannel-0.11.0.tar  pause-3-1.tar
```
8. Initialize the cluster
```bash
[root@master ~]# kubeadm init --kubernetes-version=v1.15.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --ignore-preflight-errors=Swap
```
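Incidentally, the same initialization can be expressed as a kubeadm configuration file, which is easier to keep under version control; a sketch assuming the v1beta2 kubeadm API shipped with 1.15 (the file name `kubeadm-config.yaml` is arbitrary). Note that the Pod subnet has to match the flannel manifest used later:
```yaml
# kubeadm-config.yaml (sketch) -- apply with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16      # must match the flannel network
  serviceSubnet: 10.96.0.0/12
```
Either way, when the init completes, kubeadm prints the follow-up commands and the join token for the worker nodes.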
Execute the corresponding command according to the prompt
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
[root@master ~]# kubectl get node
You can see that the master's status is NotReady. The reason is that the network add-on (flannel) is missing; without a Pod network, Pods cannot communicate with each other.
Add the network add-on (flannel). Its manifest can be obtained from https://github.com/coreos/flannel
```bash
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl get pod -n kube-system
```
Or download it in advance
[root@master ~]# kubectl apply -f kube-flannel.yml
[root@master ~]# kubectl get nodes
That completes the installation and deployment of the master node. Next, install the worker nodes and join them to the cluster. Before joining, verify that each worker node has imported the required images.
```bash
[root@node1 images]# ls
kube-proxy-1-15.tar  flannel-0.11.0.tar  pause-3-1.tar
[root@node1 ~]# kubeadm join 192.168.229.187:6443 --token 3s1qqv.2yjc3r09ghz9xkge \
    --discovery-token-ca-cert-hash sha256:d54805f71e054da4a2a0830e02884bf4b4e0b6a744e516a28e2b0c6ba3035c69
[root@node2 ~]# kubeadm join 192.168.229.187:6443 --token 3s1qqv.2yjc3r09ghz9xkge \
    --discovery-token-ca-cert-hash sha256:d54805f71e054da4a2a0830e02884bf4b4e0b6a744e516a28e2b0c6ba3035c69
[root@master ~]# kubectl get nodes
```
Make sure all Pods are running
[root@master ~]# kubectl get pod --all-namespaces
9. Set the automatic completion function of kubectl command line tool
```bash
[root@master ~]# yum install -y bash-completion
[root@master ~]# source /usr/share/bash-completion/bash_completion
[root@master ~]# source <(kubectl completion bash)
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
```
To make writing yaml files easier, set the indentation width in vim:
```bash
[root@master ~]# vim .vimrc
set tabstop=2
[root@master ~]# source .vimrc
```
10. Common commands for cluster debugging
How do you install a specific version of Kubernetes? Note that the Kubernetes version must be consistent, which mainly means all downloaded components use the same version.
If a problem occurs during cluster initialization, run the following command before initializing the cluster again, even after the problem has been fixed:
```bash
[root@master ~]# kubeadm reset
```
This resets the state of kube-proxy, kube-apiserver, kube-controller-manager and kube-scheduler.
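`kubeadm reset` does not clean up everything; a commonly used follow-up cleanup sketch (these extra steps are an assumption based on the hints kubeadm itself prints, adjust to your environment):
```bash
# flush rules and leftover config that kubeadm reset leaves behind (sketch)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf $HOME/.kube            # stale kubectl credentials on the master
rm -rf /etc/cni/net.d         # old CNI configuration
```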
List the installed rpm packages
yum list installed | grep kube
Uninstall the installed rpm package
yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y
Install a specific kubeadm version
yum install -y kubelet-1.12.1 kubeadm-1.12.1 kubectl-1.12.1
2, k8s architecture and basic concepts
Component introduction
- kubectl: the k8s command-line client, used to send the user's operational commands.
- API Server: the front-end interface of the k8s cluster. The various client tools and the other k8s components manage the cluster's resources through it. It exposes an HTTP/HTTPS RESTful API, i.e. the K8S API.
- Scheduler: decides which Node a Pod runs on. Scheduling takes the cluster topology, the current load on each node, and high-availability, performance and data-affinity requirements into account.
- Controller Manager: manages the cluster's resources and keeps them in the desired state. It is made up of several controllers, including the Replication Controller, Endpoints Controller, Namespace Controller, ServiceAccounts Controller, and so on.
- etcd: stores the k8s cluster's configuration and the state of every resource. When data changes, etcd quickly notifies the relevant k8s components. It is a third-party component with alternatives such as Consul and ZooKeeper.
- Pod: the smallest unit in a k8s cluster. A Pod runs one or more containers; in most cases a Pod contains only one container.
- Flannel: a k8s cluster network solution that enables Pod communication across hosts. It is a third-party solution and has alternatives as well.
- kubelet: the agent on each Node. Once the Scheduler decides that a Pod should run on a given Node, it sends the Pod's configuration to that node's kubelet; the kubelet creates and runs the containers accordingly and reports their status back to the Master.
- kube-proxy: forwards the TCP/UDP traffic destined for a Service to the backend containers. If there are multiple replicas, kube-proxy also load-balances between them.
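Most of these components run as Pods in the kube-system namespace, so you can inspect them directly; a quick check (the output will vary per cluster):
```bash
# control-plane and add-on components run as Pods in kube-system
kubectl get pod -n kube-system -o wide
# kubelet runs as a system service on every node, not as a Pod
systemctl status kubelet
```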
Examples
Create a deployment resource object (a Pod controller).
```bash
[root@master ~]# kubectl run test-web --image=httpd --replicas=2 --port=80
[root@master ~]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
test-web-667c58c6d8-7s7vb   1/1     Running   0          2m46s   10.244.1.2   node1   <none>           <none>
test-web-667c58c6d8-7xl96   1/1     Running   0          2m46s   10.244.2.2   node2   <none>           <none>
```
Analyze the role of each component and the architecture workflow:
1. kubectl sends the deployment request to the API Server;
2. the API Server notifies the Controller Manager to create a Deployment resource;
3. the Scheduler performs scheduling and assigns the two replica Pods to node1 and node2;
4. the kubelet on node1 and node2 creates and runs the Pods on its own node.

Supplement:
1. the application's configuration and current state are stored in etcd; when you run `kubectl get pod`, the API Server reads this data from etcd;
2. flannel assigns an IP to each Pod. No Service resource has been created yet, so kube-proxy is not involved at this point.
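You can watch this chain of ownership on the cluster itself; a quick check (resource names will differ in your cluster):
```bash
# the Deployment owns a ReplicaSet, which owns the Pods
kubectl get deployment,replicaset,pod -o wide
# the events recorded by each component along the way
kubectl describe deployment test-web
```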
Common resource object types
- Replication Controller: ensures that the number of Pod replicas always meets the target; in short, it keeps every container or group of containers running and reachable. It is the old-generation controller for stateless Pods.
- ReplicaSet: the new-generation controller for stateless Pods. It differs from the RC in the label selectors it supports: the RC supports only equality-based selectors, while the RS additionally supports set-based selectors.
- StatefulSet: manages stateful, persistent applications such as database services. Unlike a Deployment, it creates a unique, persistent identifier for each Pod and guarantees the ordering between Pods.
- DaemonSet: ensures that every node runs one replica of the Pod; newly added nodes also get the Pod, and when a node is removed, the Pod is reclaimed.
- Job: manages applications that terminate after they finish running, such as batch jobs.
- Volume: PV, PVC, StorageClass
- ConfigMap
- Secret
- Role
- ClusterRole
- RoleBinding
- ClusterRoleBinding
- Service account
- Helm
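Most of these types appear later in this article, but DaemonSet does not, so here is a minimal illustrative sketch of one (the name `node-logger` and the command are placeholders) that runs exactly one Pod on every node:
```yaml
# Illustrative DaemonSet: one Pod per node (name/command are placeholders)
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-logger
spec:
  selector:
    matchLabels:
      app: node-logger
  template:
    metadata:
      labels:
        app: node-logger
    spec:
      containers:
      - name: node-logger
        image: busybox
        args: ["sh", "-c", "tail -f /dev/null"]   # placeholder command to keep the Pod running
```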
3, Two ways to create resources
Create in command line mode
Create a Pod controller, a deployment (as of k8s 1.18, this command creates a plain Pod resource instead)
[root@docker ~]# kubectl run web --image=nginx --replicas=2
View controller status
[root@docker ~]# kubectl get deployments
View resource details
[root@docker ~]# kubectl describe deployment web
Note: when viewing a resource object without specifying a namespace, the default namespace is default. Add the -n option to view the resources of a specific namespace.
Note that what we created directly here is a deployment resource object, a frequently used controller type. Besides deployment there are other Pod controllers such as rc and rs; deployment is a higher-level Pod controller.
Create Service resource type
[root@docker ~]# kubectl expose deployment web --name svc --port=80 --type=NodePort
Note: to let the external network access the service, expose the deployment to obtain a service resource, and the type of the svc resource must be NodePort (the capitalization must match exactly).
Scaling the service up and down
```bash
[root@docker ~]# kubectl scale deployment web --replicas=8
[root@docker ~]# kubectl edit deployment web
```
Service upgrade and rollback
```bash
[root@docker ~]# kubectl set image deployment web web=nginx:1.15
[root@docker ~]# kubectl edit deployment web
```
Rollback
[root@docker ~]# kubectl rollout undo deployment web
Common command set
- kubectl run: create a deployment or job to manage the created container;
- kubectl get: display one or more resources; label filtering can be used; by default it shows the resources of the current namespace;
- kubectl expose: expose a resource as a new kubernetes Service; the resource can be a pod (po), service (svc), replication controller (rc), deployment (deploy) or replicaset (rs);
- kubectl describe: display the details of a specific resource or group of resources;
- kubectl scale: set a new replica count for a Deployment, ReplicaSet, Replication Controller, or StatefulSet; one or more preconditions can be specified;
- kubectl set: change an existing application resource;
- kubectl rollout: manage resource rollouts and rollbacks;
Manifests (yml, yaml files)
Common YAML fields and their functions
- apiVersion: API version information;
- kind: the kind of the resource object;
- metadata: metadata; the name field is required;
- spec: the state the user expects;
- status: the state the resource is currently in;
Use the kubectl explain command to see how the yaml file of a resource object should be written. For example, to view the deployment resource:
[root@docker ~]# kubectl explain deploy
Deployment
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web1
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: web1
    spec:
      containers:
      - name: web1
        image: nginx
```
Service
```yaml
kind: Service
apiVersion: v1
metadata:
  name: svc1
spec:
  selector:
    app: web1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
Use the same label and label selector content to associate two resource objects with each other.
Note: the default type of a created Service resource object is ClusterIP, which means it can be reached from any node inside the cluster. Its function is to provide a unified access entry for the backend Pods that actually serve the traffic. If you want the external network to be able to access the Service, change the type to NodePort.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: svc1
spec:
  type: NodePort        # specify the type so the service is reachable from outside the cluster
  selector:
    app: web1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30033     # the cluster-mapped port must be in the range 30000-32767
```
Run the yaml file using the kubectl apply command.
```bash
kubectl apply -f web.yaml
kubectl apply -f web-svc.yaml
```
Detailed explanation of resource types
Deployment, Service and Pod are the three core k8s resource objects.
Deployment: the most common controller for stateless applications, which supports the expansion and contraction of applications, rolling updates and other operations.
Service: provides a fixed access entry for a group of Pod objects whose membership and lifecycle change elastically; it is used for service discovery and service access.
Pod: the smallest unit of container execution and scheduling. A Pod can run multiple containers at the same time; they share the NET, UTS and IPC namespaces (there are also the USER, PID and MOUNT namespaces).
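To make the shared network namespace concrete, here is a small illustrative Pod (the names and the sidecar command are placeholders): because both containers share NET, the busybox container can reach the httpd container on localhost:80.
```yaml
# Illustrative two-container Pod sharing the network namespace
kind: Pod
apiVersion: v1
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: httpd            # listens on port 80
  - name: sidecar
    image: busybox
    # the sidecar reaches the web container via localhost because they share NET
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 >/dev/null; sleep 10; done"]
```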
1.Deployment
Create a Deployment resource object named test1 with replicas: 5, using the httpd image.
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: test1
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - name: test1
        image: httpd
        ports:
        - containerPort: 80
```
Why is Deployment called a higher-level Pod controller?
```bash
[root@master yaml]# kubectl describe deployment test1
...
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  3m5s  deployment-controller  Scaled up replica set test1-644db4b57b to 4
```
Note: Events records the whole history of the resource from creation up to now. The Deployment does not create or control the backend Pods directly; instead it creates a new resource object, a ReplicaSet (test1-644db4b57b).
View the details of this ReplicaSet and you will see its full Events history.
```bash
[root@master yaml]# kubectl describe rs test1-644db4b57b
...
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  9m53s  replicaset-controller  Created pod: test1-644db4b57b-4688l
  Normal  SuccessfulCreate  9m53s  replicaset-controller  Created pod: test1-644db4b57b-72vhc
  Normal  SuccessfulCreate  9m53s  replicaset-controller  Created pod: test1-644db4b57b-rmz8v
  Normal  SuccessfulCreate  9m53s  replicaset-controller  Created pod: test1-644db4b57b-w25q8
```
At this time, you can view the details of any Pod and see the complete workflow of this Pod.
```bash
[root@master yaml]# kubectl describe pod test1-644db4b57b-4688l
...
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  11m  default-scheduler  Successfully assigned default/test1-644db4b57b-4688l to node1
  Normal  Pulled     11m  kubelet, node1     Container image "192.168.229.187:5000/httpd:v1" already present on machine
  Normal  Created    11m  kubelet, node1     Created container test1
  Normal  Started    11m  kubelet, node1     Started container test1
```
2.Service
Create a Service resource associated with the test1 Deployment above.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: test-svc
spec:
  selector:
    app: test1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
By default, the Service resource type is ClusterIP. In the YAML above, spec.ports.port describes the port on the ClusterIP; it only provides a unified access entry for the backend Pods (valid inside the k8s cluster).
```bash
[root@master yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23d
test-svc     NodePort    10.103.159.206   <none>        80:32767/TCP   6s
[root@master yaml]# curl 10.103.159.206
11111
```
If you want the external network to access the backend Pod, you should change the resource type of the Service to NodePort.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: test-svc
spec:
  type: NodePort
  selector:
    app: test1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32034     # the valid range of nodePort is 30000-32767
```
As we saw above, when the ClusterIP is accessed, the backend Pods take turns serving the requests.
This load balancing is implemented by the kube-proxy component, and it only takes effect once a Service resource exists. Now that there is a Service resource, how does kube-proxy achieve load balancing? What is the underlying mechanism?
On the surface, you can find out which Pods actually back the Service by looking at its Endpoints with the describe command.
```bash
[root@master yaml]# kubectl describe svc test-svc
...
Endpoints:   10.244.1.33:80,10.244.1.34:80,10.244.2.27:80 + 1 more...
...
```
By viewing iptables rules, you can understand the specific process of balancing.
```bash
[root@master yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23d
test-svc     NodePort    10.103.159.206   <none>        80:32767/TCP   8m11s
[root@master yaml]# iptables-save | grep 10.103.159.206
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.159.206/32 -p tcp -m comment --comment "default/test-svc: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.159.206/32 -p tcp -m comment --comment "default/test-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-W3OX4ZP4Y24AQZNW
# TCP traffic whose destination is port 80 of 10.103.159.206/32 is jumped to the KUBE-SVC-W3OX4ZP4Y24AQZNW chain
[root@master yaml]# iptables-save | grep KUBE-SVC-W3OX4ZP4Y24AQZNW
-A KUBE-SVC-W3OX4ZP4Y24AQZNW -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-UNUAETQI6W3RVNLM
-A KUBE-SVC-W3OX4ZP4Y24AQZNW -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-QNYBCG5SL7K2IM5K
-A KUBE-SVC-W3OX4ZP4Y24AQZNW -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-NP7BZEGCZQZXEG4R
-A KUBE-SVC-W3OX4ZP4Y24AQZNW -j KUBE-SEP-5KNXDMFFQ4CLBFVI
[root@master yaml]# iptables-save | grep KUBE-SEP-UNUAETQI6W3RVNLM
:KUBE-SEP-UNUAETQI6W3RVNLM - [0:0]
-A KUBE-SEP-UNUAETQI6W3RVNLM -s 10.244.1.33/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-UNUAETQI6W3RVNLM -p tcp -m tcp -j DNAT --to-destination 10.244.1.33:80
-A KUBE-SVC-W3OX4ZP4Y24AQZNW -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-UNUAETQI6W3RVNLM
```
The probabilities 0.25, 0.33, 0.5 plus the unconditional final rule make the four endpoints equally likely: the first rule matches 1/4 of the traffic, the second 1/3 of what remains, the third 1/2 of the rest, and the last rule catches everything left over. Each KUBE-SEP chain then DNATs the traffic to one backend Pod.

- SNAT: Source NAT (source address translation)
- DNAT: Destination NAT (destination address translation)
- MASQ: dynamic source address translation (masquerade)
- Service load balancing: implemented with iptables rules by default
3. Rollback to the specified version
Prepare three versions of a private image to simulate a different image for each upgrade.
```bash
[root@master yaml]# vim test1.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: test1
spec:
  replicas: 4
  template:
    metadata:
      labels:
        test: test
    spec:
      containers:
      - name: test1
        image: 192.168.229.187:5000/httpd:v1

[root@master yaml]# vim test2.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: test1
spec:
  replicas: 4
  template:
    metadata:
      labels:
        test: test
    spec:
      containers:
      - name: test1
        image: 192.168.229.187:5000/httpd:v2

[root@master yaml]# vim test3.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: test1
spec:
  replicas: 4
  template:
    metadata:
      labels:
        test: test
    spec:
      containers:
      - name: test1
        image: 192.168.229.187:5000/httpd:v3
```
Here, three yaml files specify different versions of images.
Deploy the resource and record its version information.
[root@master yaml]# kubectl apply -f test1.yaml --record
See what version information is available
```bash
[root@master yaml]# kubectl rollout history deployment test1
deployment.extensions/test1
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=test1.yaml --record=true
```
Run and upgrade the deployment resource and record the version information.
```bash
[root@master yaml]# kubectl apply -f test2.yaml --record
[root@master yaml]# kubectl apply -f test3.yaml --record
```
Upgrade the version of test1
[root@master yaml]# kubectl set image deploy test1 test1=192.168.229.187:5000/httpd:v2
At this point, you can run an associated Service resource to verify whether the upgrade is successful.
```bash
[root@master yaml]# kubectl apply -f test-svc.yaml
[root@master yaml]# kubectl get deployment test1 -o wide
```
Rollback to the specified version
[root@master yaml]# kubectl rollout undo deployment test1 --to-revision=1
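After rolling back, you can confirm which image is now running; a quick check (revision numbers will differ in your cluster):
```bash
# the image column should show httpd:v1 again after the rollback
kubectl get deployment test1 -o wide
# the rollback itself becomes a new revision in the history
kubectl rollout history deployment test1
```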
4. Use labels to control Pod placement
Add node label
```bash
[root@master yaml]# kubectl label nodes node2 disk=ssd
node/node2 labeled
[root@master yaml]# kubectl get nodes --show-labels
```
Delete node label
```bash
[root@master yaml]# kubectl label nodes node2 disk-
```
To make use of the label, add a node selector to the Deployment so its Pods only land on nodes carrying it:
```yaml
...
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: test-web
        image: 192.168.1.10:5000/httpd:v1
        ports:
        - containerPort: 80
      nodeSelector:       # add the node selector
        disk: ssd         # must match the node label
```
5. Namespace
Default namespace: default
View namespaces
```bash
[root@master yaml]# kubectl get ns
NAME              STATUS   AGE
default           Active   25d
kube-node-lease   Active   25d
kube-public       Active   25d
kube-system       Active   25d
test              Active   22m
```
View namespace details
```bash
[root@master yaml]# kubectl describe ns default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No resource limits.
```
Create namespace
```bash
[root@master yaml]# kubectl create ns test
[root@master yaml]# vim test.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
```
Note: the namespace resource object only isolates resource objects; it cannot isolate the communication between Pods in different namespaces. That is the job of NetworkPolicy resources.
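For illustration, here is a minimal NetworkPolicy sketch that only allows Pods in the test namespace to be reached from within that same namespace (it requires a network plugin that enforces NetworkPolicy, which plain flannel does not):
```yaml
# Illustrative NetworkPolicy: block ingress from other namespaces
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: test
spec:
  podSelector: {}          # applies to every Pod in the test namespace
  ingress:
  - from:
    - podSelector: {}      # only traffic from Pods in the same namespace is allowed
```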
To view resources in a specific namespace, use the --namespace or -n option
```bash
[root@master yaml]# kubectl get pod -n test
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          28m
```
Delete a namespace
[root@master yaml]# kubectl delete ns test
Note: be careful with this operation and do not perform it lightly; once it is executed, all resources under the namespace are deleted as well.
6.pod
Creating pod resource with yaml file
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-app
    image: httpd
```
A single pod runs multiple containers
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-app
    image: httpd
  - name: test-web
    image: busybox
```
Image pull policy in a Pod
Change the image pull policy of the above Pod resource to IfNotPresent.
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-app
    image: httpd
    imagePullPolicy: IfNotPresent
```
By default, k8s applies one of three pull policies depending on the image tag.
- Always: when the image tag is "latest" or the tag is missing, always pull the latest image from the specified registry (the default official registry or a private one).
- IfNotPresent: download from the target registry only if the image does not exist locally; if a local image exists, it is used directly without pulling.
- Never: never pull from a registry, i.e. only local images are used.
Note: if the tag is "latest" or no tag is given, the default pull policy is "Always"; for images with any other tag, "IfNotPresent" is the default.
The "local" mentioned in the above statement refers to the image that can be viewed by the docker images command.
Container restart policy
Set the restart policy of the above Pod resource to OnFailure.
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-app
    image: httpd
    imagePullPolicy: IfNotPresent
```
The policies are as follows:

- Always: restart the Pod object whenever it terminates; this is the default;
- OnFailure: restart the Pod object only when it terminates with an error;
- Never: never restart.
Simple method:
Multiple resource objects can live in the same yaml file, but preferably only resources belonging to the same service. Different resources must be separated with "---". In fact, every resource's yaml document begins with "---" by default; it is just usually omitted for the first one.
```bash
[root@master yaml]# vim pod.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: test
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-app
    image: httpd
    imagePullPolicy: IfNotPresent
```
Default health check for Pod
According to the default restart strategy of the Pod, perform health check on the Pod
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: healthcheck
  namespace: test
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-app
    image: httpd
    imagePullPolicy: IfNotPresent
    args:
    - sh
    - -c
    - sleep 10; exit 1
```
Monitor the running status of pod
[root@master yaml]# kubectl get pod -n test -w
If we change the exit code to 0, i.e. a normal exit, and still use the OnFailure policy, the Pod will not be restarted.
```bash
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: healthcheck
  namespace: test
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-app
    image: httpd
    imagePullPolicy: IfNotPresent
    args:
    - sh
    - -c
    - sleep 10; exit 0
```
Monitor the running status of pod
[root@master yaml]# kubectl get pod -n test -w