A Pod (container group) is the smallest scheduling unit in Kubernetes. You can create a Pod directly through a YAML definition file. However, a Pod cannot heal itself: if the node it runs on fails, if the scheduler itself has problems, or if the Pod is evicted because of insufficient node resources or node maintenance, the Pod is deleted and will not come back on its own.
Therefore, in Kubernetes we generally do not create Pods directly, but manage them through a controller.
A controller provides Pods with the following capabilities:
- Horizontal scaling: control the number of Pod replicas
- Rollout: version updates
- Self-healing: when a node fails, the controller automatically schedules an identically configured Pod on another node to replace the Pod on the failed node
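The self-healing behavior above boils down to a reconciliation loop: the controller compares the desired replica count with the Pods actually running and creates or deletes Pods to close the gap. A minimal Python sketch (names and structure are illustrative, not the real controller code):

```python
def reconcile(desired_replicas, running_pods):
    """Return the create/delete actions needed to reach the desired state.

    running_pods: names of Pods currently running on healthy nodes.
    """
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few Pods (e.g. a node failed): create replacements.
        return [("create", f"pod-{i}") for i in range(diff)]
    if diff < 0:
        # Too many Pods: delete the surplus.
        return [("delete", name) for name in running_pods[:-diff]]
    return []  # Desired state already met.

# A node failure left only 1 of 3 Pods running: two "create" actions result.
print(reconcile(3, ["pod-a"]))
```

Real controllers run this comparison continuously against the cluster state in etcd, which is why a deleted or evicted Pod is replaced without manual intervention.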
The controllers supported in Kubernetes include:
- ReplicationController: maintains a stable set of Pod replicas
- ReplicaSet: the successor to ReplicationController. It adds one feature: support for set-based selectors. Rolling updates are not supported
- Deployment: builds on ReplicaSet and adds declarative, rolling updates. Recommended for stateless applications
- StatefulSet: manages stateful applications
- DaemonSet: runs a copy of a specified Pod as a daemon on nodes, for example for node monitoring or node-level log collection
- CronJob: creates Jobs on a predetermined schedule, similar to Linux crontab
- Job: executes a one-off task that terminates when it completes
Although in Kubernetes a Deployment is generally used to manage Pods, a Deployment in turn uses a ReplicaSet to maintain its Pod replicas, so ReplicaSet is briefly introduced here first.
A ReplicaSet definition has three main parts:
- selector: a label selector that specifies which Pods the ReplicaSet manages; it matches against Pod labels via matchLabels
- replicas: the desired number of Pod replicas the ReplicaSet should maintain; defaults to 1
- template: the Pod template; the ReplicaSet creates Pods from this definition
An example ReplicaSet definition is shown below,
apiVersion: apps/v1        # API version
kind: ReplicaSet           # Resource type
metadata:                  # Metadata
  name: nginx-ds           # ReplicaSet name
spec:
  replicas: 2              # Number of Pod replicas, default 1
  selector:                # Label selector
    matchLabels:
      app: nginx
  template:                # Pod template
    metadata:              # Pod metadata
      labels:
        app: nginx         # Pod label
    spec:
      containers:          # Container definition
      - name: nginx
        image: nginx
By creating and deleting Pods, the ReplicaSet ensures that the number of Pods matching its selector equals the number specified by replicas. A Pod created by a ReplicaSet carries a metadata.ownerReferences field that identifies which ReplicaSet the Pod belongs to; you can view it with kubectl get pod <pod-name> -o yaml.
Through the selector field, the ReplicaSet decides which Pods it should manage, regardless of whether it created them: any Pod whose labels match the selector, even one created externally, will be managed by the ReplicaSet. Therefore make sure spec.selector.matchLabels and spec.template.metadata.labels are consistent, and avoid overlapping with the selectors of other controllers, which would cause confusion.
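Label selection can be sketched in Python, including the set-based matchExpressions that ReplicaSet supports in addition to matchLabels (an illustrative sketch, not the real implementation):

```python
def matches(selector, pod_labels):
    """Check a Pod's labels against a ReplicaSet-style selector."""
    # Equality-based: every matchLabels entry must be present and equal.
    for key, value in selector.get("matchLabels", {}).items():
        if pod_labels.get(key) != value:
            return False
    # Set-based: each matchExpressions entry is an In/NotIn/Exists test.
    for expr in selector.get("matchExpressions", []):
        key, op = expr["key"], expr["operator"]
        if op == "In" and pod_labels.get(key) not in expr["values"]:
            return False
        if op == "NotIn" and pod_labels.get(key) in expr["values"]:
            return False
        if op == "Exists" and key not in pod_labels:
            return False
    return True

sel = {"matchLabels": {"app": "nginx"}}
print(matches(sel, {"app": "nginx", "pod-template-hash": "59c9f8dff"}))  # True
print(matches(sel, {"app": "redis"}))                                    # False
```

Note that any extra labels on the Pod (such as pod-template-hash) do not prevent a match; only the keys named in the selector are checked, which is exactly why an externally created Pod with matching labels gets adopted.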
ReplicaSet does not support rolling updates, so for stateless applications a Deployment is generally used rather than a ReplicaSet directly; within a Deployment, the ReplicaSet serves as the mechanism for creating, deleting and updating Pods.
A Deployment owns ReplicaSets as dependent objects and can update a ReplicaSet and its Pods declaratively with rolling updates. When you use a Deployment, you do not need to manage the ReplicaSets it creates; the Deployment handles all the related details. A Deployment manages Pods and ReplicaSets in a "declarative" way (in essence, codifying a series of operational steps for specific scenarios so they execute quickly and accurately) and provides version rollback.
An example Deployment definition,
apiVersion: apps/v1
kind: Deployment            # Object type, fixed as Deployment
metadata:
  name: nginx-deploy        # Deployment name
  namespace: default        # Namespace, defaults to default
  labels:
    app: nginx              # Deployment label
spec:
  replicas: 4               # Number of Pod replicas, default 1
  strategy:
    rollingUpdate:          # Rolling-update strategy; with replicas=4 the Pod count stays between 3 and 5 during the upgrade
      maxSurge: 1           # Max Pods above replicas during the rolling update; may also be a percentage of replicas (defaults to 25%)
      maxUnavailable: 1     # Max Pods unavailable during the rolling update; may also be a percentage of replicas (defaults to 25%)
  selector:                 # Label selector: selects the Pods managed by this Deployment
    matchLabels:
      app: nginx
  template:                 # Pod template
    metadata:
      labels:
        app: nginx          # Pod label
    spec:                   # Container template; may contain multiple containers
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
You can see which configuration options are supported with kubectl explain,
# View Deployment configuration items
[root@kmaster ~]# kubectl explain deployment
...
# View configuration items of the deployment.spec module
[root@kmaster ~]# kubectl explain deployment.spec
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds      <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused       <boolean>
     Indicates that the deployment is paused.

   progressDeadlineSeconds      <integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to
     process failed deployments and a condition with a ProgressDeadlineExceeded
     reason will be surfaced in the deployment status. Note that progress will
     not be estimated during the time a deployment is paused. Defaults to 600s.

   replicas     <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. Defaults
     to 10.

   selector     <Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy     <Object>
     The deployment strategy to use to replace existing pods with new ones.

   template     <Object> -required-
Description of other configuration items:
- .spec.minReadySeconds: controls the speed of an upgrade. During a rolling update, once a newly created Pod passes its readiness check it is considered available and the next round of replacement proceeds. .spec.minReadySeconds defines how long a new Pod must remain ready after creation before it is considered available; the update is blocked during this period.
- .spec.progressDeadlineSeconds: the number of seconds the Deployment may take to make progress before it is reported as failed, surfaced as a condition with Type=Progressing, Status=False, Reason=ProgressDeadlineExceeded. The Deployment controller will keep retrying the Deployment. If set, this value must be greater than .spec.minReadySeconds.
- .spec.revisionHistoryLimit: the number of old ReplicaSets (revisions) to retain for rollback; defaults to 10. If an old ReplicaSet is deleted, the Deployment can no longer roll back to that revision. If this value is set to 0, all ReplicaSets with 0 Pod replicas are deleted, and the Deployment cannot roll back because the revision history has been cleaned up.
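The effect of .spec.revisionHistoryLimit can be sketched as follows: scaled-to-0 ReplicaSets beyond the limit are garbage-collected, oldest first (an illustrative sketch, not the real controller logic):

```python
def trim_history(old_replicasets, limit):
    """old_replicasets: revision numbers of scaled-to-0 ReplicaSets, oldest
    first. Returns (kept, deleted) after applying revisionHistoryLimit;
    limit=None models 'no limit set' (keep everything)."""
    if limit is None:
        return old_replicasets, []
    # Delete the oldest entries so that at most `limit` remain.
    deleted = old_replicasets[:max(0, len(old_replicasets) - limit)]
    kept = old_replicasets[len(deleted):]
    return kept, deleted

kept, deleted = trim_history([1, 2, 3, 4], 2)
print(kept, deleted)  # [3, 4] [1, 2]
```

With limit=0 everything is deleted, which is why a Deployment configured that way can never roll back.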
[root@kmaster test]# kubectl apply -f nginx-deploy.yaml --record
The --record flag writes this command into the Deployment's kubernetes.io/change-cause annotation, so you can later see what caused each Deployment revision.
After the Deployment is created, the Deployment controller immediately creates a ReplicaSet, and the ReplicaSet creates the required Pods.
# View the Deployment
[root@kmaster test]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/2     2            0           64s
# View the ReplicaSet
[root@kmaster test]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-59c9f8dff   2         2         1       2m16s
# View the Pods, showing the scheduled nodes and labels
[root@kmaster test]# kubectl get pod -o wide --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES   LABELS
nginx-deploy-59c9f8dff-47bgd   1/1     Running   0          5m14s   10.244.1.91   knode2   <none>           <none>            app=nginx,pod-template-hash=59c9f8dff
nginx-deploy-59c9f8dff-q4zb8   1/1     Running   0          5m14s   10.244.3.47   knode3   <none>           <none>            app=nginx,pod-template-hash=59c9f8dff
When the Deployment creates a ReplicaSet, it adds a pod-template-hash label to the ReplicaSet, which in turn adds the label to its Pods. This label distinguishes which ReplicaSet in a Deployment created which Pods. Its value is a hash of spec.template and should not be modified. As seen above, ReplicaSet and Pod names follow the formats <Deployment name>-<pod-template-hash> and <Deployment name>-<pod-template-hash>-xxx respectively.
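The idea behind the hash can be illustrated roughly as follows: any change to the Pod template yields a different hash, and therefore a different ReplicaSet name (Kubernetes actually derives the value with an FNV-based hash of the serialized template; this sketch just uses SHA-1 for illustration):

```python
import hashlib
import json

def template_hash(pod_template):
    """Derive a short hash from a Pod template dict (illustrative only,
    not the real FNV-based algorithm Kubernetes uses)."""
    serialized = json.dumps(pod_template, sort_keys=True).encode()
    return hashlib.sha1(serialized).hexdigest()[:9]

t1 = {"containers": [{"name": "nginx", "image": "nginx"}]}
t2 = {"containers": [{"name": "nginx", "image": "nginx:1.16.1"}]}
# Different templates -> different hashes -> different ReplicaSet names.
print(f"nginx-deploy-{template_hash(t1)}")
print(f"nginx-deploy-{template_hash(t2)}")
```

This is also why editing .spec.template triggers a rollout: the new hash means no existing ReplicaSet matches, so a new one must be created.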
3. Releasing updates (rollout)
A Deployment rollout is triggered if and only if the content of the Deployment's Pod template (.spec.template) changes (for example, a label or a container image is changed). Changes to other Deployment fields (such as .spec.replicas) do not trigger a rollout.
When you update the Pod definition in a Deployment (for example, release a new container image version), the Deployment controller creates a new ReplicaSet for the Deployment, then gradually creates Pods in the new ReplicaSet while deleting Pods in the old one, achieving a rolling update.
For example, let's modify the container image of the Deployment above,
# Method 1: set the image directly with kubectl set
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record
deployment.apps/nginx-deploy image updated
# Method 2: edit the YAML with kubectl edit
[root@kmaster ~]# kubectl edit deploy nginx-deploy
View the status of the rollout,
[root@kmaster ~]# kubectl rollout status deploy nginx-deploy
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
View the ReplicaSet,
[root@kmaster ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-59c9f8dff   1         1         1       3d6h
nginx-deploy-d47dbbb7c   4         4         2       3m41s
We can see that the Deployment update is achieved by creating a new 4-replica ReplicaSet while scaling the old ReplicaSet down to 0.
Because maxSurge and maxUnavailable are both set to 1, at any moment during the update the two ReplicaSets together have at most 5 Pods (4 replicas + 1 maxSurge), and at least 3 Pods are available (4 replicas - 1 maxUnavailable).
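These bounds follow directly from the two fields. When they are given as percentages of replicas, maxSurge is rounded up and maxUnavailable is rounded down; a sketch:

```python
import math

def rollout_bounds(replicas, max_surge, max_unavailable):
    """Return (min_available, max_total) Pod counts during a rolling update.
    max_surge / max_unavailable may be absolute ints or percent strings;
    percentages round up for maxSurge and down for maxUnavailable."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = replicas * int(value[:-1]) / 100
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return value
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas - unavailable, replicas + surge

print(rollout_bounds(4, 1, 1))           # (3, 5) -- the example above
print(rollout_bounds(10, "25%", "25%"))  # (8, 13)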
Use the kubectl describe command to view the Events section of the Deployment, as shown below,
[root@kmaster ~]# kubectl describe deploy nginx-deploy
...
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 1
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 3
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 2
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 2
  Normal  ScalingReplicaSet  10m    deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 3
  Normal  ScalingReplicaSet  8m56s  deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 1
  Normal  ScalingReplicaSet  8m56s  deployment-controller  Scaled up replica set nginx-deploy-d47dbbb7c to 4
  Normal  ScalingReplicaSet  5m55s  deployment-controller  Scaled down replica set nginx-deploy-59c9f8dff to 0
When the Pod template of the Deployment was updated, the Deployment controller created a new ReplicaSet (nginx-deploy-d47dbbb7c), scaled it up to 1 replica, and scaled the old ReplicaSet (nginx-deploy-59c9f8dff) down to 3 replicas. The controller then continued scaling the new ReplicaSet up and the old one down until the new ReplicaSet had replicas Pods and the old ReplicaSet had 0. This process is called a rollout.
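The scaling sequence can be simulated with a simplified loop: at each step the controller scales the new ReplicaSet up as far as maxSurge allows and the old one down as far as maxUnavailable allows. This sketch assumes new Pods become ready immediately, so its step sequence differs from the real events, which wait on readiness:

```python
def simulate_rollout(replicas, max_surge, max_unavailable):
    """Simplified rolling-update loop (illustrative). Assumes new Pods are
    ready at once; requires max_surge + max_unavailable > 0 to progress."""
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # Scale new RS up, keeping total Pods <= replicas + maxSurge.
        up = min(replicas - new, replicas + max_surge - (old + new))
        if up > 0:
            new += up
            steps.append(f"scale up new to {new}")
        # Scale old RS down, keeping available Pods >= replicas - maxUnavailable.
        down = min(old, (old + new) - (replicas - max_unavailable))
        if down > 0:
            old -= down
            steps.append(f"scale down old to {old}")
    return steps

for step in simulate_rollout(4, 1, 1):
    print(step)
```

The loop terminates with the new ReplicaSet at full size and the old one at 0, mirroring the end state of the events above.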
The update strategy is specified through the .spec.strategy field. Besides the RollingUpdate used above, the other possible value is Recreate: the Deployment first deletes all Pods in the old ReplicaSet, then creates the new ReplicaSet and its Pods, so the application is unavailable for a period during the update. RollingUpdate is therefore generally used in production environments.
4. Version rollback

By default, Kubernetes saves the Deployment's rollout history; the number of revisions kept can be set with the .spec.revisionHistoryLimit configuration item (default 10).
Kubernetes creates a new Deployment revision if and only if the .spec.template field is modified (for example, by changing a container image). Other updates to the Deployment (such as modifying .spec.replicas) do not create a new revision.
View the revision of the Deployment,
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-deploy.yaml --record=true
2         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
If --record=true was not used when updating the Deployment, the CHANGE-CAUSE column here is empty.
Let's change the image to a nonexistent version to simulate a failed update, then roll back to a previous version,
# 1. Change the image to a nonexistent version
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record
deployment.apps/nginx-deploy image updated
# 2. View the ReplicaSets
[root@kmaster ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-58f69cfc57   2         2         0       2m7s
nginx-deploy-59c9f8dff    0         0         0       3d7h
nginx-deploy-d47dbbb7c    3         3         3       81m
# 3. View Pod status
[root@kmaster ~]# kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-58f69cfc57-5968g   0/1     ContainerCreating   0          42s
nginx-deploy-58f69cfc57-tk7c5   0/1     ErrImagePull        0          42s
nginx-deploy-d47dbbb7c-2chgx    1/1     Running             0          77m
nginx-deploy-d47dbbb7c-8fcb9    1/1     Running             0          80m
nginx-deploy-d47dbbb7c-gnwjj    1/1     Running             0          78m
# 4. View Deployment details
[root@kmaster ~]# kubectl describe deploy nginx-deploy
...
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled up replica set nginx-deploy-58f69cfc57 to 1
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled down replica set nginx-deploy-d47dbbb7c to 3
  Normal  ScalingReplicaSet  3m57s  deployment-controller  Scaled up replica set nginx-deploy-58f69cfc57 to 2
# 5. View the Deployment's revision history
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-deploy.yaml --record=true
2         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
# 6. View the details of a revision
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy --revision=3
deployment.apps/nginx-deploy with revision #3
Pod Template:
  Labels:       app=nginx
                pod-template-hash=58f69cfc57
  Annotations:  kubernetes.io/change-cause: kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
  Containers:
   nginx:
    Image:        nginx:1.161
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
# 7. Roll back to the previous revision
[root@kmaster ~]# kubectl rollout undo deploy nginx-deploy
deployment.apps/nginx-deploy rolled back
# 8. Roll back to a specified revision
[root@kmaster ~]# kubectl rollout undo deploy nginx-deploy --to-revision=1
deployment.apps/nginx-deploy rolled back
# 9. View the revision history
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true
The kubectl rollout undo command rolls back to the previous revision or to a specified one. The example above also shows that rolling back to a historical revision renumbers it with the latest sequence number. As mentioned earlier, .spec.revisionHistoryLimit controls how many old ReplicaSets (revisions) are kept; those beyond the limit are garbage-collected in the background. If the field is set to 0, Kubernetes cleans up the Deployment's entire history and the Deployment can no longer roll back.
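The renumbering on rollback can be modeled with a toy sketch: rolling back to revision N does not restore the number N; the rolled-back template is re-recorded under the next sequence number (illustrative only):

```python
class RevisionHistory:
    """Toy model of Deployment revision numbering (illustrative)."""
    def __init__(self):
        self.revisions = {}  # revision number -> template description
        self.counter = 0

    def record(self, template):
        self.counter += 1
        self.revisions[self.counter] = template

    def rollback_to(self, number):
        # The old revision's template becomes the newest revision;
        # the old number disappears from the history.
        template = self.revisions.pop(number)
        self.record(template)

h = RevisionHistory()
h.record("nginx")          # revision 1
h.record("nginx:1.16.1")   # revision 2
h.record("nginx:1.161")    # revision 3 (broken image)
h.rollback_to(2)           # 1.16.1 becomes revision 4
h.rollback_to(1)           # original spec becomes revision 5
print(sorted(h.revisions))  # [3, 4, 5] -- matching the history output above
```

This matches the example: after the two rollbacks, the history lists revisions 3, 4 and 5, with the original spec now at the top as revision 5.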
5. Scaling
You can scale a Deployment, increasing or decreasing the number of Pod replicas, with the kubectl scale command or by modifying the definition with kubectl edit,
# Reduce the number of Pods to 2
[root@kmaster ~]# kubectl scale deploy nginx-deploy --replicas=2
deployment.apps/nginx-deploy scaled
# View the Pods
[root@kmaster ~]# kubectl get pod
NAME                           READY   STATUS        RESTARTS   AGE
nginx-deploy-59c9f8dff-7bpjp   1/1     Running       0          9m48s
nginx-deploy-59c9f8dff-tpxzf   0/1     Terminating   0          8m57s
nginx-deploy-59c9f8dff-v8fgz   0/1     Terminating   0          10m
nginx-deploy-59c9f8dff-w8s9z   1/1     Running       0          10m
# View the ReplicaSets; DESIRED has changed to 2
[root@kmaster ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-58f69cfc57   0         0         0       22m
nginx-deploy-59c9f8dff    2         2         2       3d8h
nginx-deploy-d47dbbb7c    0         0         0       102m
6. Autoscaling (HPA)
If the Horizontal Pod Autoscaler (HPA) is enabled in the cluster, Pods can be scaled automatically based on load,
# Create an HPA
[root@kmaster ~]# kubectl autoscale deploy nginx-deploy --min=2 --max=4 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx-deploy autoscaled
# View the HPA
[root@kmaster ~]# kubectl get hpa
NAME           REFERENCE                 TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   <unknown>/80%  2         4         2          16s
# Delete the HPA
[root@kmaster ~]# kubectl delete hpa nginx-deploy
horizontalpodautoscaler.autoscaling "nginx-deploy" deleted
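The HPA's core scaling rule, per the Kubernetes documentation, is desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the min/max bounds. A sketch (simplified: single metric, no stabilization window or tolerance band):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas, max_replicas):
    """Simplified HorizontalPodAutoscaler scaling rule: scale in proportion
    to how far the observed metric is from its target, then clamp."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Average CPU at 160% of a 80% target with 2 replicas -> scale up to 4.
print(hpa_desired_replicas(2, 160, 80, min_replicas=2, max_replicas=4))  # 4
# CPU well below target -> scale down, but never below minReplicas.
print(hpa_desired_replicas(4, 10, 80, min_replicas=2, max_replicas=4))   # 2
```

Note the <unknown> target in the output above: the HPA needs a metrics source (such as metrics-server) installed before it can read CPU usage and apply this rule.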
7. Pause and resume
We can pause a Deployment, make one or more updates to it without triggering a rollout, and then resume it, at which point all updates made in the interim are applied at once. This lets you modify the Deployment multiple times between pause and resume without triggering unnecessary rolling updates.
# 1. Pause the Deployment
[root@kmaster ~]# kubectl rollout pause deploy nginx-deploy
deployment.apps/nginx-deploy paused
# 2. Update the container image
[root@kmaster ~]# kubectl set image deploy nginx-deploy nginx=nginx:1.9.1 --record
deployment.apps/nginx-deploy image updated
# 3. Check the revision history; no rollout has been triggered
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true
# 4. Updating the resource limits does not trigger a rollout either
[root@kmaster ~]# kubectl set resources deploy nginx-deploy -c=nginx --limits=memory=512Mi,cpu=500m
deployment.apps/nginx-deploy resource requirements updated
# 5. Describe the Deployment; the Pod template has been updated
[root@kmaster ~]# kubectl describe deploy nginx-deploy
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     500m
      memory:  512Mi
# 6. Resume the Deployment
[root@kmaster ~]# kubectl rollout resume deploy nginx-deploy
deployment.apps/nginx-deploy resumed
# 7. Check the revision history; the two changes produced only one rollout
[root@kmaster ~]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
3         kubectl set image deploy nginx-deploy nginx=nginx:1.161 --record=true
4         kubectl set image deploy nginx-deploy nginx=nginx:1.16.1 --record=true
5         kubectl apply --filename=nginx-deploy.yaml --record=true
6         kubectl set image deploy nginx-deploy nginx=nginx:1.9.1 --record=true
Because the Deployment is paused, updating the container image does not produce a new revision. When the Deployment is resumed, the accumulated updates take effect and a rolling update generates a new revision. Since no revision is generated while paused, a paused Deployment cannot be rolled back; you must resume it before rolling back.
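Pause-and-resume can be modeled as batching: template edits made while paused accumulate without producing a revision, and resuming applies them all as one rollout. A toy sketch (illustrative, not the real controller):

```python
class PausedDeployment:
    """Toy model: template edits while paused produce no new revision."""
    def __init__(self, template):
        self.template = dict(template)
        self.paused = False
        self.revisions = [dict(template)]  # rollout history

    def pause(self):
        self.paused = True

    def edit(self, **changes):
        self.template.update(changes)
        if not self.paused:
            self.revisions.append(dict(self.template))  # immediate rollout

    def resume(self):
        self.paused = False
        if self.template != self.revisions[-1]:
            self.revisions.append(dict(self.template))  # one combined rollout

d = PausedDeployment({"image": "nginx:1.16.1"})
d.pause()
d.edit(image="nginx:1.9.1")                         # no rollout yet
d.edit(limits={"cpu": "500m", "memory": "512Mi"})   # still no rollout
d.resume()                                          # both changes -> one revision
print(len(d.revisions))  # 2
```

This mirrors the shell session above: two edits while paused, but only one new revision (revision 6) after resuming.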
8. Canary release
A canary release is also called a gray release. When releasing a new version, we can create a separate Deployment for it and place it behind the same Service as the old version's Deployment (through label matching). The Service's load balancing then sends part of the user traffic to the new Deployment's Pods, and we observe how the new version behaves. If there are no problems, we update the old Deployment to the new version to complete the rolling update, and finally delete the canary Deployment. Clearly, this form of canary release has limitations: traffic cannot be split by user or region. To realize a full canary release, something like Istio may need to be introduced.
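Because a Service balances across all matching Pods, the canary's traffic share is simply its fraction of the total replica count, which is exactly the limitation noted above (no per-user or per-region routing). A sketch:

```python
def canary_traffic_share(stable_replicas, canary_replicas):
    """Approximate request share the canary Pods receive when both
    Deployments sit behind one Service with even load balancing."""
    total = stable_replicas + canary_replicas
    return canary_replicas / total

# 3 stable Pods + 1 canary Pod -> the canary sees about 25% of requests.
print(canary_traffic_share(3, 1))  # 0.25
```

To shift more or less traffic to the canary, you can only adjust replica counts; finer-grained routing (by header, user or region) requires a service mesh such as Istio.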
Origin of the name: in the past, a major danger miners faced underground was toxic gas in the mine. To detect it, miners carried a canary down with them; canaries are more sensitive to toxic gas than humans and would succumb first, providing an early warning. The principle behind it: try and fail at low cost, so that even a serious error (toxic gas) costs the system little overall (the loss of one canary).
The smallest scheduling unit in Kubernetes is the Pod. Workload controllers create Pods and keep them running at a specified replica count. A Deployment manages Pods and ReplicaSets in a "declarative" way and provides rolling update and revision rollback. Therefore, applications are generally deployed with a Deployment rather than by operating ReplicaSets or Pods directly.
[reprint, please indicate the source]
Author: rain song
Welcome to follow the author's official account, Half Way Rain Song, for more practical technical articles