If the cloud is made of water drops, Kubernetes is a water-drop management platform

Abstract: a cloud is composed of many small water droplets. Imagine each computer as a droplet; together they form a cloud. Generally, the droplets appear first, and the platforms for managing them (such as OpenStack and Kubernetes) appear later.

1. Cloud computing – an independent universe

1. A cloud is composed of many small water droplets: imagine each computer as a droplet, and together they form a cloud. The traditional droplet is the VM; the arrival of Docker changed the granularity of the droplets

2. Each droplet can operate independently and is internally complete (e.g., a VM or a Docker container)

3. Generally, the droplets appear first, and the platforms for managing them (such as OpenStack and Kubernetes) appear later

2. Introduction to Kubernetes

1. Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. It aims to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance

2. A core feature of Kubernetes is that it manages containers autonomously, ensuring that the containers in the cluster always run in the state the user declared (for example, if the user wants dlcatalog to run at all times, the user does not need to do anything special: Kubernetes automatically monitors it, restarts it, and creates replacements; in short, it keeps dlcatalog serving at all times). A sketch of this declarative model follows this list

3. In Kubernetes, all containers run inside Pods. A Pod can host one or more related containers
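
As a sketch of that declarative model (the image name and replica count here are illustrative assumptions, not taken from a real deployment), the following Deployment asks Kubernetes to keep two replicas of dlcatalog running; whenever a container or Pod dies, Kubernetes creates a replacement:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: dlcatalog
spec:
    replicas: 2                      # desired state: two Pods at all times
    selector:
        matchLabels:
            app: dlcatalog
    template:
        metadata:
            labels:
                app: dlcatalog
        spec:
            containers:
            - name: dlcatalog
              image: dlcatalog:1.0   # illustrative image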

3. Typical Kubernetes terms

1. Pod

In Kubernetes, the smallest unit of management is not an individual container but a Pod. A Pod is the "logical host" of the container environment, composed of one or more related containers that share storage. Within the same Pod, containers must not use duplicate ports; otherwise the Pod will fail to start or will restart indefinitely
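
As a minimal sketch (the names, images, and commands are illustrative), a Pod hosting two related containers that share a disk could look like this:

apiVersion: v1
kind: Pod
metadata:
    name: two-container-pod
spec:
    volumes:
    - name: shared-data              # disk shared by both containers
      emptyDir: {}
    containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
      - name: shared-data
        mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: shared-data
        mountPath: /data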

2. Node

A Node is the real host on which Pods run; it can be a physical machine or a virtual machine. To manage Pods, every node must run at least a container runtime (such as Docker), kubelet, and kube-proxy. Nodes are not created by Kubernetes itself; Kubernetes only manages the resources on them. Although you can create a Node object through a manifest (as in the JSON below), Kubernetes only checks whether such a node actually exists; if the check fails, no Pods will be scheduled onto it

{
    "kind": "Node",
    "apiVersion": "v1",
    "metadata": {
        "name": "10.63.90.18",
        "labels": {
            "name": "my-first-k8s-node"
        }
    }
}
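
Assuming the manifest above is saved as node.json, it can be submitted with kubectl create -f node.json; Kubernetes will then check whether the node is actually alive before scheduling any Pods onto it.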

3. Service

A Service is an abstraction, and it is the essence of k8s. Each app on k8s can claim a "name" within the cluster to represent itself; k8s assigns the app a Service carrying a "virtual IP", and any client inside the cluster that accesses this IP reaches the app

Suppose we have some Pods, each exposing port 9083 and carrying the label app=MyApp. The following YAML creates a new Service object named my-dlcatalog-metastore-service that targets port 9083 on every Pod labeled app=MyApp. The Service is assigned a cluster IP, which kube-proxy uses; accessing that IP from within the cluster is equivalent to accessing the app. Note that the actual IP address of a Pod is generally of little use in k8s

kind: Service
apiVersion: v1
metadata:
    name: my-dlcatalog-metastore-service
spec:
    selector:
        app: MyApp
    ports:
    - protocol: TCP
      port: 20403
      targetPort: 9083
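
Once the Service exists, Pods in the same namespace can reach the app simply as my-dlcatalog-metastore-service:20403 (or, from other namespaces, via the fully qualified name my-dlcatalog-metastore-service.<namespace>.svc.cluster.local); kube-proxy forwards the traffic to port 9083 of one of the Pods labeled app=MyApp.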

4. ConfigMap

ConfigMap holds configuration data as key-value pairs; it can store individual properties or whole configuration files. A ConfigMap is very similar to a Secret, but it is more convenient for strings that contain no sensitive information
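
For reference, a ConfigMap like the special-config mounted in the Pod below could be created from a manifest such as this (the keys and values are illustrative assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
    name: special-config
data:
    special.level: "very"            # illustrative key-value pairs
    special.type: "charm"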

A ConfigMap can be mounted directly as a file or directory through a volume

The following manifest mounts the ConfigMap created above into the Pod's /etc/config directory

apiVersion: v1
kind: Pod
metadata:
    name: vol-test-pod
spec:
    containers:
        - name: test-container
          image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
          command: [ "/bin/sh", "bin/start_server.sh" ]
          volumeMounts:
          - name: config-volume
            mountPath: /etc/config
    volumes:
        - name: config-volume
          configMap:
            name: special-config
    restartPolicy: Never
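
With this mount, every key in special-config appears as a file under /etc/config inside the container; for the illustrative ConfigMap above, /etc/config/special.level would contain very.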

4. Fancy scheduling of Kubernetes resources

Scheduling onto specified Nodes

There are three ways to restrict a Pod to run only on specified Nodes

Method 1:

nodeSelector: the Pod is scheduled only onto Nodes whose labels match the specified selector, as in the sketch below
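
A minimal sketch (the disktype=ssd label is an illustrative assumption): the Pod below is only scheduled onto Nodes that carry this label.

apiVersion: v1
kind: Pod
metadata:
    name: with-node-selector
spec:
    nodeSelector:
        disktype: ssd                # only Nodes labeled disktype=ssd are eligible
    containers:
    - name: with-node-selector
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530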

Method 2:

nodeAffinity: a more expressive node selector that supports, for example, set operations

nodeAffinity currently supports two forms, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which express hard requirements and soft preferences respectively

For example, the following manifest requires scheduling onto a Node whose kubernetes.io/e2e-az-name label has the value e2e-az1 or e2e-az2, and prefers Nodes carrying the label another-node-label-key=another-node-label-value

apiVersion: v1
kind: Pod
metadata:
    name: with-node-affinity
spec:
    affinity:
        nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/e2e-az-name
                    operator: In
                    values:
                    - e2e-az1
                    - e2e-az2
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                  matchExpressions:
                  - key: another-node-label-key
                    operator: In
                    values:
                    - another-node-label-value
    containers:
    - name: with-node-affinity
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
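
Besides In, the operator field in these matchExpressions also supports NotIn, Exists, DoesNotExist, Gt, and Lt, which is what makes nodeAffinity more expressive than a plain nodeSelector.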

Method 3:

podAffinity: schedules the Pod according to the Pods already running on candidate Nodes

Unlike nodeAffinity, which matches labels on the Node itself, podAffinity and podAntiAffinity constrain scheduling based on the labels of the Pods already running on a Node, within a given topology domain

This feature is quite flexible; the following two examples illustrate it:

The first example shows:

The Pod may only be scheduled onto a Node whose Zone already contains at least one running Pod labeled security=S1; in addition, it preferably avoids any Node that is running at least one Pod labeled security=S2

apiVersion: v1
kind: Pod
metadata:
    name: with-pod-affinity
spec:
    affinity:
        podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                  matchExpressions:
                  - key: security
                    operator: In
                    values:
                    - S1
              topologyKey: failure-domain.beta.kubernetes.io/zone
        podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                  labelSelector:
                      matchExpressions:
                      - key: security
                        operator: In
                        values:
                        - S2
                  topologyKey: kubernetes.io/hostname
    containers:
    - name: with-pod-affinity
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
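
In both terms, topologyKey defines the scope of "co-location": failure-domain.beta.kubernetes.io/zone groups Nodes by availability zone, while kubernetes.io/hostname makes every Node its own domain, so the preferred anti-affinity above is evaluated per Node.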

The second example shows:

The Pod preferably avoids any Node whose Zone contains at least one running Pod labeled appVersion=jwsdlcatalog-x86_64-1.0.1.20200918144530, and it must not be scheduled onto a Node that is already running a Pod labeled app=jwsdlcatalog-x86_64. Note that get_input and concat in the snippet below are template functions of the deployment tooling, resolved before the manifest reaches Kubernetes

spec:
  restartPolicy: Always         # Pod restart policy
  securityContext:
    runAsUser: 2000
    fsGroup: 2000
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: appVersion
                  operator: In
                  values:
                    - concat:
                        - get_input: IMAGE_NAME
                        - '-'
                        - get_input: IMAGE_VERSION
            #numOfMatchingPods: "2"   # do not add this field here; it is a Huawei-specific extension not accepted by the upstream community
            topologyKey: "failure-domain.beta.kubernetes.io/zone"
          weight: 100
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - get_input: IMAGE_NAME
          numOfMatchingPods: "1"
          topologyKey: "kubernetes.io/hostname"
  containers:
    - image:
        concat:                       # splice the full image reference ADDR/NAME:VERSION; using concat also avoids type problems when an input is purely numeric
          - get_input: IMAGE_ADDR
          - "/"
          - get_input: IMAGE_NAME
          - ":"
          - get_input: IMAGE_VERSION
      name: jwsdlcatalog

Note: this article is purely a personal point of view; if some pictures look the same as elsewhere, it is pure coincidence

 
