Containerized Deployment - Kubernetes (k8s) Quick Start

Containerized deployment

With the popularity of Docker technology, containerized deployment of projects is becoming more and more popular. The advantages of containerized deployment are as follows:

① It can ensure that each container has its own file system, CPU, memory, process space, etc.

②The resources required to run the application are packaged by the container and decoupled from the underlying infrastructure

③ Containerized applications can be deployed across cloud service providers and across Linux operating system distributions

Although containerized deployment can bring a lot of convenience, there are also some problems, such as:

① When a container fails and shuts down, how can another container be started immediately to replace it?

② When the amount of concurrent access increases, how can the number of containers be scaled horizontally?

These container management problems are collectively referred to as container orchestration problems. In order to solve these container orchestration problems, some container orchestration technologies have been developed:

①Swarm: Docker's own container orchestration tool

②Mesos: Apache's unified resource management and control tool, which needs to be used in conjunction with Marathon

③Kubernetes: Google's open source container orchestration tool

Kubernetes is by far the most popular container orchestration technology.

k8s

kubernetes is called k8s for short because there are 8 characters between the "k" and the "s". It is a leading, container-based solution for distributed architectures; its first official version was released in July 2015. In essence it is a cluster of servers, running specific programs on each node of the cluster to manage the containers on that node. It mainly provides the following functions:

①Self-repair: Once a container crashes, it can quickly start a new container in about 1 second

② Elastic scaling: The number of containers running in the cluster can be automatically adjusted as needed

③Service discovery: A service can find the services it depends on through automatic discovery

④Load balancing: If a service starts multiple containers, it can automatically achieve load balancing of requests

⑤Version rollback: If you find a problem with the newly released version of the program, you can roll back to the original version immediately

⑥Storage Orchestration: Storage volumes can be automatically created according to the needs of the container itself

components

A k8s cluster is mainly composed of control nodes (master) and worker nodes (nodes), and different components are installed on each node.

master: the control plane of the cluster, responsible for the decision-making (management) of the cluster

ApiServer: The only entry point for resource operations; it receives user commands and provides mechanisms such as authentication, authorization, API registration and discovery

Scheduler: Responsible for cluster resource scheduling, placing Pods onto the appropriate nodes according to the predetermined scheduling policy

ControllerManager: Responsible for maintaining the state of the cluster, such as program deployment arrangements, fault detection, automatic expansion, rolling updates, etc.

Etcd: responsible for storing information about various resource objects in the cluster

node: the data plane of the cluster, responsible for providing the running environment for the container (work)

Kubelet: Responsible for maintaining the life cycle of containers, that is, it creates, updates, and destroys containers by controlling docker

KubeProxy: Responsible for providing service discovery and load balancing within the cluster

Docker: Responsible for various operations of containers on nodes

Next, deploy an nginx service to illustrate the calling relationship of each component of the kubernetes system:

①First of all, it must be clear that once the kubernetes environment is started, both the master and the node will store their own information in the etcd database

② An installation request for an nginx service will first be sent to the apiServer component of the master node

③ The apiServer component calls the scheduler component to decide on which node the service should be installed

At this time, the scheduler reads the information of each node from etcd, selects a node according to a certain algorithm, and informs apiServer of the result

④ apiServer calls controller-manager to schedule Node node to install nginx service

⑤ After kubelet receives the instruction, it notifies docker, and docker starts an nginx Pod. The Pod is the smallest unit of operation in kubernetes; containers must run inside Pods.

⑥ At this point an nginx service is running. To access nginx, kube-proxy generates a proxy for the Pod.

In this way, external users can access the nginx service in the cluster

Core concepts

Master: cluster control node, each cluster needs at least one master node responsible for cluster management and control

Node: workload node; the master assigns containers to these worker nodes, and docker on each node is responsible for running the containers

Pod: The smallest control unit of kubernetes, containers are all running in pods, and a pod can have one or more containers

Controller: The controller, through which the management of pods is realized, such as starting pods, stopping pods, scaling the number of pods, etc.

Service: the unified entry point through which pods provide services externally; multiple pods of the same class can sit behind one Service

Label: label, used to classify pods, the same type of pod will have the same label

NameSpace: namespace, used to isolate the running environments of pods
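To make these concepts concrete, here is a minimal, hypothetical manifest (names and image tag are illustrative) showing a labeled Pod in a namespace, and a Service that selects pods by that label:

```yaml
# Hypothetical sketch: a labeled Pod and a Service selecting it by label
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative name
  namespace: dev           # the NameSpace isolates this pod's environment
  labels:
    app: nginx             # Label used for classification and selection
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service      # unified entry point for the matching pods
  namespace: dev
spec:
  selector:
    app: nginx             # the Service finds pods through this label
  ports:
  - port: 80
    targetPort: 80
```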

Environment construction

host preparation

This time, a cluster consisting of a Master node and multiple Node nodes is built.

Role      IP address         Operating system                    Configuration
Master    192.168.109.100    CentOS 7.5 infrastructure server    2 CPUs, 2 GB memory, 50 GB disk
Node1     192.168.109.101    CentOS 7.5 infrastructure server    2 CPUs, 2 GB memory, 50 GB disk
Node2     192.168.109.102    CentOS 7.5 infrastructure server    2 CPUs, 2 GB memory, 50 GB disk

environment initialization

1) Check the version of the operating system

# Installing kubernetes cluster in this way requires Centos version to be 7.5 or above
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

2) Host name resolution

In order to facilitate direct calls between cluster nodes, configure hostname resolution here. It is recommended to use an internal DNS server in enterprises.

# Host name resolution Edit the /etc/hosts file of the three servers and add the following
192.168.109.100  master
192.168.109.101  node1
192.168.109.102  node2

3) Time synchronization

kubernetes requires that the node time in the cluster must be accurate and consistent. Here, the chronyd service is directly used to synchronize the time from the network.

It is recommended to configure an internal time synchronization server in the enterprise

# Start the chronyd service
[root@master ~]# systemctl start chronyd
# Set the chronyd service to start automatically at boot
[root@master ~]# systemctl enable chronyd
# The chronyd service starts and waits for a few seconds, you can use the date command to verify the time
[root@master ~]# date

4) Disable iptables and firewalld services

kubernetes and docker generate a large number of iptables rules at runtime; to keep them from being mixed up with system rules, the system firewall services are disabled outright

# 1 Turn off the firewalld service
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
# 2 Close the iptables service
[root@master ~]# systemctl stop iptables
[root@master ~]# systemctl disable iptables

5) Disable selinux

selinux is a security service of the linux system; if it is not turned off, all kinds of strange problems will occur when installing the cluster.

# Edit the /etc/selinux/config file and modify the value of SELINUX to disabled
# Note that you need to restart the linux service after the modification is completed
SELINUX=disabled

6) Disable swap partition

The swap partition refers to the virtual memory partition. Its function is to virtualize the disk space into memory for use after the physical memory is used up.

Enabling the swap device can have a very negative impact on the performance of the system, so kubernetes requires each node to disable the swap device

However, if the swap partition cannot be disabled for some reason, this must be declared through explicit configuration parameters during cluster installation.

# Edit the partition configuration file /etc/fstab and comment out the swap partition line
# Note that you need to restart the linux service after the modification is completed
 UUID=455cc753-7a60-4c17-a424-7741728c44a1 /boot    xfs     defaults        0 0
 /dev/mapper/centos-home /home                      xfs     defaults        0 0
# /dev/mapper/centos-swap swap                      swap    defaults        0 0
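If the swap device really cannot be disabled, the checks can be bypassed explicitly; a hedged sketch, assuming the flags of kubelet/kubeadm 1.17 (verify against your version):

```shell
# Assumed workaround, only for when swap must stay enabled.
# Append to /etc/sysconfig/kubelet so kubelet tolerates swap:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# And skip the matching preflight check at cluster initialization:
# kubeadm init --ignore-preflight-errors=Swap ...
```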

7) Modify the kernel parameters of linux

# Modify linux kernel parameters, add bridge filtering and address forwarding functions
# Edit the /etc/sysctl.d/kubernetes.conf file and add the following configuration:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# reload configuration
[root@master ~]# sysctl -p

# Load the bridge filter module
[root@master ~]# modprobe br_netfilter

# Check whether the bridge filter module is loaded successfully
[root@master ~]# lsmod | grep br_netfilter

8) Configure ipvs function

There are two proxy models for service in kubernetes, one is based on iptables and the other is based on ipvs

Comparing the two, the performance of ipvs is obviously higher, but if you want to use it, you need to manually load the ipvs module

# 1 Install ipset and ipvsadm
[root@master ~]# yum install ipset ipvsadm -y

# 2 Add the modules that need to be loaded into a script file
[root@master ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# 3 Add execute permission to the script file
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules

# 4 Execute the script file
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules

# 5 Check whether the corresponding modules loaded successfully
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4

9) Restart the server

After the above steps are completed, you need to restart the linux system

[root@master ~]# reboot

install docker

# 1 Switch mirror source
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# 2 Check the docker version supported in the current image source
[root@master ~]# yum list docker-ce --showduplicates

# 3 Install a specific version of docker-ce
# --setopt=obsoletes=0 must be specified, otherwise yum will automatically install a higher version
[root@master ~]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y

# 4 Add a configuration file
# The Cgroup Driver used by Docker by default is cgroupfs, and kubernetes recommends using systemd instead of cgroupfs
[root@master ~]# mkdir /etc/docker
[root@master ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF

# 5 Start docker
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker

# 6 Check docker status and version
[root@master ~]# docker version

install k8s

# Since the mirror source of kubernetes is abroad, the speed is relatively slow, so switch to the domestic mirror source here
# Edit /etc/yum.repos.d/kubernetes.repo and add the following configuration 
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Install kubeadm, kubelet and kubectl
[root@master ~]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y

# Configure the cgroup of the kubelet
# Edit /etc/sysconfig/kubelet and add the following configuration
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

# Set kubelet to start automatically at boot
[root@master ~]# systemctl enable kubelet

Preparing cluster images

# Before installing a kubernetes cluster, you must prepare the images required by the cluster in advance. The required images can be viewed with the following command
[root@master ~]# kubeadm config images list

# Download the images
# These images are in repositories that cannot be reached due to network reasons; an alternative is provided below
images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Cluster initialization

Let's start to initialize the cluster and add the node nodes to the cluster

The following operations only need to be performed on the master node

# Create a cluster
[root@master ~]# kubeadm init \
    --kubernetes-version=v1.17.4 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.109.100

# Create necessary files
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

The following operations only need to be performed on the node node

# Add the node node to the cluster
[root@node1 ~]# kubeadm join 192.168.109.100:6443 \
    --token 8507uc.o0knircuri8etnw2 \
    --discovery-token-ca-cert-hash sha256:acc37967fb5b0acf39d7598f8a439cc7dc88f439a3f4d0c9cae88e7901b9d3f
	
# View the cluster status The cluster status at this time is NotReady, this is because the network plug-in has not been configured
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   6m43s   v1.17.4
node1    NotReady   <none>   22s     v1.17.4
node2    NotReady   <none>   19s     v1.17.4

Install network plugin

kubernetes supports a variety of network plugins, such as flannel, calico, canal, etc. You can choose any one of them; this time flannel is used

The following operations still only need to be performed on the master node; the plugin uses a DaemonSet controller, so it will run on every node.

# Get flannel's configuration file
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Modify the quay.io registry in the file to quay-mirror.qiniu.com
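A sed one-liner can perform that substitution; a minimal sketch, demonstrated on a sample line since the full manifest is not reproduced here (the flannel image tag shown is illustrative):

```shell
# Write a sample image line such as the ones found in kube-flannel.yml
printf 'image: quay.io/coreos/flannel:v0.12.0-amd64\n' > /tmp/flannel-image-line.yml

# Swap the quay.io registry for the mirror; run the same sed against kube-flannel.yml
sed -i 's#quay.io#quay-mirror.qiniu.com#g' /tmp/flannel-image-line.yml

cat /tmp/flannel-image-line.yml
# → image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
```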

# start flannel with the config file
[root@master ~]# kubectl apply -f kube-flannel.yml

# Wait a moment and check the status of the cluster nodes again
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   15m     v1.17.4
node1    Ready    <none>   8m53s   v1.17.4
node2    Ready    <none>   8m50s   v1.17.4

So far, the cluster environment of kubernetes has been built

Service deployment

Next, deploy an nginx program in the kubernetes cluster to test whether the cluster is working normally.

# Deploy nginx
[root@master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine

# exposed port
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

# View service status
[root@master ~]# kubectl get pods,service
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-86c57db685-fdc2k   1/1     Running   0          18m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        82m
service/nginx        NodePort    10.104.121.45   <none>        80:30073/TCP   17m

# Finally, access the deployed nginx service from a browser on the host machine, using any node IP and the NodePort (30073 above)

Resource management

In kubernetes, all content is abstracted into resources, and users need to manage kubernetes by manipulating resources.

The essence of kubernetes is a cluster system. Users can deploy various services in the cluster, that is, run containers one by one in the kubernetes cluster, and run the specified program in the container.

The smallest management unit of kubernetes is a pod rather than a container, so the container can only be placed in the pod, and kubernetes generally does not directly manage the pod, but manages the pod through the pod controller.

After the Pod can provide services, it is necessary to consider how to access the services in the Pod. kubernetes provides Service resources to implement this function.

k8s provides three resource management methods

Imperative object management: use commands directly to operate kubernetes resources

kubectl run nginx-pod --image=nginx:1.17.1 --port=80

Imperative Object Configuration: Manipulate kubernetes resources through command configuration and configuration files

kubectl create/patch -f nginx-pod.yaml

Declarative object configuration: operate kubernetes resources through apply commands and configuration files

kubectl apply -f nginx-pod.yaml

imperative object management

kubectl command

kubectl is a command-line tool for kubernetes clusters, through which the cluster itself can be managed, and containerized applications can be installed and deployed on the cluster. The syntax of the kubectl command is as follows:

kubectl [command] [type] [name] [flags]

command: specifies the operation to be performed on the resource, such as create, get, delete

type: Specify the resource type, such as deployment, pod, service

name: Specifies the name of the resource, the name is case-sensitive

flags: specify additional optional parameters

# view all pods
kubectl get pod 

# View a pod
kubectl get pod pod_name

# View a pod and display the results in yaml format
kubectl get pod pod_name -o yaml

Below is a simple demonstration of these commands, using the creation and deletion of a namespace and a pod:

# create a namespace
[root@master ~]# kubectl create namespace dev
namespace/dev created

# get namespace
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   21h
dev               Active   21s
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h

# Create and run a nginx Pod under this namespace
[root@master ~]# kubectl run pod --image=nginx -n dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/pod created

# View the newly created pod
[root@master ~]# kubectl get pod -n dev
NAME                   READY   STATUS    RESTARTS   AGE
pod-864f9875b9-pcw7x   1/1     Running   0          21s

# delete the specified pod
[root@master ~]# kubectl delete pod pod-864f9875b9-pcw7x -n dev
pod "pod-864f9875b9-pcw7x" deleted

# delete the specified namespace
[root@master ~]# kubectl delete ns dev
namespace "dev" deleted

Imperative Object Configuration

Imperative object configuration is to use commands with configuration files to operate kubernetes resources.

1) Create an nginxpod.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: dev

---

apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: nginx:1.17.1

2) Execute the create command to create a resource:

[root@master ~]# kubectl create -f nginxpod.yaml
namespace/dev created
pod/nginxpod created

At this point, it was found that two resource objects were created, namely namespace and pod.

3) Execute the get command to view the resources:

[root@master ~]#  kubectl get -f nginxpod.yaml
NAME            STATUS   AGE
namespace/dev   Active   18s

NAME            READY   STATUS    RESTARTS   AGE
pod/nginxpod    1/1     Running   0          17s

This displays the information of the two resource objects

4) Execute the delete command to delete the resource:

[root@master ~]# kubectl delete -f nginxpod.yaml
namespace "dev" deleted
pod "nginxpod" deleted

At this point, it is found that two resource objects have been deleted

Summary:
Manipulating resources via imperative object configuration can be simply thought of as: command + yaml configuration file (the file contains the various parameters the command requires)

Declarative Object Configuration

Declarative object configuration is similar to imperative object configuration, but it has only one command, apply.

# First execute the kubectl apply -f yaml file once and find that the resource is created
[root@master ~]#  kubectl apply -f nginxpod.yaml
namespace/dev created
pod/nginxpod created

# Execute the kubectl apply -f yaml file again and find that the resource has not changed
[root@master ~]#  kubectl apply -f nginxpod.yaml
namespace/dev unchanged
pod/nginxpod unchanged

Hands-on practice

This chapter will introduce how to deploy an nginx service in a kubernetes cluster and be able to access it.

Namespace

Namespace is a very important resource in the kubernetes system. Its main function is to achieve resource isolation of multiple environments or multi-tenant resource isolation.

By default, all pods in a kubernetes cluster can access each other. In practice, however, you may not want two Pods to reach each other; you can then place them into different namespaces. By allocating resources inside the cluster to different Namespaces, kubernetes forms logical "groups" that make it easy to use and manage resources in isolation.

Different namespaces can be handed over to different tenants for management through kubernetes' authorization mechanism, thus realizing multi-tenant resource isolation. This can be combined with kubernetes' resource quota mechanism to limit the resources each tenant may occupy, such as CPU and memory usage, thereby managing the resources available to tenants.
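As a sketch of that quota mechanism, here is a hypothetical ResourceQuota (the name and limits are illustrative) that caps what a namespace may consume:

```yaml
# Hypothetical sketch: cap the resources the "dev" namespace may consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"              # at most 10 pods in this namespace
    requests.cpu: "4"       # total CPU requested by all pods
    requests.memory: 8Gi    # total memory requested by all pods
```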

After the cluster starts, kubernetes creates several namespaces by default

[root@master ~]# kubectl  get namespace
NAME              STATUS   AGE
default           Active   45h     #  All objects that do not specify a Namespace will be allocated in the default namespace
kube-node-lease   Active   45h     #  Heartbeat maintenance between cluster nodes, introduced in v1.13
kube-public       Active   45h     #  Resources under this namespace can be accessed by everyone (including unauthenticated users)
kube-system       Active   45h     #  All resources created by the Kubernetes system are in this namespace

Let's take a look at the specific operations of the namespace resource:

Check

# 1 View all ns commands: kubectl get ns
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   45h
kube-node-lease   Active   45h
kube-public       Active   45h     
kube-system       Active   45h     

# 2 View a specified ns Command: kubectl get ns [ns name]
[root@master ~]# kubectl get ns default
NAME      STATUS   AGE
default   Active   45h

# 3 Specify the output format Command: kubectl get ns [ns name] -o [format]
# There are many formats supported by kubernetes, the more common ones are wide, json, yaml
[root@master ~]# kubectl get ns default -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2020-04-05T04:44:16Z"
  name: default
  resourceVersion: "151"
  selfLink: /api/v1/namespaces/default
  uid: 7405f73a-e486-43d4-9db6-145f1409f090
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
  
# 4 View ns details Command: kubectl describe ns [ns name]
[root@master ~]# kubectl describe ns default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active  # Active namespace is in use Terminating is deleting namespace

# ResourceQuota resource limit for namespace
# LimitRange resource limit for each component in the namespace
No resource quota.
No LimitRange resource.

create

# create namespace
[root@master ~]# kubectl create ns dev
namespace/dev created

delete

# delete namespace
[root@master ~]# kubectl delete ns dev
namespace "dev" deleted

Configuration method

First prepare a yaml file: ns-dev.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: dev

Then you can execute the corresponding create and delete commands:

Create: kubectl create -f ns-dev.yaml

Delete: kubectl delete -f ns-dev.yaml

Pod

A Pod is the smallest unit of management in a kubernetes cluster. To run a program, it must be deployed in a container, and the container must exist in a Pod.

A Pod can be thought of as an encapsulation of a container, and one or more containers can exist in a Pod.

After the kubernetes cluster is started, each component in the cluster also runs in Pod mode. You can view it with the following command:

[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-68g6v         1/1     Running   0          2d1h
coredns-6955765f44-cs5r8         1/1     Running   0          2d1h
etcd-master                      1/1     Running   0          2d1h
kube-apiserver-master            1/1     Running   0          2d1h
kube-controller-manager-master   1/1     Running   0          2d1h
kube-flannel-ds-amd64-47r25      1/1     Running   0          2d1h
kube-flannel-ds-amd64-ls5lh      1/1     Running   0          2d1h
kube-proxy-685tk                 1/1     Running   0          2d1h
kube-proxy-87spt                 1/1     Running   0          2d1h
kube-scheduler-master            1/1     Running   0          2d1h

create and run

kubernetes does not provide commands to run pods individually, they are all implemented through Pod controllers

# Command format: kubectl run (pod controller name) [parameters] 
# --image specifies the image of the Pod
# --port specifies the port
# --namespace specifies the namespace
[root@master ~]# kubectl run nginx --image=nginx:1.17.1 --port=80 --namespace dev 
deployment.apps/nginx created

View pod information

# View basic Pod information
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5ff7956ff6-fg2db   1/1     Running   0          43s

# View Pod Details
[root@master ~]# kubectl describe pod nginx-5ff7956ff6-fg2db -n dev
Name:         nginx-5ff7956ff6-fg2db
Namespace:    dev
Priority:     0
Node:         node1/192.168.109.101
Start Time:   Wed, 08 Apr 2020 09:29:24 +0800
Labels:       pod-template-hash=5ff7956ff6
              run=nginx
Annotations:  <none>
Status:       Running
IP:           10.244.1.23
IPs:
  IP:           10.244.1.23
Controlled By:  ReplicaSet/nginx-5ff7956ff6
Containers:
  nginx:
    Container ID:   docker://4c62b8c0648d2512380f4ffa5da2c99d16e05634979973449c98e9b829f6253c
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 08 Apr 2020 09:30:01 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hwvvw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-hwvvw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hwvvw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned dev/nginx-5ff7956ff6-fg2db to node1
  Normal  Pulling    4m11s      kubelet, node1     Pulling image "nginx:1.17.1"
  Normal  Pulled     3m36s      kubelet, node1     Successfully pulled image "nginx:1.17.1"
  Normal  Created    3m36s      kubelet, node1     Created container nginx
  Normal  Started    3m36s      kubelet, node1     Started container nginx

Access the Pod

# get podIP
[root@master ~]# kubectl get pods -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE    ... 
nginx-5ff7956ff6-fg2db   1/1     Running   0          190s   10.244.1.23   node1   ...

# access the Pod
[root@master ~]# curl http://10.244.1.23:80
<!DOCTYPE html>
<html>
<head><title>Welcome to nginx!</title></head>
<body>
...
<p>Thank you for using nginx.</p>
</body>
</html>

Delete the specified Pod

# Delete the specified Pod
[root@master ~]# kubectl delete pod nginx-5ff7956ff6-fg2db -n dev
pod "nginx-5ff7956ff6-fg2db" deleted

# At this point, it is displayed that the deletion of the Pod was successful, but after querying again, it was found that a new one was created. 
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5ff7956ff6-jj4ng   1/1     Running   0          21s

# This is because the current Pod is created by the Pod Controller, which monitors the Pod status and rebuilds it immediately once it finds that the Pod is dead
# To delete a Pod at this point, you must delete the Pod Controller

# Let's first query the Pod controller under the current namespace
[root@master ~]# kubectl get deploy -n  dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           9m7s

# Next, delete this Pod controller
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted

# Wait for a while, then query the Pod and find that the Pod has been deleted
[root@master ~]# kubectl get pods -n dev
No resources found in dev namespace.

Configuration method

Create a pod-nginx.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - image: nginx:1.17.1
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP

Then you can execute the corresponding create and delete commands:

Create: kubectl create -f pod-nginx.yaml

Delete: kubectl delete -f pod-nginx.yaml

Label

Label is an important concept in the kubernetes system. Its role is to add identifiers to resources to distinguish and select them.

Features of Label:

①. A Label will be attached to various objects in the form of key/value key-value pairs, such as Node, Pod, Service, etc.

②.A resource object can define any number of Labels, and the same Label can also be added to any number of resource objects

③.Label is usually determined when the resource object is defined, of course, it can also be dynamically added or deleted after the object is created

Multi-dimensional grouping of resources can be realized through Label, so as to manage resources such as resource allocation, scheduling, configuration, and deployment flexibly and conveniently.

Some commonly used Label examples are as follows:

①Version label: "version":"release", "version":"stable"......

②Environment label: "environment":"dev", "environment":"test", "environment":"pro"

③Architecture label: "tier":"frontend", "tier":"backend"

Once labels are defined, you also need a way to select resources by them. This is the job of the Label Selector, namely:

Label is used to define an identifier for a resource object

Label Selector is used to query and filter resource objects with certain labels

There are currently two kinds of Label Selectors:

①Equality-based Label Selector

name = slave: select all objects that contain key="name" and value="slave" in Label

env != production: select all objects whose Label contains key="env" with a value not equal to "production"

②Set-based Label Selector

name in (master, slave): select all objects that contain key="name" and value="master" or "slave" in Label

name not in (frontend): selects all objects that contain key="name" in Label and whose value is not equal to "frontend"

Multiple selection conditions can be used at once: combine several Label Selectors by separating them with commas (","). For example:

name=slave,env!=production

name not in (frontend),env!=production
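In resource manifests, set-based selectors are written with matchExpressions. A minimal illustrative fragment (the keys and values below are just the examples from above, not labels defined elsewhere in this tutorial):

```yaml
# Illustrative selector fragment, e.g. inside a Deployment spec
selector:
  matchExpressions:
  # equivalent to: name in (master, slave)
  - key: name
    operator: In
    values: [master, slave]
  # equivalent to: env != production (NotIn expresses the negation)
  - key: env
    operator: NotIn
    values: [production]
```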

Command operation

# Label pod resources
[root@master ~]# kubectl label pod nginx-pod version=1.0 -n dev
pod/nginx-pod labeled

# Update labels for pod resources
[root@master ~]# kubectl label pod nginx-pod version=2.0 -n dev --overwrite
pod/nginx-pod labeled

# View labels
[root@master ~]# kubectl get pod nginx-pod  -n dev --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          10m   version=2.0

# Filter by label
[root@master ~]# kubectl get pod -n dev -l version=2.0  --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          17m   version=2.0
[root@master ~]# kubectl get pod -n dev -l version!=2.0 --show-labels
No resources found in dev namespace.

# Remove a label
[root@master ~]# kubectl label pod nginx-pod version- -n dev
pod/nginx-pod labeled

Configuration method

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0" 
    env: "test"
spec:
  containers:
  - image: nginx:1.17.1
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP

Then you can execute the corresponding update command: kubectl apply -f pod-nginx.yaml
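The declarative workflow above can be sketched as a small script: write the manifest to a file, then apply it. The kubectl step requires a running cluster, so it is shown only as a comment here:

```shell
# Write the labeled Pod manifest to pod-nginx.yaml
cat > pod-nginx.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0"
    env: "test"
spec:
  containers:
  - image: nginx:1.17.1
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP
EOF

# Against a live cluster you would then run:
# kubectl apply -f pod-nginx.yaml
```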

Deployment

In kubernetes, the Pod is the smallest unit of control, but kubernetes rarely manages Pods directly; it usually works through a Pod controller. Pod controllers manage Pods to keep them in the desired state: when a Pod fails, the controller tries to restart or rebuild it.

There are many types of Pod controllers in kubernetes, this chapter only introduces one: Deployment.

Command operation

# Command format: kubectl run <name> [parameters]
# --image specifies the image of the pod
# --port specifies the port
# --replicas specifies the number of pods to create
# --namespace specifies the namespace
# (Note: from kubectl v1.18 on, kubectl run creates only a single Pod;
#  use kubectl create deployment to create a Deployment instead)
[root@master ~]# kubectl run nginx --image=nginx:1.17.1 --port=80 --replicas=3 -n dev
deployment.apps/nginx created

# View created Pods
[root@master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5ff7956ff6-6k8cb   1/1     Running   0          19s
nginx-5ff7956ff6-jxfjt   1/1     Running   0          19s
nginx-5ff7956ff6-v6jqw   1/1     Running   0          19s

# View deployment information
[root@master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           2m42s

# UP-TO-DATE: Number of replicas upgraded successfully
# AVAILABLE: Number of available replicas
[root@master ~]# kubectl get deploy -n dev -o wide
NAME    READY UP-TO-DATE  AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
nginx   3/3     3         3           2m51s   nginx        nginx:1.17.1        run=nginx

# View deployment details
[root@master ~]# kubectl describe deploy nginx -n dev
Name:                   nginx
Namespace:              dev
CreationTimestamp:      Wed, 08 Apr 2020 11:14:14 +0800
Labels:                 run=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        nginx:1.17.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-5ff7956ff6 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m43s  deployment-controller  Scaled up replicaset nginx-5ff7956ff6 to 3
  
# delete 
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted

Configuration method

Create a deploy-nginx.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx:1.17.1
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP

Then you can execute the corresponding create and delete commands:

Create: kubectl create -f deploy-nginx.yaml

Delete: kubectl delete -f deploy-nginx.yaml
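Because the Deployment tracks the replicas field, scaling can be done declaratively: change replicas in deploy-nginx.yaml and re-run kubectl apply -f deploy-nginx.yaml. The imperative equivalent is kubectl scale deploy nginx --replicas=5 -n dev. A sketch of the edited fragment (5 is an arbitrary example value):

```yaml
# deploy-nginx.yaml, scaled up: only replicas changes
spec:
  replicas: 5          # was 3; the controller adds or removes Pods to match
  selector:
    matchLabels:
      run: nginx
```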

Service

From the previous section, we can already use a Deployment to create a group of Pods and provide a highly available service.

Although each Pod will be assigned a separate Pod IP, there are two problems:

①Pod IP will change with the reconstruction of Pod

②Pod IP is only a virtual IP visible in the cluster and cannot be accessed from outside

This makes it difficult to access the service. Therefore, kubernetes designed Service to solve this problem.

A Service can be regarded as a unified access entry for a group of Pods of the same kind. With a Service, applications can easily achieve service discovery and load balancing.

Operation 1: Create a Service accessible inside the cluster

# Expose Service
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=ClusterIP --port=80 --target-port=80 -n dev
service/svc-nginx1 exposed

# View service
[root@master ~]# kubectl get svc svc-nginx1 -n dev -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE     SELECTOR
svc-nginx1   ClusterIP   10.109.179.231   <none>        80/TCP    3m51s   run=nginx

# A CLUSTER-IP is generated here, which is the IP of the service. In the life cycle of the service, this address will not change
# You can access the POD corresponding to the current service through this IP
[root@master ~]# curl 10.109.179.231:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
.......

Operation 2: Create a Service that can also be accessed outside the cluster

# The type of the Service created above is ClusterIP, and this ip address can only be accessed inside the cluster
# If you need to create a Service that can also be accessed externally, you need to modify the type to NodePort
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx2 --type=NodePort --port=80 --target-port=80 -n dev
service/svc-nginx2 exposed

# At this point, there is a NodePort type Service with a pair of ports (80:31928/TCP)
[root@master ~]# kubectl get svc svc-nginx2 -n dev -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
svc-nginx2    NodePort    10.100.94.0      <none>        80:31928/TCP   9s     run=nginx

# Now the service can be reached from outside the cluster at <node IP>:31928
# For example, access the following address through a browser on a host outside the cluster
http://192.168.109.100:31928/

Delete the Service

[root@master ~]# kubectl delete svc svc-nginx1 -n dev
service "svc-nginx1" deleted

Configuration method

Create a svc-nginx.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev
spec:
  clusterIP: 10.109.179.231
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: ClusterIP

Then you can execute the corresponding create and delete commands:

Create: kubectl create -f svc-nginx.yaml

Delete: kubectl delete -f svc-nginx.yaml
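A NodePort variant of the same manifest only needs the type changed (and optionally a fixed nodePort; 31928 below just mirrors the port the cluster assigned earlier, and must fall in the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31928   # optional; if omitted, kubernetes picks a free port
  selector:
    run: nginx
```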

Summary

So far, you have mastered the basic operations on Namespace, Pod, Deployment, and Service resources. With these operations you can deploy and access a simple service in a kubernetes cluster, but to use kubernetes well you will need to study the details and principles of these resources in depth.


Posted by Hatch on Sat, 22 Oct 2022 02:38:03 +0300