Container orchestration series

Create a Service

Install iptables on all three machines: 
[root@kub-k8s-master prome]# yum install -y iptables iptables-services
1. Create a Deployment
[root@kub-k8s-master prome]# kubectl delete -f deployment.yaml
[root@kub-k8s-master prome]# vim nginx-depl.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep01
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      name: testnginx9
      labels:
        app: web
    spec:
      containers:
        - name: testnginx9
          image: daocloud.io/library/nginx
          ports:
            - containerPort: 80
[root@kub-k8s-master prome]# kubectl apply -f nginx-depl.yml 
deployment.apps/nginx-deployment created
2. Create a Service and expose the port to the external network via nodePort:
[root@kub-k8s-master prome]# vim nginx_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: NodePort  #type
  ports:
    - port: 8080
      nodePort: 30001
      targetPort: 80
  selector:   #selector
    app: web
    
[root@kub-k8s-master prome]# kubectl apply -f nginx_svc.yaml 
service/mysvc created
3. Test
[root@kub-k8s-master prome]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          5d18h
mysvc        NodePort    10.100.166.208   <none>        8080:30001/TCP   21s
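With the Service up, a quick curl confirms both entrances. This is a sketch: the cluster IP comes from the kubectl get svc output above, and <nodeIP> is a placeholder for the address of any one of your nodes.

```shell
# in-cluster entrance: cluster IP + port
curl http://10.100.166.208:8080

# external entrance: any node's IP + nodePort
# (replace <nodeIP> with a real node address)
curl http://<nodeIP>:30001
```

Both requests should return the default nginx welcome page served by the backend pods.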

Port details

iptables is installed here, but the iptables service itself is kept stopped. After Kubernetes creates the Service, rules are still added to iptables automatically and take effect, even though the iptables service is not running.

The three port settings in a Service
 These port concepts are easy to confuse. For example, create the following Service: 
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
      targetPort: 80
  selector:
    app: web
port
 port is the port the Service exposes on the cluster IP: clusterIP:port is the entrance through which clients inside the cluster access the Service.
nodePort
 nodePort is one of the ways Kubernetes provides for clients outside the cluster to access the Service (the other is LoadBalancer), so <nodeIP>:nodePort is the entrance through which external clients access the Service.
targetPort
 targetPort is the easiest to understand: it is the port on the Pod. Traffic arriving at port and nodePort finally flows, through kube-proxy, to the backend Pod's targetPort and into the container.
port and nodePort summary
 In short, port and nodePort are both ports exposed by the Service: the former serves clients inside the cluster, the latter clients outside it. Traffic from both ports passes through the reverse proxy kube-proxy into the backend Pod's targetPort, finally reaching the container in the Pod.
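Putting the three together: the same Service manifest as above, re-annotated line by line (a restatement for clarity, comments only):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: NodePort
  ports:
    - port: 8080        # exposed on the cluster IP; clusterIP:8080 is the in-cluster entrance
      nodePort: 30001   # exposed on every node; <nodeIP>:30001 is the external entrance
      targetPort: 80    # the port on the Pod/container that traffic finally reaches
  selector:
    app: web
```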

kube-proxy reverse proxy

kube-proxy and iptables

When a Service has both port and nodePort, it can serve clients inside and outside the cluster. How is this actually implemented? The answer lies in the iptables rules that kube-proxy creates on the local node.

kube-proxy maps access to the Service address to a local kube-proxy port (a random port) by configuring DNAT rules (covering access from containers and access from the local host). kube-proxy then listens on that local port and forwards traffic arriving there to the real backend Pod addresses.

Whether the request comes through the in-cluster entrance <clusterIP>:port or the external entrance <nodeIP>:nodePort, it is redirected to the mapped local kube-proxy port (a random port), and access to that kube-proxy port is then forwarded to the real remote Pod address.
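The rules themselves can be inspected on a node. KUBE-SERVICES and KUBE-NODEPORTS are chain names kube-proxy really creates in the nat table, but the exact rule bodies vary with the kube-proxy version and proxy mode, so treat this as a sketch:

```shell
# Service (clusterIP:port) rules for mysvc
iptables -t nat -L KUBE-SERVICES -n | grep mysvc

# nodePort rules (one entry per NodePort Service)
iptables -t nat -L KUBE-NODEPORTS -n
```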

RC resources (understand)

The Replication Controller (rc for short) manages Pods and ensures that the cluster always runs the specified number of Pod replicas. If there are more replicas than specified, the surplus containers are stopped; conversely, if there are fewer, new containers are started, so the count stays constant. The Replication Controller is the core of elastic scaling, dynamic expansion, and rolling upgrades.

Main functions of an RC:
Ensuring Pod count: guarantees that the specified number of Pods for a service are running in Kubernetes;
Ensuring Pod health: when a Pod is unhealthy, misbehaving, or unable to provide service, the rc kills the unhealthy Pod and recreates it, keeping the count consistent;
Elastic scaling: the Pod count can be set for business peak periods, and can expand and shrink automatically based on monitoring;
Rolling upgrade: i.e. blue-green release. When the image used by a Pod is updated in rolling-upgrade mode, the rc upgrades Pods one by one: as one old Pod is closed, a new Pod is created from the new image, and each time a Pod finishes updating, another old-image Pod is closed. 
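As a sketch of that last point: RC-era rolling upgrades were driven with the kubectl rolling-update subcommand, which existed for exactly this workflow and was removed in kubectl 1.18. The image tag below is hypothetical:

```shell
# replace the pods managed by the rc one by one with a new image
# (legacy command; not available on kubectl 1.18+)
kubectl rolling-update my-nginx --image=daocloud.io/library/nginx:new-tag
```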

1. Use yaml to create and start the replica set

k8s uses the Replication Controller to create and manage sets of replicated containers (actually replicated Pods). 
The Replication Controller ensures that the number of Pods is kept at a specific value at runtime, namely the replicas setting.
[root@kub-k8s-master ~]# cd prome/
[root@kub-k8s-master prome]# vim nginx-rc.yml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: daocloud.io/library/nginx
        ports:
        - containerPort: 80
 Compared with the YAML file defining a single Pod, the only differences are that kind is ReplicationController, that replicas must be specified, and that the Pod's definition goes under template. The Pods' names do not need to be specified explicitly, because the rc creates them and assigns their names.

Create rc:

[root@kub-k8s-master prome]# kubectl apply -f nginx-rc.yml 
replicationcontroller/my-nginx created

Unlike creating a pod directly, an rc replaces pods that are deleted or stopped for any reason, for example when the node a pod depends on fails. We therefore recommend using an rc to create and manage complex applications. Even if your application uses only a single pod, you can then omit the replicas field in the configuration file.
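This replacement behavior is easy to observe on the cluster above: delete one of the rc's pods and the rc immediately creates a new one. The pod-name suffixes are random, so use whatever kubectl get pods shows on your own cluster:

```shell
# the rc's pods carry the label app: nginx from the template
kubectl get pods -l app=nginx

# delete one of them; the rc brings the count back to replicas=2
kubectl delete pod my-nginx-7kbwz
kubectl get pods -l app=nginx
```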

2. View the status of Replication Controller

[root@kub-k8s-master prome]# kubectl get rc
NAME       DESIRED   CURRENT   READY   AGE
my-nginx   2         2         2       11

This status indicates that the rc you create will ensure that you always have two copies of nginx.
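The desired count is not fixed forever: it can be changed with the standard kubectl scale subcommand, and the rc will converge to the new number:

```shell
# scale up; the rc starts two more pods
kubectl scale rc my-nginx --replicas=4

# scale back down; the two surplus pods are terminated
kubectl scale rc my-nginx --replicas=2
```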

You can also view the created Pod status information just as you can create a Pod directly:

[root@kub-k8s-master prome]# kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
dep01-58f6d4d4cb-g6vtg              1/1     Running            0          3h8m
dep01-58f6d4d4cb-k6z47              1/1     Running            0          3h8m
my-nginx-7kbwz                      1/1     Running            0          2m49s
my-nginx-jkn8l                      1/1     Running            0          2m49s

3. Delete Replication Controller

When you want to stop your application, delete your rc. You can use:
[root@kub-k8s-master prome]# kubectl delete rc my-nginx
replicationcontroller "my-nginx" deleted

By default, this also deletes all the pods managed by the rc. If the number of pods is large, the whole deletion will take some time to complete. If you want to delete only the rc and leave its pods running, specify --cascade=false.
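That flag in context (the pods keep running and are orphaned, only the rc is removed; on newer kubectl the same thing is spelled --cascade=orphan):

```shell
# remove the rc but orphan its pods
kubectl delete rc my-nginx --cascade=false
```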

If you try to delete a pod before deleting the rc, the rc will immediately start a new pod to replace the deleted one.

Posted by grazzman on Wed, 18 May 2022 09:43:59 +0300