Introduction to k8s installation and use

1. Container orchestration and kubernetes introduction

  • Kubernetes, a tool for container orchestration in cluster environments, is abbreviated as k8s
  • k8s was open-sourced by Google. It runs containers (historically via docker) and competes with docker swarm.
  • k8s has become the de facto standard for cluster container management.

Responsibilities of k8s

  • Automate container deployment and replication

  • Scale the number of containers up or down at any time

  • Organize containers into groups and provide load balancing between them

  • Monitor in real time, detect faults, and replace failed containers automatically

2. k8s basic concepts

  • Master - the k8s master node
  • Node - a worker node
  • Service - a service exposing a set of Pods
  • Replication Controller - keeps a specified number of Pod replicas running
  • Label - a key/value tag attached to objects
  • Container - a container
  • Pod - the smallest control unit in k8s


The Master is the gateway and hub of the cluster. Its main functions are to expose the API, track the health of the other servers, schedule workloads optimally, and coordinate communication between components. A single master node can perform all of these functions, but to avoid a single point of failure, production environments usually deploy multiple master nodes as a cluster.


A Node is a k8s worker node. It receives instructions from the Master, creates and destroys Pod objects accordingly, and adjusts network rules for routing and traffic forwarding. A production environment can have any number of nodes.


  • A Pod is a "container of containers" and can hold multiple containers
  • It is the smallest deployable unit in k8s; a Pod represents one running process
  • Containers inside a Pod share the network, and each Pod has its own virtual IP
  • A Pod deploys a complete application or module
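As an illustration of the points above, a minimal Pod manifest holding two containers might look like this; the names and the busybox sidecar are illustrative assumptions, not part of the deployment built later in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # hypothetical name
spec:
  containers:
  - name: web                # main application container
    image: tomcat
  - name: sidecar            # helper container in the same Pod
    image: busybox
    command: ["sleep", "3600"]
```

Both containers share the Pod's virtual IP and can reach each other via localhost.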

Each node runs three key components: kubelet, kube-proxy, and docker.

3. k8s installation

Common ways to install k8s in China:

  • Install offline using kubeadm
  • Use a managed k8s service on the Alibaba public cloud platform
  • The official yum repository
  • Binary package installation, e.g. kubeasz

3.1 Install kubeadm and load the k8s images

# The following commands are executed on the three virtual machines yz10, yz20 and yz21
mkdir /usr/local/k8s-install
cd /usr/local/k8s-install

4. k8s offline deployment

  • yz10 - Master node
  • yz20 - Node node
  • yz21 - Node node

# 1. Adjust time zone
timedatectl set-timezone Asia/Shanghai
# 2. Close selinux and firewall
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
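The sed command in step 2 rewrites /etc/selinux/config in place. A quick way to sanity-check the substitution on a scratch file (the path /tmp/selinux-demo is just an example) before touching the real config:

```shell
# scratch copy of the relevant line, so the real config stays untouched
printf 'SELINUX=enforcing\n' > /tmp/selinux-demo
# the same in-place substitution used above
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux-demo
cat /tmp/selinux-demo    # prints SELINUX=disabled
```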
# 3. Download the k8s image package (it ships with a matching docker version)
# 4. Upload the image to each node
mkdir -p /usr/local/k8s-install
scp -r kubernetes-1.14 root@yz10:/usr/local/k8s-install

# 5. Install docker and remember to configure the accelerator
tar -xf docker-ce-18.09.tar.gz
cd docker 
yum localinstall -y *.rpm

# 6. Confirm that cgroup is cgroupfs
docker info|grep cgroup
# 7. Install kubeadm
tar -xf kube114-rpm.tar.gz
cd kube114-rpm
yum localinstall -y *.rpm

# 8. Disable swap
swapoff -a  # turn off immediately
vi /etc/fstab
# comment out the swap line to disable it permanently:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# 9. Configure bridge settings (the here-doc must be terminated with EOF)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
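As a quick check of the here-doc pattern used in step 9, the same construct can be run against a scratch path (the path /tmp/k8s-demo.conf is just an example) to confirm both sysctl lines land in the file:

```shell
# write the two bridge settings to a scratch file via a here-doc
cat <<EOF > /tmp/k8s-demo.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# the file should contain exactly two lines
wc -l < /tmp/k8s-demo.conf
```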

# 10. Load the k8s images into docker
docker load -i k8s-114-images.tar.gz
docker load -i flannel-dashboard.tar.gz

5. Build k8s clusters

Make sure k8s is installed on all of the nodes above.

# Master server network setup (10.244.0.0/16 is the default pod CIDR expected by flannel)
kubeadm init --kubernetes-version v1.14.1 --pod-network-cidr=10.244.0.0/16

# After init completes, run the follow-up commands it prints
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# List the nodes; only the master is present so far
kubectl get nodes
yz10   NotReady   master   5m21s   v1.14.1

# View pods that are not yet running
kubectl get pod --all-namespaces

# Pods stuck in Pending need the flannel network component installed
kubectl create -f /usr/local/k8s-install/kubernetes-1.14/kube-flannel.yml

# Check again. It is already in running status
kubectl get pod --all-namespaces

# View the token of the master 
kubeadm token list

# Other nodes join the master's cluster (6443 is the default API server port)
kubeadm join yz10:6443 --token 63zvtd.rej4gqrhselysqsb --discovery-token-unsafe-skip-ca-verification

On the master node, run: kubectl get nodes

You can see that the cluster has been deployed.

6. Set k8s services to start on boot

  • kubeadm is a tool for quickly bootstrapping a k8s cluster
  • kubelet runs on every node as a system service and is responsible for starting Pods and containers
  • kubectl is the k8s command line tool for issuing instructions

systemctl start kubelet

Enable start on boot: systemctl enable kubelet

7. Open WebUI Dashboard

# master start dashboard
kubectl apply -f kubernetes-dashboard.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

kubectl -n kube-system get svc

# Check the pods; the dashboard pod should be Running
kubectl get pods --all-namespaces

# If you encounter problems, you can delete the pod and reconfigure it
kubectl -n kube-system delete pod/{podName}

Visit http://<host-ip>:32000/ to access the dashboard.

8. Deploy a tomcat cluster from the dashboard

Workloads - Create

9. Deploy a tomcat cluster with a deployment script

  • A creation instruction is sent to k8s, which then creates the containers on the nodes
  • k8s supports deployment scripts in yml format
  • kubectl create -f <deployment-file>.yml

Write the first k8s deployment script file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      containers:
      - name: tomcat-cluster
        image: tomcat
        ports:
        - containerPort: 8080

Common deployment related commands:

  • kubectl create -f <deployment-file>.yml - create a deployment
  • kubectl apply -f <deployment-file>.yml - update a deployment's configuration
  • kubectl get pod [-o wide] - view deployed pods
  • kubectl describe pod <pod-name> - view pod details
  • kubectl logs [-f] <pod-name> - view a pod's output log
# Create tomcat container
kubectl create -f tomcat-deploy.yml

# Deployment view
kubectl get deployment

10. External access to tomcat cluster

A Service is used to expose applications.

Write tomcat-service.yml:

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 18010
    targetPort: 8080
    nodePort: 32500
# Create load balancing service
kubectl create -f tomcat-service.yml

# View services
kubectl get service

11. Cluster file sharing based on NFS

  • NFS uses the remote procedure call (RPC) mechanism to transfer files
  • yum install -y nfs-utils rpcbind

# Edit nfs shared file settings
vi /etc/exports
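A single export line like the one below shares the directory used later in this section; the `*(rw,sync,no_root_squash)` option string is an assumption and can be restricted to specific client hosts:

```
# /etc/exports
/usr/local/data/www-data *(rw,sync,no_root_squash)
```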

systemctl start nfs.service
systemctl start rpcbind.service

systemctl enable nfs.service
systemctl enable rpcbind.service

# Verify the export
exportfs
/usr/local/data/www-data   # this output means the configuration is active

# Install the NFS client tools on each node
yum install -y nfs-utils

# After installation, list the exports offered by the master
showmount -e yz10

# Mount the shared directory
mkdir -p /mnt/www-data
mount yz10:/usr/local/data/www-data /mnt/www-data

12. Deploy and configure mount points

# View deployment
kubectl get deployment

# Delete deployment service
kubectl delete deployment tomcat-deploy
kubectl delete service tomcat-service

# Redeploy mount
vi tomcat-deploy.yml # Modify deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat-cluster
    spec:
      volumes:
      - name: webapp
        hostPath:
          path: /mnt/www-data
      containers:
      - name: tomcat-cluster
        image: tomcat
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: webapp
          mountPath: /usr/local/tomcat/webapps
# Enter the pod to check whether the mount is successful
kubectl exec -it tomcat-deploy-6dcc5c59c-hg5z7 bash

13. Use Rinetd to provide external Service and load balancing support

vi tomcat-service.yml  # modify the service file

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
spec:
#  type: NodePort
  selector:
    app: tomcat-cluster
  ports:
  - port: 18010
    targetPort: 8080
#    nodePort: 32500
# Create service
kubectl create -f tomcat-service.yml
# Create a test page in the www-data share
vi index.jsp

# Access the service and refresh a few times

The effect of random load balancing can be observed.

Port forwarding tool Rinetd

  • Rinetd is a TCP redirection tool for Linux
  • It forwards data from a source ip:port to a destination ip:port
  • In k8s, it can be used to expose services externally
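Each line of rinetd.conf is a rule of the form `bindaddress bindport connectaddress connectport`. A sketch of the mapping used below, with the service's ClusterIP left as a placeholder (it can be looked up with `kubectl get service`):

```
# /etc/rinetd.conf
# bindaddress bindport connectaddress connectport
0.0.0.0 18010 <service-cluster-ip> 18010
```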
# Install Rinetd on the host (assumes the source is already unpacked at /usr/local/rinetd)
cd /usr/local/rinetd
sed -i 's/65536/65535/g' rinetd.c   # fix the out-of-range port constant so it compiles
mkdir -p /usr/man/
yum install -y gcc
make && make install

# Write the port mapping configuration
vi /etc/rinetd.conf

# load configuration
rinetd -c /etc/rinetd.conf

# Test external access; the setup is complete

14. Update cluster configuration and resource limit

k8s deployment adjustment command

  • Update cluster configuration: kubectl apply -f <yml-file>
  • Delete a deployment, service, or pod: kubectl delete deployment|service|pod <name>

Resource limitation

- name: tomcat-cluster
  image: tomcat
  resources:
    requests:  # minimum resources requested
      cpu: 1
      memory: 500Mi
    limits:    # maximum resources allowed
      cpu: 2   # cpu need not be an integer, e.g. 500m
      memory: 1024Mi

This article is published by running coarse grain pancakes~

Posted by angryjohnny on Thu, 19 May 2022 14:51:59 +0300