Kubernetes in Practice: Building a Self-Managed K8S Cluster on Alibaba Cloud ECS

1. Overview

For details, refer to Alibaba Cloud's documentation: https://help.aliyun.com/document_detail/98886.html?spm=a2c4g.11186623.6.1078.323b1c9bpVKOry

Project resource allocation (excluding databases and middleware):

 

2. Deploy the image registry (Harbor)

1) Install docker-compose as shown below (Docker itself must already be installed; see the Docker deployment steps in section 4).

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
$ docker-compose --version
docker-compose version 1.26.2, build 1110ad01

2) Create a self-signed certificate for the registry domain name.

mkdir -p /data/cert && chmod -R 777 /data/cert && cd /data/cert
openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout harbor.key -out harbor.crt -subj "/CN=hub.jhmy.com"

3) Download the Harbor offline installer and edit harbor.yml, setting the hostname, the certificate paths, and the registry admin password.

4) Run install.sh to deploy; once it finishes, access https://<host-ip>. A sketch of steps 3) and 4) follows below.

  Deployment process: check environment -> load images -> prepare environment -> prepare configuration -> start
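A minimal sketch of steps 3) and 4), assuming a Harbor 1.10.x offline installer and the certificate created above (the version, download URL, and password are placeholders to adapt to your environment):

# Download and unpack the offline installer (version and URL are assumptions)
wget https://github.com/goharbor/harbor/releases/download/v1.10.4/harbor-offline-installer-v1.10.4.tgz
tar -zxvf harbor-offline-installer-v1.10.4.tgz && cd harbor

# In harbor.yml set at least:
#   hostname: hub.jhmy.com
#   https:
#     certificate: /data/cert/harbor.crt
#     private_key: /data/cert/harbor.key
#   harbor_admin_password: <your-password>
vim harbor.yml

# Deploy and verify that all Harbor containers are healthy
./install.sh
docker-compose ps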

 

3. System initialization

1) Set the host name and domain name resolution

hostnamectl set-hostname k8s101
cat >> /etc/hosts <<EOF
172.1.1.114 hub.jhmy.com
172.1.1.101 k8s101
172.1.1.102 k8s102
172.1.1.103 k8s103
172.1.1.104 k8s104
......
172.1.1.99 k8sapi
EOF

2) Set up passwordless SSH login between the nodes

ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub  root@k8s-node1

3) Install dependency packages and common tools, and synchronize the time and time zone

yum -y install vim curl wget unzip ntpdate net-tools ipvsadm ipset sysstat conntrack libseccomp
ntpdate ntp1.aliyun.com && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

4) Disable swap, SELinux, and firewalld

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld

5) Adjust system kernel parameters

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
fs.file-max=2000000
fs.nr_open=2000000
fs.inotify.max_user_instances=512
fs.inotify.max_user_watches=1280000
net.netfilter.nf_conntrack_max=524288
EOF
 
modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf

6) Load the IPVS-related kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
 
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_

7) Install the NFS file sharing service

yum -y install nfs-common nfs-utils rpcbind
systemctl start nfs && systemctl enable nfs
systemctl start rpcbind && systemctl enable rpcbind
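If one of the machines also serves as the NFS server, a minimal export sketch looks like this (the export path and allowed subnet are placeholders for this environment):

# Export a shared directory on the NFS server (path and subnet are assumptions)
mkdir -p /data/nfs
cat >> /etc/exports <<EOF
/data/nfs 172.1.1.0/24(rw,sync,no_root_squash)
EOF
exportfs -ra              # reload the export table
showmount -e localhost    # verify the export is visible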

 

4. Deploy the high-availability cluster

1) Installing and deploying docker

# Set the yum repo and install Docker and its components
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5
 
# Configure the registry mirror, insecure registry address, data root, and log options
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://jc3y13r3.mirror.aliyuncs.com"],
"insecure-registries":["hub.jhmy.com"],
"data-root": "/data/docker",
"exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" } } EOF # restart docker,Set start mkdir -p /etc/systemd/system/docker.service.d systemctl daemon-reload && systemctl restart docker && systemctl enable docker

 

2) Installing and deploying kubernetes

# Set up the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, kubectl
yum -y install kubeadm-1.17.5 kubelet-1.17.5 kubectl-1.17.5 --setopt=obsoletes=0
systemctl enable kubelet.service

 

3) Initialize the first management node

Select any master node and modify its /etc/hosts, changing the k8sapi entry to resolve to the current node's address (during system initialization we uniformly pointed it at the SLB load-balancer address).

Although we plan to use an Alibaba Cloud SLB to load-balance kube-apiserver, the cluster is not running yet, so nothing is listening on the k8sapi port and the SLB-balanced port cannot be reached; cluster initialization would therefore fail. For the time being we use the current node's address as the load address, i.e. we initialize the cluster first.
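For example, a minimal sketch of the temporary hosts change on k8s101 (the IP is this guide's example address; point k8sapi back at the SLB address once the cluster is up):

# Temporarily resolve k8sapi to this master's own IP so kubeadm init can reach the endpoint
sed -i 's/^.*[[:space:]]k8sapi$/172.1.1.101 k8sapi/' /etc/hosts
grep k8sapi /etc/hosts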

Note: since this is a production environment, we change some of the defaults, such as the token, the apiserver port, the etcd data path, and the pod network CIDR.

# kubeadm config print init-defaults > kubeadm-config.yaml
# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: token0.123456789kubeadm
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.1.1.101
  bindPort: 6333
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s101
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "k8sapi:6333"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.5
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.233.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

After the first master node is initialized, open the Alibaba Cloud load balancing console and add the intranet SLB configuration for kube-apiserver (only layer-4 TCP can be used here).

For now, add only the current master's address as a backend and wait until the other master nodes have joined successfully; the other two masters are not yet in the cluster, so registering their addresses at this point would leave the SLB health check in an abnormal state and cause other nodes to fail to join the cluster.
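Before joining the remaining nodes, it is worth confirming that the apiserver answers through the load-balanced endpoint (k8sapi and port 6333 are the values used in this guide; /healthz is anonymously readable by default):

# Verify the load-balanced apiserver endpoint is reachable; expected output: ok
curl -k https://k8sapi:6333/healthz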

 

4) Add the other management nodes and the worker nodes

# Run the kubeadm join command from the initialization log output to add the other management (master) nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:56d53268517... \
   --experimental-control-plane --certificate-key c4d1525b6cce4....

# As prompted in the log, run the following on every management node to give the user kubectl access.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 

# Run the kubeadm join command from the initialization log output to add the worker nodes.
kubeadm join 192.168.17.100:6444 --token abcdef.0123456789abcdef \
          --discovery-token-ca-cert-hash sha256:260796226d............ 
Note: the token is valid for 24 hours. After it expires, regenerate the join command on a master node with:
kubeadm token create --print-join-command

Change the kube-apiserver listening port on each newly added master node, then add the remaining master addresses to the Alibaba Cloud SLB apiserver backend.

# Modify the kube-apiserver listening port
sed -i 's/6443/6333/g' /etc/kubernetes/manifests/kube-apiserver.yaml
# Restart the kube-apiserver container
docker restart `docker ps | grep k8s_kube-apiserver | awk '{print $1}'`
# Check the kube-apiserver listening port
ss -anp | grep "apiserver" | grep 'LISTEN'

Note: if you forget this change, errors may occur in later deployments, for example in kube-prometheus:

[root@ymt-130 manifests]# kubectl -n monitoring logs pod/prometheus-operator-5bd99d6457-8dv29
ts=2020-08-27T07:00:51.38650537Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."
ts=2020-08-27T07:00:51.38962086Z caller=main.go:96 msg="Staring insecure server on :8080"
ts=2020-08-27T07:00:51.39038717Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: connect: connection refused"

 

5) Deploy the network and check the health of the cluster

# Apply the prepared Flannel manifest
kubectl apply -f kube-flannel.yaml

# Check cluster deployment
kubectl get cs && kubectl get nodes && kubectl get pod --all-namespaces

# Check etcd cluster health (the etcdctl binary needs to be uploaded first)
[root@k8s101 ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --endpoints https://172.1.1.101:2379,https://172.1.1.102:2379,https://172.1.1.103:2379 --insecure-skip-tls-verify endpoint health
https://172.1.1.101:2379 is healthy: successfully committed proposal: took = 12.396169ms
https://172.1.1.102:2379 is healthy: successfully committed proposal: took = 12.718211ms
https://172.1.1.103:2379 is healthy: successfully committed proposal: took = 13.174164ms

 

6) Kubelet eviction policy tuning

# Modify the kubelet startup parameters on the worker nodes to change the Pod eviction policy
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf 
Environment="EVICTION_HARD=--eviction-hard=memory.available<2Gi,nodefs.available<5Gi,imagefs.available<100Gi"
Environment="EVICTION_RECLAIM=--eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=1Gi,imagefs.available=2Gi"

 

 

# Restart the kubelet service and check the kubelet process startup parameters
[root@k8s104 ~]# systemctl daemon-reload && systemctl restart kubelet
[root@k8s104 ~]# ps -ef | grep kubelet | grep -v grep
[root@k8s104 ~]# ps -ef | grep "/usr/bin/kubelet" | grep -v grep
root 24941 1 2 Aug27 ? 03:00:12 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf 
--config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1
--eviction-hard=memory.available<2Gi,nodefs.available<5Gi,imagefs.available<100Gi --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=1Gi,imagefs.available=2Gi

More information: Kubelet's response to resource shortage

 

5. Deploy functional components

1) Deploy Layer-7 routing (Ingress)

# Deploy the Ingress controller and the forwarding rules for the basic components
kubectl apply -f nginx-ingress
# Configure the load address and the maximum number of connections by editing the nginx-config ConfigMap
kubectl edit cm nginx-config -n nginx-ingress

 

# Adjust the ports exposed by the Ingress controller as needed, then configure the public Alibaba Cloud SLB to balance across all worker nodes
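To see which ports to register on the public SLB, inspect the Ingress controller's service and pods; the nginx-ingress namespace below follows the manifests used here and may differ in your copy:

# List the Ingress controller service with its exposed ports, and the nodes running its pods
kubectl get svc -n nginx-ingress
kubectl get pods -n nginx-ingress -o wide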

More details: Nginx global configuration

 

2) Deploy the web UI (Dashboard)

# Apply the prepared Dashboard manifest
kubectl apply -f kube-dashboard.yml
#Wait for deployment to complete
kubectl get pod -n kubernetes-dashboard

# Log in to the dashboard via the domain name (local DNS resolution must be configured); retrieve the login token with the command below
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
https://k8s.dashboard.com:IngressPort
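The token command above assumes a dashboard-admin service account exists in kube-system; if the prepared manifest does not create one, a minimal sketch is:

# Assumption: kube-dashboard.yml may already create this account; if not, create it and bind cluster-admin
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin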

 

3) Deploy log collection (Filebeat)

# Modify the log paths to collect, the Logstash address, and the host directory

 

#Then execute the deployment
kubectl apply -f others/kube-filebeat.yml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /home/ymt/logs/appdatamonitor/warn.log
    output.logstash:
      hosts: ["10.88.88.169:5044"]
---
    # filebeat.config:
      # inputs:
        # # Mounted `filebeat-inputs` configmap:
        # path: ${path.config}/inputs.d/*.yml
        # # Reload inputs configs as they change:
        # reload.enabled: false
      # modules:
        # path: ${path.config}/modules.d/*.yml
        # # Reload module configs as they change:
        # reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    # processors:
      # - add_cloud_metadata:

    # cloud.id: ${ELASTIC_CLOUD_ID}
    # cloud.auth: ${ELASTIC_CLOUD_AUTH}

    # output.elasticsearch:
      # hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      # username: ${ELASTICSEARCH_USERNAME}
      # password: ${ELASTICSEARCH_PASSWORD}
---
# apiVersion: v1
# kind: ConfigMap
# metadata:
  # name: filebeat-inputs
  # namespace: kube-system
  # labels:
    # k8s-app: filebeat
# data:
  # kubernetes.yml: |-
    # - type: docker
      # containers.ids:
      # - "*"
      # processors:
        # - add_kubernetes_metadata:
            # in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        # image: docker.elastic.co/beats/filebeat:6.7.2
        image: registry.cn-shanghai.aliyuncs.com/leozhanggg/elastic/filebeat:6.7.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        # env:
        # - name: ELASTICSEARCH_HOST
          # value: elasticsearch
        # - name: ELASTICSEARCH_PORT
          # value: "9200"
        # - name: ELASTICSEARCH_USERNAME
          # value: elastic
        # - name: ELASTICSEARCH_PASSWORD
          # value: changeme
        # - name: ELASTIC_CLOUD_ID
          # value:
        # - name: ELASTIC_CLOUD_AUTH
          # value:
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        # - name: inputs
          # mountPath: /usr/share/filebeat/inputs.d
          # readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: ymtlogs
          mountPath: /home/ymt/logs
          readOnly: true
        # - name: varlibdockercontainers
          # mountPath: /var/lib/docker/containers
          # readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: ymtlogs
        hostPath:
          path: /home/ymt/logs
      # - name: varlibdockercontainers
        # hostPath:
          # path: /var/lib/docker/containers
      # - name: inputs
        # configMap:
          # defaultMode: 0600
          # name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
Note: since Logstash and Elasticsearch are deployed outside the cluster, only Filebeat is deployed inside this K8S cluster; it collects the logs and ships them to the external Logstash.

 

4) Deploy the monitoring platform (Prometheus)

#Deploy default components first
cd kube-prometheus-0.3.0/manifests
kubectl create -f setup && sleep 5 && kubectl create -f .
#Wait for deployment to complete
kubectl get pod -n monitoring

#Then modify the custom monitoring configuration and execute the upgrade script
cd custom && sh upgrade.sh
* Alert configuration: alertmanager.yaml
* Default alert rules: prometheus-rules.yaml
* Additional alert rules: prometheus-additional-rules.yaml
* Additional scrape targets: prometheus-additional.yaml   # adjust the monitored items and addresses
* Prometheus configuration: prometheus-prometheus.yaml    # adjust the replica count and resource limits
# Log in to the monitoring pages through the domain names (local DNS resolution must be configured)
    http://k8s.grafana.com:IngressPort      # the default user name and password are both admin
    http://k8s.prometheus.com:IngressPort
    http://k8s.alertmanager.com:IngressPort
# In Grafana, click Add -> Import -> Upload JSON file to import the monitoring dashboards.
*   k8s-model.json
*   node-model.json

For details, refer to: Kubernetes in Practice: Custom Prometheus

 

6. Notes on other issues

1) kubectl command usage

# Set up command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Official documentation: Kubernetes kubectl command reference

Online blog: a collection of common Kubernetes commands

 

2) Extending certificate validity

# View certificate validity
kubeadm alpha certs check-expiration
# Regenerate all certificates
kubeadm alpha certs renew all
# On each master node, restart the control-plane component containers
docker ps | \
grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd' | \
awk -F ' ' '{print $1}' |xargs docker restart

3) Remove a node from the cluster

# Mark the node to be removed as unschedulable
kubectl cordon k8s-node1
# Drain the node, migrating its Pods to other nodes
kubectl drain k8s-node1 --delete-local-data --ignore-daemonsets --force
# Remove the node from the cluster
kubectl delete node k8s-node1

# Reset the configuration on the removed node
kubeadm reset
# Manually delete the remaining files as prompted
rm -rf /etc/cni/net.d
ipvsadm --clear
rm -rf /root/.kube/
# Stop the kubelet service
systemctl stop kubelet

# List the installed Kubernetes packages
yum list installed | grep 'kube'
# Remove the Kubernetes-related packages
yum remove kubeadm.x86_64 kubectl.x86_64 cri-tools.x86_64 kubernetes-cni.x86_64 kubelet.x86_64

4) Completely clean up the node network

# Reset node
kubeadm reset -f 
# Clear configuration
rm -rf $HOME/.kube/config /etc/cni/net.d && ipvsadm --clear
# Stop kubelet and Docker
systemctl stop kubelet && systemctl stop docker
# Delete the CNI network configuration and virtual network interfaces
rm -rf /var/lib/cni/
ip link delete cni0
ip link delete flannel.1 
ip link delete dummy0
ip link delete kube-ipvs0
# Restart Docker, kubelet, and the network service
systemctl restart docker && systemctl restart kubelet && systemctl restart network
# After switching network plugins a PodCIDR error may appear; the PodCIDR can be patched manually
kubectl describe node k8s112 | grep PodCIDR
kubectl patch node k8s112 -p '{"spec":{"podCIDR":"10.233.0.0/16"}}'

5) Scheduling an application onto the master nodes

# Add a toleration for the master taint and node affinity for the master nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
Note: when deploying the k8s dashboard, we sometimes find that opening it through a master node address is very sluggish, while opening it through the node it is actually deployed on is smooth. Adding the configuration above to the dashboard, i.e. deploying the dashboard on the master nodes, makes access through the master address smooth as well.

6) Modify k8s node name

# On a self-built K8S cluster on Alibaba Cloud ECS, connections to the apiserver may fail; this is usually because K8S issues a slow DNS lookup while resolving the node name, and can be fixed by overriding the node name.
hostname ymt-140
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_HOSTNAME=--hostname-override=ymt-140"
$KUBELET_HOSTNAME    # append this variable to the ExecStart line in the same file
systemctl daemon-reload && systemctl restart kubelet
ps -ef | grep /usr/bin/kubelet | grep -v grep
journalctl -xe -u kubelet

7) Deployment log

[root@k8s101 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0819 09:24:09.326568   28880 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0819 09:24:09.326626   28880 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8sapi] and IPs [10.96.0.1 172.1.1.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0819 09:24:14.028737   28880 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0819 09:24:14.029728   28880 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502551 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01
[mark-control-plane] Marking the node k8s101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zwx051.085210868chiscdc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
    --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2 \
    --control-plane --certificate-key 8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
    --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2



[root@k8s102 ~]#  kubeadm join k8sapi:6333 --token zwx051.085210868chiscdc \
>     --discovery-token-ca-cert-hash sha256:de4d9a37423fecd5313a76d99ad60324cdb0ca6a38254de549394afa658c98b2 \
>     --control-plane --certificate-key 8782750a5ffd83f0fdbe635eced5e6b1fc4acd73a2a13721664494170a154a01
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8sapi] and IPs [10.96.0.1 172.1.1.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s101 localhost] and IPs [172.1.1.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0819 10:31:17.604671    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0819 10:31:17.612645    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0819 10:31:17.613524    4058 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-08-19T10:31:31.039+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.1.1.102:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s101 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s101 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

 

 

Author: Leozhanggg

source: https://www.cnblogs.com/leozhanggg/p/13522155.html

The copyright of this article belongs to the author and Cnblogs. Reprinting is welcome, but without the author's consent this statement must be retained and a link to the original article must be given in a prominent position on the page; otherwise the right to pursue legal liability is reserved.


 
