Link: https://github.com/easzlab/kubeasz
Deployment steps
Following the example `example/hosts.multi-node` configuration, prepare 4 virtual machines and build a multi-master high-availability cluster.
1. Basic system configuration
- Recommended: 2 GB memory / 30 GB disk or more
- Minimal installation of CentOS 7 (Minimal ISO)
- Configure basic networking, update sources, SSH login, etc.
2. Install dependent tools on each node
```shell
yum makecache fast
yum update
yum install python -y
```
3. Install and prepare Ansible on the ansible control node
```shell
ssh-keygen
# Repeat for each node's IP
ssh-copy-id -i /root/.ssh/id_rsa.pub root@IP
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install ansible -y
yum install git python-pip -y
pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install ansible==2.6.18 netaddr==0.7.19 -i https://mirrors.aliyun.com/pypi/simple/
```
4. Download kubeasz on the ansible control node
Download the tool script easzup. This example uses kubeasz version 2.0.2.
```shell
export release=2.0.2
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# Use the tool script to download the components
./easzup -D
cd /etc/ansible && cp example/hosts.multi-node hosts
```
Adjust the `hosts` content to match your actual environment.
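As a rough sketch, the multi-node inventory assigns the four machines to the etcd, master, and node groups. The IPs below are placeholders, and the real `example/hosts.multi-node` template contains more groups and variables:

```ini
; Illustrative sketch only; adjust IPs and consult the shipped template
[etcd]
192.168.1.1
192.168.1.2
192.168.1.3

[kube-master]
192.168.1.1
192.168.1.2

[kube-node]
192.168.1.3
192.168.1.4
```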
Verify ansible connectivity: `ansible all -m ping`
5. Run the k8s installation playbooks on the ansible control node
```shell
# Step-by-step installation
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
# Choose one container runtime: containerd or docker
ansible-playbook 03.containerd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml
# One-step installation
# ansible-playbook 90.setup.yml
```
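The step-by-step sequence can be sketched as an ordered loop. This assumes docker as the container runtime; the loop is echoed as a dry run here, and dropping the `echo` would actually execute the playbooks on the control node:

```shell
# Dry run: print the playbook invocations in order (remove `echo` to execute)
for playbook in 01.prepare 02.etcd 03.docker 04.kube-master 05.kube-node 06.network 07.cluster-addon; do
  echo ansible-playbook "/etc/ansible/${playbook}.yml"
done
```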
Dashboard
Installation and deployment
```shell
# Deploy the dashboard main yaml profile
kubectl apply -f /etc/ansible/manifests/dashboard/kubernetes-dashboard.yaml
# Create a read-write admin Service Account
kubectl apply -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml
# Create a read-only Service Account
kubectl apply -f /etc/ansible/manifests/dashboard/read-user-sa-rbac.yaml
```
Verification
```shell
# View the pod status
kubectl get pod -n kube-system | grep dashboard
# kubernetes-dashboard-7c74685c48-9qdpn   1/1   Running   0   22s

# View the dashboard service
kubectl get svc -n kube-system | grep dashboard
# kubernetes-dashboard   NodePort   10.68.219.38   <none>   443:24108/TCP   53s

# View cluster services
kubectl cluster-info | grep dashboard
# kubernetes-dashboard is running at https://192.168.1.1:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

# View the pod logs
kubectl logs kubernetes-dashboard-7c74685c48-9qdpn -n kube-system
```
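The NodePort needed for browser access can be pulled out of the service listing programmatically. The service line is simulated below for illustration; in a real cluster you would pipe `kubectl get svc -n kube-system | grep dashboard` instead:

```shell
# Extract the NodePort (second field of the 443:24108/TCP column)
svc_line='kubernetes-dashboard NodePort 10.68.219.38 <none> 443:24108/TCP 53s'
nodeport=$(printf '%s\n' "$svc_line" | awk '{n=split($5, a, "[:/]"); print a[2]}')
echo "$nodeport"
```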
Sign in
Use https://NodeIP:NodePort to access the dashboard. Two login methods are supported: kubeconfig and token.
Select "token" to log in, and paste the admin token from the output below into the input box (read-write admin):
```shell
# Create the Service Account and ClusterRoleBinding
kubectl apply -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml
# Get the Bearer Token: find the line beginning with 'token:' in the output
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
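Rather than searching the `describe secret` output by eye, the token line can be extracted with awk. The output is simulated below with a made-up token value; in a real cluster, pipe the kubectl command from above instead:

```shell
# Pull just the token value out of the describe output
describe_output='Name:  admin-user-token-abcde
Type:  kubernetes.io/service-account-token
token: eyJhbGciOiJSUzI1NiIs.sample.payload'
token=$(printf '%s\n' "$describe_output" | awk '/^token:/ {print $2}')
echo "$token"
```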
Select "token" to log in, and paste the read token from the output below into the input box (read-only):
```shell
# Create the Service Account and ClusterRoleBinding
kubectl apply -f /etc/ansible/manifests/dashboard/read-user-sa-rbac.yaml
# Get the Bearer Token: find the line beginning with 'token:' in the output
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep read-user | awk '{print $1}')
```
Metrics Server
Starting with v1.8, resource usage metrics (such as container CPU and memory usage) are available through the Metrics API. The prerequisite is that the Metrics Server is deployed in the cluster; it collects metrics from the Summary API exposed by the kubelet.
Install
If the installation above completed successfully, the Metrics Server has already been deployed by the cluster addon playbook: `ansible-playbook /etc/ansible/07.cluster-addon.yml`
Verification
```shell
[root@zxl0 tasks]# kubectl get apiservice | grep metrics
v1beta1.metrics.k8s.io   kube-system/metrics-server   True   35m
[root@zxl0 tasks]# kubectl top node
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
10.2.16.101   57m          1%     1658Mi          53%
10.2.16.102   85m          2%     1731Mi          56%
10.2.16.103   58m          1%     1193Mi          38%
10.2.16.104   37m          0%     1248Mi          40%
```
Install KubeSphere
Prerequisites
- Kubernetes version: 1.13.0 ≤ k8s version < 1.16;
- Helm version: 2.10.0 ≤ helm < 3.0.0, with Tiller installed (v3.0 supports Helm v3); see "How to install and configure Helm";
- The cluster has more than 1 CPU core and 2 GB memory available, and can access the external network;
- The cluster has a default storage type (StorageClass);
Install helm
```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
# If the package cannot be downloaded online: https://download.csdn.net/download/zhangxueleishamo/12846302
tar -zxvf helm-v3.3.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
helm completion bash > ~/.helmrc && echo "source ~/.helmrc" >> ~/.bashrc
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install nginx stable/nginx-ingress
```
Install tiller
```shell
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh --tiller-image registry.cn-shanghai.aliyuncs.com/rancher/tiller:v2.15.1
helm list
# Install KubeSphere v3.0.0
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
```
Other commands
Delete pod
```shell
# View all pods
kubectl get pods --all-namespaces
# View pods in a specific namespace
kubectl get pod -n kubesphere-system
# Delete a specific pod
kubectl delete pod (podname) -n (namespace)
```
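To delete several pods matching a name pattern, the pod listing can be filtered with awk into a batch of delete commands. This is a dry run on sample data with hypothetical pod names; in a real cluster, pipe `kubectl get pods -n kube-system` instead and pass the result to a shell:

```shell
# Generate delete commands for every pod whose name contains "dashboard"
pods='kubernetes-dashboard-7c74685c48-9qdpn 1/1 Running 0 22s
coredns-5644d7b6d9-abcde 1/1 Running 0 30s'
printf '%s\n' "$pods" | awk '/dashboard/ {print "kubectl delete pod " $1 " -n kube-system"}'
```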