Cloud-Native Java Architecture in Practice: K8s + Docker + KubeSphere + DevOps (Part 2)
KubeSphere
Platform installation
Introduction
Install KubeSphere on Kubernetes
Installation steps
- Choose three machines: 4-core 8 GB (master), 8-core 16 GB (node1), 8-core 16 GB (node2); pay-as-you-go instances are fine for the experiment; CentOS 7.9
- Install Docker
- Install Kubernetes
- Install KubeSphere pre-environment
- Install KubeSphere
Install Docker
sudo yum remove docker*
sudo yum install -y yum-utils

# Configure the Docker yum repository
sudo yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install the specified version
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Enable and start Docker
systemctl enable docker --now

# Configure the registry mirror and cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://vgcihl1j.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
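As an optional sanity check (assuming the commands above succeeded), the daemon settings can be verified after the restart:

docker version
docker info | grep -i "cgroup driver"   # should report: systemd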

Install Kubernetes
1. Basic environment
All machines communicate with each other over their intranet IPs.
Each machine is configured with its own hostname (do not use localhost).
# Set each machine's own hostname
hostnamectl set-hostname k8s-master

# Set SELinux to permissive mode (equivalent to disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

2. Install kubelet, kubeadm, kubectl
# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Start kubelet
sudo systemctl enable --now kubelet

# Configure the master's domain name on every machine
echo "172.31.0.2 k8s-master" >> /etc/hosts

Initialize the master node
1. Perform initialization on the master
kubeadm init \
  --apiserver-advertise-address=172.31.0.2 \
  --control-plane-endpoint=k8s-master \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
2. Record key information
Save the log printed after the master initialization completes; it contains the join commands.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token 9db9x5.gvopaqx44fck5irh \
    --discovery-token-ca-cert-hash sha256:ade53e08667d16ff2866118d15b2e384c1c1dd721afcb9340e13133f15571861 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 9db9x5.gvopaqx44fck5irh \
    --discovery-token-ca-cert-hash sha256:ade53e08667d16ff2866118d15b2e384c1c1dd721afcb9340e13133f15571861

3. Install the Calico network plugin
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
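A quick way to check that the network plugin comes up (pod names will vary):

kubectl get pods -n kube-system   # wait until the calico-* pods are Running
kubectl get nodes                 # the master should report Ready once networking works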
4. Join the worker nodes
On each worker node, run the kubeadm join command for worker nodes printed in the log above.
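The token printed above is only valid for a limited time (24 hours by default); if it has expired, a fresh join command can be generated on the master with standard kubeadm tooling:

kubeadm token create --print-join-command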
Install KubeSphere pre-environment
1. nfs file system
1. Install nfs-server
# On every machine
yum install -y nfs-utils

# Execute on the master: expose the shared directory
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory
mkdir -p /nfs/data

# Execute on the master: start the nfs services
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Make the configuration take effect
exportfs -r

# Check that the configuration took effect
exportfs

2. Configure nfs-client on the worker nodes
# Replace with your master's IP
showmount -e 172.31.0.2

mkdir -p /nfs/data
mount -t nfs 172.31.0.2:/nfs/data /nfs/data
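A simple way to confirm the share works (the file name is just an example):

# On the master
echo "hello nfs" > /nfs/data/test.txt

# On the worker node, the file should appear through the mount
cat /nfs/data/test.txt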
3. Configure the default storage
Configure a default StorageClass for dynamic provisioning. Be sure to change the NFS server address to your own master's IP.
Execute on the master:
## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## When a PV is deleted, should its contents be archived (backed up)?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2   ## Specify your own nfs server address
            - name: NFS_PATH
              value: /nfs/data    ## Directory shared by the nfs server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Save the YAML above to a file and apply it (the file name sc.yaml is only an example):

kubectl apply -f sc.yaml

# Confirm that the StorageClass was created and is the default
kubectl get sc
To test, create a PVC. With dynamic provisioning there is no need to create a PV in advance as before: create the PVC directly, the PV is created automatically, and its size is taken from the claim.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs-storage  ## must match the name of the StorageClass created above
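To apply and verify (pvc.yaml is an example file name):

kubectl apply -f pvc.yaml
kubectl get pvc    # nginx-pvc should become Bound
kubectl get pv     # a 200Mi PV is provisioned automatically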
2. metrics-server
Cluster metrics monitoring component
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
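After applying the manifest and waiting for the metrics-server pod to become ready, a minimal sanity check (the file name is an example):

kubectl apply -f metrics-server.yaml
kubectl top nodes
kubectl top pods -A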

Install KubeSphere
1. Download the core file
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
2. Modify cluster-configuration
Specify the features to enable in cluster-configuration.yaml; refer to "Enable Pluggable Components" on the official website.
Here we leave only basicAuth and metrics_server set to false (metrics-server was already installed above), and set the network ippool type to calico.
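For reference, the relevant fields in cluster-configuration.yaml look roughly like this (an excerpt; the surrounding keys stay unchanged):

spec:
  common:
    es:
      basicAuth:
        enabled: false        # leave disabled
  metrics_server:
    enabled: false            # already installed separately above
  network:
    ippool:
      type: calico            # enable the Calico ippool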
3. Execute the installation
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
If a pod fails to start, inspect it for details (for example, image pull problems):
kubectl describe pod -n <namespace> <pod-name>
4. Check the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
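When the log ends with the "Welcome to KubeSphere" banner, the console should be reachable at http://<any-node-ip>:30880 (for v3.1.x the default account is admin / P@88w0rd; change the password after the first login). A quick check that the console service is exposed:

kubectl get svc -n kubesphere-system ks-console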
Solve the problem that the etcd monitoring certificate cannot be found
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
One-click installation of Kubernetes and KubeSphere on multiple nodes
Prepare three servers
- 4c8g (master)
- 8c16g * 2(worker)
- centos7.9
- Intranet communication
- Each machine has its own domain name
- Firewall opens ports 30000~32767
Create a cluster with KubeKey
1. Download KubeKey
export KKZONE=cn

curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -

chmod +x kk
2. Create a cluster configuration file
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
3. Create a cluster
./kk create cluster -f config-sample.yaml
4. Check the progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
config-sample.yaml sample file
In hosts, the name field is the hostname set on each machine; only hosts and roleGroups need to be modified, nothing else.
hostnamectl set-hostname master
hostnamectl set-hostname node1
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.31.0.2, internalAddress: 172.31.0.2, user: root, password: aA6675732}
  - {name: node1, address: 172.31.0.3, internalAddress: 172.31.0.3, user: root, password: aA6675732}
  - {name: node2, address: 172.31.0.4, internalAddress: 172.31.0.4, user: root, password: aA6675732}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Multi-tenancy
Middleware deployment
Deploy a MySQL stateful replica set
1. Create a configuration file in the configuration center of the project you created
Configuration example:
[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
2. Storage Management -> Storage Volume -> Create Storage Volume
After creation, the basic information can be filled in freely; stateful applications mostly use single-node read/write access.
3. Application Load -> Workload -> Stateful Replica Set -> Create
Set up container images
Enter the image and tag to pull from Docker Hub, specify the resource limit (no resource reservation needed), and use the default port.
Environment variables are the container's startup parameters; referring to the official Docker run command below, set the root password here (MYSQL_ROOT_PASSWORD).
Set up mount storage
Likewise referring to the -v mount options of the Docker command, select the storage volume and configuration file created in steps 1 and 2, and fill in the mount paths inside the container.
After the creation is complete, you can view the mounted files directly in the container
Docker run reference command
docker run -p 3306:3306 --name mysql-01 \
  -v /mydata/mysql/log:/var/log/mysql \
  -v /mydata/mysql/data:/var/lib/mysql \
  -v /mydata/mysql/conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=root \
  --restart=always \
  -d mysql:5.7
Deploy the MySQL load-balancing network
By default the created application uses a ClusterIP service, which can only be accessed inside the cluster.
Application load -> service, delete the default service (don't delete the replica set)
When recreating the service, choose the access type: ClusterIP (virtual IP) for cluster-internal access, or NodePort for external access. ClusterIP keeps the database reachable only from inside the cluster, which is safer.
With a NodePort service the database can also be reached from outside the cluster.
Even if no ClusterIP service is created, a NodePort service still gets its own DNS name for access inside the cluster.
Inside the cluster you can connect through the service's DNS name (<service-name>.<namespace>), for example:
mysql -uroot -h mall-mysql-node.mall -p
Deploy redis & set up network
# 1. Prepare the redis configuration file
mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf

## Configuration example
appendonly yes
port 6379
bind 0.0.0.0

# Start redis with docker
docker run -d -p 6379:6379 --restart=always \
  -v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
  -v /mydata/redis-01/data:/data \
  --name redis-01 redis:6.2.5 \
  redis-server /etc/redis/redis.conf

Because redis must be started with its configuration file, tick the start command option when creating the container and fill it in by referring to the docker command above (redis-server /etc/redis/redis.conf).
This time the storage volume is not created in advance; instead a storage volume template is mounted during creation, so that when the workload is scaled out each replica automatically gets its own volume, as shown in the sketch below.
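What the console generates corresponds roughly to a StatefulSet with a volumeClaimTemplates section; a minimal sketch (names and sizes are illustrative, and the config-file mount is omitted for brevity):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-demo               # illustrative name
spec:
  serviceName: redis-demo
  replicas: 1
  selector:
    matchLabels: {app: redis-demo}
  template:
    metadata:
      labels: {app: redis-demo}
    spec:
      containers:
      - name: redis
        image: redis:6.2.5
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:          # one PVC is created automatically per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs-storage   # the default StorageClass created earlier
      resources:
        requests:
          storage: 1Gi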
Create two services separately, in-cluster access and external access
Deploy ElasticSearch
1. Docker ES container startup reference; after starting it, check its default configuration
# Create the data directory
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# Start the container
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  -v es-config:/usr/share/elasticsearch/config \
  -v /mydata/es-01/data:/usr/share/elasticsearch/data \
  --name es-01 \
  elasticsearch:7.13.4
Enter the docker container to view the default configuration of ES, and create jvm.options and elasticsearch.yml entries with that content in the KubeSphere configuration center.
Create a stateful replica set; referring to the docker startup command, enter the two ports (9200, 9300) and the two environment variables.
When mounting the configuration, since only the two individual files are mounted rather than the whole config directory, each must be mounted with its full file path plus a sub-path, otherwise the other files in that directory would be hidden.
elasticsearch.yml same as above
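In plain Kubernetes YAML this corresponds to a subPath mount per file; a minimal sketch, shown on a Deployment for brevity (the workload and ConfigMap names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-demo                        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels: {app: es-demo}
  template:
    metadata:
      labels: {app: es-demo}
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.13.4
        env:
        - {name: "discovery.type", value: "single-node"}
        - {name: "ES_JAVA_OPTS", value: "-Xms512m -Xmx512m"}
        volumeMounts:
        - name: es-conf
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml   # mount only this file, keep the rest of the directory
        - name: es-conf
          mountPath: /usr/share/elasticsearch/config/jvm.options
          subPath: jvm.options
      volumes:
      - name: es-conf
        configMap:
          name: es-conf                # ConfigMap with keys elasticsearch.yml and jvm.options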
App Store
Deploy RabbitMQ
Application repository (Helm)
A Helm chart repository is to Kubernetes applications roughly what Docker Hub is to Docker images.
In app management, add an application repository.
Deploy ZooKeeper from the App Store
More options