Alibaba Cloud Proprietary Cloud Container Service Elastic Scaling Best Practices

1. Introduction to Container Service elastic scaling

This section briefly describes the principles and usage of Container Service elastic scaling.
In this practice, a test service in a K8s business cluster running on the proprietary cloud is stress tested. The setup relies on the following three products and capabilities:

  • Use an Alibaba Cloud Cloud Enterprise Network (CEN) leased line to connect the proprietary cloud and the public cloud, so that the VPC networks on the two clouds can communicate.
  • Use the HPA (Horizontal Pod Autoscaler) capability of K8s (Kubernetes) to scale containers horizontally.
  • Use the K8s Cluster Autoscaler together with the Alibaba Cloud elastic scaling group service (ESS) to scale nodes automatically.

When the metrics of the tested service reach the configured threshold, HPA is triggered to scale out the service's pods. When the business cluster can no longer accommodate more pods, the public cloud ESS service is triggered: new ECS instances are created on the public cloud and automatically added to the K8s cluster on the proprietary cloud.
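The two-level decision described above can be sketched as a toy calculation (plain shell, no kubectl; all numbers are made-up examples): HPA decides how many pods are needed, and when the current nodes cannot place them all, CA/ESS adds ECS instances.

```shell
# Toy numbers: HPA has decided 5 replicas are needed; each node fits 2 of them.
pods_needed=5
pods_per_node=2
nodes_current=2

# Round up: number of nodes needed to place all pods.
nodes_needed=$(( (pods_needed + pods_per_node - 1) / pods_per_node ))

if [ "$nodes_needed" -gt "$nodes_current" ]; then
  echo "ESS scale-out: add $(( nodes_needed - nodes_current )) ECS node(s)"
fi
```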

Figure 1: schematic diagram of Container Service elastic scaling

2. Software environment

The software environment requirements of this best practice are as follows:
Application environment:

  • Container Service ACK based on VPC, version 3.10.0.
  • Public cloud Cloud Enterprise Network (CEN) service.
  • Public cloud elastic scaling group service (ESS).

Configuration conditions:

  • Use the proprietary cloud's container service, or manually deploy the Agile PaaS platform on ECS.
  • Set up a leased line connecting the VPC where the container service resides with the VPC on the public cloud.
  • Activate the public cloud elastic scaling group service (ESS).

3. Configuration guidance

3.1 Configure HPA

This section describes the detailed steps for configuring HPA.
HPA (Horizontal Pod Autoscaler) is a K8s resource object. It dynamically scales the number of pods in objects such as StatefulSets and Deployments according to metrics such as CPU and memory usage, so that the services running on them adapt to changes in load.
This example creates an nginx application with HPA enabled. After it is created successfully, the application scales out horizontally when pod CPU utilization exceeds the 20% threshold set in this example, and scales back in when utilization drops below 20%. The detailed steps are as follows.
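The decision HPA makes follows the standard scaling rule desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A minimal shell sketch of this formula (illustrative only, not part of the original setup; utilizations are integer percentages):

```shell
# ceil(current * utilization / target) using integer arithmetic
desired_replicas() {
  local current=$1 util=$2 target=$3
  echo $(( (current * util + target - 1) / target ))
}

desired_replicas 1 60 20   # CPU at 60% against the 20% target -> 3 replicas
desired_replicas 3 10 20   # CPU back at 10% -> scale in to 2 replicas
```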

3.1.1 If a self-built K8s cluster is used, configure HPA through a YAML file

① Create an nginx application. You must set the resource request value for the application, otherwise HPA will not take effect.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hpa-test
  labels:
    app: hpa-test
spec:
  replicas: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      app: hpa-test
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: hpa-test
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: hpa-test
          image: '192.168.**.***:5000/admin/hpa-example:v1'
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 500m    # The request value must be set (500m is an example value)
          securityContext: {}

② Create HPA.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-test
  namespace: default
spec:
  maxReplicas: 10                      # Set the maximum number of pods (example value)
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: hpa-test
  targetCPUUtilizationPercentage: 20   # Set the CPU threshold (20% in this example)

3.1.2 If Alibaba Cloud Container Service is used, configure HPA when deploying the application

Figure 2: Alibaba cloud container service configuration HPA

3.2 Configure Cluster Autoscaler

This section describes the detailed steps for configuring Cluster Autoscaler.
The node autoscaling component decides whether to scale based on K8s resource scheduling: resource allocation on a node is calculated from the resource requests of the pods on it.
When a pod's resource request cannot be satisfied and the pod enters the Pending state, the node autoscaling component calculates the number of nodes required, according to the resource specifications and constraints configured for the elastic scaling group.
If the scaling conditions are met, nodes from the scaling group are added to the cluster. Conversely, when the total resource requests of the pods on a node in the elastic scaling group fall below a threshold, the node autoscaling component scales that node in (removes it).
Therefore, setting resource requests correctly and reasonably is a prerequisite for elastic scaling.
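The thresholds and timings described above map to flags on the cluster-autoscaler binary. A hedged sketch of the relevant fragment of a cluster-autoscaler Deployment (the flag names come from the upstream cluster-autoscaler project; the image, scaling-group ID, and values here are placeholders, and the actual ca.yml used in this practice may differ):

```yaml
      containers:
        - name: cluster-autoscaler
          image: registry.example.com/cluster-autoscaler:v1.x   # placeholder image
          command:
            - ./cluster-autoscaler
            - --cloud-provider=alicloud
            # Scale a node in when the pod requests on it stay below 50%...
            - --scale-down-utilization-threshold=0.5
            # ...for at least 10 minutes
            - --scale-down-unneeded-time=10m
            # min:max node count and the ESS scaling group to draw nodes from
            - --nodes=1:10:asg-xxxxxxxxxxxx
```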

3.2.1 Configure the elastic scaling group (ESS)

① Create an ESS elastic scaling group and record the minimum and maximum number of instances.

Figure 3: creating and configuring a scaling group - 1

Figure 4: creating and configuring a scaling group - 2

② Create a scaling configuration and record the ID of the scaling configuration.

Figure 5: creating a scaling configuration

The following script (presumably the instance initialization script in the scaling configuration) syncs the clock, joins the new ECS instance to the K8s cluster, and configures the Docker registry; the download URL, NTP server, and registry mirror address are elided in the original:

yum install -y ntpdate && ntpdate -u && curl http:// | bash -s -- --docker-version 17.06.2-ce-3 --token 9s92co.y2gkocbumal4fz1z --endpoint 192.168.**.***:6443 --cluster-dns 10.254.**.** --region cn-huhehaote
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    ""
  ],
  "insecure-registries": ["https://192.168.**.***:5000"]
}
EOF
systemctl restart docker

3.2.2 Deploy cluster-autoscaler in the K8s cluster

Refer to ca.yml to create the autoscaler. Note that the following configuration must be modified to match the actual environment, then apply it:

kubectl apply -f ca.yml

access-key-id: "TFRB********************"
access-key-secret: "bGIy********************W***************"
region-id: "Y24t************"
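These values are base64-encoded, as is standard for the data field of a K8s Secret (note that "Y24t" is the base64 encoding of "cn-"). A sketch of how such a Secret might look (the Secret name and namespace here are assumptions; match whatever ca.yml actually references):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-config        # assumed name
  namespace: kube-system
data:                       # values are base64-encoded, e.g. `echo -n cn-huhehaote | base64`
  access-key-id: "TFRB********************"
  access-key-secret: "bGIy********************W***************"
  region-id: "Y24t************"
```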

4. Stress test

This section describes how to stress test the setup and observe the elastic scaling behavior.

4.1 Simulate service access

Start a busybox image and execute the following command in the pod to access the Service of the application above; you can start multiple pods at the same time to increase the load.

while true; do wget -q -O- http://hpa-test/index.html; done
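For reference, the load generator can itself be written as a pod manifest rather than started interactively (a minimal sketch; the pod name is arbitrary, and the hostname hpa-test assumes a Service of that name fronts the example application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: load-generator      # arbitrary name; create several copies to raise the load
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - /bin/sh
        - -c
        - "while true; do wget -q -O- http://hpa-test/index.html; done"
```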

4.2 Observe HPA

Before the load is applied:

Figure 6: index value before pressurization

After the load is applied, horizontal scale-out of the pods is triggered once the CPU value reaches the threshold:

Figure 7: index value after pressurization

Figure 8: trigger pod horizontal expansion

4.3 Observe the pods

When cluster resources are insufficient, the newly scaled-out pods stay in the Pending state. This triggers cluster-autoscaler to automatically scale out the nodes.

Figure 9: trigger cluster autoscaler automatic capacity expansion node

Author: Liu Weiye

Senior Solution Engineer of Alibaba cloud intelligent hybrid cloud PDSA team

He previously worked at Xinhua Sanyun on software-defined data center solutions and was responsible for the architecture design and implementation of several provincial cloud platforms. He now works in the Alibaba Cloud intelligent hybrid cloud PDSA team, responsible for solution design, POCs, and best practices for containers and cloud-native products.

We are the Alibaba Cloud intelligent global technical services SRE team. We are committed to becoming a technology-based, service-oriented, high-availability engineering team, providing professional and systematic SRE services to help customers make better use of the cloud, build more stable and reliable business systems on it, and improve business stability. We look forward to sharing more technology that helps enterprise customers move to the cloud, use it well, and run their cloud businesses more stably and reliably. You can scan the QR code below with DingTalk to join the Alibaba Cloud SRE technical academy group and discuss cloud platform topics with more cloud users.

Original link:

Copyright notice: the content of this article is spontaneously contributed by Alibaba cloud real name registered users, and the copyright belongs to the original author. Alibaba cloud developer community does not own its copyright or bear corresponding legal liabilities. Please refer to Alibaba cloud community developer and user protection rules for Alibaba cloud community. If you find any content suspected of plagiarism in the community, fill in the infringement complaint form to report. Once verified, the community will immediately delete the content suspected of infringement.

Tags: Docker Kubernetes Nginx shell Container Cloud Native perl

Posted by jv2222 on Sat, 14 May 2022 01:05:01 +0300