Introduction to Docker Basics

1, Installation

Environment: CentOS 7

1. Uninstall old version

Older versions of Docker were called docker or docker-engine. If any of these are installed, uninstall them along with their dependencies.

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

2. Installing using a repository

Note: there are three official installation methods; here we use the most common one, installation from a repository.

Before installing Docker Engine on a new host for the first time, you need to set up the Docker repository. Docker can then be installed and updated from that repository.

sudo yum install -y yum-utils

# Note: the official source is painfully slow in China; replace it with the Aliyun source below
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
# Here we use the Aliyun source
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
 
# Installation (if the source is not changed, the speed will be very slow)
sudo yum install docker-ce docker-ce-cli containerd.io

3. Start up and verification

systemctl start docker   # start the Docker daemon
systemctl status docker  # check its status
docker version           # check the version
docker run hello-world   # test

4. Set up an image accelerator (Aliyun)

Note: every account has its own accelerator address.

Open the Alibaba Cloud website -> Console -> search for the Container Registry service -> find the image accelerator in the lower right corner

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://isch1uhg.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

docker run hello-world

5. Uninstall

# Uninstall the Docker Engine, CLI and containerd packages
sudo yum remove docker-ce docker-ce-cli containerd.io

# Images, containers, volumes, and custom configuration files on the host are not removed automatically. To delete all images, containers, and volumes:
sudo rm -rf /var/lib/docker

2, The three elements of Docker

1. Repository

Repository: a place where images are stored centrally.

Note: a repository is different from a registry (repository registration server). A registry usually hosts many repositories, each repository contains many images, and each image has a different tag.

Repositories come in two forms: public and private.

The largest public registry is Docker Hub (https://hub.docker.com), which hosts a huge number of images for users to download. Domestic public registries include Alibaba Cloud and NetEase Cloud.
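Registry, repository, and tag all appear in an image's fully qualified name. A small illustration, using the public ubuntu image as an example:

```shell
# Fully qualified form: <registry>/<namespace>/<repository>:<tag>
docker pull docker.io/library/ubuntu:20.04

# The shorthand is equivalent: the registry defaults to docker.io,
# the namespace to library, and the tag to latest when omitted
docker pull ubuntu
```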

2. Image

A read-only template is used to create Docker containers. One image can create many containers.

The relationship between container and image is similar to that of objects and classes in object-oriented programming

Docker        Object-oriented
container     object
image         class
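A quick sketch of the analogy, assuming the nginx image is available locally:

```shell
# one image ("class") can instantiate many containers ("objects")
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker ps   # web1 and web2 both run from the same nginx image
```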

3. Container

An application or group of applications that run independently.

A container is a running instance created from an image.

It can be created, started, stopped, and deleted. Containers are isolated from each other, providing a safe platform.

You can think of a container as a stripped-down Linux environment (with root permissions, a process space, a user space, a network space, etc.) plus the application running inside it.

The definition of a container is almost the same as that of an image. It is also a unified view of a stack of layers. The only difference is that the top layer of a container is readable and writable.

3, Simple understanding of underlying principles

1. How does Docker work

Docker uses a client-server architecture. The Docker daemon runs on the host, and clients connect to it through a socket. The daemon receives commands from the client and manages the containers running on the host.
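This split is easy to observe: the docker CLI is just a client speaking HTTP to the daemon over a Unix socket. A sketch, assuming Docker is installed and running on the local host:

```shell
# Ask the daemon for its version through its HTTP API directly,
# bypassing the docker CLI entirely
curl --unix-socket /var/run/docker.sock http://localhost/version

# docker version obtains the same information through the same socket
docker version
```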

2. Why is Docker faster than VM?

(1) Docker has fewer abstraction layers than a virtual machine. Because Docker needs no hypervisor to virtualize hardware resources, programs running in a Docker container use the physical machine's hardware directly, so Docker has a clear advantage in CPU and memory efficiency.

(2) Docker uses the host's kernel instead of a Guest OS. When creating a new container, Docker does not need to load an operating system kernel the way a virtual machine does, avoiding the time- and resource-consuming process of locating and loading a kernel. Creating a new virtual machine means the hypervisor must load a Guest OS, a minutes-long process; because Docker reuses the host's operating system, creating a container takes only seconds.

                      Docker container                             Virtual machine (VM)
Operating system      Shares the OS with the host                  Guest OS running on a hypervisor
Storage size          Small images, easy to store and transfer     Huge images (vmdk, vdi, etc.)
Runtime performance   Almost no extra overhead                     Extra CPU and memory consumed by the guest OS
Portability           Lightweight and flexible, tied to Linux      Bulky, tightly coupled to the virtualization stack
Hardware affinity     Geared toward software developers            Geared toward infrastructure operators

4, Related commands

1. Help command

docker version  # View docker version information
docker info  # detailed description
docker --help  # Command help

2. Image commands

(1) List local images
docker images [OPTIONS] [REPOSITORY[:TAG]]

# OPTIONS:
#   -a: list all local images (including intermediate image layers)
#   -q: show only image IDs
#   --digests: show image digests
#   --no-trunc: show full image information

# Columns:
# REPOSITORY: the repository the image comes from
# TAG: the image's tag
# IMAGE ID: the image's ID
# CREATED: when the image was created
# SIZE: the image's size

The same repository can hold multiple tags, representing different versions. We use REPOSITORY:TAG to identify an image. If no tag is specified, for example just ubuntu, Docker defaults to the ubuntu:latest image.

(2) Search for images
docker search [OPTIONS] TERM  # searches https://hub.docker.com

docker search mysql --filter=stars=3000  # only images with at least 3000 stars

# OPTIONS:
# --no-trunc: show the full image description
# -s: list images with at least the given number of stars
# --automated: list only automated-build images
(3) Pull an image
docker pull IMAGE_NAME[:TAG]  # without a TAG, latest is pulled by default; it comes from the Aliyun mirror we configured

docker pull mysql  # Get the latest version of mysql
Using default tag: latest
latest: Pulling from library/mysql
852e50cd189d: Pull complete   # downloaded layer by layer
29969ddb0ffb: Pull complete 
a43f41a44c48: Pull complete 
5cdd802543a3: Pull complete 
b79b040de953: Pull complete 
938c64119969: Pull complete 
7689ec51a0d9: Pull complete 
a880ba7c411f: Pull complete 
984f656ec6ca: Pull complete 
9f497bce458a: Pull complete 
b9940f97694b: Pull complete 
2f069358dc96: Pull complete 
Digest: sha256:4bb2e81a40e9d0d59bd8e3dc2ba5e1f2197696f6de39a91e90798dd27299b093
Status: Downloaded newer image for mysql:latest
docker.io/library/mysql:latest  
# docker pull mysql is equivalent to docker pull docker.io/library/mysql:latest

docker pull mysql:5.7  # Download the image of the specified version
5.7: Pulling from library/mysql
852e50cd189d: Already exists  # union file system: layers that already exist are not downloaded again
29969ddb0ffb: Already exists 
a43f41a44c48: Already exists 
5cdd802543a3: Already exists 
b79b040de953: Already exists 
938c64119969: Already exists 
7689ec51a0d9: Already exists 
36bd6224d58f: Pull complete 
cab9d3fa4c8c: Pull complete 
1b741e1c47de: Pull complete 
aac9d11987ac: Pull complete 
Digest: sha256:8e2004f9fe43df06c3030090f593021a5f283d028b5ed5765cc24236c2c4d88e
Status: Downloaded newer image for mysql:5.7
docker.io/library/mysql:5.7
(4) Delete images
# Delete a single image
docker rmi -f IMAGE_NAME_OR_ID  # if no TAG follows the image name, latest is deleted by default

# Delete multiple images
docker rmi -f name1 name2 ...
docker rmi -f id1 id2 ...     // Note: names and IDs cannot be mixed in one command

# Delete all mirrors
docker rmi -f $(docker images -aq)   

3. Container commands

A container can only be created from an image; that is the fundamental premise (download a CentOS image for the demo).

(1) Create and start a container (interactive)
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

docker run -it --name mycentos centos  // If you do not specify an alias, the system automatically assigns it

# OPTIONS:
# --name: assign a name to the container
# -d: run the container in the background and print the container ID (daemon mode)
# -i: run the container in interactive mode, usually together with -t
# -t: allocate a pseudo-TTY for the container, usually together with -i
# -P: publish all exposed ports to random host ports
# -p: publish a specific port; four formats:
#       ip:hostPort:containerPort
#       ip::containerPort
#       hostPort:containerPort
#       containerPort

# test
[root@iz2zeaj5c9isqt1zj9elpbz ~]# docker run -it centos /bin/bash  # Start and enter the container
[root@783cb2f26230 /]# ls   # View in container
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@783cb2f26230 /]# exit   # Exit container
exit
[root@iz2zeaj5c9isqt1zj9elpbz ~]# 

(2) List containers
docker ps [OPTIONS]  # without OPTIONS, only running containers are listed

# OPTIONS (common):
# -a: list all containers, both currently running and previously exited
# -l: show the most recently created container
# -n: show the last n created containers
# -q: quiet mode, show only container IDs
# --no-trunc: do not truncate output
(3) Exit container
exit  // stop the container and return to the host

Open another terminal and run docker ps -l: the container we just created is listed with a STATUS of Exited.

So can we leave an interactive session temporarily and come back later, without stopping the container?

Ctrl + P + Q

# After pressing this, we are back on the host. Running docker ps -l shows the STATUS of the container we just left as Up.
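To get back into a session left this way, attach to the container again (a sketch; substitute the ID that docker ps -l printed):

```shell
docker ps -l                  # find the container we detached from
docker attach <container-id>  # reattach to its terminal
# Ctrl + P + Q detaches again without stopping the container
```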
(4) Start container
docker start [OPTIONS] CONTAINER [CONTAINER...]

# Multiple containers can be started at once; names and IDs can be mixed

# OPTIONS (common):
# -i: start in interactive mode; only one container can be started this way

Restart and enter the container we exited from earlier:

docker start -i 186ae928f07c
(5) Restart container
docker restart [OPTIONS] CONTAINER [CONTAINER...]

# OPTIONS:
# -t: seconds to wait for a stop before killing the container (default 10)
(6) Stop container
docker stop [OPTIONS] CONTAINER [CONTAINER...]

# OPTIONS:
# -t: seconds to wait for a stop before killing the container (default 10)

docker kill [OPTIONS] CONTAINER [CONTAINER...]  // force stop (like pulling the power plug)

# OPTIONS:
# -s: signal to send to the container (default "KILL")
(7) Delete container
docker rm [OPTIONS] CONTAINER [CONTAINER...]

# OPTIONS Description:
# -f: Force deletion, whether running or not

# The command above deletes one or more containers. To delete all containers, do we really have to list every name or ID? No:

docker rm -f $(docker ps -aq)  # Delete all
docker ps -aq | xargs docker rm -f
(8) Start a daemon container
docker run -d IMAGE_NAME_OR_ID

Note: when we check with docker ps -a, we find the container started and then exited immediately. Why?

The important point is that a Docker container running in the background must have a foreground process. If the command the container runs is not long-running (such as top or tail), the container exits as soon as the command finishes.

This is Docker's mechanism. Take a web container such as Nginx: normally we would just start the service, e.g. service nginx start, but that runs Nginx as a background process, leaving no foreground application in the container. A container started this way exits immediately because it thinks it has nothing left to do. The best solution is to run the application as a foreground process.

So how do we keep a daemon container from exiting? Run a command that never terminates:

docker run -d centos /bin/sh -c "while true;do echo hello Negan;sleep 2;done"
(9) View container log
docker logs [OPTIONS] CONTAINER

# OPTIONS:
# -t: add timestamps
# -f: follow the log output
# --tail N: show only the last N lines
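For example, to watch the daemon container started above (a sketch; substitute your own container name or ID):

```shell
# follow the log with timestamps, starting from the last 10 lines
docker logs -tf --tail 10 <container>
```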
(10) View the processes in the container
docker top CONTAINER 
(11) View container interior details
docker inspect CONTAINER
(12) Enter the running container and interact with it on the command line
  • exec opens a new terminal in the container and can start a new process
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]  

docker exec -it CONTAINER /bin/bash   // Enter the container and interact

# "Striking the cow from across the mountain": run a command in the container without entering it
docker exec -it 3d00a0a2877e ls -l /tmp

# 3d00a0a2877e is the daemon container started above
# The ls -l /tmp command runs inside the container and its output is returned to the host; the shell stays on the host and never enters the container
  • attach attaches to the terminal of the container's start command and does not start a new process
docker attach CONTAINER

# Note: after attaching to the daemon container above, we see hello Negan still printed every two seconds and cannot exit; we have to open another terminal and run docker kill
(13) File copy between container and host
docker cp CONTAINER:SRC_PATH DEST_PATH  # Copy the files in the container to the host
docker cp 90bd03598dd4:123.txt ~

docker cp SRC_PATH CONTAINER:DEST_PATH  # Copy the files on the host to the container
docker cp 12345.txt 90bd03598dd4:/root

Practice

Exercise 1: deploying Nginx

# Find an nginx image
docker search nginx
# Download the image
docker pull nginx
# Start it
docker run -d --name nginx01 -p 3344:80 nginx  # -p 3344(host):80(container port)
# Test (browser access)
123.56.243.64:3344 

5, Visualization

1. Portainer

Docker graphical interface management tool provides a background panel for us to operate.

docker run -d -p 8088:9000 \
--restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer

6, Docker image

1. What is an image

Image is a lightweight and executable independent software package, which is used to package the software running environment and the software developed based on the running environment. It contains all the contents required to run a software, including code, runtime, library, environment variables and configuration files.

Any application can be packaged as a Docker image and then run directly.

How to obtain images:
  • Pull from a remote registry
  • Copy from a friend
  • Build your own image with a DockerFile

2. Image loading principle

(1) Union file system

UnionFS (union file system): a layered, lightweight, high-performance file system. It supports stacking file-system modifications layer by layer as single commits, and can mount different directories under the same virtual file system. Union file systems are the foundation of Docker images. Images can be inherited through layering: starting from a base image (one with no parent image), all kinds of specific application images can be built.

Features: multiple file systems are loaded at the same time, but from the outside only one is visible. Union mounting overlays the layers, so the final file system contains all of the underlying files and directories.

(2) Docker image loading principle

A Docker image is actually built up from file systems layer by layer, i.e. the union file system described above.

bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader boots and loads the kernel. The bootfs is loaded when Linux starts up, and it sits at the bottom of a Docker image. This layer is the same as in a typical Linux/Unix system, containing the boot loader and kernel. Once boot-up finishes, the whole kernel is in memory, ownership of that memory passes from bootfs to the kernel, and the system unmounts bootfs.

rootfs (root file system), on top of bootfs, contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin and /etc. The rootfs is what differs between operating system distributions such as Ubuntu and CentOS.

For a minimal OS, the rootfs can be very small: it only needs the most basic commands, tools, and libraries. Since the underlying kernel comes directly from the host, only a rootfs has to be provided. So across Linux distributions the bootfs is basically the same while the rootfs differs, and different distributions can share the bootfs.

3. Understanding layering

(1) Layered images

When we download an image and watch the log output, we can see it being downloaded layer by layer.

Why do Docker images use this layered structure?

The biggest advantage is resource sharing. If multiple images are built from the same base image, the host only needs one copy of the base image on disk and one copy in memory, which then serves all containers; likewise, every layer of every image can be shared.
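The layers of any local image can be inspected directly; a sketch, assuming an nginx image has been pulled:

```shell
# each row is one image layer, newest at the top; layers shared with
# other images are stored only once on disk
docker history nginx
```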

(2) Understanding it

Every Docker image starts from a base image layer. Whenever content is modified or added, a new image layer is created on top of the current one. If you create a new image based on Ubuntu Linux 16.04, that is the new image's first layer; adding a Python package creates a second layer on top of the base; adding a security patch creates a third.

Each change adds another image layer, while the image always remains the combination of all its current layers.

Note: Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image.

This writable layer is what we usually call the container layer; everything below it is called the image layer.

4. commit

docker commit: commit a container's changes as a new image

docker commit -m="commit message" -a="author" CONTAINER_ID TARGET_IMAGE_NAME:[TAG]
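A minimal sketch of the workflow (the container name c1 and image name negan/mycentos are just examples):

```shell
# start a container and change something inside it
docker run -it --name c1 centos /bin/bash
# (inside the container) touch /hello.txt, then exit

# snapshot the modified container as a new image
docker commit -m="add hello.txt" -a="Negan" c1 negan/mycentos:1.0
docker images   # negan/mycentos:1.0 now appears in the local list
```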

7, Container data volume

If data lives only inside the container, deleting the container loses the data!

Requirement: data needs to be persisted.

MySQL in a container: delete the container and the database is gone? The data generated inside a Docker container needs to be synchronized to the local machine.

This is volume technology: directory mounting. We mount directories inside the container onto the Linux host.

1. Using data volumes

  • Mount directly using the command
docker run -it -v HOST_DIR:CONTAINER_DIR  # two-way binding: a change on either side appears on the other

docker run -it -v /home/ceshi:/home centos /bin/bash

docker inspect d2343e9d338a  # inspect the container's mount details

/*
...
 "Mounts": [   # Mount - v 
            {
                "Type": "bind",
                "Source": "/home/ceshi",  # Path within host
                "Destination": "/home",   # Path within container
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
...
*/

2. Hands-on: installing MySQL

Key point: persisting MySQL's data

docker run -d -p 13306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

3. Named mount and anonymous mount

(1) Anonymous mount
docker run -d -P --name nginx01 -v /etc/nginx nginx  # Only paths within the container are specified

docker volume ls  # list all volumes
DRIVER              VOLUME NAME
local               55050407d8fd052403bbf6ee349aa6268c2ec5c1054dafa678ac5dd31c59217a  # Anonymous mount
local               fd074ffbcea60b7fe65025ebe146e0903e90d9df5122c1e8874130ced5311049

# This is an anonymous mount: with -v we only wrote the path inside the container, not a path or name outside it.
(2) Named mount
docker run -d -P -v juming:/etc/nginx --name nginx02 nginx   # Named mount

docker volume ls
DRIVER              VOLUME NAME
local               55050407d8fd052403bbf6ee349aa6268c2ec5c1054dafa678ac5dd31c59217a
local               fd074ffbcea60b7fe65025ebe146e0903e90d9df5122c1e8874130ced5311049
local               juming  # Our name on it

# Syntax: -v VOLUME_NAME:CONTAINER_PATH

# If no host directory is specified, all volumes in Docker containers live under /var/lib/docker/volumes/.
docker inspect juming  
[
    {
        "CreatedAt": "2020-12-09T00:22:54+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/juming/_data",
        "Name": "juming",
        "Options": null,
        "Scope": "local"
    }
]

Named mounts make it easy to find the volume we mounted; in most cases we use named mounts.

(3) More options
-v CONTAINER_PATH                # anonymous mount
-v VOLUME_NAME:CONTAINER_PATH    # named mount
-v /HOST_PATH:CONTAINER_PATH     # mount to a specified host path

# Read/write permissions are set after the container path: ro / rw
ro  # readonly: read-only
rw  # readwrite: readable and writable

# Once ro is set, the container is restricted: the mounted content can only be modified from the host; the container cannot write to it.
docker run -d -P --name nginx01 -v juming:/etc/nginx:ro nginx 

4. Data volume container

Synchronizing data between containers.

docker run -it -v /home --name c1 centos /bin/bash   # Start c1 as a parent container (specify the mount directory in the container)
docker run -it --name c2 --volumes-from c1 centos /bin/bash  # Start c2 and mount c1. At this time, the operations in / home in the two containers will be synchronized


# The following containers are created by ourselves. You can refer to the following dockerfile for construction
docker run -it --name docker01 negan/centos   # Start a container as the parent container (mounted container)  
docker run -it --name docker02 --volumes-from docker01 negan/centos  #Start container 2 and mount container 1 
docker run -it --name docker03 --volumes-from docker02 negan/centos  #Start container 3 and mount container 2
# The above container 3 can also be directly mounted on container 1, and then we enter any container and mount volumes volume01/volume02 for operation, and the data between containers will be synchronized automatically.

Conclusion:

Data volume containers pass configuration information between containers; the life cycle of a data volume lasts until no container uses it any more.

However, once data has been persisted to the local disk, the local copy is not deleted with the containers.

8, DockerFile

1. Getting to know DockerFile

A DockerFile is the build file used to construct a Docker image: a script of instructions and parameters. Running the script generates an image. Images are layered, the script consists of instructions, and each instruction adds one layer.

# dockerfile1 contents; all instructions are uppercase

FROM centos   # centos based

VOLUME ["volume01","volume02"]   # Mount data volume (anonymous mount)

CMD echo "----end-----"

CMD /bin/bash


# build
docker build -f dockerfile1 -t negan/centos .  # -f specifies the file path, -t specifies the name; without a tag, latest is used
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM centos
 ---> 0d120b6ccaa8
Step 2/4 : VOLUME ["volume01","volume02"]
 ---> Running in 0cfe6b5be6bf
Removing intermediate container 0cfe6b5be6bf
 ---> 396a4a7cfe15
Step 3/4 : CMD echo "----end-----"
 ---> Running in fa535b5581fa
Removing intermediate container fa535b5581fa
 ---> 110d9f93f827
Step 4/4 : CMD /bin/bash
 ---> Running in 557a2bb87d97
Removing intermediate container 557a2bb87d97
 ---> c2c9b92d50ad
Successfully built c2c9b92d50ad
Successfully tagged negan/centos:latest

docker images  # verify the new image

2. DockerFile build process

(1) Basic knowledge

Every reserved keyword (instruction) must be uppercase.

Instructions execute from top to bottom.

# indicates a comment.

Each instruction creates and commits a new image layer.

DockerFiles are development-facing: to publish a project as an image, we write a DockerFile.

Docker image has gradually become the standard of enterprise delivery.

(2) Basic instructions
FROM        # the base image; everything starts from here
MAINTAINER  # who wrote and maintains the image (name + email)
RUN         # commands to run while building the image
ADD         # copy files into the image (tar archives are automatically extracted)
WORKDIR     # the image's working directory
VOLUME      # declare a volume mount point (an anonymous mount inside the container)
EXPOSE      # declare the port the container listens on
CMD         # command to run when the container starts; only the last CMD takes effect, and it can be replaced at run time
ENTRYPOINT  # command to run when the container starts; run-time arguments are appended to it
ONBUILD     # runs when this image is used as the base of another build
COPY        # like ADD, copies files into the image (without extraction)
ENV         # set environment variables during the build

3. Hands-on

(1) Create your own CentOS
# vim Dockerfile

FROM centos
MAINTAINER Negan<huiyichanmian@yeah.net>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash

# build
docker build -f Dockerfile -t negan/centos .

# test
docker run -it negan/centos
[root@ffae1f9eb97e local]# pwd
/usr/local   # Enter the working directory we set in dockerfile

# View the image's build history
docker history IMAGE_NAME_OR_ID
(2) Difference between CMD and ENTRYPOINT

Both specify the command executed when the container starts. With CMD, only the last CMD takes effect; run-time arguments are not appended but replace the command entirely. With ENTRYPOINT, the command is not replaced; run-time arguments are appended to it.

CMD

# vim cmd
FROM centos
CMD ["ls","-a"]

# build
docker build -f cmd -t cmd_test .

# Run and find that our ls -a command takes effect
docker run cmd_test
.
..
.dockerenv
bin
dev
etc
......

# Append command run
docker run cmd_test -l
# An error is thrown: the appended -l replaces the original ls -a, and -l by itself is not a valid command
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-l\": executable file not found in $PATH": unknown.

# Append complete command
docker run cmd_test ls -al

total 56
drwxr-xr-x  1 root root 4096 Dec 10 14:36 .
drwxr-xr-x  1 root root 4096 Dec 10 14:36 ..
-rwxr-xr-x  1 root root    0 Dec 10 14:36 .dockerenv
lrwxrwxrwx  1 root root    7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x  5 root root  340 Dec 10 14:36 dev
......

ENTRYPOINT

# vim entrypoint
FROM centos
ENTRYPOINT ["ls","-a"]

# build
docker build -f entrypoint -t entrypoint_test .

# run
docker run entrypoint_test
.
..
.dockerenv
bin
dev
etc

# Append command run
docker run entrypoint_test -l

total 56
drwxr-xr-x  1 root root 4096 Dec 10 14:41 .
drwxr-xr-x  1 root root 4096 Dec 10 14:41 ..
-rwxr-xr-x  1 root root    0 Dec 10 14:41 .dockerenv
lrwxrwxrwx  1 root root    7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x  5 root root  340 Dec 10 14:41 dev
drwxr-xr-x  1 root root 4096 Dec 10 14:41 etc
......

4. Hands-on: building Tomcat

(1) Environment preparation
 ll
total 166472
-rw-r--r-- 1 root root  11437266 Dec  9 16:22 apache-tomcat-9.0.40.tar.gz
-rw-r--r-- 1 root root       641 Dec 10 23:26 Dockerfile
-rw-r--r-- 1 root root 159019376 Dec  9 17:39 jdk-8u11-linux-x64.tar.gz
-rw-r--r-- 1 root root         0 Dec 10 22:48 readme.txt
(2) Build image
# vim Dockerfile (Dockerfile is the official recommended name)

FROM centos
MAINTAINER Negan<huiyichanmian@yeah.net>

COPY readme.txt /usr/local/readme.txt

ADD jdk-8u11-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.40.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local
WORKDIR $MYPATH

ENV JAVA_HOME /usr/local/jdk1.8.0_11
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.40
ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.40
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin

EXPOSE 8080

CMD /usr/local/apache-tomcat-9.0.40/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.40/logs/catalina.out

# build
docker build -t tomcat .
(3) Start container
docker run -d -P --name tomcat01 -v /home/Negan/tomcat/test:/usr/local/apache-tomcat-9.0.40/webapps/test -v /home/Negan/tomcat/logs:/usr/local/apache-tomcat-9.0.40/logs tomcat

9, Publish your own image

1,docker hub

First, register an account on Docker Hub and make sure you can log in with it.

Log in on our server; after a successful login, push the image.

# Sign in
docker login [OPTIONS] [SERVER]

Log in to a Docker registry.
If no server is specified, the default is defined by the daemon.

Options:
  -p, --password string   Password
      --password-stdin    Take the password from stdin
  -u, --username string   Username


# Push our image after successful login
docker push [OPTIONS] NAME[:TAG]  

Push an image or a repository to a registry

Options:
      --disable-content-trust   Skip image signing (default true)

docker tag tomcat huiyichanmian/tomcat  # to push under a different name, tag the image first (prefix it with your own username), then push the new name
docker push huiyichanmian/tomcat

2. Alibaba Cloud

Log in to Alibaba Cloud, open the Container Registry service, and use the image repositories: create a namespace, create an image repository, and select a local repository.

Alibaba Cloud shows particularly detailed steps for this, so they are not repeated here.
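For reference, those steps have roughly this shape (a hedged sketch: the region registry.cn-hangzhou.aliyuncs.com, the account, namespace, and repository below are placeholders; use the exact commands shown in your own console):

```shell
# log in to your regional Aliyun registry (placeholders throughout)
docker login --username=<your-aliyun-account> registry.cn-hangzhou.aliyuncs.com

# tag the local image with the full registry path, then push it
docker tag tomcat registry.cn-hangzhou.aliyuncs.com/<namespace>/<repo>:v1
docker push registry.cn-hangzhou.aliyuncs.com/<namespace>/<repo>:v1
```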

10, Docker network

1. Understand docker0

(1) View the host's network card information
ip addr 

# Local loopback address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
       
# Alibaba cloud intranet address
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:0c:7b:cb brd ff:ff:ff:ff:ff:ff
    inet 172.24.14.32/18 brd 172.24.63.255 scope global dynamic eth0
       valid_lft 286793195sec preferred_lft 286793195sec
    
# docker0 address
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:5e:2b:4c:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

(2) View container network card information

We get a tomcat image for testing.

docker run -d -P --name t1 tomcat 

docker exec -it t1 ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
# We found that when the container starts, it gets an eth0@ifxxx interface, and its ip address is on the same network segment as docker0 above.
233: eth0@if234: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
(3) View the local network card information again
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:0c:7b:cb brd ff:ff:ff:ff:ff:ff
    inet 172.24.14.32/18 brd 172.24.63.255 scope global dynamic eth0
       valid_lft 286792020sec preferred_lft 286792020sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:5e:2b:4c:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
# An extra network interface has appeared on the host, paired with the one inside the container (233 and 234)
234: veth284c2d9@if233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default

Repeating the above steps, it is not hard to see that as soon as docker is installed the host gains a docker0 interface, and every time a container starts docker assigns it a network interface, with a matching interface appearing on the host that corresponds to the one inside the container. This is veth-pair technology: a pair of virtual device interfaces that always come in pairs, with one end connected to the protocol stack and the two ends connected to each other. Thanks to this property, a veth pair is usually used as a bridge between virtual network devices.

Containers that do not specify a network are attached to docker0, and Docker assigns each of them a default ip.

Docker uses a Linux bridge: docker0 on the host acts as the bridge for the containers, and all of these network interfaces are virtual.

When a container is deleted, its corresponding veth pair is deleted as well.
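The ip numbering described above can be illustrated with Python's ipaddress module. This is only a sketch of the numbering scheme (the real allocation is done by Docker's IPAM driver), assuming the default 172.17.0.0/16 pool seen in the ip addr output:

```python
import ipaddress

# docker0's default bridge network, as shown in `ip addr` above
bridge = ipaddress.ip_network("172.17.0.0/16")

hosts = bridge.hosts()         # iterator over usable host addresses
gateway = next(hosts)          # docker0 itself takes the first host address
first_container = next(hosts)  # the first container started gets the next one

print(gateway)          # 172.17.0.1
print(first_container)  # 172.17.0.2
```

This matches the output above: docker0 sits at 172.17.0.1 and the first container gets 172.17.0.2.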

2. --link

Question: every time a container restarts, its ip address may change, so any fixed ip written into our project configuration has to change with it. Can we refer to a container by its service name instead, so that after a restart the configuration still resolves to the right container?

Let's start two tomcat containers, t1 and t2, and test whether they can ping each other by name.

docker exec -it t1 ping t2
ping: t2: Name or service not known
# The answer is no: t2 cannot be resolved by name. How do we solve this?
# We use --link to connect them
docker run -d -P --name t3 --link t2 tomcat

# We try to ping t2 with t3
docker exec -it t3 ping t2
# We found that it was connected
PING t2 (172.17.0.3) 56(84) bytes of data.
64 bytes from t2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.099 ms
64 bytes from t2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.066 ms
......

So what did --link actually do?

docker exec -it t3 cat /etc/hosts  # Let's look at the hosts file of t3

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	t2 6bf3c12674c8  # Here is the reason: t2 is recorded here, so pinging t2 automatically goes to 172.17.0.3
172.17.0.4	b6dae0572f93
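So --link simply writes a static entry into the container's hosts file. A minimal Python sketch of that lookup, using the names and addresses from the output above (parse_hosts is a hypothetical helper, not part of Docker):

```python
def parse_hosts(text: str) -> dict:
    """Map each hostname/alias in an /etc/hosts-style file to its IP."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name] = ip
    return table

hosts = """\
127.0.0.1\tlocalhost
172.17.0.3\tt2 6bf3c12674c8
172.17.0.4\tb6dae0572f93
"""

table = parse_hosts(hosts)
print(table["t2"])  # 172.17.0.3 -- what `ping t2` resolves to inside t3
```

This also shows the limitation of --link: the mapping is a one-shot static entry, so it does not follow ip changes; the custom networks in the next section solve this properly.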

3. Custom network

(1) View docker network
docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
10684d1bfac9        bridge              bridge              local
19f4854793d7        host                host                local
afc0c673386f        none                null                local

# bridge bridging (docker default)
# Host share with host
# none not configured
(2) Default network at container startup

docker0 is the default network and does not support access by container name. You can use --link to get around this.

 # Generally, we start the container like this, using the default network, and the default network is bridge, so the following two commands are the same
docker run -d -P --name t1 tomcat  

docker run -d -P --name t1 --net bridge tomcat 
(3) Create a network
# --driver bridge   the bridge driver (the default, can be omitted)
# --subnet 192.168.0.0/16   the subnet
# --gateway 192.168.0.1   the default gateway
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet

docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
10684d1bfac9        bridge              bridge              local
19f4854793d7        host                host                local
0e98462f3e8e        mynet               bridge              local  # Our own network
afc0c673386f        none                null                local


ip addr

.....
# Our own network
239: br-0e98462f3e8e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:b6:a7:b1:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/16 brd 192.168.255.255 scope global br-0e98462f3e8e
       valid_lft forever preferred_lft forever
.....

Start two containers and use the network we created ourselves

docker run -P -d --name t1 --net mynet tomcat
docker run -P -d --name t2 --net mynet tomcat

# View the network information created by ourselves
docker network inspect mynet

# We found that the two containers we just started use the network we just created
......
"Containers": {
            "1993703e0d0234006e1f95e964344d5ce01c90fe114f58addbd426255f686382": {
                "Name": "t2",
                "EndpointID": "f814ccc94232e5bbc4aaed35022dde879743ad9ac3f370600fb1845a862ed3b0",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "8283df6e894eeee8742ca6341bf928df53bee482ab8a6de0a34db8c73fb2a5fb": {
                "Name": "t1",
                "EndpointID": "e462941f0103b99f696ebe2ab93c1bb7d1edfbf6d799aeaf9a32b4f0f2f08e01",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
.......

So what are the benefits of using your own network?

Let's go back to our previous problem, that is, the domain name ping is not available.

docker exec -it t1 ping t2
PING t2 (192.168.0.3) 56(84) bytes of data.
64 bytes from t2.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.063 ms
......

docker exec -it t2 ping t1
PING t1 (192.168.0.2) 56(84) bytes of data.
64 bytes from t1.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.045 ms
......

We found that the container names can be pinged, which means our custom network maintains the name-to-ip mapping for us.

In this way, different clusters can use different networks, which also keeps each cluster isolated and healthy.

4. Network connectivity

Now there is a requirement: t1 and t2 use our custom network, while t3 and t4 use the default docker0 network. Can t3 communicate with t1 or t2?

We know that the gateway of docker0 is 172.17.0.1 and that of mynet is 192.168.0.1. They belong to different network segments and cannot communicate directly. So how do we solve this?

Could mynet assign an ip address to t3? If it can, the problem should be solved.

docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container
   
  # One container two ip addresses 
 docker network connect mynet t3   # Join t3 to mynet network
 
 # View mynet information
 docker network inspect mynet
 
 "Containers": {
			......
            "d8ecec77f7c1e6d26ad0fcf9107cf31bed4b6dd553321b737d14eb2b497794e0": {
                "Name": "t3",  # We found t3
                "EndpointID": "8796d63c1dd1969549a2d1d46808981a2b0ad725745d794bd3b824f278cec28c",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            }
        },
        ......

At this time, t3 can communicate with t1 and t2.

5. Deploy Redis cluster

# Create network
docker network create redis --subnet 172.38.0.0/16

# Create six redis configurations through scripts
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379 
bind 0.0.0.0
cluster-enabled yes 
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
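The loop above just renders the same template six times with a different announce ip. A Python sketch of the same rendering (it only builds the strings; writing them under /mydata/redis/... is left to the shell loop above):

```python
# Template matching the redis.conf generated by the shell loop above
TEMPLATE = """\
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1{port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
"""

def render_conf(node: int) -> str:
    # Node N announces itself at 172.38.0.1N, exactly like the shell loop
    return TEMPLATE.format(port=node)

for node in range(1, 7):
    print(f"node-{node} announces 172.38.0.1{node}")
```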

# Start container
vim redis.py

import os

# Start six redis containers, each with its own data/conf mount and a fixed ip
for i in range(1, 7):
    cmd = "docker run -p 637{}:6379 -p 1637{}:16379 --name redis-{} \
    -v /mydata/redis/node-{}/data:/data \
    -v /mydata/redis/node-{}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1{} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf".format(i, i, i, i, i, i)
    os.system(cmd)

python redis.py

# Create cluster
docker exec -it redis-1 /bin/sh  # Enter the redis-1 container

redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

# Create a cluster....
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 9d1d33301aea7e4cc9eb41ec5404e2199258e94e 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: d63e90423a034f9c42e72cc562706919fd9fc418 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: a89026d4ea211d36ee04f2f3762c6e3cd9692a28 172.38.0.14:6379
   replicates d63e90423a034f9c42e72cc562706919fd9fc418
S: bee27443cd5eb6f031115f19968625eb86c8440b 172.38.0.15:6379
   replicates 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49
S: 53d6196c160385181ff23b15e7bda7d4387b2b17 172.38.0.16:6379
   replicates 9d1d33301aea7e4cc9eb41ec5404e2199258e94e
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a89026d4ea211d36ee04f2f3762c6e3cd9692a28 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d63e90423a034f9c42e72cc562706919fd9fc418
S: 53d6196c160385181ff23b15e7bda7d4387b2b17 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 9d1d33301aea7e4cc9eb41ec5404e2199258e94e
M: 9d1d33301aea7e4cc9eb41ec5404e2199258e94e 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: bee27443cd5eb6f031115f19968625eb86c8440b 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49
M: d63e90423a034f9c42e72cc562706919fd9fc418 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
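The 16384 slots allocated above come from Redis Cluster's hashing rule: slot = CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant. A small sketch (ignoring {hash tag} handling for simplicity):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16(key.encode()) % 16384

# Check value from the CRC definition: CRC16("123456789") == 0x31C3
assert crc16(b"123456789") == 0x31C3

print(key_slot("foo"))  # 12182 -> served by the master owning slots 10923-16383
```

Inside the cluster, `CLUSTER KEYSLOT foo` returns the same 12182, which is why a `-c` client gets redirected to the 172.38.0.13 master for that key.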


# Test
redis-cli -c   # inside the container, connect in cluster mode
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:160
cluster_stats_messages_pong_sent:164
cluster_stats_messages_sent:324
cluster_stats_messages_ping_received:159
cluster_stats_messages_pong_received:160
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:324

11, Docker Compose

1. Introduction

Compose is an official open source Docker project, but it is installed separately from the engine.

Compose is a tool for defining and running multi container Docker applications. With compose, you can use YAML files to configure the services of your application. Then, with one command, you can create and start all services from the configuration.

Using Compose is basically a three-step process:

  1. Define your app's environment with a Dockerfile so it can run anywhere
  2. Define the services that make up your app in a docker-compose.yml file
  3. Run docker-compose up to start everything

2. Quick start

(1) Installation
curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

docker-compose --version
# Installation succeeded
# docker-compose version 1.27.4, build 40524192
(2) Use
# Create a directory for the project
mkdir composetest
cd composetest

# Write a flask program
vim app.py

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)  # Here, the host name directly uses "redis" instead of the ip address

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
    
# Write the requirements.txt file (no versions pinned, so the latest will be installed)
flask
redis

# Write Dockerfile file
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]


# Write the docker-compose.yml file
# The file defines two services, web and redis. The web service is built from the Dockerfile, and redis uses a public image
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"
    
# Run
docker-compose up 

3. Build a blog

# Create and enter directory
mkdir wordpress && cd wordpress

# Write the docker-compose.yml file; it creates a separate Mysql instance with a volume mount for data persistence
version: "3.3"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
    

# Start operation
docker-compose up -d

12, Docker Swarm

1. Environmental preparation

Prepare four servers. Install docker.

2. swarm cluster construction

docker swarm COMMAND
Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm   # Initialize a node (management node)
  join        Join a swarm as a node and/or manager  # Join node
  join-token  Manage join tokens   # Join the node through token
  leave       Leave the swarm    # Leave node
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm  # update

# First, we initialize a node
docker swarm init --advertise-addr <your own ip>   # the intranet address is used here to save traffic charges
docker swarm init --advertise-addr 172.27.0.4

# Output telling us the node was created successfully
Swarm initialized: current node (yamss133bil4gb59fyangtdmm) is now a manager.

To add a worker to this swarm, run the following command:
	
	# Execute this on the other machines to join them to this swarm
    docker swarm join --token SWMTKN-1-3f8p9pq2gp36s6ei0bs9pepqya24n274msin701j9kdt7h3v2z-4yyba7377uz9mfabak6pwu4ci 172.27.0.4:2377

# Command to generate a join token for a manager node
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Run the join command above on machine 2 to add it as a worker node
docker swarm join --token SWMTKN-1-3f8p9pq2gp36s6ei0bs9pepqya24n274msin701j9kdt7h3v2z-4yyba7377uz9mfabak6pwu4ci 172.27.0.4:2377

# We view the node information on machine 1
docker node ls
# One manager node and one worker node, both in Ready state
ID                            HOSTNAME        STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm *   VM-0-4-centos   Ready     Active         Leader           20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos   Ready     Active                          20.10.0

# Now add machine 3 in the same way; it is also a worker node
# Only machine 4 has not joined yet; this time we want it to be a manager node
# Generate the manager join command on machine 1, then execute it on machine 4
docker swarm join-token manager
# The generated command is executed on machine 4
docker swarm join --token SWMTKN-1-3f8p9pq2gp36s6ei0bs9pepqya24n274msin701j9kdt7h3v2z-82pamju7b37aq8e1dcf1xmmng 172.27.0.4:2377
# Machine 4 is now a manager as well
This node joined a swarm as a manager.

# From now on we can run swarm operations on machine 1 or machine 4 (management commands only work on manager nodes)

3. Raft protocol

In the previous steps we built a cluster with two managers and two workers.

Raft protocol: a majority of manager nodes must stay alive for the cluster to remain usable. With two managers, losing one breaks the majority, so a highly available swarm needs at least three managers.
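The majority rule can be put into a formula: with N managers, at least N // 2 + 1 must stay up, so the swarm tolerates (N - 1) // 2 manager failures. A quick sketch:

```python
def quorum(managers: int) -> int:
    """Minimum number of managers that must stay up (Raft majority)."""
    return managers // 2 + 1

def tolerated_failures(managers: int) -> int:
    """How many managers can fail before the swarm stops working."""
    return managers - quorum(managers)   # == (managers - 1) // 2

for n in (1, 2, 3, 5):
    print(n, quorum(n), tolerated_failures(n))
# 2 managers tolerate 0 failures -- losing one of two stops the swarm,
# which is what Experiment 1 shows; 3 managers tolerate 1, 5 tolerate 2.
```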

Experiment 1

Stop docker on machine 1. Only one manager is left alive in the cluster; is the cluster still available?

#We check the node information on machine 4
docker node ls
# It is found that our cluster is no longer available
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded

# Restart docker on machine 1: the cluster becomes usable again, but machine 1 is no longer the leader; leadership has automatically moved to machine 4

Experiment 2

Make a worker node leave the cluster, then view the cluster information.

# We execute on machine 2
docker swarm leave

# View node information on machine 1
docker node ls
# We found that the state of machine 2 is Down
ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm *   VM-0-4-centos    Ready     Active         Reachable        20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos    Down      Active                          20.10.0
u3rlqynazrdiz6oaubnuuyqod     VM-0-11-centos   Ready     Active                          20.10.0
im6kk7qd2a3s9g98lydni6udi     VM-0-13-centos   Ready     Active         Leader           20.10.0

Experiment 3

Now let's also make machine 2 a manager node (the cluster then has three managers), take one manager down at random, and check whether the cluster keeps running normally.

# Run on machine 2
docker swarm join --token SWMTKN-1-3f8p9pq2gp36s6ei0bs9pepqya24n274msin701j9kdt7h3v2z-82pamju7b37aq8e1dcf1xmmng 172.27.0.4:2377

# Shutdown docker of machine 1
systemctl stop docker

# View node information on machine 2
docker node ls
# The cluster is normal, and you can see that machine 1 is down
ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm     VM-0-4-centos    Ready     Active         Unreachable      20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos    Down      Active                          20.10.0
vdwcwr3v6qrn6da40zdrjkwmy *   VM-0-7-centos    Ready     Active         Reachable        20.10.0
u3rlqynazrdiz6oaubnuuyqod     VM-0-11-centos   Ready     Active                          20.10.0
im6kk7qd2a3s9g98lydni6udi     VM-0-13-centos   Ready     Active         Leader           20.10.0

4. Elastic creation service

docker service COMMAND

Commands:
  create      Create a new service  # create a service
  inspect     Display detailed information on one or more services  # view service details
  logs        Fetch the logs of a service or task  # logs
  ls          List services  # list services
  ps          List the tasks of one or more services  # view a service's tasks
  rm          Remove one or more services  # delete a service
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services  # dynamically scale up and down
  update      Update a service  # update

docker service create -p 8888:80 --name n1 nginx  # Create a service; swarm schedules it on some node in the cluster
kj0xokbxvf5uw91bswgp1cukf
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 

# Scale our service to three replicas
docker service update --replicas 3 n1

# Dynamic scaling up and down
docker service scale n1=10   # Same effect as the update command above, just more convenient

docker ps  # view the containers running on this node

A service can be reached through any node in the cluster, and it can run multiple replicas that scale up and down dynamically, achieving high availability.

5. Deploying blogs using Docker stack

Now we want to run the blog from earlier in our cluster, with ten replicas.

# Edit the docker-compose.yml file
version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     deploy:
       replicas: 10
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}


# Start up
docker stack deploy -c docker-compose.yml wordpress

# see
docker service ls

Tags: Docker

Posted by Kelset on Mon, 02 May 2022 04:30:41 +0300