Getting started with front-end Docker

The purpose of this article is to give readers a general understanding of the Docker ecosystem. Some familiarity with Linux is assumed; if you are new to Linux, it is worth learning the basics first.

1. Docker

1.1 what is Docker

Docker is an open-source engine built on Linux container technology. It provides a unified API for applications to run in isolation while accessing the host kernel, and it tries to solve the age-old developer problem of "it works on my machine".

Front-end developers can think of an image as an npm package and a registry as the npm registry; the analogy makes the concepts easier to grasp.
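To make the analogy concrete, here is a rough (and deliberately loose) mapping between the two toolchains; the commands on each side are real, but the correspondence is conceptual only:

```shell
# A rough mapping between npm and Docker workflows (conceptual, not exact):
#
#   npm install <pkg>   ~   docker pull <image>   # fetch by name from a central registry
#   npm publish         ~   docker push <image>   # upload to the registry
#   node_modules/       ~   local image cache     # inspect with: docker image ls
#
# Both resolve a name plus an optional version/tag against a remote registry.
analogy="npm:registry ~ docker:registry"
echo "$analogy"
```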

1.2 why use Docker

Docker can be seen as a lightweight alternative to virtual machine technology. Virtual machines are slow to start, and the virtualized hardware never fits the physical machine perfectly when running programs. A typical example is mobile development: booting a virtual device takes a very long time.

We often spin up a virtual machine just to isolate a single application, yet creating a virtual machine requires a complete guest OS. That is using a sledgehammer to crack a nut, and the cost scales accordingly.

Docker instead isolates applications using core features of the Linux kernel itself, which is how the Docker system we have today came about.

The following figure shows the comparison between virtual machine and Docker architecture:

The following figure shows the function comparison of container virtual machine:

This is why Docker can start in seconds: it skips system initialization (kernel init) and uses the host's kernel directly. Virtual machines, by comparison, are also far less convenient to migrate.

With Docker you can build and configure an application environment quickly, simplify operations, and guarantee a consistent runtime environment: "build once, run anywhere". You also get application-level isolation, elastic scaling, and rapid expansion.

1.3 basic concepts of docker

1.3.1 images

An image is a special file system. Besides the programs, libraries, resources, and configuration files a container needs at run time, it also includes configuration parameters prepared for running (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built.

Built on a union file system, an image provides a read-only template for running applications. It can provide a single function, or multiple images can be layered on top of each other to compose multi-function services.

1.3.2 containers

An image only defines what the isolated application needs; a container is the process that runs the image. Inside, the container provides a complete file system, network, process space, and so on. It is fully isolated from the outside environment and cannot be intruded on by other applications.

For persistent reads and writes a container must use **volumes** or the host's storage; data written inside a running container is lost once the container is removed. Each time a container is started from an image, a brand-new container is created.

1.3.3 repositories

A Docker repository is a place where image files are centrally stored. Once an image is built it runs easily on the current host, but to use it on other servers we need a centralized service for storing and distributing images, such as a Docker Registry. The terms repository and registry are sometimes confused and not strictly distinguished. The concept is similar to Git: a registry server can be understood as a hosting service like GitHub. In fact, a Docker Registry can contain multiple repositories; each repository can contain multiple tags, and each tag corresponds to one image. In short, an image repository is where Docker centrally stores image files, much like the code repositories we already use.

A repository usually contains images of different versions of the same software, with tags corresponding to the versions. We specify which version we mean using the format `<repository>:<tag>`. If no tag is given, `latest` is used as the default tag.
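The fallback-to-`latest` rule can be sketched in plain POSIX shell string handling (a simplified illustration only; it ignores registry hosts that contain a `:` port):

```shell
# Sketch: how a "<repository>:<tag>" reference falls back to "latest"
ref="nginx"                           # user typed no tag
tag="${ref##*:}"                      # text after the last ':'
[ "$tag" = "$ref" ] && tag="latest"   # no ':' at all -> default tag
repo="${ref%%:*}"                     # text before the first ':'
echo "resolved: $repo:$tag"           # -> resolved: nginx:latest
```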

Repositories come in two forms:

  • Public repositories
  • Private repositories

1.3.4 Docker client

Docker client is a generic term for anything that sends requests to a given Docker Engine and performs the corresponding container-management operations. It can be the docker command-line tool or any client that follows the Docker API. The community maintains many kinds of Docker clients, covering common languages such as C# (with Windows support), Java, Go, Ruby, and JavaScript, and even web-UI clients written with the Angular library, which is enough to meet most users' needs.

1.3.5 Docker Engine

Docker Engine is Docker's core background process. It responds to requests from Docker clients and translates them into system calls that carry out container-management operations. The process starts an API Server in the background to receive requests sent by Docker clients; each received request is dispatched through an internal router inside the Docker Engine to the specific function that executes it.
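Since the CLI is only one client of that API Server, the API can also be queried directly over the Engine's local unix socket. A guarded sketch (it degrades gracefully on machines without a running daemon):

```shell
# Query the Docker Engine API directly over its unix socket.
# The docker CLI is just one client of this same API.
if [ -S /var/run/docker.sock ] && command -v curl >/dev/null 2>&1; then
  api_out=$(curl --silent --unix-socket /var/run/docker.sock http://localhost/version)
else
  api_out="docker socket not available"
fi
echo "$api_out"
```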

2. Practical Docker

2.1 installing Docker

All environments in this article are running under CentOS 7.

First, remove all old versions of Docker.

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate

If it is a new environment, you can skip this step.

Because of network restrictions in mainland China, installing Docker CE from the official site can be impractical, so we use a domestic mirror to speed up the installation. Here we install via the Aliyun mirror.

# step 1: install some necessary system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add software source information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update and install docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start Docker service
sudo service docker start

After installation, you can run docker version to check whether the installation is successful.
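For scripts and CI, the check can be automated; `--format` extracts a single field from `docker version` using a Go template. A guarded sketch that works even where Docker is absent:

```shell
# Scripted install check: print just the client version if possible.
if command -v docker >/dev/null 2>&1; then
  ver=$(docker version --format '{{.Client.Version}}' 2>/dev/null || echo "docker present, version query failed")
else
  ver="docker not installed"
fi
echo "client version: $ver"
```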

➜  ~ docker version
Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 00:58:10 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:56:46 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

2.2 get an image

Now we need to pull an nginx image and deploy an nginx application.

➜  ~ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
68ced04f60ab: Pull complete 
28252775b295: Pull complete 
a616aa3b0bf2: Pull complete 
Digest: sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b
Status: Downloaded newer image for nginx:latest

After pulling, use docker image ls to view the list of current docker local images.

➜  ~ docker image ls
REPOSITORY                      TAG                            IMAGE ID            CREATED             SIZE
nginx                           latest                         6678c7c2e56c        13 hours ago        127MB

Running the same command, docker pull nginx, again will update the local image.

2.3 running a Docker container

Create a shell script file and write the following:

# Notes on the options used below (comments cannot sit between the
# backslash line continuations, so they are grouped up here):
#
# --restart  restart policy after the container stops:
#              no:         do not restart when the container exits
#              on-failure: restart when the container exits with a non-zero status
#              always:     always restart when the container exits
# -d         run in the background; without -d, leaving the command line
#            also stops the container
# -p         bind a host port to a container port
# --expose   declare an exposed container port, overriding the image's
# -v         map a host directory into the container
# --name     name the container for later management; the links feature needs it
# The final argument is the image used to create the container.
docker run \
	--restart=always \
	-d \
	-p 8080:80 \
	--expose=80 \
	-v /wwwroot:/usr/share/nginx/html \
	--name=testdocker \
	nginx

Keep in mind that a Docker container's network is isolated from the host. Unless the container is configured to use the host's network mode, it cannot be reached directly.

Now run the script, then open a browser at http://ip:8080 and you will see the application served by the nginx image.
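The same start-up can be condensed to a single guarded command, followed by a curl smoke test (the names, ports, and paths match the example above; the nginx image is assumed to be pulled already):

```shell
# One-line equivalent of the script above, plus an HTTP smoke test.
if command -v docker >/dev/null 2>&1; then
  docker run --restart=always -d -p 8080:80 \
    -v /wwwroot:/usr/share/nginx/html --name=testdocker nginx
  # -w prints just the HTTP status code; 200 means nginx answered
  status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || echo "curl failed")
else
  status="docker not available"
fi
echo "$status"
```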

2.3.1 concise version of command parameters

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
  -d, --detach=false         Run the container in the background (default false)
  -i, --interactive=false    Keep STDIN open for console interaction
  -t, --tty=false            Allocate a tty device for terminal login (default false)
  -u, --user=""              User to run as inside the container
  -a, --attach=[]            Attach to a container (must have been started with docker run -d)
  -w, --workdir=""           Working directory inside the container
  -c, --cpu-shares=0         CPU weight of the container, used when CPU is shared
  -e, --env=[]               Set an environment variable available inside the container
  -m, --memory=""            Maximum memory limit of the container
  -P, --publish-all=false    Publish all exposed container ports to the host
  -p, --publish=[]           Publish a container port to the host
  -h, --hostname=""          Hostname of the container
  -v, --volume=[]            Mount a storage volume or host path into a container directory
  --volumes-from=[]          Mount the volumes of another container into this container's directories
  --cap-add=[]               Add a Linux capability (see the capability list)
  --cap-drop=[]              Drop a Linux capability (see the capability list)
  --cidfile=""               Write the container's ID to the given file after starting; typically used by monitoring systems
  --cpuset=""                CPUs the container may use; can be used to dedicate CPUs to a container
  --device=[]                Add a host device to the container (device passthrough)
  --dns=[]                   DNS servers for the container
  --dns-search=[]            DNS search domains for the container, written to its /etc/resolv.conf
  --entrypoint=""            Override the image's entry point
  --env-file=[]              File of environment variables, one variable per line
  --expose=[]                Declare an exposed container port, overriding the image's exposed ports
  --link=[]                  Link to another container to use its IP, env, and other information
  --lxc-conf=[]              Container configuration file, only with --exec-driver=lxc
  --name=""                  Container name for later management; the links feature needs it
  --net="bridge"             Container network settings:
                               bridge: use the bridge specified by the docker daemon
                               host: the container uses the host's network
                               container:NAME_or_ID: share another container's network resources (IP, ports)
                               none: the container gets its own network namespace (as with --net=bridge) but unconfigured
  --privileged=false         Run as a privileged container owning all capabilities
  --restart="no"             Restart policy after the container stops:
                               no: do not restart when the container exits
                               on-failure: restart when the container exits with a non-zero status
                               always: always restart when the container exits
  --rm=false                 Automatically remove the container when it stops (not supported with docker run -d)
  --sig-proxy=true           Proxy received signals to the process; SIGCHLD, SIGSTOP and SIGKILL cannot be proxied

2.4 accessing containers

We can use docker exec -it [docker container id] /bin/bash to enter the running container.

There are two ways to exit the container:

  1. Type exit at the command line.
  2. Press the shortcut Ctrl+P followed by Ctrl+Q to detach.

The above two methods can exit from the container and keep the container running in the background.
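Besides an interactive shell, `docker exec` can also run a single one-off command and return, which is handy in scripts. A guarded sketch using the testdocker container from section 2.3:

```shell
# Run one command inside a running container without opening a shell.
if command -v docker >/dev/null 2>&1; then
  # nginx -v prints the nginx version from inside the container
  exec_out=$(docker exec testdocker nginx -v 2>&1 || echo "container not running")
else
  exec_out="docker not available"
fi
echo "$exec_out"
```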

2.5 customize an image: Dockerfile

A Dockerfile is divided into four parts: base image information, maintainer information, image build instructions, and the instruction executed when the container starts.

Here is a simple Dockerfile for starting a Node development environment.

# 1. Base image
FROM node:12.0
# Working directory for subsequent RUN, CMD and ENTRYPOINT instructions
WORKDIR /workspace
# RUN executes its command at build time, in the directory set by WORKDIR
# (the registry URL is a common npm mirror; substitute your own)
RUN npm install --registry=https://registry.npmmirror.com
# Expose ports 8080, 8001 and 8800
# (ports can also be published later with docker run)
EXPOSE 8080 8001 8800
# Default command; if the host enters via docker run -it /bin/bash,
# this command is not executed. The instruction that is never
# overridden that way is ENTRYPOINT.
CMD ["npm","run","dev-server"]

Save, exit the editor, and run docker build -t nodeapp:v1.0 . Pay attention to the final ., which indicates the current directory (the build context).

After running, use docker image ls to check whether there is a compiled image.

At this point, some readers may wonder: does npm install have to reinstall the packages every time?

In fact, if your application's dependencies will not change and the image is built specifically for this application, you can consider using the ADD instruction to bake node_modules into the image. (This rarely happens in practice, because a data volume mapped in from outside would overwrite the directory; it is mentioned here only to demonstrate adding files to an image.)
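A more common answer to the npm install question is layer caching: copy only the package manifests first, so the `npm install` layer is reused until the manifests change. A sketch that writes such a Dockerfile (the file name is illustrative):

```shell
# Write a Dockerfile that caches the dependency layer separately.
cat > Dockerfile.cached <<'EOF'
FROM node:12.0
WORKDIR /workspace
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev-server"]
EOF
# Two COPY layers: manifests first, then the rest of the source tree.
grep -c '^COPY' Dockerfile.cached   # -> 2
```

Because Docker rebuilds a layer only when its inputs change, editing source files now invalidates only the second COPY, not the `npm install` layer.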

2.6 multi-container startup: Docker Compose

Docker Compose needs to be installed separately.

Let's assume a scenario in which we start a front-end project: we need nginx to serve the front-end pages and a database to record data, to make the application complete. This is where Docker Compose comes in.

First of all, you should know that Docker Compose works with the following two concepts:

  • Service: an application container; in practice it can run multiple instances of the same image.
  • Project: a complete business unit composed of a set of associated application containers.

Go back to the directory we created earlier and create a docker-compose.yml file to configure multiple containers.

version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - /wwwroot:/usr/share/nginx/html
  redis:
    image: "redis:alpine"

After running docker-compose up, we can see via docker stats that two containers (web and redis) have started.

Visit http://ip:8080 and you will see the same web page as before.
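For day-to-day work with the docker-compose.yml above, a few lifecycle commands cover most needs. A guarded sketch (docker-compose is assumed to be on the PATH where available):

```shell
# Everyday docker-compose lifecycle commands.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose ps            # list this project's service containers
  compose_msg="compose available"
else
  compose_msg="docker-compose not installed"
fi
echo "$compose_msg"
# Other common commands:
#   docker-compose up -d      # start all services in the background
#   docker-compose logs web   # view one service's logs
#   docker-compose down       # stop and remove containers and the default network
```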

2.7 network

Because of this isolation, a Docker container cannot be reached directly over the host's network. We therefore need to bind ports on the host to the container.

Section 2.3 described how to bind ports to expose a container. Now let's look at container interconnection.

# Create a docker network
$ docker network create -d bridge my-net
# Start two containers joined to the my-net network
$ docker run -dit --rm --name busybox1 --network my-net busybox sh
$ docker run -dit --rm --name busybox2 --network my-net busybox sh
# Then enter busybox1 (busybox ships sh, not bash)
$ docker exec -it busybox1 sh
# ping the other container by name; its IP is resolved automatically
/ # ping busybox2
PING busybox2 (<busybox2-ip>): 56 data bytes
64 bytes from <busybox2-ip>: seq=0 ttl=64 time=0.072 ms
64 bytes from <busybox2-ip>: seq=1 ttl=64 time=0.118 ms
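To confirm which containers joined the user-defined bridge, the network can be inspected with a Go template. A guarded sketch using the my-net network from above:

```shell
# List the names of containers attached to the my-net network.
if command -v docker >/dev/null 2>&1; then
  net_out=$(docker network inspect my-net \
    --format '{{range .Containers}}{{.Name}} {{end}}' 2>&1 || echo "network missing")
else
  net_out="docker not available"
fi
echo "$net_out"
```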

3. Expand knowledge

3.1 Docker principle

Docker is written in Go language and uses a series of features provided by Linux kernel to realize its functions.

A system that can execute Docker is divided into two parts:

  • Core components of Linux
  • Docker related components

The Linux core module functions used by Docker include the following:

  • Cgroups – used to allocate hardware resources
  • Namespaces – used to isolate the execution space of different containers
  • AUFS (chroot) – used to build the file systems of different containers
  • SELinux – used to secure the container's network
  • Netlink – used for communication between different containers
  • Netfilter – network firewall packet filtering based on container ports
  • AppArmor – protects the network and execution security of the container
  • Linux Bridge – lets containers, on the same or different hosts, communicate

3.2 how Docker runs on Mac and Windows

A virtual machine runs Linux, Docker Engine runs inside that Linux, and the Docker client runs natively on the host.

3.3 why you should not start containers with CMD ["node"]

A concern particularly relevant to front-end developers: why can't CMD ["node","app.js"] be used as the default startup command? Because the official Node.js best practices say: "Node.js was not designed to run as PID 1 which leads to unexpected behavior when running inside of docker."

This involves how Linux works. In short, the process with PID 1 is the system's init process: it adopts all orphan processes and sends them shutdown signals at the appropriate time.

Inside such a container, however, PID 1 is node, and node does not reap orphan processes. So if your application spawns work the way crawlers do, finished processes re-parent to PID 1 and accumulate, and the container slowly blows up.


There are two workarounds:

  1. Start the application via `/bin/bash`.
  2. Add `--init` to `docker run`, which makes Docker initialize a process as PID 1; that Docker-provided process reaps all orphan processes.
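Both workarounds can be sketched as follows (the file name is illustrative). Option 2 is generally the cleaner choice, since `--init` injects a tiny init process as PID 1 that forwards signals and reaps orphans without changing the image:

```shell
# Option 1: a shell wrapper as the entry command, written to a demo Dockerfile.
cat > Dockerfile.init-demo <<'EOF'
FROM node:12.0
WORKDIR /workspace
# the shell, not node, becomes the entry command
CMD ["/bin/bash", "-c", "node app.js"]
EOF
# Option 2: keep CMD ["node","app.js"] and add --init at run time instead:
#   docker run --init my-node-image
grep -n 'bin/bash' Dockerfile.init-demo
```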

Tags: Front-end Web Development

Posted by fallen00sniper on Mon, 02 May 2022 04:39:21 +0300