Detailed use of Docker


Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable image and publish it to any popular Linux or Windows machine; it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to one another.

1. Why did docker appear?

	A product goes from development to launch through the operating system, the runtime environment, and the application configuration. Development plus operations has to care about many things, and this is a problem every Internet company must face. In particular, after each version iteration, keeping different versions of the environment compatible is a real test for operations staff.

	The reason Docker has developed so rapidly is that it offers a standard solution to the problems above.

	Environment configuration is so troublesome that moving to a new machine costs time and effort. Many people asked: can we solve the problem at the root, so that software is installed together with its environment? In other words, when installing, copy the original environment exactly. With Docker, developers can eliminate the "it works on my machine" problem when collaborating on code.

	Docker can package everything an application needs to run, environment and configuration included, into an image. Copy that image to another machine via docker, and that machine gains the original machine's environment and configuration, so the application will run on it as well.

2. Concept of docker

	Docker is an open-source application container engine, a cloud open-source project implemented in the Go language and released under the Apache 2.0 license.
With Docker, developers can package an application and its dependencies into a lightweight, portable container and publish it to any popular Linux machine; it can also be used for virtualization.
Containers use a full sandbox mechanism with no interfaces between them (similar to apps on an iPhone); more importantly, the performance overhead of a container is very low.
Since version 17.03, Docker is split into CE (Community Edition) and EE (Enterprise Edition); the Community Edition is enough for our purposes.

	The Docker project's stated goal is "Build, Ship and Run Any App, Anywhere": by managing the life cycle of application components (packaging, distribution, deployment, running), a user's app (a web application, a database application, etc.) and its runtime environment can be "packaged once, run everywhere".

	Linux container technology solved exactly this problem, and Docker was developed on top of it. An application runs inside a Docker container, and Docker containers behave consistently on any operating system, which makes them cross-platform and cross-server: configure the environment once, switch to another machine, and deploy with one click. Operations are greatly simplified.

2.1 what can docker do:

  • Faster delivery and deployment
  • Faster upgrade and capacity expansion
  • Simpler system maintenance

2.2 one sentence description of docker:

A software container that solves the problems of runtime environment and configuration, facilitates continuous integration, and contributes to the overall release of container virtualization technology.
	an open platform to build, ship, and run any app, anywhere

2.3 comparison of docker container virtualization with traditional virtual machines

             	Traditional virtual machine                          	Docker container
Disk usage   	several GB to tens of GB                             	tens of MB to hundreds of MB
CPU/memory   	the guest operating system consumes a lot            	the Docker engine's overhead is very low
Startup speed	minutes (from OS boot to running the project)        	seconds (from starting the container to running the project)
Installation 	requires dedicated operations skills                 	easy to install and manage
Deployment   	time-consuming and laborious every time              	quick and simple from the second deployment onwards
Coupling     	services installed together easily affect each other 	one container per application service, giving isolation
Kernel       	does not require the same or a similar kernel        	shares the host kernel, so a same or similar kernel is required; currently Linux is recommended

3.Docker three elements

  • Image: a packaged application together with its dependencies and environment configuration. An image can create containers; it works like a template, and one image can create multiple containers.

  • Container: Docker uses containers to run one application or a group of applications independently. A container is a running instance created from an image.

  • Warehouse (repository): a place where images are stored centrally, similar to Maven's central repository or GitHub. The public one is DockerHub.

    The relationship between an image and a container is like the relationship between a class and an object in object-oriented programming:

    Docker   	Object-oriented programming
    image    	class
    container	object

4. Installation of docker

Docker on centOS requires at least version 6.5. centOS 7.2 is used here as a demonstration.

1. If docker has been installed before, uninstall it first. Execute the following command

$ yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
Note: `\` here is simply the shell's line-continuation character; it is used so that each package being removed sits on its own line, letting us see what is uninstalled.

2. Install the package and some storage drivers required by docker

$ yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3. Use the following command to set up a stable repository

$ yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

4. Install the latest version of docker Community Edition (Docker CE)

$ yum install -y docker-ce docker-ce-cli

5. Start docker

$ systemctl start docker
$ systemctl enable docker #Start docker automatically on boot

6. Verify that Docker CE is installed correctly by running the hello-world image.

$ docker run hello-world

7. Configure the image accelerator

It is sometimes difficult to pull images from DockerHub in China. At this time, the image accelerator can be configured. Docker officials and many domestic cloud service providers provide domestic accelerator services, such as:

  • Chinese image library officially provided by Docker:
  • Qiniu cloud accelerator:

After configuring an accelerator address, if it is found that the image cannot be pulled, please switch to another accelerator address. All major cloud service providers in China have provided Docker image acceleration services. It is recommended to select the corresponding image acceleration services according to the cloud platform running Docker.

Attachment: configuring the Alibaba Cloud image accelerator

For Docker clients newer than version 1.10.0, you can modify the daemon configuration file /etc/docker/daemon.json to use the accelerator.
Add the following configuration:

{
  "registry-mirrors": [""]
}

$ systemctl daemon-reload  #Reload the daemon configuration
$ systemctl restart docker #Restart docker
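
As a sanity check, the accelerator configuration can be written and inspected from the shell. This is only a sketch: it uses a temporary demo path and a placeholder mirror URL; on a real host you would write to /etc/docker/daemon.json as root and substitute your own accelerator address.

```shell
# Demo path; on a real host this would be /etc/docker/daemon.json (written as root)
conf_dir=/tmp/docker-demo
mkdir -p "$conf_dir"
# The mirror URL below is a placeholder, not a real accelerator address
tee "$conf_dir/daemon.json" > /dev/null <<'EOF'
{
  "registry-mirrors": ["https://your-id.mirror.aliyuncs.com"]
}
EOF
grep -c '"registry-mirrors"' "$conf_dir/daemon.json"   # prints 1 when the key is present
```

After editing the real file, run systemctl daemon-reload and systemctl restart docker as shown above for the change to take effect.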

5. Operation process of docker run

When docker run executes, Docker first looks for the image locally. If it exists locally, a container is created from it and started. If not, Docker tries to pull the image from the registry (Docker Hub by default); if the image cannot be found there an error is returned, otherwise it is downloaded and a container is then created and started from it.

6. Common commands of docker

6.1 auxiliary commands

$ docker version	#Display Docker version information.

$ docker info		#Displays Docker system information, including the number of images and containers.

$ docker --help  	#Help command

6.2 image command

1. View image information

$ docker images	[options]	#List local images
	-a			#List all images (including intermediate image layers)
    -q			#Display only image ids
    --digests	#Display image digest information
    --no-trunc	#Do not truncate output (over-long columns are truncated by default); display in full

2. Search image

$ docker search [options] Image name		#Query the image on DockerHub
	-s N		#List only images with at least N stars (deprecated in newer docker versions; use --filter=stars=N instead)
    --no-trunc	  #Do not truncate output (over-long columns are truncated by default); display in full

Parameter Description:

NAME: name of the image repository source

DESCRIPTION: description of the image

OFFICIAL: whether the image is officially released by docker

STARS: like stars on GitHub; a measure of how well liked the image is.

AUTOMATED: whether the image is built automatically.

3. Download Image

$ docker pull Image name[:TAG|@DIGEST]	
$ docker pull Image name:version

4. Delete image

$ docker rmi Image name:version	  #If no version is specified, the latest version is deleted
	-f		#Force deletion 

6.3 container command

1. Operating the container

$ docker run [OPTIONS] Image name [cmd]			   #Create and start a new container from the image
	-i							#Run the container in interactive mode, usually together with -t
	-t							#Allocate a pseudo-terminal for the container, usually together with -i
	--name alias				   #Give the container a name
	-d							#Start a daemon container (run in the background) and print the container ID
	-p Host port:Container port		 #Specify the port mapping explicitly
	-P							#Let the host assign a random port mapping automatically
	--rm                        #Remove the container automatically when it stops
Example: $ docker run -it --name myTomcat -p 8888:8080 tomcat
    $ docker run -d --name myTomcat -P tomcat

be careful:

#If the following error is reported during startup:
docker: Error response from daemon: driver failed programming external connectivity on endpoint myTomcaaa (1148dccd5673fab421495087d352adc5428ab6ab7cf9f3fd708b662a25d92641):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32807 -j DNAT --to-destination ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)).
#The user-defined iptables chain DOCKER, created when the docker service started, has been cleared for some reason
#Restarting the docker service regenerates the user-defined DOCKER chain

#Restart the docker service before starting the container
systemctl restart docker
docker start foo

2. View running containers

$ docker ps		 #List all running containers
	-a			#Display all containers, including those that are not running
	-l			#Display the most recently created container
	-n N	   #Display the last N created containers
	-q			#Quiet mode: display only container ids
	--no-trunc	 #Do not truncate output (over-long columns are truncated by default); display in full

Output details:

CONTAINER ID: the id of the container.
IMAGE: the IMAGE used.

COMMAND: the COMMAND that runs when the container is started.

CREATED: the creation time of the container.

STATUS: container STATUS.

There are seven states:

  • created
  • restarting
  • running
  • removing (being removed)
  • paused
  • exited
  • dead

PORTS: the container's port information and the connection type used (tcp/udp).

NAMES: automatically assigned container name

3. Exit the container

$ exit		 #Exit and stop the container
$ Ctrl+p+q	 #Exit without stopping the container

4. Enter the container

$ docker attach Container name/container id

5. Delete container

$ docker rm  Container name/container id		  #Delete container
$ docker rm -f 	Container name/container id	  #Delete a running container
$ docker rm -f $(docker ps -aq)	   #Delete all containers
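
The `$(docker ps -aq)` form works through shell command substitution: the inner command runs first and its output becomes the outer command's arguments. A minimal sketch, with `printf` standing in for `docker ps -aq` (since the real command needs a running docker daemon):

```shell
# Fake "container ids", one per line, just as docker ps -aq would print them
ids=$(printf '%s\n' c1 c2 c3)
# Unquoted expansion splits on whitespace, so the ids become separate arguments --
# exactly how `docker rm -f $(docker ps -aq)` receives one container id per argument
echo $ids
```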

6. Restart the container

$ docker start Container name/container id  	    #Open container
$ docker restart Container name/container id  	#Restart container

7. Stop the running container

$ docker stop Container name/container id 	   	#Stop the operation of the container normally
$ docker kill Container name/container id     	#Stop the container immediately

8. View container log

$ docker logs [OPTIONS] Container name/container id	  		
	-t			 #Join time
	-f			 #Follow the latest log print
	--tail number	#Show the last number

9. View the processes in the container

$ docker top Container name/container id   		

10. Check the internal details of the container

$ docker inspect Container name/container id    		

11. Enter the container

$ docker exec [options] Container name/container id In container command   		
	-i		#Run the container in interactive mode, usually with - t
	-t		#Assign a pseudo terminal

eg: docker exec -it centoss ls 

docker exec -it mytomcat /bin/bash

12. Copy

$ docker cp Container name/container id:Path in container Host directory path  		#Copy a resource from the container to the host

[root@localhost~]docker cp centoss:/aaa.txt /root/

#Files can be shared between the host and the container
$ docker cp Host directory path container name/container id:Resource path in container

eg:docker cp /root/bbb.txt centoss:/

Host: the machine on which Docker is installed (here a centos system).
Container: a container started inside Docker from an image, e.g. the centos container.

13. Commit a container as a new image

$ docker commit -a="author" -m="Description information" container ID Target image name:TAG

Example: docker commit -a="nan" -m="witout docs" b35d35f72b8d nan/mytomcat:1.2
	1. Download the tomcat image from dockerHub to the local machine and run it successfully
	2. Delete the docs directory from the container generated in the previous step
	3. Commit the current tomcat container, which no longer has the docs directory, as a template to generate a new image
	4. Start the new image and compare it with the original one

7. Image principle of docker

7.1 what is an image?

	Image is a lightweight and executable independent software package, which is used to package the software running environment and the software developed based on the running environment. It contains all the contents required to run a software, including code, libraries required for runtime, environment variables and configuration files.

7.2 why is a tomcat image so large?

An image is a layered "thousand-layer cake": the tomcat image contains not only Tomcat itself but every layer beneath it, such as the JDK and the base system.

7.3 UnionFS (union file system):

	UnionFS is a layered, lightweight, high-performance file system. It supports stacking modifications to the file system as successive commits, and it can mount different directories under the same virtual file system. UnionFS is the foundation of Docker images. Images can be inherited through layering: starting from a base image (one without a parent image), all kinds of concrete application images can be built.
	Characteristics: multiple file systems are loaded at the same time, but from the outside only one is visible. The union mount stacks all the layers, so the final file system contains all the underlying files and directories.

7.4 Docker image loading principle:

	A docker image is in fact composed of layer upon layer of file systems.
	bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel, and Linux loads the bootfs when it first starts. The bottom layer of a docker image is the bootfs, the same as in any Linux/Unix system, containing the boot loader and the kernel. When booting finishes, the whole kernel is in memory; ownership of that memory passes from the bootfs to the kernel, and the bootfs is unmounted.
	rootfs (root file system) sits above the bootfs and contains the standard directories of a typical Linux system: /dev, /proc, /bin, /etc and so on. The rootfs is what varies between operating system distributions such as Ubuntu and CentOS.
	The centos we install in a virtual machine takes one to several GB; why is the docker image only about 200 MB? For a slimmed-down OS, the rootfs can be very small: it only needs the most basic commands, tools and program libraries, because the container uses the host's kernel directly, so only a rootfs has to be provided. Different Linux distributions share a consistent bootfs and differ in their rootfs; that is why different distributions can share the bootfs.

7.5 why does a docker image adopt this layered structure?

	The biggest benefit is resource sharing. If multiple images are built from the same base image, the host only needs to keep one copy of that base image on disk and load one copy into memory to serve all containers; and every layer of an image can be shared.
	Characteristic: Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image. This layer is usually called the container layer; everything below it is called the image layer.

8.Docker container data volume

8.1 what is a data volume

	Simply put, data volumes are about data persistence, similar to a tape, a removable hard disk or a USB stick, or to the RDB and AOF files in Redis. They are mainly used for persisting container data and sharing data between containers. A volume is a directory or file that exists in one or more containers; it is mounted into the container by docker but is not part of the union file system, so it can bypass the Union File System to provide persistent storage and data sharing. Volumes are designed for data persistence and are completely independent of the container's life cycle, so docker does not delete a mounted data volume when the container is deleted.

8.2 characteristics of data volume

1. Data volumes can share or reuse data between containers.
2. Changes in a volume take effect directly.
3. Changes in a data volume are not included when the image is updated.
4. The life cycle of a data volume lasts until no container uses it any more.
5. Data volumes also enable data sharing from host to container and from container to host.

8.3 adding data volumes

There are two ways to add a data volume to a container: the first is directly with a command; the second is with a Dockerfile.

8.3.1 command addition

1. Add

Command: docker run -it -v /Host path:/Path in container Image name

example: docker run -it -v /hostDataValueme:/containerDataValueme centos

2. Check whether the data volume is mounted successfully

Run docker inspect container id and check whether the JSON output contains the following. If so, the volume was mounted successfully.

"Mounts": [
    {
        "Type": "bind",
        "Source": "/hostDataValueme",
        "Destination": "/containerDataValueme",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
]

Then you can check whether the container and the host really share resources, or whether the data in the host volume can still be read after the container is closed and reopened.

Set the data volume in the container to read-only.

Command: docker run -it -v /Host path:/Path in container:ro Image name
 example: docker run -it -v /hostDataValueme:/containerDataValueme:ro centos

Check configuration file

"Mounts": [
    {
        "Type": "bind",
        "Source": "/hostDataValueme",
        "Destination": "/containerDataValueme",
        "Mode": "ro",
        "RW": false,
        "Propagation": "rprivate"
    }
]

8.3.2 adding a data volume with a Dockerfile

What is a Dockerfile?

Simply put, it is a description file for an image.
  1. Create a new file (dockerfile) in a directory and add the following script to it

For example: create a folder mydocker under the / directory, and create a dockerfile file in that folder

# volume test
FROM centos
VOLUME ["/containerDataValueme1","/containerDataValueme2"]
CMD echo "finished,-------success!"
CMD /bin/bash

Note: you can use the volume instruction in dockerfile to add one or more data volumes to the image.

  2. Build the image from the written file

    $ docker build -f mydocker/dockerfile -t zcn/centos .
    -f # specify the path of the dockerfile
    -t # specify the name of the target image
    . # use the current folder as the build context

    After the image is created, run a container from it. You will find that the data volumes already exist in the container. But a container's volumes are supposed to be shared with the host, so where is the host side reflected? Although we cannot specify a host path with a Dockerfile, docker provides a default host data volume directory for us.

Run docker inspect to check and find:

"Mounts": [
    {
        "Type": "volume",
        "Name": "459e7a4be53a96eee859f11d10bc0b26a6a91bbd6754ecb8e355e9fe4a31e0b9",
        "Source": "/var/lib/docker/volumes/459e7a4be53a96eee859f11d10bc0b26a6a91bbd6754ecb8e355e9fe4a31e0b9/_data",
        "Destination": "/containerDataValueme2",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    },
    {
        "Type": "volume",
        "Name": "4bf41829e4afaebbb40cf3d0d4725343980afffa04f09a30680fa957d80b6af4",
        "Source": "/var/lib/docker/volumes/4bf41829e4afaebbb40cf3d0d4725343980afffa04f09a30680fa957d80b6af4/_data",
        "Destination": "/containerDataValueme1",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]


Benefits of building images with a Dockerfile

For reasons of portability and sharing, the -v host-path form cannot be used directly in a Dockerfile,
because a host directory is specific to a particular host and cannot be guaranteed to exist on every host.

9.DockerFile parsing

9.1. What is a Dockerfile

A Dockerfile is the build file used to construct a docker image: a script composed of a series of commands and parameters.

Construction steps:
1. Write the Dockerfile
2. docker build to create the image
3. docker run to run the container

Dockerfile content basics

1. Each reserved-word instruction must be uppercase and followed by at least one parameter.
2. Instructions are executed from top to bottom.
3. # indicates a comment.
4. Each instruction creates a new image layer and commits it.
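
Because each instruction commits a layer, related shell steps are commonly chained with `&&` inside a single RUN to avoid piling up layers. A minimal sketch (the package names are only illustrative):

```dockerfile
FROM centos
# One RUN instruction -> one image layer; chaining with && avoids
# creating three separate layers for three yum commands
RUN yum -y install vim \
 && yum -y install net-tools \
 && yum clean all
```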

Reserved word instructions in a Dockerfile

Reserved word: function
FROM: the image the current image is based on
MAINTAINER: name and email address of the image maintainer
RUN: commands to run while the image is being built
EXPOSE: the port number the current container exposes
WORKDIR: the working directory the terminal lands in by default after the container is created; a foothold
ENV: set environment variables during the image build
ADD: copy files from the host directory into the image; ADD automatically handles URLs and unpacks tar archives
COPY: similar to ADD, copies files and directories into the image; copies the file/directory from the <source path> in the build context to the <destination path> in the new image layer
VOLUME: container data volume, used for saving and persisting data
CMD: a command to run when the container starts; a Dockerfile may contain several CMD instructions but only the last one takes effect, and CMD is replaced by the arguments given after docker run
ENTRYPOINT: a command to run when the container starts; like CMD it specifies the container's startup program and its arguments, but docker run arguments are appended to it instead of replacing it
ONBUILD: registers a command to run when a child Dockerfile builds on this image; after the parent image is inherited, the parent's ONBUILD is triggered
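
The practical difference between CMD and ENTRYPOINT is easiest to see side by side; a minimal sketch (the ls example is only illustrative):

```dockerfile
FROM centos
ENTRYPOINT ["ls"]   # fixed startup program; docker run arguments are appended to it
CMD ["-a"]          # default argument; replaced entirely by any docker run arguments
# docker build -t demo .
# docker run demo           -> executes: ls -a
# docker run demo -l /tmp   -> executes: ls -l /tmp  (CMD replaced, ENTRYPOINT kept)
```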

9.2 analysis of dockerfile construction process

1. Docker runs a container from the base image
2. Executes an instruction and modifies the container
3. Performs an operation similar to docker commit to commit a new image layer
4. Docker then runs a new container based on the just-committed image
5. Executes the next instruction in the dockerfile, until all instructions have been executed

9.3 summary

1. From the perspective of the application software, the Dockerfile, the docker image and the docker container represent three different phases of the software:
	the Dockerfile is the raw material of the software
	the docker image is the deliverable of the software
	the docker container can be regarded as the software in its running state
2. The Dockerfile faces development, the docker image is the delivery standard, and the docker container covers deployment and operations; all three are indispensable and together form the cornerstone of the docker system.

1. Dockerfile: you define a Dockerfile, and it defines everything the process needs: executable code or files, environment variables, dependency packages, the runtime environment, dynamic link libraries, the OS distribution, the service process and kernel processes (when the application must interact with system services and kernel processes, things such as namespaces have to be considered in the design), and so on.
2. docker image: once the Dockerfile is defined, docker build produces a docker image; only when the image is run does it actually start providing the service.
3. docker container: the container is what provides the service directly.

10.Dockerfile case

Base image scratch: 99% of the images on Docker Hub are built by installing and configuring the required software on top of a base image.

10.1 custom image myCentos

  1. First learn about Centos on Docker Hub.

    The centos image on DockerHub lands in / by default and does not come with vim. We will now customize a centos image that changes the default foothold and supports vim.

  2. Write Dockerfile

    FROM centos
    ENV MYPATH /tmp
    WORKDIR $MYPATH
    RUN yum -y install vim
    EXPOSE 80
    CMD echo $MYPATH
    CMD echo "build-------success"
    CMD /bin/bash

  3. Build image

    $ docker build -f /myDocker/dockerfile -t zcn/mycentos .

  4. Run container

    $ docker run -it -P zcn/mycentos

10.2 custom image myTomcat

  1. First create the directory /mydockerfile/tomcat

    Put the tar packages of tomcat and jdk8 in this directory. Then create a dockerfile file.

  2. Write Dockerfile

    FROM centos
    #Copy c.txt from the host's current build context into the container's /usr/local/ path
    COPY ./c.txt /usr/local/cincontainer.txt
    #Copy the tar packages of tomcat and the jdk into the container
    ADD ./apache-tomcat-9.0.22.tar.gz /usr/local/
    ADD ./jdk-8u171-linux-x64.tar.gz /usr/local/
    #Install the vim editor
    RUN yum -y install vim
    #Set the login foothold to /usr/local
    ENV MYPATH /usr/local/
    WORKDIR $MYPATH
    #Configure environment variables for java and tomcat
    ENV JAVA_HOME /usr/local/jdk1.8.0_171
    ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.22
    ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.22
    #The port the container listens on at runtime
    EXPOSE 8080
    #Run tomcat at startup (the lines below are alternative start methods; only the last CMD takes effect)

    #ENTRYPOINT ["/usr/local/apache-tomcat-9.0.22/bin/startup.sh"]

    #CMD ["/usr/local/apache-tomcat-9.0.22/bin/catalina.sh","run"]

    CMD /usr/local/apache-tomcat-9.0.22/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.22/logs/catalina.out

Create an image from a java project's jar package

FROM centos
#Copy the tar package of the jdk into the container
ADD ./jdk-8u171-linux-x64.tar.gz /usr/local/
#Set the login foothold /usr/local
ENV MYPATH /usr/local/
#Configure environment variables for java
ENV JAVA_HOME /usr/local/jdk1.8.0_171
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#Put the jdk on the PATH so the java command can be found
ENV PATH $JAVA_HOME/bin:$PATH
COPY ./yingx_zhangcn184s-0.0.1-SNAPSHOT.jar yingx_zhangcn184s-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","./yingx_zhangcn184s-0.0.1-SNAPSHOT.jar"]
  1. Build image

    $ docker build -t zcn/mytomcat .
    -f is not written here because it can be omitted when the build file is in the current directory and is named exactly Dockerfile.

  2. Run container

    $ docker run -d -p 8888:8080 --name mytomcat -v /zcn/tomcat/test:/usr/local/apache-tomcat-9.0.22/webapps/test -v /zcn/tomcat/logs:/usr/local/apache-tomcat-9.0.22/logs --privileged=true zcn/mytomcat

Explanation: create data volumes mapping the test project directory under webapps in the container to the test directory on the host, and the container's tomcat logs directory to the host's logs directory

10.3 summary

11. Install mysql

11.1 find mysql image on DockerHub

$ docker search mysql

11.2 pull mysql locally

$ docker pull mysql:5.6

11.3 running mysql container

$ docker run -p 3333:3306 --name mysql -v /zcn/mysql/conf:/etc/mysql/conf.d -v /zcn/mysql/logs:/logs -v /zcn/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6 

11.4 entering mysql container

$ docker exec -it container id /bin/bash

11.5 mysql operation

#Enter mysql  
mysql -u root -p 
#Enter password 123456

#Query mysql database    
show databases;

#Create database    
create database ems;

#Switch to the ems database
use ems;

#Build table
create table t_book(id int not null primary key, name varchar(20));

#Query all tables
show tables;

#Insert a piece of data into the table  
insert into t_book values(1,'java');

#Query the data of this table 
select * from t_book;

#Try to connect to this mysql with Navicat on win10

12. Install redis

12.1 find redis image on DockerHub

$ docker search redis

12.2 pull redis locally

$ docker pull redis:4.0.14

12.3 running redis container

$ docker run -it --name redis -p 6379:6379 redis:4.0.14
#Equivalent to
$ docker run -it --name redis -p 6379:6379 redis:4.0.14 redis-server

12.4. Connect to redis client

$ docker exec -it container id redis-cli

12.5 start on a specified port

$ docker run -d --name redise -p 6370:6379 redis:4.0.14
#Connect, specifying the port
$ docker exec -it container id redis-cli -h host-ip -p 6370

#Command description:
-h : specify the address to connect to (here the host's IP, written as the placeholder host-ip, since 6370 is the host-side port)
-p 6370 : specify the port to connect to

12.6 data persistence mode startup

docker run -p 6379:6379 -v $PWD/data:/data --name redis -d redis:4.0.14 redis-server --appendonly yes

#Command description:
-p 6379:6379 : map the container's port 6379 to the host's port 6379
-v $PWD/data:/data : mount the data directory under the host's current directory to /data in the container
redis-server --appendonly yes : run the redis-server start command in the container and turn on redis persistence

12.7 specify profile startup

1. Create a profile

#Create a folder, create a new profile, paste the profile downloaded from the official website and modify it

mkdir /usr/local/docker
vim /usr/local/docker/redis.conf

2. Modify the configuration file

port 7000  

Profile introduction

bind 127.0.0.1       #Bind the network interface IP of the redis server. The default is 127.0.0.1, the local loopback address, in which case the redis service can only be accessed by local clients and not over a remote connection. If the bind option is empty, connections from all available network interfaces are accepted.

protected-mode yes    #The default is yes. The protection mode is enabled and restricted to local access. Change to no and turn off the protection mode

port 6379           #Specify the port on which redis runs. The default is 6379. Since redis is a single thread model, the port will be modified when multiple redis processes are opened on a single machine.

timeout 0           #Sets the timeout in seconds when the client connects. When the client does not issue any instructions during this period of time, close the connection. The default value is 0, which means it is not closed.

daemonize no   #By default, no is changed to yes, which means that it is started as a daemon and can be run in the background. Unless the kill process is changed to yes, redis will fail to start in the configuration file mode

databases 16   #The default value of the number of databases is 16, which means that Redis has 16 databases by default.

appendonly no #By default, redis uses rdb persistence, which is enough in many applications. Change yes to enable aof persistence
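Putting the directives above together, a minimal redis.conf might look like the following sketch (written to /tmp purely for illustration; the tutorial keeps it at /usr/local/docker/redis.conf). The bind/protected-mode values shown are one reasonable choice for allowing remote clients, not the only one.

```shell
#!/bin/sh
# Sketch: a minimal redis.conf built only from the directives described above.
cat > /tmp/redis.conf <<'EOF'
bind 0.0.0.0
protected-mode no
port 7000
timeout 0
daemonize no
databases 16
appendonly yes
EOF
```

Note that daemonize stays no: under Docker, redis must run in the foreground or the container exits immediately.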

3. Start with the configuration file

docker run -p 7001:7000 --name myredis -v /usr/local/docker/redis.conf:/etc/redis/redis.conf -v /usr/local/docker/data:/data -d redis:4.0.14 redis-server /etc/redis/redis.conf --appendonly yes

#Command explanation:
-p 7001:7000 Port mapping: the host port comes first, the container port second (the configuration file above sets port 7000).
-v Mount a directory; the host:container order is the same as for port mapping.
Why directories need to be mounted at all: docker is a sandbox-isolated container, which is both its defining feature and its security mechanism, so a container cannot freely access resource directories on the host; the mount mechanism is what grants that access.
-d redis:4.0.14 run the redis image in the background
redis-server /etc/redis/redis.conf start redis with the configuration file: the container loads its /etc/redis/redis.conf, which resolves to the mounted host file /usr/local/docker/redis.conf

--appendonly yes enable redis AOF persistence

4. Connect the client

$ docker exec -it myredis redis-cli -p 7000

13. Using a remote repository

1. Prepare the image

2. Create an image repository on Alibaba Cloud

3. Push the local image to the Alibaba Cloud image repository

$ sudo docker login <registry-address>
#The Alibaba Cloud account password is required to log in
$ sudo docker tag [ImageId] <registry-address>/<namespace>/<repository>:[image tag]
$ sudo docker push <registry-address>/<namespace>/<repository>:[image tag]

$ sudo docker login <registry-address>
$ sudo docker tag zcn/mytomcat <registry-address>/<namespace>/mytomcat:1.0
$ sudo docker push <registry-address>/<namespace>/mytomcat:1.0
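Since the tag and push commands need a concrete registry address, here is the generic sequence as a dry-run sketch that only prints the commands. Every value is a placeholder to replace with your own; the REGISTRY shown follows the usual Aliyun endpoint pattern but is only an example.

```shell
#!/bin/sh
# Dry-run sketch of the tag-and-push flow; prints the commands only.
# All values below are placeholders.
REGISTRY="registry.cn-hangzhou.aliyuncs.com"
NAMESPACE="your-namespace"
LOCAL_IMAGE="zcn/mytomcat"
TAG="1.0"

TARGET="$REGISTRY/$NAMESPACE/mytomcat:$TAG"
echo "docker login $REGISTRY"
echo "docker tag $LOCAL_IMAGE $TARGET"
echo "docker push $TARGET"
```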

4. Pull the image from the Alibaba Cloud remote repository to the local machine

$ docker pull <Alibaba Cloud image address>:<tag>
 Just copy the address from the repository page in the console
 Public network address: <registry-address>/<namespace>/<repository>

$ docker pull <registry-address>/<namespace>/mytomcat:1.0

14. Install Elasticsearch

Note: raise the kernel's vm.max_map_count limit

#On the CentOS host, modify the configuration file sysctl.conf
	$ vim /etc/sysctl.conf
#Add the following line
	vm.max_map_count=262144 
#Apply the configuration
	$ sysctl -p
#Note: this step is to prevent the following errors from being reported when starting the container:
bootstrap checks failed max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
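To keep the setting across reboots, the line can be appended to /etc/sysctl.conf only when it is not already there. A minimal sketch with the file path as a parameter, so it can be tried on a scratch file first; `ensure_max_map_count` is our own helper name.

```shell
#!/bin/sh
# Sketch: idempotently add vm.max_map_count=262144 to a sysctl config file.
ensure_max_map_count() {
    f="$1"
    # append only if no vm.max_map_count line is present yet
    grep -q '^vm.max_map_count=' "$f" 2>/dev/null \
        || echo 'vm.max_map_count=262144' >> "$f"
}
```

On the real host: `ensure_max_map_count /etc/sysctl.conf && sysctl -p` (root required).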

14.1 pull es to local

$ docker pull elasticsearch:6.8.2

14.2 start es container

$ docker run -d --name es -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms128m -Xmx128m"  elasticsearch:6.8.2

#Parameter description
-p: map a port in the docker container to a host port, written host-port:container-port. It can be given multiple times; consecutive port ranges can be joined with a hyphen, e.g. 4560-4600:4560-4600
-v: map a file or directory in the docker container to a host path, written -v host-path:container-path. After mapping, the files can be edited directly on the host. It can also be given multiple times. The configuration files of the software in a docker image usually live under /usr/share/{software name}
-e: specify environment variables

14.3 access test

Access address: http://serverip:9200

14.4 installation of kibana

1. Download kibana image to local

$ docker pull kibana:6.8.2

2. Start kibana container

$ docker run -d --name kibana -e ELASTICSEARCH_URL=http://<es-host>:9200 -p 5601:5601 kibana:6.8.2

3. Visit kibana

Access address: http://serverip:5601

14.5 Installing the IK tokenizer

1. Download the matching version of the IK tokenizer

$ wget

2. Unzip it into the plugins/elasticsearch folder

#Install unzip
$ yum install -y unzip  
#Unzip the zip using unzip
$ unzip <ik-zip-file> -d plugins/elasticsearch 

3. Add user-defined extension words and stop words

cd plugins/elasticsearch/config
vim IKAnalyzer.cfg.xml

	<comment>IK Analyzer Extended configuration</comment>
	<!--Users can configure their own extended dictionary here -->
	<entry key="ext_dict">ext_dict.dic</entry>
	<!--Users can configure their own extended stop word dictionary here-->
	<entry key="ext_stopwords">ext_stopwords.dic</entry>

4. Create profile

#Create ext_dict.dic in the config directory under the IK tokenizer directory; the file must be encoded as UTF-8 to take effect
$ vim ext_dict.dic   #add the extension words
#Create ext_stopwords.dic in the same config directory
$ vim ext_stopwords.dic   #add the stop words
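The two dictionary files can also be created non-interactively instead of with vim. A sketch using /tmp and example words only; IK expects one word per line, UTF-8 encoded, and in the tutorial the files live in the plugin's config directory.

```shell
#!/bin/sh
# Sketch: create the IK extension and stop-word dictionaries, one word per line.
# CONF_DIR and the words are examples.
CONF_DIR=/tmp/ik-config
mkdir -p "$CONF_DIR"
printf '%s\n' "myword1" "myword2" > "$CONF_DIR/ext_dict.dic"
printf '%s\n' "of" "the"          > "$CONF_DIR/ext_stopwords.dic"
```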

5. Commit this container as a new image

$ docker commit -a="zcn" -m="with IKAnalyzer" <container-id> zcn/elasticsearch:6.8.2

6. Use the newly generated es image to create a container and mount the data volume

$ docker run -d --name es -p 9200:9200 -p 9300:9300 -e ES_JAVA_OPTS="-Xms128m -Xmx128m" -v /usr/local/IKAnalyzer:/usr/share/elasticsearch/plugins/elasticsearch/config zcn/elasticsearch:6.8.2

#Parameter description
-p map a container port to a host port, written host-port:container-port; can be repeated, and consecutive port ranges can be joined with a hyphen, e.g. 4560-4600:4560-4600
-e ES_JAVA_OPTS="-Xms128m -Xmx128m"  #specify environment variables; here the JVM initial and maximum heap sizes, to limit the container's memory use
-v map a host path onto a path in the container, written -v host-path:container-path; after mounting, the files can be edited directly on the host. Can be repeated. The configuration files of the software in a docker image usually live under /usr/share/{software name}.

15. Install RabbitMQ

15.1 find RabbitMQ image on DockerHub

$ docker search rabbitmq

15.2 pull RabbitMQ locally

$ docker pull rabbitmq:management

15.3 running RabbitMQ container

$ docker run -d --hostname rabbitmq -p 5671:5671 -p 5672:5672 -p 4369:4369 -p 25672:25672 -p 15671:15671 -p 15672:15672 --name okong-rabbit rabbitmq:management

#Parameter Description:

-d run the container in the background;
--name specify the container name;
-p specify the ports the service runs on (5672: application access port; 15672: web console port);
-v map directories or files;
--hostname set the host name (an important note for RabbitMQ: it stores data under the so-called node name, which defaults to the host name);
-e specify environment variables (RABBITMQ_DEFAULT_VHOST: default virtual host name; RABBITMQ_DEFAULT_USER: default user name; RABBITMQ_DEFAULT_PASS: default user password)
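As an illustration of the -e variables, here is a dry-run sketch that builds (and only prints) a run command with explicit credentials; the user, password, and vhost values are examples.

```shell
#!/bin/sh
# Dry-run sketch: compose a rabbitmq run command with the -e variables
# described above. Values are examples only.
RABBIT_USER="admin"
RABBIT_PASS="secret"
RABBIT_VHOST="/dev"

CMD="docker run -d --hostname rabbitmq -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=$RABBIT_USER \
-e RABBITMQ_DEFAULT_PASS=$RABBIT_PASS \
-e RABBITMQ_DEFAULT_VHOST=$RABBIT_VHOST \
--name okong-rabbit rabbitmq:management"

echo "$CMD"
```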

15.4. Accessing RabbitMQ


Access address: http://serverip:15672

Login account: guest, password: guest

16. Install Nginx

16.1 find nginx image on DockerHub

$ docker search nginx

16.2 Pull nginx locally

$ docker pull nginx

16.3 running Nginx containers

$ docker run --name nginx-test -p 8080:80 -d nginx

#Parameter Description:

--name nginx-test: Container name.
-p 8080:80:  Port mapping, mapping the local 8080 port to the 80 port inside the container.
-d nginx:  Run the nginx container in the background.

16.4. Access Nginx

Access address: http://serverip:8080

17. Portainer installation and configuration

17.1 introduction

Portainer is an open source, lightweight Docker management UI. Built on the Docker API, it provides a status dashboard, quick deployment from application templates, basic operations on containers, images, networks, and data volumes (including pulling and pushing images, creating containers, etc.), event log display, a container console, centralized management of Swarm clusters and services, login user management and access control, and more. Its features are comprehensive enough to cover essentially all the container management needs of small and medium-sized organizations.

17.2 installation and use

See the official manual for installation and use: .

It is recommended to directly use docker for installation, which is convenient and fast.

17.3 pull the docker image first

# Search image
$ docker search portainer/portainer
# Pull image
$ docker pull portainer/portainer

17.4 running portainer image

$ docker run -d -p 9001:9000 -v /root/portainer:/data -v /var/run/docker.sock:/var/run/docker.sock --name dev-portainer portainer/portainer

#Parameter Description:
-d # run the container in the background
-p 9001:9000 # map port 9001 on the host to port 9000 in the container
-v /var/run/docker.sock:/var/run/docker.sock # mount the Unix domain socket that the host's Docker daemon listens on by default into the container
-v /root/portainer:/data # mount the host directory /root/portainer to the container's /data directory
--name dev-portainer # specify the name of the running container

Note: when starting the container you must mount the host's /var/run/docker.sock to /var/run/docker.sock in the container, otherwise Portainer cannot communicate with the Docker daemon.

17.5 Connecting to Portainer

Connection address: http://serverip:9001

18. Image making of springboot project

1. Import maven dependency


        <!-- plug-in that adds jsp support -->

        <!-- project build and packaging plug-in provided by springboot -->


Notes:

1) The version of the springboot packaging plug-in must be 1.4.2.RELEASE; only this version supports jsp, higher versions do not
2) All front-end resources under webapp must be packaged into the META-INF/resources directory of the jar, otherwise they will not be recognized

2. Package the project into a jar

Notes:

After packaging, test the jar locally:

java -jar <jar-name>.jar

3. Copy the jar package to the host

4. Configure the Dockerfile

Prepare in the current directory: jdk-8u171-linux-x64.tar.gz and yingx192_zhangcns-0.0.1-SNAPSHOT.jar

FROM centos
#Copy the jdk tar package into the container (ADD also extracts it)
ADD ./jdk-8u171-linux-x64.tar.gz /usr/local/
#Set the working directory to /usr/local
ENV MYPATH /usr/local/
WORKDIR $MYPATH
#Configure the java environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_171
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $JAVA_HOME/bin:$PATH
COPY ./yingx192_zhangcns-0.0.1-SNAPSHOT.jar yingx192_zhangcns-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","./yingx192_zhangcns-0.0.1-SNAPSHOT.jar"]

5. Build image

docker build -t yingx .
-f is not written here because when the file is in the current directory and named exactly Dockerfile, the -f option can be omitted.

6. Run image

Note: the container-side port of the mapping must be the port number the project listens on

docker run -it --name yingx -p 9292:9090 yingx

7. Visit the project

Posted by zampu on Sun, 08 May 2022 16:22:41 +0300