Virtualization is the foundation of cloud computing. Virtualization technology turns one physical computer into multiple logical computers that run at the same time, each able to run a different operating system, so applications run in independent spaces without affecting one another and the hardware is used more efficiently. In short, virtualization lets multiple virtual machines run on one physical server: the virtual machines share the CPU, memory, and I/O hardware of the physical machine, but remain logically isolated from one another.
Introduction to Virtualization:
Put simply, virtualization takes one complete resource and divides it into multiple logical parts so that every part can be put to use: less waste, higher utilization, lower cost.
Virtualization technology first appeared on IBM mainframes in the 1960s and became widespread with the System/370 series in the 1970s.
A piece of software called the Virtual Machine Monitor (VMM) is installed on the physical hardware; the VMM creates and controls multiple virtual machine instances, and each VM can run its own operating system and application software.
Virtualization is a broad term that can mean different things in different environments. In computer science it denotes the abstraction of computing resources in general, not just the concept of a virtual machine.
Take the abstraction of physical memory as an example: virtual memory technology makes an application believe it has a contiguous address space, while in reality its code and data may be split into many fragments (pages or segments) and even swapped out to external storage such as disk or flash. Even when physical memory is scarce, the application still runs smoothly.
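This abstraction can be observed directly on any Linux host: /proc/self/maps lists the fragments (mappings) behind a process's seemingly contiguous address space. A minimal illustration:

```shell
# Each process sees a private virtual address space; the kernel backs it
# with many separate mappings (pages/segments), listed in /proc/self/maps.
head -n 3 /proc/self/maps     # the first few mappings of this process
grep -c '' /proc/self/maps    # how many distinct mappings make up the space
```

The address ranges printed are virtual; the same range in two different processes refers to different physical pages.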
Full Virtualization: the key word is "full": the VMM simulates the complete underlying hardware for the virtual machine, including the processor, physical memory, clock, and peripherals.
Advantage: operating systems and other system software originally designed for physical hardware can run in the virtual machine without any modification.
Disadvantages:
- Sooner or later the guest must go through the VMM: for full virtualization, the VMM has to present itself to the VM as complete hardware and provide every hardware call interface
- The execution of privileged instructions must be trapped and emulated, which costs performance
Paravirtualization (semi-virtualization): a technique in which the parts of the guest OS code that access privileged state are modified to interact with the VMM directly. In a paravirtualized virtual machine, some hardware interfaces are provided to the guest operating system in software, through hypercalls (direct calls from the Guest OS into the VMM, analogous to system calls). For example, the Guest OS's page-table-switching code is modified to issue a hypercall that updates the shadow CR3 register and performs the address translation directly. Because no extra exceptions are generated and some hardware behavior no longer needs to be emulated, paravirtualization can improve performance considerably. Well-known paravirtualizing VMMs include Denali and Xen.
Direct (pass-through): the virtual machine uses physical hardware resources directly (requires hardware support and is not yet complete).
Advantages and disadvantages of virtualization
Advantages:
- Centralized management (remote management and maintenance)
- Improved hardware utilization (physical resources are often underutilized outside peak hours; virtualization puts the "idle" capacity to work)
- Dynamic adjustment of machine and resource configuration (virtualization decouples applications and services from the hardware, improving flexibility)
- High reliability (additional features and schemes such as load balancing, migration, recovery, and replication can be deployed transparently to the application environment)
Disadvantages:
- High initial cost (up-front hardware investment)
- Can reduce efficiency in specific scenarios (resource-hungry applications are not necessarily suited to virtualization)
- Larger failure blast radius (if the physical machine goes down, every virtual machine on it becomes unavailable, and the files inside the virtual machines may be damaged)
- Complex implementation, configuration, and management (operations and troubleshooting are harder for administrators)
- Certain restrictions (virtualization must be used with servers, applications, and vendors that support or are compatible with it)
- Security (virtualization technology introduces its own security risks)
Typical problems that virtualization solves:
- Operating system isolation:
In a LAMP architecture, if the services require strict security isolation, the Apache document root and the MySQL data directory must not be able to reach each other. If an Apache vulnerability is exploited, an attacker can reach the MySQL data directory through the Apache process and obtain the data in MySQL. This is a serious security hazard; to eliminate it, kernel-level isolation can be achieved with virtualization technology.
- Combination of software and hardware:
Some software and hardware features cannot be used because the hardware and operating system are incompatible or unsupported (one of the hardest problems). With virtualization, software and hardware are decoupled through the virtualization layer's drivers: as long as the virtualization layer can recognize the software or hardware, they can be used together.
- Port conflicts:
Apache and Nginx both listen on port 80 and would otherwise have to be separated with a reverse proxy. Running them on the same machine also means that important data files of both Apache and Nginx could be leaked together; virtualization can isolate the two services.
Comparison before and after virtualization:
Before virtualization:
- Each host runs a single operating system
- Software and hardware are tightly coupled
- Running multiple applications on the same host often creates conflicts
- Low utilization of system resources (e.g. 5%)
- Hardware is expensive and inflexible
After virtualization:
- The interdependence between operating system and hardware is broken
- Encapsulated as virtual machines, the operating system and applications are managed as a single unit
- Strong security and fault isolation
- Virtual machines are hardware-independent and can run on any hardware
KVM stands for Kernel-based Virtual Machine. It is a Linux kernel module that turns Linux itself into a hypervisor:
- It was developed by Qumranet, which Red Hat acquired in 2008.
- It supports x86 (32- and 64-bit), s390, PowerPC, and other CPUs.
- It has been included in the Linux kernel as a module since Linux 2.6.20.
- It requires a CPU with virtualization extensions.
- It is completely open source.
KVM is an open-source, Linux-native full virtualization solution for x86 hardware with virtualization extensions (Intel VT or AMD-V). Under KVM, a virtual machine is implemented as a regular Linux process scheduled by the standard Linux scheduler, and each virtual CPU of the machine is a regular Linux thread. This lets KVM reuse the existing facilities of the Linux kernel.
KVM itself, however, performs no hardware emulation. A user-space program must set up the guest's address space through the /dev/kvm interface, supply it with emulated I/O, and map its video output back to the host display. Today that program is QEMU.
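A quick way to see this interface on a host is to check for the /dev/kvm character device, which appears once the kvm module is loaded (a sketch; it prints a message either way):

```shell
# /dev/kvm is the character device QEMU opens and drives with ioctl calls.
test -c /dev/kvm && echo "KVM device available" || echo "KVM device not available"
```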
User space, kernel space and virtual machine on Linux:
Guest: the guest system, including its CPU (vCPU), memory, and drivers (console, network card, I/O device drivers, etc.); KVM runs it in a restricted CPU mode.
KVM: runs in kernel space and provides virtualization of CPU and memory as well as interception of guest I/O. Guest I/O intercepted by KVM is handed to QEMU for processing.
QEMU: a QEMU modified for KVM virtual machines. It runs in user space, provides hardware I/O virtualization, and interacts with KVM through ioctl calls on the /dev/kvm device.
The functions supported by KVM include:
- Support for CPU and memory overcommit
- Support for paravirtualized I/O (virtio)
- Support for hot plugging (CPU, block devices, network devices, etc.)
- Support for symmetric multiprocessing (SMP)
- Support for Live Migration
- Support for PCI device pass-through and Single Root I/O Virtualization (SR-IOV)
- Support for Kernel Same-page Merging (KSM)
- Support for NUMA (Non-Uniform Memory Access)
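Some of these features can be checked quickly on a host (a sketch; the KSM sysfs path is only present when the kernel supports it):

```shell
# CPUs that SMP guests can draw on
nproc
# 1 means kernel same-page merging (KSM) is enabled; the file may be absent
cat /sys/kernel/mm/ksm/run 2>/dev/null || echo "KSM interface not present"
```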
- Guest mode: the mode in which the guest operating system runs; the guest itself is further divided into its own kernel mode and user mode
- User mode: provides the user-space tools for virtual machine management and performs I/O on behalf of the guest. QEMU works in this mode: through the libkvm interface it drives the KVM module in the kernel via ioctl and obtains virtualized physical resources for the virtual machines
- Linux kernel mode: virtualizes memory and CPU, implements the switch into guest mode, and handles exits from guest mode. KVM runs in this mode and provides the CPU and memory virtualization that the QEMU component calls on
QEMU in user mode enters kernel mode through the libkvm interface via the ioctl system call. The KVM driver creates virtual CPUs and virtual memory for the virtual machine, then executes the VMLAUNCH instruction to enter guest mode, where the Guest OS is loaded and run. If an exception occurs while the Guest OS is running, its execution is paused, the current state is saved, and control exits to kernel mode to handle the exception.
While handling the exception in kernel mode, if no I/O is needed, guest mode is re-entered once the handling finishes. If I/O is needed, control enters user mode and QEMU processes the I/O; when it is done, control returns to kernel mode and then back to guest mode.
Creating a virtual machine with KVM
Change the host name:
[root@localhost ~]# hostnamectl set-hostname kvm
[root@localhost ~]# su
Set the mirror disc to auto / permanent mount:
Make sure the CD is connected before mounting
[root@kvm ~]# vim /etc/fstab
/dev/cdrom /mnt iso9660 defaults 0 0    #Add at the end
[root@kvm ~]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@kvm ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       292G  4.5G  288G   2% /
devtmpfs       devtmpfs  3.8G     0  3.8G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G  9.1M  3.9G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1      xfs       297M  157M  141M  53% /boot
tmpfs          tmpfs     781M  4.0K  781M   1% /run/user/42
tmpfs          tmpfs     781M   32K  781M   1% /run/user/0
/dev/sr0       iso9660   4.3G  4.3G     0 100% /mnt
Create the local repository:
[root@kvm ~]# cd /etc/yum.repos.d/
[root@kvm yum.repos.d]# ls
CentOS-Base.repo  CentOS-fasttrack.repo  CentOS-Vault.repo  CentOS-CR.repo  CentOS-Media.repo  CentOS-Debuginfo.repo  CentOS-Sources.repo
[root@kvm yum.repos.d]# mkdir backup
[root@kvm yum.repos.d]# mv C* backup/
[root@kvm yum.repos.d]# vi local.repo
[local]
name=kvm
baseurl=file:///mnt
gpgcheck=0
enabled=1
Reload the yum repository:
[root@kvm yum.repos.d]# yum clean all
Loaded plugins: fastestmirror, langpacks
Cleaning repos: local
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
Cleaning up list of fastest mirrors
[root@kvm yum.repos.d]# yum makecache
Loaded plugins: fastestmirror, langpacks
local                                      | 3.6 kB  00:00
(1/4): local/group_gz                      | 156 kB  00:00
(2/4): local/filelists_db                  | 3.1 MB  00:00
(3/4): local/primary_db                    | 3.1 MB  00:00
(4/4): local/other_db                      | 1.2 MB  00:00
Determining fastest mirrors
Metadata Cache Created
Turn off the firewall and SELinux enforcement:
[root@kvm yum.repos.d]# systemctl stop firewalld
[root@kvm yum.repos.d]# setenforce 0
Install the KVM basic components:
Install the GNOME desktop environment (not needed if a graphical interface is already installed):
[root@kvm yum.repos.d]# yum groupinstall -y "GNOME Desktop"
KVM module:
[root@kvm yum.repos.d]# yum -y install qemu-kvm
KVM debugging tools (optional):
[root@kvm yum.repos.d]# yum -y install qemu-kvm-tools
Command-line tool for building virtual machines:
[root@kvm yum.repos.d]# yum -y install virt-install
qemu component for creating disks, starting virtual machines, etc.:
[root@kvm yum.repos.d]# yum -y install qemu-img
Network support tools:
[root@kvm yum.repos.d]# yum -y install bridge-utils
Virtual machine management tool:
[root@kvm yum.repos.d]# yum -y install libvirt
GUI for managing virtual machines:
[root@kvm yum.repos.d]# yum -y install virt-manager
Check whether the CPU supports virtualization (look for the vmx flag on Intel CPUs, or svm on AMD):
[root@kvm ~]# cat /proc/cpuinfo | grep vmx
Check whether KVM module is installed:
[root@kvm ~]# lsmod | grep kvm
kvm_intel             170086  0
kvm                   566340  1 kvm_intel
irqbypass              13503  1 kvm
Set the system to boot into the graphical interface by default:
[root@kvm ~]# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
Two network modes of KVM:
- NAT: the default mode; packets are translated through the host interface. The virtual machine can reach the external network, but outside machines cannot reach the virtual machine
- Bridge: the virtual machine gets a network presence like an independent host; external machines can access the virtual machine directly, but the network card must support bridging (wired cards generally do)
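Once the packages from the installation steps are in place, the two modes can be inspected with commands like these (a sketch; each falls back to a message if the tool is missing):

```shell
# "default" is the NAT network behind virbr0
command -v virsh >/dev/null && virsh net-list --all || echo "virsh not installed"
# lists bridges such as virbr0 (NAT) or br0 (bridged)
command -v brctl >/dev/null && brctl show || echo "brctl not installed"
```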
Deploy using Bridge mode:
[root@kvm ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
Create and edit bridge network card:
[root@kvm ~]# cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-br0 [root@kvm ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0
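The file contents are not shown above; a typical pair looks like the following (a sketch: the IP address matches the br0 output below, while the gateway and other values are illustrative and must match your own network):

```shell
# ifcfg-ens33: the physical NIC carries no IP and is enslaved to the bridge
TYPE=Ethernet
BOOTPROTO=none
DEVICE=ens33
NAME=ens33
ONBOOT=yes
BRIDGE=br0

# ifcfg-br0: the bridge takes over the host's IP configuration
TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
NAME=br0
ONBOOT=yes
IPADDR=192.168.100.133      # illustrative; use the host's address
NETMASK=255.255.255.0
GATEWAY=192.168.100.2       # illustrative; use your gateway
```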
Restart the network card:
[root@kvm ~]# systemctl restart network
[root@kvm ~]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.133  netmask 255.255.255.0  broadcast 192.168.
        inet6 fe80::b22d:afe3:b593:dd16  prefixlen 64  scopeid 0x20<lin
        ether 00:0c:29:18:f1:cd  txqueuelen 1000  (Ethernet)
        RX packets 109  bytes 9383 (9.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 117  bytes 18338 (17.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:18:f1:cd  txqueuelen 1000  (Ethernet)
        RX packets 683939  bytes 1013462127 (966.5 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 262919  bytes 16121133 (15.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.12
        ether 52:54:00:c9:4d:d4  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
KVM deployment and management:
Create directories for KVM images and storage data, and upload the CentOS 7 image:
[root@kvm ~]# mkdir -p /data_kvm/iso
[root@kvm ~]# mkdir -p /data_kvm/store
[root@kvm ~]# cd /data_kvm/iso/
[root@kvm iso]# ls
[root@kvm iso]# ll
total 0
[root@kvm iso]# rz -E
rz waiting to receive.
[root@kvm iso]# ll
total 221184
-rw-r--r--. 1 root root 226492416 Jan  4  2018 CentOS-7-x86_64-DVD-1708.iso
Managing virtual machines with Virtual Machine Manager
Create the storage pools (iso, store):
[root@kvm ~]# virt-manager    #the Virtual Machine Manager window will pop up
Then keep clicking Forward and finally click Finish to reach the CentOS installation screen.
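Alternatively, the same virtual machine can be created non-interactively with the virt-install tool installed earlier (a sketch: the VM name, memory, vCPU count, and disk path are illustrative; the ISO path matches the one uploaded above):

```shell
# Create a CentOS 7 guest from the uploaded ISO, backed by a qcow2 disk
# in the storage directory and attached to the br0 bridge.
virt-install \
  --name centos7-test \
  --ram 2048 \
  --vcpus 2 \
  --cdrom /data_kvm/iso/CentOS-7-x86_64-DVD-1708.iso \
  --disk path=/data_kvm/store/centos7-test.qcow2,size=20,format=qcow2 \
  --network bridge=br0 \
  --os-variant rhel7 \
  --graphics vnc
```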