Install Ceph and OpenStack on multiple nodes
Network environment
| hostname | network card | ip |
| --- | --- | --- |
| master | em1 | 10.201.7.10 |
|        | em2 | 10.10.10.10 |
|        | em3 | connected to the switch, but no IP |
| node1  | em1 | 10.201.7.11 |
|        | em2 | 10.10.10.11 |
| node2  | em1 | 10.201.7.12 |
|        | em2 | 10.10.10.12 |
- 10.201.7.0/24: reachable from the external network; this is the default route for all three machines
- 10.10.10.0/24: LAN used for communication between the three machines
- All IPs are configured statically
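Before continuing, it can be worth confirming that each host reaches its peers on both networks. A minimal check from master, using the interface names and addresses from the table above:

```bash
# Interfaces should carry the expected addresses
ip addr show em1
ip addr show em2
# master -> node1 over the public and cluster networks
ping -c 2 10.201.7.11
ping -c 2 10.10.10.11
```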
Renaming multiple network cards (master node1 node2)
- Rename the network card configuration file

mv /etc/sysconfig/network-scripts/ifcfg-ens32 /etc/sysconfig/network-scripts/ifcfg-em1

- Edit the network card name and device

vim /etc/sysconfig/network-scripts/ifcfg-em1
NAME=em1
DEVICE=em1
- Disable the predictable network interface naming scheme

vim /etc/default/grub
# Append net.ifnames=0 biosdevname=0 to GRUB_CMDLINE_LINUX, after "quiet"

Run grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate the GRUB configuration and update the kernel parameters.
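For illustration, the edited line in /etc/default/grub could look like the following (the pre-existing arguments such as crashkernel=auto rhgb quiet are only an assumed example; just the two trailing options are added):

```bash
# /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0"
# Regenerate the GRUB configuration so the new kernel parameters apply on the next boot
grub2-mkconfig -o /boot/grub2/grub.cfg
```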
- Edit the udev rule

vim /etc/udev/rules.d/70-persistent-net.rules
# Adjust ATTR{address} to the card's MAC address and set NAME; KERNEL generally does not need changing (if unsure, it can be set to *)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1e:67:ce:19:58", ATTR{type}=="1", KERNEL=="eth*", NAME="em1"
Static ip configuration (master node1 node2)
# ifcfg-em1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=em1
DEVICE=em1
ONBOOT=yes
GATEWAY=10.201.7.254
IPADDR=10.201.7.10   # change per machine
NETMASK=255.255.255.0
DNS1=61.128.128.68

# ifcfg-em2
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=em2
DEVICE=em2
ONBOOT=yes
GATEWAY=10.10.10.1
IPADDR=10.10.10.10   # change per machine
NETMASK=255.255.255.0
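After the ifcfg files are written, restarting the network service applies the addresses. A minimal sketch, assuming the classic network-scripts service is in use rather than NetworkManager:

```bash
systemctl restart network
ip -4 addr show em1
ip -4 addr show em2
ip route show default   # the default route should leave via em1 through 10.201.7.254
```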
Basic configuration
Change node hostname (master node1 node2)
hostnamectl set-hostname master   # on master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2
Modify hosts (master node1 node2)
# Edit /etc/hosts on master first
10.201.7.10 master
10.201.7.11 node1
10.201.7.12 node2
# Then copy it to the other nodes
scp /etc/hosts root@node1:/etc
scp /etc/hosts root@node2:/etc
Generate an SSH key pair (master)
# When prompted with "Enter passphrase", just press Enter so the passphrase stays empty
ssh-keygen
Copy the public key to each Ceph node (master)
ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@master
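A quick check that passwordless SSH now works from master to every node:

```bash
# Each command should print the remote hostname without asking for a password
for h in master node1 node2; do ssh root@$h hostname; done
```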
Configure yum source (master node1 node2)
yum clean all
# CentOS base repositories pointed at the Tsinghua mirror (e.g. /etc/yum.repos.d/CentOS-Base.repo)
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
yum install epel-release
sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!//download\.fedoraproject\.org/pub!//mirrors.tuna.tsinghua.edu.cn!g' \
    -e 's!http://mirrors\.tuna!https://mirrors.tuna!g' \
    -i /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install python-pip
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pip -U
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
- Set up the Ceph repository, which pins the Ceph version to install (master node1 node2)
vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
- Refresh the yum cache and update

yum clean all
yum makecache
yum update
Turn off the firewall and SELinux (master node1 node2)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Set time zone (master node1 node2)
hwclock
tzselect
# Choose 5, 9, 1, 1 in turn; this yields TZ='Asia/Shanghai'; export TZ
# Then add the following two lines to /etc/profile:
vim /etc/profile
TZ='Asia/Shanghai'
export TZ
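On CentOS 7 the same result can usually be achieved in one step with timedatectl; shown here only as an alternative sketch to editing /etc/profile:

```bash
timedatectl set-timezone Asia/Shanghai
timedatectl status   # should report "Time zone: Asia/Shanghai"
```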
Configure NTP time synchronization server (master node1 node2)
yum -y install ntpdate
ntpdate -u ntp.api.bz
crontab -e
# Add the following line to resynchronize every 20 minutes
*/20 * * * * ntpdate -u ntp.api.bz > /dev/null 2>&1
systemctl reload crond.service
Install docker (master node1 node2)
# 1. Set up the yum (docker) source
# 2. Uninstall old versions of docker
yum remove -y docker \
              docker-client \
              docker-client-latest \
              docker-common \
              docker-latest \
              docker-latest-logrotate \
              docker-logrotate \
              docker-engine
# 3. Install the required tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# 4. Install docker
yum install docker-ce docker-ce-cli containerd.io
# 5. Modify the configuration
mkdir -p /etc/systemd/system/docker.service.d
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
# 6. Enable on boot and restart
sudo systemctl enable docker && sudo systemctl daemon-reload && sudo systemctl restart docker
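A quick sanity check that the daemon restarted and picked up the registry mirror:

```bash
systemctl is-active docker
docker info | grep -i -A1 "registry mirrors"   # the daocloud mirror should be listed
```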
System partition (master node1 node2)
- Partition the disk with parted or fdisk
- Refresh the partition table with partprobe
- Create a PV with pvcreate
# Example: create a new partition with parted
parted
(parted) p
(parted) mkpart
(parted) xfs
(parted) 0GB
(parted) 400GB
(parted) p
(parted) q
partprobe
pvcreate /dev/sda5

# Example: create a new partition with fdisk
fdisk /dev/sda
Command (m for help): `n`
Select (default p): `l`
First sector (1268789248-1874329599, default 1268789248):
Last sector, +sectors or +size{K,M,G} (1268789248-1874329599, default 1874329599): `+500G`
Command (m for help): `w`
partprobe
pvcreate /dev/sda5
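Whichever tool is used, the result can be verified before handing the device to ceph-ansible:

```bash
lsblk /dev/sda   # the new partition (sda5 in this example) should appear
pvs              # /dev/sda5 should be listed as an LVM physical volume
```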
Using ceph-ansible to install Ceph (docker version)
ceph-ansible installation
- stable-4.0 supports the Ceph nautilus release; this branch requires Ansible 2.8.
- Download the stable-4.0 branch: https://github.com/ceph/ceph-ansible/tree/stable-4.0
virtualenv ceph-env
source ceph-env/bin/activate
pip install --upgrade pip
unzip ceph-ansible-stable-4.0.zip
cd ceph-ansible-stable-4.0 && pip install -r requirements.txt
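A quick check that the virtualenv ended up with the Ansible release that stable-4.0 expects:

```bash
ansible --version | head -n1   # should report ansible 2.8.x
```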
Ceph deployment with docker
- File preparation
cd ceph-ansible-stable-4.0
cp site-container.yml.sample site-container.yml
cp dummy-ansible-hosts ansible-hosts
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
- Configuration modification

`vim ansible-hosts`

[mons]
master
node1
node2

[osds]
master
node1
node2

[mgrs]
master
node1
node2

[clients]
master
node1
node2

[rgws]
master
node1
node2

[mdss]
master
node1
node2

[grafana-server]
master

[nfss]
master

`vim site-container.yml`

- hosts:
  - mons
  - osds
  - mdss
  - rgws
  - nfss
#  - rbdmirrors
  - clients
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!
  - mgrs
  - grafana-server

`vim group_vars/all.yml (cat all.yml | grep -Ev '^$|#')`

generate_fsid: true
monitor_interface: em1
public_network: 10.201.7.0/24
cluster_network: 10.10.10.0/24
osd_objectstore: bluestore
radosgw_interface: em1
ceph_docker_image: "ceph/daemon"
ceph_docker_image_tag: latest-nautilus
ceph_docker_registry: 10.201.7.116:4000   # private registry address; defaults to docker.io
ceph_docker_on_openstack: true
containerized_deployment: true
openstack_config: true
openstack_glance_pool:
  name: "images"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_cinder_pool:
  name: "volumes"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_nova_pool:
  name: "vms"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_cinder_backup_pool:
  name: "backups"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
  - "{{ openstack_nova_pool }}"
  - "{{ openstack_cinder_backup_pool }}"
openstack_keys:
  - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
  - { name: client.nova, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
dashboard_admin_user: admin
dashboard_admin_password: 123456
grafana_admin_user: admin
grafana_admin_password: 123456

`vim group_vars/osds.yml (cat osds.yml | grep -Ev '^$|#')`

devices:
  - /dev/sda5
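Before launching the playbook it can save time to confirm that Ansible parses the inventory and can reach every host (run from inside ceph-ansible-stable-4.0):

```bash
ansible-inventory -i ansible-hosts --graph   # show the parsed host groups
ansible -i ansible-hosts all -m ping         # confirm SSH connectivity to all hosts
```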
- Deploy

ansible-playbook -i ansible-hosts site-container.yml    # deployment
docker exec ceph-mon-master ceph -s                     # check cluster status
http://10.201.7.127:8443/                               # dashboard verification
mount -t ceph 10.10.3.10:6789:/ /root/ceph-fs -o name=admin,secret=AQCEqx5fr9HHIhAAg+y9/irA9vJN0MOQEXXRUw==   # cephfs check
- Uninstall

ansible-playbook -i ansible-hosts infrastructure-playbooks/purge-container-cluster.yml
- Client installation (master node1 node2)

# The Glance API node needs the Python bindings for librbd
sudo yum install python-ceph
sudo yum install python-rbd
# The nova-compute, cinder-backup and cinder-volume nodes need the Python bindings and the Ceph CLI tools
sudo yum install ceph-common
- Configure virtualization (optional) (master node1 node2)

# On the compute nodes, add the secret key to libvirt and remove the temporary copy of the key
# Install and start the kvm/libvirt services and enable them at boot
yum install libvirt -y
systemctl start libvirtd
systemctl enable libvirtd
yum install -y virt-manager libvirt-client
# The UUID does not have to be identical on every compute node, but keeping it the same makes the platform easier to keep consistent
uuidgen
492e6b76-d5ae-410d-b1e7-1b8b76230e37
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>492e6b76-d5ae-410d-b1e7-1b8b76230e37</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
# sudo virsh secret-set-value --secret 492e6b76-d5ae-410d-b1e7-1b8b76230e37 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
sudo virsh secret-set-value --secret 492e6b76-d5ae-410d-b1e7-1b8b76230e37 --base64 AQBVrB5fAAAAABAA4U/FaQyzgXqnDXTQbbVWEQ== && rm secret.xml
virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 492e6b76-d5ae-410d-b1e7-1b8b76230e37  ceph client.cinder secret
# To remove the secret again: virsh secret-undefine 492e6b76-d5ae-410d-b1e7-1b8b76230e37
Physical (non-containerized) Ceph deployment (for reference only)
- File preparation

cd ceph-ansible-stable-4.0
cp site.yml.sample site.yml
cp dummy-ansible-hosts ansible-hosts
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
- Configuration modification

`vim ansible-hosts`

[mons]
master
node1
node2

[osds]
master
node1
node2

[mgrs]
master
node1
node2

[clients]
master
node1
node2

[rgws]
master
node1
node2

[grafana-server]
master

[nfss]
master

`vim site.yml`

- hosts:
  - mons
  - osds
#  - mdss
  - rgws
  - nfss
#  - rbdmirrors
  - clients
  - mgrs
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!
  - grafana-server
#  - rgwloadbalancers

`vim group_vars/all.yml (cat all.yml | grep -Ev '^$|#')`

ceph_origin: repository
ceph_repository: community
ceph_mirror: https://mirrors.tuna.tsinghua.edu.cn/ceph/
ceph_stable_key: https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
ceph_stable_release: nautilus
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
ceph_stable_redhat_distro: el7
monitor_interface: em1
public_network: 10.201.7.0/24
cluster_network: 10.10.10.0/24
osd_objectstore: bluestore
radosgw_interface: "{{ monitor_interface }}"
dashboard_admin_user: admin
dashboard_admin_password: 123456
grafana_admin_user: admin
grafana_admin_password: 123456

`vim group_vars/osds.yml (cat osds.yml | grep -Ev '^$|#')`

devices:
  - /dev/sda5
- Deploy

ansible-playbook -i ansible-hosts site.yml    # deployment
ceph -s                                       # check cluster status
http://10.201.7.127:8443/                     # dashboard verification
mount -t ceph 10.10.3.10:6789:/ /root/ceph-fs -o name=admin,secret=AQCEqx5fr9HHIhAAg+y9/irA9vJN0MOQEXXRUw==   # cephfs test
- Uninstall

ansible-playbook -i ansible-hosts infrastructure-playbooks/purge-cluster.yml
Using kolla-ansible to install OpenStack (docker version)
Environment configuration
- Shut down some services

systemctl stop libvirtd.service
systemctl disable libvirtd.service
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables.service
systemctl disable iptables.service
kolla-ansible installation
- Download the train version of kolla-ansible: https://codeload.github.com/openstack/kolla-ansible/zip/stable/train

virtualenv kolla-env
source kolla-env/bin/activate
pip install --upgrade pip
unzip kolla-ansible-stable-train.zip && cd kolla-ansible-stable-train
pip install -r requirements.txt -r test-requirements.txt
git init
python setup.py install
pip install ansible
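A quick check that the tooling landed inside the virtualenv:

```bash
which kolla-ansible
ansible --version | head -n1
```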
OpenStack deployment
- File preparation

# Copy the OpenStack configuration files
sudo mkdir -p /etc/kolla
sudo cp etc/kolla/* /etc/kolla/
sudo cp ansible/inventory/* /etc/kolla/
kolla-genpwd   # generate passwords

# Additional configuration files for the OpenStack/Ceph integration
mkdir -p /etc/kolla/config/cinder/{cinder-volume,cinder-backup}
cp /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
cp /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
cp /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/
cp /etc/ceph/ceph.conf /etc/kolla/config/glance/
cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/
cp /etc/ceph/ceph.conf /etc/kolla/config/nova/

cat > /etc/kolla/config/glance/glance-api.conf <<EOF
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
EOF

cat > /etc/kolla/config/cinder/cinder-volume.conf <<EOF
[DEFAULT]
enabled_backends=ceph
[ceph]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=ceph
volume_driver=cinder.volume.drivers.rbd.RBDDriver
# Use the cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml
rbd_secret_uuid = 567a4c19-188d-494d-ac0e-7717205514b7
rbd_default_features = 1
EOF

cat > /etc/kolla/config/cinder/cinder-backup.conf <<EOF
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
EOF

cat > /etc/kolla/config/nova/nova-compute.conf <<EOF
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
# Use the rbd_secret_uuid value from /etc/kolla/passwords.yml
rbd_secret_uuid=ec6a35aa-8c8f-4169-a81a-a03892f0aa03
virt_type=qemu
EOF

# The complete additional configuration tree looks like this:
.
├── cinder
│   ├── ceph.conf
│   ├── cinder-backup
│   │   ├── ceph.client.cinder-backup.keyring
│   │   └── ceph.client.cinder.keyring
│   ├── cinder-backup.conf
│   ├── cinder-volume
│   │   └── ceph.client.cinder.keyring
│   └── cinder-volume.conf
├── glance
│   ├── ceph.client.glance.keyring
│   ├── ceph.conf
│   └── glance-api.conf
└── nova
    ├── ceph.client.cinder.keyring
    ├── ceph.client.nova.keyring
    ├── ceph.conf
    └── nova-compute.conf
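The two rbd secret UUIDs above are placeholders; they should match the values kolla-genpwd wrote into /etc/kolla/passwords.yml, which can be looked up like this:

```bash
# cinder_rbd_secret_uuid goes into cinder-volume.conf, rbd_secret_uuid into nova-compute.conf
grep -E '^(cinder_rbd_secret_uuid|rbd_secret_uuid):' /etc/kolla/passwords.yml
```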
- Configuration modification

# Change the admin password
`vim /etc/kolla/passwords.yml`
keystone_admin_password: 123456

`vim /etc/kolla/globals.yml (cat /etc/kolla/globals.yml | grep -Ev '^$|#')`

---
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "10.201.7.200"
network_interface: "em1"
neutron_external_interface: "em3"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_ceph: "no"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
nova_compute_virt_type: "qemu"

`vim /etc/kolla/multinode`

[control]
master
node1
node2

[network]
master

[compute]
master
node1
node2

[monitoring]
master
node1
node2

[storage]
master
node1
node2

[deployment]
localhost ansible_connection=local
- Deploy

kolla-ansible -i /etc/kolla/multinode bootstrap-servers
kolla-ansible -i /etc/kolla/multinode prechecks
kolla-ansible -i /etc/kolla/multinode deploy
kolla-ansible post-deploy
pip install python-openstackclient python-glanceclient python-neutronclient
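kolla-ansible post-deploy writes the admin credentials to /etc/kolla/admin-openrc.sh, so a minimal smoke test of the new cloud could look like this:

```bash
source /etc/kolla/admin-openrc.sh
openstack service list           # the keystone catalog should list the enabled services
openstack compute service list   # nova services on master, node1 and node2 should be up
```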
- Uninstall

cd kolla-ansible-stable-train/tools
./cleanup-containers
./cleanup-host
Notes
- If the compute node is a physical machine, or a virtual machine with nested virtualization (CPU hardware acceleration) enabled: virt_type=kvm
  If the compute node is a virtual machine without nested virtualization: virt_type=qemu
- "The host compute is not mapped to any cell"
  Compute node log: "Instance xxx has allocations against this compute host but is not found in the database"
  Solution: add the compute node to the cell database:
  su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
  If this fails, check that the nova user's permissions and keys in Ceph are all correct; if the nova key in the additional configuration fails verification, the compute node cannot be added to the database.
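In a kolla deployment nova-manage runs inside the containers, so the discovery step is usually executed through docker; a sketch, assuming the default kolla container name nova_api:

```bash
docker exec nova_api nova-manage cell_v2 discover_hosts --verbose
docker exec nova_api nova-manage cell_v2 list_hosts   # verify the compute hosts are now mapped to a cell
```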