Catalogue
- 1, Prepare the machines
- 2, Ceph node installation
- 1. Install NTP (all nodes)
- 2. Install SSH (all nodes)
- 3. Create a user for ceph-deploy (all nodes)
- 4. Allow SSH login without a password (management node)
- 5. Enable networking during boot (ceph nodes)
- 6. Open the required ports (ceph nodes)
- 7. Terminal (TTY) (ceph nodes)
- 8. Disable SELinux (ceph nodes)
- 9. Configure the EPEL source (management node)
- 10. Add the Ceph package source to the software repository (management node)
- 11. Update the software repository and install ceph-deploy (management node)
- 3, Build the cluster
- 1. Prepare for installation and create a folder
- 2. Create the cluster and monitor node
- 3. Modify the configuration file
- 4. Install Ceph
- 5. Configure the initial monitor(s) and collect all keys
- 6. Add two OSDs
- 7. Copy the configuration file and admin key to the management node and Ceph nodes
- 8. Make sure ceph.client.admin.keyring has the correct permissions
- 9. Check the health status of the cluster and OSD nodes
- 4, Expand the cluster (capacity expansion)
1, Prepare the machines
This article describes how to build a Ceph storage cluster on CentOS 7.
There are four machines in total: one is the management node and the other three are Ceph nodes:
hostname | ip | role | description |
---|---|---|---|
admin-node | 192.168.0.130 | ceph-deploy | management node |
node1 | 192.168.0.131 | mon.node1 | ceph node |
node2 | 192.168.0.132 | osd.0 | ceph node, OSD node |
node3 | 192.168.0.133 | osd.1 | ceph node, OSD node |
Management node: admin-node
Ceph nodes: node1, node2, node3
All nodes: admin-node, node1, node2, node3
1. Modify the hostname
# vi /etc/hostname
2. Modify the hosts file
# vi /etc/hosts

192.168.0.130 admin-node
192.168.0.131 node1
192.168.0.132 node2
192.168.0.133 node3
3. Ensure connectivity (management node)
Confirm network connectivity by pinging the short hostnames (hostname -s), and resolve any hostname resolution problems.
$ ping node1
$ ping node2
$ ping node3
2, Ceph node installation
1. Install NTP (all nodes)
We recommend installing the NTP service on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift. See the clock settings section of the Ceph documentation for details.
# sudo yum install ntp ntpdate ntp-doc
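Installing the packages alone does not start time synchronization; as a sanity check (a sketch, not part of the original steps, and assuming the classic ntpd daemon rather than chronyd), the service can be enabled and its peers verified:

// Enable and start ntpd so the clock stays in sync across reboots
$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd

// List the NTP peers and confirm at least one is reachable
$ ntpq -p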
2. Install SSH (all nodes)
# sudo yum install openssh-server
3. Create a user for ceph-deploy (all nodes)
The ceph-deploy tool must log in to the Ceph nodes as an ordinary user, and that user must be able to use sudo without a password, because ceph-deploy installs software and configuration files without prompting for passwords.
It is recommended to create a dedicated user for ceph-deploy on all Ceph nodes in the cluster, but do not use the name "ceph".
- Create a new user on each Ceph node
# sudo useradd -d /home/zeng -m zeng
# sudo passwd zeng
- Ensure that the newly created user on each Ceph node has passwordless sudo privileges
# echo "zeng ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/zeng # sudo chmod 0440 /etc/sudoers.d/zeng
4. Allow SSH login without a password (management node)
Because ceph-deploy does not prompt for passwords, you must generate an SSH key on the management node and distribute its public key to each Ceph node. ceph-deploy will attempt to generate an SSH key pair for the initial monitors.
- Generate an SSH key pair
Do not use sudo or the root user. When prompted with "Enter passphrase", just press Enter so the passphrase is empty:
// Switch users. Unless otherwise specified, subsequent operations are carried out as this user
# su zeng

// Generate the key pair
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zeng/.ssh/id_rsa):
Created directory '/home/zeng/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zeng/.ssh/id_rsa.
Your public key has been saved in /home/zeng/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Tb0VpUOZtmh+QBRjUOE0n2Uy3WuoZVgXn6TBBb2SsGk zeng@admin-node
The key's randomart image is:
+---[RSA 2048]----+
|          .+@=OO*|
|           *.BB@=|
|          ..O+Xo+|
|         o E+O.= |
|         S oo=.o |
|          .. .   |
|           .     |
|                 |
|                 |
+----[SHA256]-----+
- Copy the public key to each Ceph node
$ ssh-copy-id zeng@node1
$ ssh-copy-id zeng@node2
$ ssh-copy-id zeng@node3
When finished, under the /home/zeng/.ssh/ directory:
- admin-node has the new files id_rsa, id_rsa.pub and known_hosts;
- node1, node2 and node3 have the new file authorized_keys.
- Modify the ~/.ssh/config file
Modify the ~/.ssh/config file (create it if it does not exist) so that ceph-deploy can log in to the Ceph nodes with the user you created.
// sudo must be used
$ sudo vi ~/.ssh/config

Host admin-node
   Hostname admin-node
   User zeng
Host node1
   Hostname node1
   User zeng
Host node2
   Hostname node2
   User zeng
Host node3
   Hostname node3
   User zeng
- Test whether ssh is successful
$ ssh zeng@node1
$ exit
$ ssh zeng@node2
$ exit
$ ssh zeng@node3
$ exit
- Problem: if "Bad owner or permissions on /home/zeng/.ssh/config" appears, execute the command to modify the file permissions.
$ sudo chmod 644 ~/.ssh/config
5. Enable networking during boot (ceph nodes)
Ceph OSD daemons are interconnected over the network and report their status to the Monitors. If networking is disabled by default, the Ceph cluster cannot come online at boot until you enable the network.
$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3

// Make sure that ONBOOT is set to yes
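If ONBOOT is not yes, it can be flipped in place (a sketch, not part of the original commands; the interface name enp0s3 is taken from the example above and may differ on your machine):

// Set ONBOOT=yes for the interface and restart networking to apply it
$ sudo sed -i 's/^ONBOOT=.*/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
$ sudo systemctl restart network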
6. Open the required ports (ceph nodes)
By default, Ceph Monitors communicate on port 6789, and OSDs communicate on ports in the range 6800-7300. Ceph OSDs can use multiple network connections for replication and heartbeats with clients, monitors and other OSDs.
$ sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent

// Or turn off the firewall
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
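If the firewall stays enabled, the OSD port range mentioned above also needs to be opened on the OSD nodes, and the permanent rules reloaded (a sketch, not part of the original commands):

// Open the OSD port range and reload firewalld so the permanent rules take effect
$ sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
$ sudo firewall-cmd --reload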
7. Terminal (TTY) (ceph nodes)
Errors may be reported when ceph-deploy commands are executed on CentOS and RHEL. If your Ceph nodes have requiretty set by default, execute
$ sudo visudo
Find the Defaults requiretty setting and change it to Defaults:zeng !requiretty (substituting the ceph-deploy user you created earlier), or comment it out entirely, so that ceph-deploy can connect as that user.
When editing the configuration file /etc/sudoers, you must use sudo visudo instead of a plain text editor.
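For reference, the relevant lines in /etc/sudoers might end up looking like this (a sketch assuming the deploy user is zeng):

# Defaults    requiretty
Defaults:zeng !requiretty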
8. Disable SELinux (ceph nodes)
$ sudo setenforce 0
To make the SELinux change permanent (if it is indeed the root cause of a problem), modify its configuration file /etc/selinux/config:
$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
That is, set SELINUX=disabled.
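To confirm the current enforcement state (a quick check, not in the original steps):

// setenforce 0 leaves SELinux in Permissive mode; after a reboot with SELINUX=disabled it reports Disabled
$ getenforce
Permissive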
9. Configure the EPEL source (management node)
$ sudo yum install -y yum-utils && \
  sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
  sudo yum install --nogpgcheck -y epel-release && \
  sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
  sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
10. Add the Ceph package source to the software repository (management node)
$ sudo vi /etc/yum.repos.d/ceph.repo
Paste the following contents and save them to the /etc/yum.repos.d/ceph.repo file.
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
11. Update the software repository and install ceph-deploy (management node)
$ sudo yum update && sudo yum install ceph-deploy
$ sudo yum install yum-plugin-priorities
It may take a long time. Wait patiently.
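Once installed, a quick version check confirms ceph-deploy is available on the management node (a sanity check, not in the original steps):

// Print the installed ceph-deploy version
$ ceph-deploy --version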
3, Build the cluster
Perform the following steps on the management node:
1. Prepare for installation and create a folder
Create a directory on the management node to store the configuration files and keyrings generated by ceph-deploy.
$ cd ~
$ mkdir my-cluster
$ cd my-cluster
Note: if you run into trouble after installing Ceph, you can use the following commands to remove the packages, clear the configuration, and start over:
// Remove the installed packages
$ ceph-deploy purge admin-node node1 node2 node3

// Clear the configuration
$ ceph-deploy purgedata admin-node node1 node2 node3
$ ceph-deploy forgetkeys
2. Create the cluster and monitor node
Create the cluster and initialize the monitor node(s):
$ ceph-deploy new {initial-monitor-node(s)}
Here node1 is the monitor node, so execute:
$ ceph-deploy new node1
After completion, there are three new files under my-cluster: ceph.conf, ceph-deploy-ceph.log and ceph.mon.keyring.
- Problem: if "[ceph_deploy] [error] runtimeerror: remote connection got closed, ensure requirement is disabled for node1", execute sudo visudo to comment out the defaults requirement.
3. Modify the configuration file
$ cat ceph.conf
The contents are as follows:
[global]
fsid = 89933bbb-257c-4f46-9f77-02f44f4cc95c
mon_initial_members = node1
mon_host = 192.168.0.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Change the default number of object replicas in the Ceph configuration file from 3 to 2, so that the cluster can reach the active + clean state with just two OSDs. Add osd pool default size = 2 to the [global] section:
$ sed -i '$a\osd pool default size = 2' ceph.conf
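The appended setting can be verified afterwards (a quick check, not part of the original steps):

// Confirm the new line was appended to ceph.conf
$ grep 'osd pool default size' ceph.conf
osd pool default size = 2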
If there are multiple network interfaces, you can also add the public network to the [global] section of the Ceph configuration file:
public network = {ip-address}/{netmask}
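For the addressing used in this article, that would look something like the following (a sketch assuming the 192.168.0.0/24 subnet from the host table above):

public network = 192.168.0.0/24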
4. Install Ceph
Install Ceph on all nodes:
$ ceph-deploy install admin-node node1 node2 node3
- Problem: [ceph_deploy][ERROR] RuntimeError: Failed to execute command: yum -y install epel-release
Solution:
$ sudo yum -y remove epel-release
5. Configure the initial monitor(s) and collect all keys
$ ceph-deploy mon create-initial
After completing the above operation, these keyrings should appear in the current directory:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
6. Add two OSDs
- Log in to the Ceph nodes, create a directory for each OSD daemon, and set its permissions.
$ ssh node2
$ sudo mkdir /var/local/osd0
$ sudo chmod 777 /var/local/osd0/
$ exit

$ ssh node3
$ sudo mkdir /var/local/osd1
$ sudo chmod 777 /var/local/osd1/
$ exit
- Then, run ceph-deploy from the management node to prepare the OSDs.
$ ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
- Finally, activate the OSDs.
$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
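If your ceph-deploy version supports it, the OSDs found on each node can be listed from the management node (a verification sketch, not part of the original steps):

// Query the OSD nodes and report the OSDs found on them
$ ceph-deploy osd list node2 node3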
7. Copy the configuration file and admin key to the management node and Ceph nodes
$ ceph-deploy admin admin-node node1 node2 node3
8. Make sure ceph.client.admin.keyring has the correct permissions
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
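The keyring was distributed to every node in the previous step, so the same permission fix can be applied on the Ceph nodes as well, after which the cluster can be queried directly from the management node (a sketch, not part of the original steps):

// Hypothetical loop applying the same chmod on each Ceph node
$ for host in node1 node2 node3; do ssh $host sudo chmod +r /etc/ceph/ceph.client.admin.keyring; done

// With the keyring readable, the OSD tree can be inspected from the management node
$ ceph osd tree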
9. Check the health status of the cluster and OSD nodes
[zeng@admin-node my-cluster]$ ceph health
HEALTH_OK

[zeng@admin-node my-cluster]$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.0.131:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            12956 MB used, 21831 MB / 34788 MB avail
                  64 active+clean

[zeng@admin-node my-cluster]$ ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 0 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  64
 1 0.01659  1.00000 17394M  6478M 10915M 37.25 1.00  64
              TOTAL 34788M 12956M 21831M 37.24
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
4, Expand the cluster (capacity expansion)
1. Add an OSD
Add a third OSD, osd.2, on node1.
- Create directory
$ ssh node1
$ sudo mkdir /var/local/osd2
$ sudo chmod 777 /var/local/osd2/
$ exit
- Prepare OSD
$ ceph-deploy osd prepare node1:/var/local/osd2
- Activate OSD
$ ceph-deploy osd activate node1:/var/local/osd2
- Check the cluster status and OSD nodes:
[zeng@admin-node my-cluster]$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.0.131:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v37: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19450 MB used, 32731 MB / 52182 MB avail
                  64 active+clean

[zeng@admin-node my-cluster]$ ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 0 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  41
 1 0.01659  1.00000 17394M  6478M 10915M 37.24 1.00  43
 2 0.01659  1.00000 17394M  6494M 10899M 37.34 1.00  44
              TOTAL 52182M 19450M 32731M 37.28
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.04
2. Add monitors
Add monitor nodes on node2 and node3.
- Modify the mon_initial_members, mon_host and public network settings in ceph.conf:
[global]
fsid = a3dd419e-5c99-4387-b251-58d4eb582995
mon_initial_members = node1,node2,node3
mon_host = 192.168.0.131,192.168.0.132,192.168.0.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 192.168.0.120/24
- Push the configuration to the other nodes:
$ ceph-deploy --overwrite-conf config push node1 node2 node3
- Add the monitor nodes:
$ ceph-deploy mon add node2 node3
- View the cluster status and monitor nodes:
[zeng@admin-node my-cluster]$ ceph -s
    cluster a3dd419e-5c99-4387-b251-58d4eb582995
     health HEALTH_OK
     monmap e3: 3 mons at {node1=192.168.0.131:6789/0,node2=192.168.0.132:6789/0,node3=192.168.0.133:6789/0}
            election epoch 8, quorum 0,1,2 node1,node2,node3
     osdmap e25: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v3919: 64 pgs, 1 pools, 0 bytes data, 0 objects
            19494 MB used, 32687 MB / 52182 MB avail
                  64 active+clean

[zeng@admin-node my-cluster]$ ceph mon stat
e3: 3 mons at {node1=192.168.0.131:6789/0,node2=192.168.0.132:6789/0,node3=192.168.0.133:6789/0}, election epoch 8, quorum 0,1,2 node1,node2,node3

Article reprinted from: https://www.cnblogs.com/zengzhihua/p/9829472.html