Linux (CentOS 7): install Ceph with ceph-deploy

1 Overview

This article describes installing the Ceph Nautilus release on CentOS 7.6 using the ceph-deploy tool.

2 Environment

(1) Operating system version

[root@ceph ~]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
[root@ceph ~]# uname -a
Linux ceph.novalocal 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@ceph ~]#

(2) Ceph version

Ceph Nautilus, installed from the rpm-nautilus repository configured in section 3.1.7.

(3) Deployment planning

Hostname   IP               Disks                              Roles
ceph001    172.31.185.127   system: /dev/vda, data: /dev/vdb   time server, ceph-deploy, monitor, mgr, mds, osd
ceph002    172.31.185.198   system: /dev/vda, data: /dev/vdb   monitor, mgr, mds, osd
ceph003    172.31.185.203   system: /dev/vda, data: /dev/vdb   monitor, mgr, mds, osd

3 Deployment

(1) Basic installation

Unless otherwise noted, the following steps must be performed on all three nodes.

3.1.1 Set the hostname

Take 172.31.185.127 as an example

[root@ceph ~]# hostnamectl set-hostname ceph001
[root@ceph ~]#

PS:
172.31.185.198 hostnamectl set-hostname ceph002
172.31.185.203 hostnamectl set-hostname ceph003

3.1.2 Turn off the firewall

[root@ceph ~]# systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@ceph ~]#

3.1.3 Disable SELinux

[root@ceph001 ~]# cp /etc/selinux/config /etc/selinux/config.bak.orig
[root@ceph001 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@ceph001 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted


[root@ceph001 ~]#
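The change above takes effect after the reboot in section 3.1.5. Optionally, SELinux can also be switched to permissive mode for the current boot right away; a small optional step:

setenforce 0
getenforce    # should now report Permissive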

3.1.4 Modify the hosts file

[root@ceph001 ~]# cp /etc/hosts /etc/hosts.bak.orig
[root@ceph001 ~]# vi /etc/hosts

Add the following entries:

172.31.185.127 ceph001
172.31.185.198 ceph002
172.31.185.203 ceph003
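An optional quick check that the new names resolve on each node:

ping -c 1 ceph001
ping -c 1 ceph002
ping -c 1 ceph003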

3.1.5 Restart the virtual machine

[root@ceph001 ~]# reboot

3.1.6 Configuring Time Service

CentOS 7 usually ships with chrony installed by default; if it is missing, install it with yum install chrony.

3.1.6.1 Configure the time server (server side)

172.31.185.127 (ceph001) acts as the time server.

3.1.6.1.1 Modify the configuration /etc/chrony.conf
[root@ceph001 ~]# cp /etc/chrony.conf  /etc/chrony.conf.bak.orig
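A typical set of server-side changes, sketched here with assumed upstream pool servers (adjust them to the real environment):

# /etc/chrony.conf on ceph001 (the time server) -- example sketch
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
# allow clients in the cluster subnet to sync from this host
allow 172.31.185.0/24
# keep serving time even if the upstream servers are unreachable
local stratum 10

# then apply the change
systemctl enable chronyd
systemctl restart chronyd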

3.1.6.2 Configure the time clients (ceph002, ceph003)
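A minimal client-side sketch, assuming ceph002 and ceph003 sync from ceph001 (172.31.185.127):

# /etc/chrony.conf on ceph002 and ceph003 -- example sketch
# comment out the default "server ..." lines and point at ceph001 instead
server 172.31.185.127 iburst

# apply and verify
systemctl enable chronyd
systemctl restart chronyd
chronyc sources    # ceph001 should be listed as the time source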

3.1.7 Configure ceph source

Configure the Ceph yum repository according to the OS version and the Ceph release to be installed (Nautilus here).

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

Copy the repo file to the other two machines as well:

[root@ceph001 yum.repos.d]# scp ceph.repo root@172.31.185.198:/etc/yum.repos.d/
root@172.31.185.198's password:
ceph.repo                                                                                                    100%  611   310.3KB/s   00:00
[root@ceph001 yum.repos.d]# scp ceph.repo root@172.31.185.203:/etc/yum.repos.d/
The authenticity of host '172.31.185.203 (172.31.185.203)' can't be established.
ECDSA key fingerprint is SHA256:ES6ytBX1siYV4WMG2CF3/21VKaDd5y27lbWQggeqRWM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.31.185.203' (ECDSA) to the list of known hosts.
root@172.31.185.203's password:
Permission denied, please try again.
root@172.31.185.203's password:
ceph.repo                                                                                                    100%  611   312.3KB/s   00:00
[root@ceph001 yum.repos.d]#
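After the repo file is in place on all three nodes, refreshing the yum metadata is a reasonable optional step:

yum clean all
yum makecache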

3.1.8 Create deployment user cephadmin

Create the user and configure passwordless sudo on all three nodes.

[root@ceph001 yum.repos.d]# useradd cephadmin
[root@ceph001 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
Changing password for user cephadmin.
passwd: all authentication tokens updated successfully.
[root@ceph001 ~]#
[root@ceph001 yum.repos.d]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
cephadmin ALL = (root) NOPASSWD:ALL
[root@ceph001 yum.repos.d]# chmod 0440 /etc/sudoers.d/cephadmin
[root@ceph001 yum.repos.d]#

(2) Deployment node setup

The deployment node here is ceph001.

3.2.1 Configure passwordless SSH login for the cephadmin user

The cephadmin user on the deployment node must be able to log in to all three cluster nodes (ceph001, ceph002, ceph003) over SSH without a password.

[root@ceph001 ~]# su - cephadmin
Last login: Mon Nov 30 11:35:15 CST 2020 from 172.31.185.198 on pts/1
[cephadmin@ceph001 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
Created directory '/home/cephadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:sw1pd339wzApgtVCwDQV4fGq8/oT7hug7P5WK2ifbpE cephadmin@ceph001
The key's randomart image is:
+---[RSA 3072]----+
|      o+o*o      |
|       .+ +      |
|         + o     |
|        o.o  .. .|
|       oSo...+. o|
|    . .EoO... +..|
|     o.o=.+    o.|
|    .o +++.     .|
|    oo**==o      |
+----[SHA256]-----+
[cephadmin@ceph001 ~]$ ssh-copy-id cephadmin@ceph001
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'ceph001 (172.31.185.127)' can't be established.
ECDSA key fingerprint is SHA256:ES6ytBX1siYV4WMG2CF3/21VKaDd5y27lbWQggeqRWM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@ceph001's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@ceph001'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@ceph001 ~]$ ssh-copy-id cephadmin@ceph002
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'ceph002 (172.31.185.198)' can't be established.
ECDSA key fingerprint is SHA256:ES6ytBX1siYV4WMG2CF3/21VKaDd5y27lbWQggeqRWM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@ceph002's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@ceph002'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@ceph001 ~]$ ssh-copy-id cephadmin@ceph003
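An optional quick check from the deployment node that passwordless login now works:

ssh cephadmin@ceph002 hostname
ssh cephadmin@ceph003 hostname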

(3) Deploy ceph

3.3.1 Install ceph-deploy on the deployment node

Install ceph-deploy as the cephadmin user on the deployment node ceph001.

3.3.1.1 Download the installation packages locally

Download these installation packages locally to prepare for later offline installation.

sudo yum -y install --downloadonly --downloaddir=/home/cephadmin/software/ceph-deploy/ ceph-deploy python-pip
[cephadmin@ceph001 ~]$ ls software/ceph-deploy/
ceph-deploy-2.0.1-0.noarch.rpm  python2-pip-8.1.2-12.el7.noarch.rpm
[cephadmin@ceph001 ~]$
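If these RPMs are later installed on a machine without network access, a hedged sketch of the offline installation would be:

cd /home/cephadmin/software/ceph-deploy/
sudo yum -y localinstall ./*.rpm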

3.3.1.2 Install ceph-deploy and python-pip

[cephadmin@ceph001 ~]$ sudo yum -y install  ceph-deploy python-pip

3.3.2 Install the ceph package

The ceph packages must be installed on every cluster node.
Use yum to install them so that all related dependencies are pulled in automatically.

3.3.2.1 Download the installation packages locally

 sudo yum -y install --downloadonly --downloaddir=/home/cephadmin/software/ceph/ ceph  ceph-radosgw

3.3.2.2 Install the ceph package

Install ceph on three nodes

sudo yum -y install  ceph  ceph-radosgw

3.3.3 Create a cluster

All ceph-deploy operations below are run on the deployment node (ceph001).

3.3.3.1 Create the cluster configuration (ceph-deploy new)

On the deployment node:

[cephadmin@ceph001 ~]$ mkdir /home/cephadmin/cephcluster
[cephadmin@ceph001 ~]$ ll
total 0
drwxrwxr-x 2 cephadmin cephadmin  6 Nov 30 16:48 cephcluster
drwxr-xr-x 4 root      root      58 Nov 30 16:45 software
[cephadmin@ceph001 ~]$ cd cephcluster/
[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$ ceph-deploy new ceph001 ceph002 ceph003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new ceph001 ceph002 ceph003
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f296c15bd70>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f296bae7950>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph001', 'ceph002', 'ceph003']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
. . . . 

The output above shows that the cluster name is ceph. The following files are generated:

[cephadmin@ceph001 cephcluster]$ ll
total 16
-rw-rw-r-- 1 cephadmin cephadmin  247 Nov 30 16:50 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin 5231 Nov 30 16:50 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin   73 Nov 30 16:50 ceph.mon.keyring
[cephadmin@ceph001 cephcluster]$

PS:
ceph-deploy --cluster {cluster-name} new node1 node2    # create a cluster with a custom cluster name; the default name is ceph

Modify ceph.conf to add network configuration

[global]
fsid = 69002794-cf45-49fa-8849-faadae48544f
mon_initial_members = ceph001, ceph002, ceph003
mon_host = 172.31.185.127,172.31.185.198,172.31.185.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 172.31.185.0/24
cluster network = 172.31.185.0/24

Ideally "public network" and "cluster network" should be two different subnets on two different NICs (for example eth0 and eth1); both point at the same subnet here only because these machines have a single network card.
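For reference, on hosts with two NICs the two options would point at different subnets; the values below are made-up examples only:

# example only: separate front-end and back-end traffic
# eth0 carries client/monitor traffic, eth1 carries OSD replication traffic
public network = 192.168.10.0/24
cluster network = 192.168.20.0/24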

3.3.3.2 Initialize the cluster configuration and generate all keys

[cephadmin@ceph001 cephcluster]$ ceph-deploy mon create-initial #Configure initial monitor(s) and collect all keys
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0a8fa7c0e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f0a8fcdf398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001 ceph002 ceph003

The generated keyrings:

[cephadmin@ceph001 cephcluster]$ ls -l *.keyring
-rw------- 1 cephadmin cephadmin 113 Nov 30 17:17 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin 113 Nov 30 17:17 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin 113 Nov 30 17:17 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin 113 Nov 30 17:17 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin 151 Nov 30 17:17 ceph.client.admin.keyring
-rw------- 1 cephadmin cephadmin  73 Nov 30 16:50 ceph.mon.keyring
[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$


3.3.3.3 Distribute the configuration to each node

The configuration information will be copied to the /etc/ceph directory of each node

[cephadmin@ceph001 cephcluster]$ ceph-deploy admin ceph001 ceph002 ceph003 #Copy configuration information to three nodes

Verify that the cluster can now be queried. Switch to the root account:

[root@ceph001 ~]# ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 10m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@ceph001 ~]#

[root@ceph002 ~]# ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 12m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@ceph002 ~]#

To run ceph -s as the cephadmin user, the permissions on the /etc/ceph directory need to be changed:

[root@ceph001 ~]# su - cephadmin
Last login: Mon Nov 30 16:42:45 CST 2020 on pts/0
[cephadmin@ceph001 ~]$ ceph -s
[errno 2] error connecting to the cluster
[cephadmin@ceph001 ~]$ ll /etc/ceph/
total 12
-rw------- 1 root root 151 Nov 30 17:25 ceph.client.admin.keyring
-rw-r--r-- 1 root root 313 Nov 30 17:25 ceph.conf
-rw-r--r-- 1 root root  92 Nov 24 03:33 rbdmap
-rw------- 1 root root   0 Nov 30 17:16 tmp10a3zI
[cephadmin@ceph001 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph
[cephadmin@ceph001 ~]$ ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 14m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@ceph001 ~]$

All three nodes need to execute sudo chown -R cephadmin:cephadmin /etc/ceph
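Because passwordless SSH and sudo are already set up for cephadmin, this can be pushed out from the deployment node in one go; a small sketch:

for h in ceph001 ceph002 ceph003; do
    ssh $h "sudo chown -R cephadmin:cephadmin /etc/ceph"
done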

3.3.3.4 Configure osd

First check the name of the data disk; do this on all three nodes.

[root@ceph001 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  478K  0 rom
vda    253:0    0   50G  0 disk
├─vda1 253:1    0  200M  0 part /boot
└─vda2 253:2    0 49.8G  0 part /
vdb    253:16   0   50G  0 disk

The data disk here is /dev/vdb. The following small script zaps the disk and creates an OSD on each node in turn; it must be executed from the /home/cephadmin/cephcluster directory.

for dev in /dev/vdb
do
ceph-deploy disk zap ceph001 $dev
ceph-deploy osd create ceph001 --data $dev
ceph-deploy disk zap ceph002 $dev
ceph-deploy osd create ceph002 --data $dev
ceph-deploy disk zap ceph003 $dev
ceph-deploy osd create ceph003 --data $dev
done

[cephadmin@ceph001 cephcluster]$ for dev in /dev/vdb
> do
> ceph-deploy disk zap ceph001 $dev
> ceph-deploy osd create ceph001 --data $dev
> ceph-deploy disk zap ceph002 $dev
> ceph-deploy osd create ceph002 --data $dev
> ceph-deploy disk zap ceph003 $dev
> ceph-deploy osd create ceph003 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap ceph001 /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcd19fcd8c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph001
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fcd1a21c8c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/vdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph001
[ceph001][DEBUG ] connection detected need for sudo

Check whether the OSDs were created successfully:

[cephadmin@ceph001 cephcluster]$ ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 26m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 4m), 3 in (since 4m)
## three nodes with one data disk each: all three OSDs are up and in, so OSD creation succeeded
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@ceph001 cephcluster]$
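For a per-host view of the same information, ceph osd tree and ceph osd df are useful follow-up checks:

ceph osd tree    # which OSD sits on which host, and whether it is up/in
ceph osd df      # per-OSD capacity and utilization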

3.3.3.5 Deploy mgr

[cephadmin@ceph001 cephcluster]$ ceph-deploy mgr create ceph001 ceph002 ceph003

[cephadmin@ceph001 cephcluster]$ ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 30m)
    mgr: ceph002(active, since 83s), standbys: ceph003, ceph001
#mgr has been deployed successfully
    osd: 3 osds: 3 up (since 8m), 3 in (since 8m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:

[cephadmin@ceph001 cephcluster]$


3.3.3.6 Install ceph-mgr-dashboard (on all three nodes)

Starting with Nautilus, the dashboard is shipped as a separate ceph-mgr-dashboard package and has to be installed explicitly.

sudo yum -y install --downloadonly --downloaddir=/home/cephadmin/software/cephmgrdashboard/ ceph-mgr-dashboard
 sudo yum -y install  ceph-mgr-dashboard

3.3.3.7 Enable mgr-dashboard (on the active mgr node)

[cephadmin@ceph002 ~]$ ceph mgr module enable dashboard
[cephadmin@ceph002 ~]$ ceph dashboard create-self-signed-cert
Self-signed certificate created
[cephadmin@ceph002 ~]$ ceph dashboard set-login-credentials admin admin
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
[cephadmin@ceph002 ~]$ ceph mgr services
{
    "dashboard": "https://ceph002:8443/"
}
[cephadmin@ceph002 ~]$
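The dashboard is now reachable at the URL shown above (https://ceph002:8443/) with the admin/admin credentials set earlier. Regarding the deprecation warning, user management is meant to move to the ac-user-* command family; as a hedged example (exact arguments vary by point release, check ceph dashboard -h on the installed version):

# list the dashboard users; the admin user created above should appear
ceph dashboard ac-user-show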
