File sharing - Ceph distributed cluster storage

Principle introduction

Distributed storage servers are pooled and exposed to clients through coordinating (monitor) nodes;
High availability: each object is stored as 3 replicas by default;
High performance: a file is striped into multiple objects that are written to several OSDs concurrently;
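
As a minimal illustration of the replication claim, the replica count is a per-pool setting that can be queried and changed once a cluster and a pool exist (sketch; `mypool` is a hypothetical pool name):

    ceph osd pool get mypool size      # shows "size: 3" with the default replica count
    ceph osd pool set mypool size 2    # example: drop to 2 replicas (not recommended for production)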

Component introduction

mon: cluster monitor/scheduler; deploy an odd number of them so a quorum can be formed.
osd: storage daemon, one per disk; reports its status to the mons periodically.
radosgw: object storage gateway.
mds: metadata server for the CephFS file system.
ceph-common: the client package.
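
Once the cluster is running, each component can be checked from any mon/admin node; a quick sketch:

    ceph -s          # overall health plus mon, mgr, osd, mds and rgw summaries
    ceph mon stat    # mon quorum members
    ceph osd stat    # number of OSDs and how many are up/in
    ceph mds stat    # MDS state (only meaningful after CephFS is deployed)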

Build environment

  1. Set up passwordless SSH from the admin (control) node to every node, including itself

    ssh-keygen
    for i in 31 32 33; do ssh-copy-id root@192.168.66.$i; done
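
    A quick check that key-based login works (sketch):

    for i in 31 32 33; do ssh root@192.168.66.$i hostname; done    # should print each host name without a password prompt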
    
  2. Configure host-name resolution

    vim /etc/hosts
    192.168.66.31 ceph-node1 
    192.168.66.32 ceph-node2 
    192.168.66.33 ceph-node3
    
  3. Configure time synchronization

    yum -y install chrony
    vim /etc/chrony.conf               # point the "server" entries at your NTP source if needed
    systemctl restart chronyd
    systemctl enable chronyd
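
    Verify that chrony is actually synchronizing (sketch):

    chronyc sources -v    # the selected time source is marked with '*'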
    
  4. Configure the local and remote Ceph yum repositories

    Build the Ceph yum repository (served over httpd on ceph-node1)

    yum -y install httpd
    systemctl start httpd
    systemctl enable httpd
    

    Create mount point

    mkdir  /var/www/html/ceph
    
    vim /etc/fstab
    /iso/ceph10.iso /var/www/html/ceph iso9660 defaults,loop  0 0
    
    mount -a
    

    Configure the yum repos on all nodes

    yum-config-manager  --add-repo  http://192.168.66.31/ceph/MON
    yum-config-manager  --add-repo  http://192.168.66.31/ceph/OSD
    yum-config-manager  --add-repo  http://192.168.66.31/ceph/Tools
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_MON.repo
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_OSD.repo
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_Tools.repo
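
    Confirm that the three repos are usable (sketch):

    yum clean all
    yum repolist    # the MON, OSD and Tools repos should each list packages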
    
  5. Attach 2 extra data disks to each node (with the default of 3 replicas, at least 3 OSD hosts are needed)

Deployment (mon and osd are co-located on the same servers in this build)

Mon+Mgr

  1. Install software on all mon machines

    yum -y install ceph-mon ceph-osd ceph-mds ceph-radosgw
    yum -y install ceph-deploy    # additionally, on the admin node (ceph-node1) only
    
  2. Create the Ceph cluster working directory (operate on ceph-node1)

    mkdir -p /data/ceph-cluster
    cd  /data/ceph-cluster
    
  3. Create the mon cluster configuration and generate the Ceph configuration file in the cluster directory (operate on ceph-node1)

    cd  /data/ceph-cluster
    ceph-deploy new --public-network <public network CIDR> --cluster-network 192.168.66.0/24 ceph-node1 ceph-node2 ceph-node3
    

    # --public-network: the client-facing network; omit the option if there is no separate public network
    # --cluster-network: the internal replication network
    # ceph-node1..3: the mon nodes
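
    For reference, the generated /data/ceph-cluster/ceph.conf looks roughly like the sketch below (the fsid, mon addresses and public CIDR are placeholders, not values from this build):

    [global]
    fsid = <generated cluster UUID>
    mon_initial_members = ceph-node1, ceph-node2, ceph-node3
    mon_host = <mon IPs on the public network>
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    public_network = <public network CIDR>
    cluster_network = 192.168.66.0/24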

  4. Initialize mon service (operate on ceph-node1)

    cd /data/ceph-cluster
    ceph-deploy mon create-initial
    
  5. Deploy mgr (operate on ceph-node1)

    ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
    

OSD

  1. Install software on all osd machines

    yum -y install ceph-mon ceph-osd ceph-mds ceph-radosgw
    
  2. Create OSD block storage device

    (operate on ceph-node1) wipe the data disks

    ceph-deploy disk zap ceph-node1 /dev/sdc /dev/sdd
    ceph-deploy disk zap ceph-node2 /dev/sdc /dev/sdd
    ceph-deploy disk zap ceph-node3 /dev/sdc /dev/sdd
    

    (operate on ceph-node1) add the disk to the OSD storage space

    ceph-deploy osd create ceph-node1 --data /dev/sdc
    ceph-deploy osd create ceph-node1 --data /dev/sdd
    ceph-deploy osd create ceph-node2 --data /dev/sdc
    ceph-deploy osd create ceph-node2 --data /dev/sdd
    ceph-deploy osd create ceph-node3 --data /dev/sdc
    ceph-deploy osd create ceph-node3 --data /dev/sdd
    
  3. (operate on ceph-node1) view cluster status

    ceph  -s
    

    View osd status

    ceph osd tree
    

    View the mon election (quorum) status

    ceph quorum_status --format json-pretty
    

Node expansion

Expand mon
Environment preparation

  1. Set up passwordless SSH from the admin node to the new nodes

    for i in 34 35; do ssh-copy-id root@192.168.66.$i; done
    
  2. Configure host-name resolution

    vim /etc/hosts
    192.168.66.31 ceph-node1 
    192.168.66.32 ceph-node2 
    192.168.66.33 ceph-node3
    192.168.66.34 ceph-node4
    192.168.66.35 ceph-node5
    
  3. Configure time synchronization

    yum -y install chrony
    vim /etc/chrony.conf               # point the "server" entries at your NTP source if needed
    systemctl restart chronyd
    systemctl enable chronyd
    
  4. Configure the local and remote Ceph yum repositories

    Configure the yum repos on the new nodes

    yum-config-manager  --add-repo  http://192.168.66.31/ceph/MON
    yum-config-manager  --add-repo  http://192.168.66.31/ceph/OSD
    yum-config-manager  --add-repo  http://192.168.66.31/ceph/Tools
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_MON.repo
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_OSD.repo
    echo gpgcheck=0 >>  /etc/yum.repos.d/192.168.66.31_ceph_Tools.repo
    
  5. Attach 2 extra data disks to each new node (the default is still 3 replicas)

Install

  1. Install software on the new mon machines

    yum -y install  ceph-deploy ceph-mon ceph-osd ceph-mds ceph-radosgw
    
  2. Add the new mons to the cluster (operate on ceph-node1)

    cd /data/ceph-cluster
    ceph-deploy mon add ceph-node4 --address 192.168.66.34
    ceph-deploy mon add ceph-node5 --address 192.168.66.35
    
  3. (operate on ceph-node1) view cluster status

    ceph  -s
    

    View osd status

    ceph osd tree
    

    View the mon election (quorum) status

    ceph quorum_status --format json-pretty
    

Expand mgr

Run on the mon machines that do not yet have an mgr

  1. mgr joins the cluster (operate on ceph-node1)

    cd /data/ceph-cluster
    ceph-deploy mgr create ceph-node4 ceph-node5
    

Expand osd

  1. Install software on all osd machines

     yum -y install ceph-mon ceph-osd ceph-mds ceph-radosgw
    
  2. Create OSD block storage device

    (operate on ceph-node1) wipe the new node's data disks

    ceph-deploy disk zap ceph-node4 /dev/sdc /dev/sdd
    

    (operate on ceph-node1) add the disk to the OSD storage space

    ceph-deploy osd create ceph-node4 --data /dev/sdc
    ceph-deploy osd create ceph-node4 --data /dev/sdd
    
  3. (operate on ceph-node1) view cluster status

    ceph  -s
    

    View osd status

    ceph osd tree
    

    View the mon election (quorum) status

    ceph quorum_status --format json-pretty
    

Create Ceph block storage (operate on ceph-node1)

  1. Create storage pool

    ceph osd pool create <pool name> 128    # 128 is the number of placement groups
    

    View storage pools

    ceph osd lspools
    
  2. Create an image

    Method 1: create an image in the default pool (rbd)

    rbd create cxk-image --image-feature  layering --size 1G
    

    Method 2: create an image in a specified pool

    rbd create <pool name>/cxk --image-feature layering --size 1G
    

    List images

    rbd list
    

    View details

    rbd info cxk-image
    
  3. Resize an image dynamically

    Grow the image

    rbd resize --size 2G cxk-image
    

    Shrink the image

    rbd resize --size 1G cxk-image --allow-shrink
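
    Resizing the image does not resize a filesystem already created on it; on a client where the image is mapped and mounted, grow the filesystem afterwards (sketch, assuming an xfs filesystem mounted at /mnt):

    xfs_growfs /mnt    # for ext4 use: resize2fs /dev/rbd0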
    

    Image snapshot operations

    Create a snapshot of the image

    rbd snap create cxk-image  --snap cxk-snap1
    

    Clone the snapshot into a new image (the snapshot must be protected before it can be cloned, see below)

    rbd clone cxk-image --snap cxk-snap1 cxk-clone --image-feature layering
    

    Protect snapshots from deletion

    rbd snap protect  cxk-image --snap cxk-snap1
    

    Unprotect

    rbd snap unprotect  cxk-image --snap cxk-snap1
    

    Restore snapshot

    rbd snap rollback cxk-image --snap cxk-snap1
    

    Make the clone independent of its parent snapshot (flatten)

    rbd flatten cxk-clone
    

    Delete snapshot

    rbd snap rm cxk-image --snap cxk-snap1
    

    View snapshots

    List the snapshots of the image

    rbd snap ls cxk-image
    

    View the relationship between the cloned image and its parent snapshot

    rbd info cxk-clone
    

    The client accesses block storage through KRBD

    1. Client installation

      yum -y  install ceph-common
      
    2. Copy the Ceph configuration and keyring from the admin node (ceph-node1) to the client

       scp 192.168.66.31:/data/ceph-cluster/* /etc/ceph/
      
    3. Map the server's image on the client

       rbd map  cxk-image
      

      To unmap the device later:

       rbd unmap /dev/rbd0    # the device created by rbd map
      
    4. View the mapped image on the client

       rbd showmapped
       lsblk
      
    5. Format and mount the device on the client

       mkfs.xfs /dev/rbd0
       mount /dev/rbd0 /mnt/
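
    6. (Optional) To re-map the image automatically at boot, the rbdmap helper shipped with ceph-common can be used; a minimal sketch, assuming the image lives in the default rbd pool and the admin keyring is in /etc/ceph:

       # /etc/ceph/rbdmap format: <pool>/<image> id=<user>,keyring=<keyring path>
       echo "rbd/cxk-image id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
       systemctl enable rbdmap    # maps the listed images at boot as /dev/rbd/<pool>/<image>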
      

Create a CephFS file system (metadata + data, like inode + block) (operate on ceph-node1)

To use CephFS, you need at least one metadata server (MDS) process. You can create an MDS manually, or deploy it with ceph-deploy or ceph-ansible.

  1. Deploy the mds service on one of the OSD nodes:

    cd /data/ceph-cluster
    ceph-deploy mds create ceph-node1    # more MDS nodes can be added later
    
  2. Create storage pool

    data

    ceph osd pool create cephfs_data 128
    

    metadata

    ceph osd pool create cephfs_metadata 128
    
  3. Create Ceph file system

    ceph fs new myfs1 cephfs_metadata cephfs_data
    # output: new fs with metadata pool 2 and data pool 1
    
  4. View the file system

    ceph fs ls
    

    Client mount

    mount -t ceph 192.168.66.31:6789:/ /mnt/cephfs/ -o name=admin,secret=AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
    

    The file system type is ceph, admin is the user name, and secret is the key.
    The key can be found in /etc/ceph/ceph.client.admin.keyring.
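
    Passing the key on the command line leaves it in the shell history; an alternative sketch using a secret file instead (same admin key as above):

    ceph auth get-key client.admin > /etc/ceph/admin.secret    # run on a mon node, then copy the file to the client
    mount -t ceph 192.168.66.31:6789:/ /mnt/cephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret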

Create RGW object store

  1. Deploy the rgw service on one of the OSD nodes:

    ceph-deploy rgw create ceph-node1    # more gateway nodes can be added later
    
  2. Log in to this node to verify whether the service is started

    ps aux |grep radosgw
    
  3. Modify the service port on this node

    vim /etc/ceph/ceph.conf
    [client.rgw.ceph-node1]
    host = ceph-node1
    rgw_frontends = "civetweb port=8000"
    # ceph-node1 is the host name of the gateway node
    # civetweb is the web server embedded in RGW
    
  4. Restart the service on this node

    systemctl restart ceph-radosgw@\*
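
    A quick check that the gateway answers on the new port (sketch):

    curl http://192.168.66.31:8000/    # an XML ListAllMyBucketsResult response means radosgw is listening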
    
  5. Create radosgw user

    radosgw-admin user create --uid="radosgw" --display-name="radosgw"
    {
        "user_id": "radosgw",
        "display_name": "radosgw",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [],
        "keys": [
            {
                "user": "radosgw",
                "access_key": "DKOORDOMS6YHR2OW5M23",
                "secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
            }
        ],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "temp_url_keys": [],
        "type": "rgw"
    }
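
    The access_key and secret_key above are what the client needs below; if they are lost they can be shown again at any time (sketch):

    radosgw-admin user info --uid=radosgw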
    

Client installation software

  1. Install

    yum install s3cmd
    
  2. Configure s3cmd

    s3cmd --configure
    		Access Key: DKOORDOMS6YHR2OW5M23
    		Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
    		Default Region [US]: ZH
    		S3 Endpoint [s3.amazonaws.com]: 192.168.66.31:8000
    		[%(bucket)s.s3.amazonaws.com]: %(bucket)s.192.168.66.31:8000
    		Use HTTPS protocol [Yes]: no
    		Test access with supplied credentials? [Y/n] n
    		Save settings? [y/N] y
    
  3. Create a bucket (similar to a directory for storing data) and upload a file into it

    s3cmd ls
    s3cmd mb s3://my_bucket
    		Bucket 's3://my_bucket/' created
    s3cmd ls
    		2018-05-09 08:14 s3://my_bucket
    s3cmd put /var/log/messages s3://my_bucket/log/
    s3cmd ls s3://my_bucket
    		DIR s3://my_bucket/log/
    s3cmd ls s3://my_bucket/log/
    		2018-05-09 08:19 309034 s3://my_bucket/log/messages 
    
  4. Test download function

    s3cmd get s3://my_bucket/log/messages /tmp/
    
  5. Test deletion function

    s3cmd del s3://my_bucket/log/messages
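
  6. Clean up: an empty bucket can be removed as well (sketch)

    s3cmd rb s3://my_bucket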
    

