Redis cluster node management

The previous post mainly covered the deployment and configuration of a Redis cluster, setting up the Ruby environment required by the redis-trib.rb tool, creating the cluster with redis-trib.rb, and viewing cluster-related information. For a review, please refer to https://www.cnblogs.com/qiuhom-1874/p/13442458.html ; today we will use redis-trib.rb to manage the nodes of a Redis 3/4 cluster.

Add a node to an existing cluster

Environment description

  To add a node to an existing cluster, the new node must first run the same Redis version and use the same authentication password as the nodes already in the cluster; ideally its hardware configuration should match as well. Then start the new redis-server instances. To save machines, I will simply start two more instances directly on node03 instead of using separate servers. The environment is as follows

   Directory Structure

[root@node03 redis]# ll
total 12
drwxr-xr-x 5 root root  40 Aug  5 22:57 6379
drwxr-xr-x 5 root root  40 Aug  5 22:57 6380
drwxr-xr-x 2 root root 134 Aug  5 22:16 bin
-rw-r--r-- 1 root root 175 Aug  8 08:35 dump.rdb
-rw-r--r-- 1 root root 803 Aug  8 08:35 redis-cluster_6379.conf
-rw-r--r-- 1 root root 803 Aug  8 08:35 redis-cluster_6380.conf
[root@node03 redis]# mkdir {6381,6382}/{etc,logs,run} -p
[root@node03 redis]# tree
.
├── 6379
│   ├── etc
│   │   ├── redis.conf
│   │   └── sentinel.conf
│   ├── logs
│   │   └── redis_6379.log
│   └── run
├── 6380
│   ├── etc
│   │   ├── redis.conf
│   │   └── sentinel.conf
│   ├── logs
│   │   └── redis_6380.log
│   └── run
├── 6381
│   ├── etc
│   ├── logs
│   └── run
├── 6382
│   ├── etc
│   ├── logs
│   └── run
├── bin
│   ├── redis-benchmark
│   ├── redis-check-aof
│   ├── redis-check-rdb
│   ├── redis-cli
│   ├── redis-sentinel -> redis-server
│   └── redis-server
├── dump.rdb
├── redis-cluster_6379.conf
└── redis-cluster_6380.conf

17 directories, 15 files
[root@node03 redis]#

Copy the configuration file into the etc/ directory of each new instance directory

[root@node03 redis]# cp 6379/etc/redis.conf 6381/etc/
[root@node03 redis]# cp 6379/etc/redis.conf 6382/etc/

Modify the corresponding port information in the configuration file

[root@node03 redis]# sed -ri 's@6379@6381@g' 6381/etc/redis.conf 
[root@node03 redis]# sed -ri 's@6379@6382@g' 6382/etc/redis.conf 

Confirm the configuration file information

[root@node03 redis]# grep -E "^(port|cluster|logfile)" 6381/etc/redis.conf 
port 6381
logfile "/usr/local/redis/6381/logs/redis_6381.log"
cluster-enabled yes
cluster-config-file redis-cluster_6381.conf
[root@node03 redis]# grep -E "^(port|cluster|logfile)" 6382/etc/redis.conf 
port 6382
logfile "/usr/local/redis/6382/logs/redis_6382.log"
cluster-enabled yes
cluster-config-file redis-cluster_6382.conf
[root@node03 redis]# 

Tip: Once the configuration files in the corresponding directories look correct, you can start the redis services directly;

Start the new redis instances
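The original post shows this step as a screenshot; a minimal sketch of the commands, assuming the same installation path and configuration layout as above:

[root@node03 redis]# redis-server /usr/local/redis/6381/etc/redis.conf
[root@node03 redis]# redis-server /usr/local/redis/6382/etc/redis.conf
[root@node03 redis]# ss -tnl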

Tip: You can see that the corresponding ports are now in the LISTEN state; next we can use redis-trib.rb to add the two new nodes to the cluster

Add a new node to the cluster
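The add-node call itself is shown as a screenshot in the original; a sketch of the invocation, assuming node01 is used as the management host as in the rest of this post:

[root@node01 ~]# redis-trib.rb add-node 192.168.0.43:6381 192.168.0.41:6379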

  Tip: The add-node subcommand adds a node to the cluster; you specify the IP address and port of the node to be added, followed by the IP address and port of any node already in the cluster. From the command's output (shown as a screenshot in the original) you can see that 192.168.0.43:6381 has been added to the cluster successfully, but it holds no slots and has no slave yet;

Allocate slots to the newly added node

  Tip: Use the reshard subcommand with the address and port of any node in the cluster to start a resharding operation. You are then prompted for how many slots to move, the ID of the node that will receive them, and the source nodes to take them from: entering all uses every node that currently holds slots as a source, while specifying sources manually means entering the ID of each source node and finishing with done. redis-trib.rb then prints a slot-move plan for us to confirm.
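The command and its interactive prompts were captured as a screenshot; a sketch of that part, mirroring the full transcript shown later in this post:

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 0449aa43657d46f487107bfe49344701526b11d8
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all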

Ready to move 4096 slots.
  Source nodes:
    M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
    M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
  Destination node:
    M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots: (0 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 5461 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5462 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5463 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5464 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5465 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5466 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5467 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5468 from 91169e71359deed96f8778cf31c823dbd6ded350
......Some parts are omitted...
    Moving slot 12281 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12282 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12283 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12284 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12285 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12286 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12287 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
Do you want to proceed with the proposed reshard plan (yes/no)? yes

Tip: Enter yes to agree to the above plan;

Moving slot 1177 from 192.168.0.41:6379 to 192.168.0.43:6381: 
Moving slot 1178 from 192.168.0.41:6379 to 192.168.0.43:6381: 
Moving slot 1179 from 192.168.0.41:6379 to 192.168.0.43:6381: 
Moving slot 1180 from 192.168.0.41:6379 to 192.168.0.43:6381: 
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)
[root@node01 ~]# 

  Tip: The error above occurs because slot 1180 on 192.168.0.41:6379 still has keys bound to it; note that when redis-trib.rb re-allocates a slot, the slot must not have data bound to it, otherwise the migration fails. So reassigning slots that hold data usually means backing the data up (copying it to another server if it is still needed), clearing it from the source node, and restoring it after the slots have been re-allocated;

   Clear the data
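This step is shown as a screenshot in the original; a sketch of the commands, modeled on the identical clean-up transcript later in this post and assuming the offending keys live on 192.168.0.41:6379:

[root@node01 ~]# redis-cli -h 192.168.0.41 -p 6379
192.168.0.41:6379> AUTH admin
192.168.0.41:6379> KEYS *
192.168.0.41:6379> FLUSHDB
192.168.0.41:6379> BGSAVE
192.168.0.41:6379> quit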

Repair the cluster
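The repair step is also a screenshot in the original; the invocation is the same fix run shown in full later in the post, which closes the half-migrated slot:

[root@node01 ~]# redis-trib.rb fix 192.168.0.41:6379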

Allocate slots again

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:1181-5460 (4280 slots) master
   1 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots:0-1180,5461-6826 (2547 slots) master
   0 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 0449aa43657d46f487107bfe49344701526b11d8
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all

Ready to move 4096 slots.
  Source nodes:
    M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:1181-5460 (4280 slots) master
   1 additional replica(s)
    M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
  Destination node:
    M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots:0-1180,5461-6826 (2547 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 10923 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10924 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10925 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10926 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10927 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10928 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10929 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
......Some information is omitted...
    Moving slot 8033 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8034 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8035 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8036 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8037 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8038 from 91169e71359deed96f8778cf31c823dbd6ded350
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 10923 from 192.168.0.43:6379 to 192.168.0.43:6381: 
Moving slot 10924 from 192.168.0.43:6379 to 192.168.0.43:6381: 
Moving slot 10925 from 192.168.0.43:6379 to 192.168.0.43:6381: 
Moving slot 10926 from 192.168.0.43:6379 to 192.168.0.43:6381: 
Moving slot 10927 from 192.168.0.43:6379 to 192.168.0.43:6381: 
......Some information is omitted...
Moving slot 8035 from 192.168.0.43:6380 to 192.168.0.43:6381: 
Moving slot 8036 from 192.168.0.43:6380 to 192.168.0.43:6381: 
Moving slot 8037 from 192.168.0.43:6380 to 192.168.0.43:6381: 
Moving slot 8038 from 192.168.0.43:6380 to 192.168.0.43:6381: 
[root@node01 ~]# 

Tip: If no error is reported while re-allocating the slots, the slot re-allocation has completed successfully;

Confirm the existing cluster slot allocation
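The confirmation screenshot is omitted here; checking the allocation is a single call against any cluster node, for example:

[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379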

  Tip: As the screenshot shows, 6642 slots are now allocated to our newly added node, so the distribution is not even. The reason is that the first reshard failed after 2547 slots had already been moved successfully, and those slots are not returned; when we then resharded another 4096 slots to the new node, it ended up holding 6642 slots in total. The slot allocation succeeded, but this new master has no slave yet;

Configure the slave for the newly added node

Tip: To give the new master a slave, first add the slave node to the cluster, and then configure it to replicate the chosen master;
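The add-node step for the new slave is shown as a screenshot; a sketch of the call, again using node01 as the management host:

[root@node01 ~]# redis-trib.rb add-node 192.168.0.43:6382 192.168.0.41:6379

redis-trib.rb also accepts --slave and --master-id options on add-node, which would join the node and attach it to a master in one step; here we follow the post and set the replication relationship manually afterwards.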

Make the newly added node 192.168.0.43:6382 a slave

[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 2884 slots | 1 slaves.
192.168.0.43:6381 (0449aa43...) -> 0 keys | 6642 slots | 0 slaves.
[OK] 0 keys in 5 masters.
0.00 keys per slot on average.
[root@node01 ~]# 
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> info replication
# Replication
role:master
connected_slaves:0
master_replid:69716e1d83cd44fba96d10e282a6534983b3ab8c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.0.43:6382> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 master - 0 1596851725000 12 connected 0-2446 5461-8038 10923-12539
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596851725354 8 connected 8039-10922
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596851726377 11 connected 2447-5460
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596851725762 3 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596851724334 8 connected
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 myself,master - 0 1596851723000 0 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596851723000 11 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596851723311 3 connected 12540-16383
192.168.0.43:6382> CLUSTER REPLICATE 0449aa43657d46f487107bfe49344701526b11d8
OK
192.168.0.43:6382> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 master - 0 1596851781000 12 connected 0-2446 5461-8038 10923-12539
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596851784708 8 connected 8039-10922
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596851784000 11 connected 2447-5460
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596851782000 3 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596851781000 8 connected
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 myself,slave 0449aa43657d46f487107bfe49344701526b11d8 0 1596851783000 0 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596851783688 11 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596851785730 3 connected 12540-16383
192.168.0.43:6382> quit
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 2884 slots | 1 slaves.
192.168.0.43:6381 (0449aa43...) -> 0 keys | 6642 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]# 

Tip: To make a node a slave of a particular master in the cluster, connect to that node and run CLUSTER REPLICATE followed by the ID of the target master; at this point, adding new nodes to the cluster is complete;

Verification: write data on the newly added node and see whether it can be written

[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6381
192.168.0.43:6381> AUTH admin
OK
192.168.0.43:6381> get aa
(nil)
192.168.0.43:6381> set aa a1
OK
192.168.0.43:6381> get aa
"a1"
192.168.0.43:6381> set bb b1
(error) MOVED 8620 192.168.0.43:6380
192.168.0.43:6381> 

Tip: Keys whose slots live on the newly added master can be read and written there; a key that hashes to a slot on another node (like bb above) returns a MOVED redirection, which is normal cluster behaviour;

Verification: Stop the newly added master and see if the corresponding slave will be promoted to master?

[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6381
192.168.0.43:6381> AUTH admin
OK
192.168.0.43:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.43,port=6382,state=online,offset=1032,lag=1
master_replid:d65b59178dd70a13e75c866d4de738c4f248c84c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1032
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1032
192.168.0.43:6381> quit
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> info replication
# Replication
role:slave
master_host:192.168.0.43
master_port:6381
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:1046
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:d65b59178dd70a13e75c866d4de738c4f248c84c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1046
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1046
192.168.0.43:6382> quit
[root@node01 ~]# ssh node03
Last login: Sat Aug  8 10:07:15 2020 from node01
[root@node03 ~]# ps -ef |grep redis
root       1425      1  0 08:34 ?        00:00:18 redis-server 0.0.0.0:6379 [cluster]
root       1431      1  0 08:35 ?        00:00:18 redis-server 0.0.0.0:6380 [cluster]
root       1646      1  0 09:04 ?        00:00:14 redis-server 0.0.0.0:6381 [cluster]
root       1651      1  0 09:04 ?        00:00:07 redis-server 0.0.0.0:6382 [cluster]
root       5888   5868  0 10:08 pts/1    00:00:00 grep --color=auto redis
[root@node03 ~]# kill -9 1646
[root@node03 ~]# redis-cli -p 6382
127.0.0.1:6382> AUTH admin
OK
127.0.0.1:6382> info replication
# Replication
role:master
connected_slaves:0
master_replid:34d6ec0e58f12ffe9bc5fbcb0c16008b5054594f
master_replid2:d65b59178dd70a13e75c866d4de738c4f248c84c
master_repl_offset:1102
second_repl_offset:1103
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1102
127.0.0.1:6382> 

Tip: You can see that when the master is down, the corresponding slave will be promoted to the master;

Delete a node

To delete a node from the cluster, you need to ensure that the node to be deleted holds no slots (and therefore no data).

If the node is not empty, first migrate its slots to another master

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460 (3014 slots) master
   1 additional replica(s)
M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
   slots:0-2446,5461-8038,10923-12539 (6642 slots) master
   0 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:12540-16383 (3844 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:8039-10922 (2884 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 6642
What is the receiving node ID? 91169e71359deed96f8778cf31c823dbd6ded350
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6df33baf68995c61494a06c06af18045ca5a04f6
Source node #2:done

Ready to move 6642 slots.
  Source nodes:
    M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
   slots:0-2446,5461-8038,10923-12539 (6642 slots) master
   0 additional replica(s)
  Destination node:
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:8039-10922 (2884 slots) master
   1 additional replica(s)
  Resharding plan:
    Moving slot 0 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 1 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 2 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 3 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 4 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 5 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 6 from 6df33baf68995c61494a06c06af18045ca5a04f6
......Some parts are omitted...
    Moving slot 12536 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12537 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12538 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12539 from 6df33baf68995c61494a06c06af18045ca5a04f6
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 192.168.0.43:6382 to 192.168.0.43:6380: 
Moving slot 1 from 192.168.0.43:6382 to 192.168.0.43:6380: 
Moving slot 2 from 192.168.0.43:6382 to 192.168.0.43:6380: 
Moving slot 3 from 192.168.0.43:6382 to 192.168.0.43:6380: 
......Some parts are omitted...
Moving slot 1178 from 192.168.0.43:6382 to 192.168.0.43:6380: 
Moving slot 1179 from 192.168.0.43:6382 to 192.168.0.43:6380: 
Moving slot 1180 from 192.168.0.43:6382 to 192.168.0.43:6380: 
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)
[root@node01 ~]# 

  Tip: This is the same error we hit when adding the new node: it tells us that the slot being moved still has data bound to it. The fix is the same: back up the data on the node, clear it, and then move the slots. To recap, moving the slots off a node to other masters is just another reshard: specify how many slots to move, the ID of the receiving node, then the ID of each source node (one per line if there are several), and finish with done;

Clear the data

[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382       
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> KEYS *
1) "aa"
192.168.0.43:6382> FLUSHDB
OK
192.168.0.43:6382> KEYS *
(empty list or set)
192.168.0.43:6382> BGSAVE
Background saving started
192.168.0.43:6382> quit
[root@node01 ~]# 

Move the slots to other nodes again

Tip: Before the slots can be moved again, the cluster must be repaired first; only then can the remaining slots be reassigned

Repair the cluster

[root@node01 ~]# redis-trib.rb fix 192.168.0.41:6379    
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460 (3014 slots) master
   1 additional replica(s)
M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
   slots:1180-2446,5461-8038,10923-12539 (5462 slots) master
   0 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:12540-16383 (3844 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1179,8039-10922 (4064 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
[WARNING] Node 192.168.0.43:6382 has slots in migrating state (1180).
[WARNING] Node 192.168.0.43:6380 has slots in importing state (1180).
[WARNING] The following slots are open: 1180
>>> Fixing open slot 1180
Set as migrating in: 192.168.0.43:6382
Set as importing in: 192.168.0.43:6380
Moving slot 1180 from 192.168.0.43:6382 to 192.168.0.43:6380: 
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379   
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 5461 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 4065 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]#

Tip: After repairing the cluster, you can see that 5461 slots remain on the corresponding master. Next we reassign these 5461 slots to the other nodes (they can be moved in several passes if not all at once);

Allocate slots to other nodes again (allocate 1461 slots to 192.168.0.43:6379)

Allocate slots to other nodes again (allocate 2000 slots to 192.168.0.41:6379)

Allocate slots to other nodes again (allocate 2000 slots to 192.168.0.43:6380)
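Each of these three steps was captured as a screenshot; they all follow the interactive reshard pattern shown earlier, with only the slot count and receiving node ID changing. A sketch of the first one (1461 slots to 192.168.0.43:6379), using 192.168.0.43:6382 as the only source node:

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
How many slots do you want to move (from 1 to 16384)? 1461
What is the receiving node ID? a7ace08c36f7d55c4f28463d72865aa1ff74829e
Source node #1:6df33baf68995c61494a06c06af18045ca5a04f6
Source node #2:done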

Confirm cluster slot allocation

Tip: Checking the cluster again (for example with redis-trib.rb info) shows that 192.168.0.43:6382 no longer holds any slots, so we can now delete it from the cluster

Remove node from cluster (192.168.0.43:6382)
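The del-node call is shown as a screenshot; a sketch of it, using the ID of 192.168.0.43:6382 from the CLUSTER NODES output above:

[root@node01 ~]# redis-trib.rb del-node 192.168.0.41:6379 6df33baf68995c61494a06c06af18045ca5a04f6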

Tip: To delete a node from the cluster, you specify the address of any node in the cluster together with the ID of the node to be deleted;

Verification: View existing cluster information

[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379                                             
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 1 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
[root@node01 ~]# redis-trib.rb check 192.168.0.41:6379                                            
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460,5656-7655 (5014 slots) master
   1 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:1181-2446,5461-5655,12540-16383 (5305 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1180,7656-12539 (6065 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]# 

Tip: You can see that the cluster now has only 6 nodes left: 3 masters and 3 slaves;

Verification: start 192.168.0.43:6381, which went down earlier, and see whether it is still part of the cluster

[root@node01 ~]# ssh node03
Last login: Sat Aug  8 10:08:40 2020 from node01
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q        Local Address:Port                       Peer Address:Port              
LISTEN     0      128                       *:22                                    *:*                  
LISTEN     0      100               127.0.0.1:25                                    *:*                  
LISTEN     0      128                       *:16379                                 *:*                  
LISTEN     0      128                       *:16380                                 *:*                  
LISTEN     0      128                       *:6379                                  *:*                  
LISTEN     0      128                       *:6380                                  *:*                  
LISTEN     0      128                    [::]:22                                 [::]:*                  
LISTEN     0      100                   [::1]:25                                 [::]:*                  
LISTEN     0      128                    [::]:2376                               [::]:*                  
[root@node03 ~]# redis-server /usr/local/redis/6381/etc/redis.conf 
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q        Local Address:Port                       Peer Address:Port              
LISTEN     0      128                       *:22                                    *:*                  
LISTEN     0      100               127.0.0.1:25                                    *:*                  
LISTEN     0      128                       *:16379                                 *:*                  
LISTEN     0      128                       *:16380                                 *:*                  
LISTEN     0      128                       *:16381                                 *:*                  
LISTEN     0      128                       *:6379                                  *:*                  
LISTEN     0      128                       *:6380                                  *:*                  
LISTEN     0      128                       *:6381                                  *:*                  
LISTEN     0      128                    [::]:22                                 [::]:*                  
LISTEN     0      100                   [::1]:25                                 [::]:*                  
LISTEN     0      128                    [::]:2376                               [::]:*                  
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@node01 ~]# redis-trib.rb check 192.168.0.41:6379
[ERR] Sorry, can't connect to node 192.168.0.43:6382
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460,5656-7655 (5014 slots) master
   2 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:1181-2446,5461-5655,12540-16383 (5305 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1180,7656-12539 (6065 slots) master
   1 additional replica(s)
S: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379 
[ERR] Sorry, can't connect to node 192.168.0.43:6382
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 2 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
[root@node01 ~]# redis-cli -a admin
127.0.0.1:6379> CLUSTER NODES
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596855739865 15 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596855738000 16 connected
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 myself,master - 0 1596855736000 16 connected 2447-5460 5656-7655
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596855737000 15 connected 1181-2446 5461-5655 12540-16383
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596855740877 18 connected 0-1180 7656-12539
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596855738000 16 connected
30a34b27d343883cbfe9db6ba2ad52a1936d8b67 192.168.0.43:6382@16382 handshake - 1596855726853 0 0 disconnected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596855739000 18 connected
127.0.0.1:6379>

  Tip: You can see that when 192.168.0.43:6381 (the former master of the node we just deleted) starts up, it automatically becomes a slave of one of the cluster's masters; from the output above, the cluster is now 3 masters and 4 slaves, with 192.168.0.43:6381 replicating 192.168.0.41:6379. You may also have noticed that the 6382 instance on node03 is no longer running (redis-trib.rb shuts a node down when it is removed with del-node), and in the cluster node table its status has changed to handshake / disconnected;

Verification: start 192.168.0.43:6382 and see whether it rejoins the cluster

[root@node01 ~]# ssh node03
Last login: Sat Aug  8 11:00:50 2020 from node01
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q        Local Address:Port                       Peer Address:Port              
LISTEN     0      128                       *:22                                    *:*                  
LISTEN     0      100               127.0.0.1:25                                    *:*                  
LISTEN     0      128                       *:16379                                 *:*                  
LISTEN     0      128                       *:16380                                 *:*                  
LISTEN     0      128                       *:16381                                 *:*                  
LISTEN     0      128                       *:6379                                  *:*                  
LISTEN     0      128                       *:6380                                  *:*                  
LISTEN     0      128                       *:6381                                  *:*                  
LISTEN     0      128                    [::]:22                                 [::]:*                  
LISTEN     0      100                   [::1]:25                                 [::]:*                  
LISTEN     0      128                    [::]:2376                               [::]:*                  
[root@node03 ~]# redis-server /usr/local/redis/6382/etc/redis.conf 
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q        Local Address:Port                       Peer Address:Port              
LISTEN     0      128                       *:22                                    *:*                  
LISTEN     0      100               127.0.0.1:25                                    *:*                  
LISTEN     0      128                       *:16379                                 *:*                  
LISTEN     0      128                       *:16380                                 *:*                  
LISTEN     0      128                       *:16381                                 *:*                  
LISTEN     0      128                       *:16382                                 *:*                  
LISTEN     0      128                       *:6379                                  *:*                  
LISTEN     0      128                       *:6380                                  *:*                  
LISTEN     0      128                       *:6381                                  *:*                  
LISTEN     0      128                       *:6382                                  *:*                  
LISTEN     0      128                    [::]:22                                 [::]:*                  
LISTEN     0      100                   [::1]:25                                 [::]:*                  
LISTEN     0      128                    [::]:2376                               [::]:*                  
[root@node03 ~]# redis-cli 
127.0.0.1:6379> AUTH admin
OK
127.0.0.1:6379> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596856251000 16 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 myself,master - 0 1596856250000 15 connected 1181-2446 5461-5655 12540-16383
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596856250973 16 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596856253018 18 connected
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596856252000 16 connected 2447-5460 5656-7655
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 master - 0 1596856253000 17 connected
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596856252000 15 connected
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596856254043 18 connected 0-1180 7656-12539
127.0.0.1:6379> quit
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 2 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]# 

Tip: You can see that after we start 192.168.0.43:6382 and check the cluster information again, it rejoins the cluster, but it holds no slots; and with no slots assigned, no keys will be routed to it;
