Node planning
Three masters and three slaves:

| node   | master             | slave               |
| ------ | ------------------ | ------------------- |
| node-1 | 192.168.0.142 6379 | 192.168.0.142 26379 |
| node-2 | 192.168.0.143 6379 | 192.168.0.143 26379 |
| node-3 | 192.168.0.144 6379 | 192.168.0.144 26379 |

Cluster bus (messaging) ports are the data port plus 10000:
16379 is the messaging port for 6379; 36379 is the messaging port for 26379.
Directory Planning

| item                   | path                                   |
| ---------------------- | -------------------------------------- |
| installation directory | /usr/local/redis4                      |
| config file directory  | /usr/local/redis4/conf/redis_cluster/  |
| data storage directory | /database/redis4/                      |
| log directory          | /var/log/redis4/                       |
| pidfile directory      | /var/run/                              |
| service startup user   | root                                   |
Single node deployment
Version: redis 4.0.10
The following takes 192.168.0.142 as an example; the same steps must be repeated on the other two nodes.

| item                   | path                           |
| ---------------------- | ------------------------------ |
| installation directory | /usr/local/redis4              |
| config file directory  | /usr/local/redis4/conf         |
| data storage directory | /database/redis4/redis_6379    |
| log file               | /var/log/redis4/redis_6379.log |
| service startup user   | root                           |
redis instance file naming rules:

| type                             | rule              | example         |
| -------------------------------- | ----------------- | --------------- |
| instance configuration file name | redis_<port>.conf | redis_6379.conf |
| instance data directory name     | redis_<port>      | redis_6379      |
| instance log file name           | redis_<port>.log  | redis_6379.log  |
Server configuration optimization
These optimizations are applied to the server before installing the service; whether to use them depends on the server. If they are skipped, Redis logs a WARNING at startup, but the service still starts and runs fine.
1. Disable Linux Transparent Huge Pages
cd /etc/init.d
wget http://y-tools.up366.cn/tools/mongodb/disable-transparent-hugepages
chmod 755 /etc/init.d/disable-transparent-hugepages
chkconfig --add disable-transparent-hugepages
/etc/init.d/disable-transparent-hugepages start
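To confirm THP is actually disabled, the kernel exposes the current setting; a quick check (the path may vary by distribution):
cat /sys/kernel/mm/transparent_hugepage/enabled    # [never] should be the selected value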
2. /proc/sys/net/core/somaxconn
The upper limit of a listening socket's listen() backlog. The system default is 128, which limits the size of the listen queue for new TCP connections; the redis config file's default tcp-backlog is 511, so the kernel limit should be raised above it.
Edit the /etc/sysctl.conf file and add the following:
net.core.somaxconn = 1024
Run sysctl -p for the change to take effect.
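To verify the new value took effect:
sysctl net.core.somaxconn    # should print net.core.somaxconn = 1024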
3. Memory allocation strategy (vm.overcommit_memory)
Edit the /etc/sysctl.conf file and add the following:
vm.overcommit_memory = 1
Run sysctl -p for the change to take effect.
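Likewise, verify the setting:
sysctl vm.overcommit_memory    # should print vm.overcommit_memory = 1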
Install
Unpack the tarball
tar -xf redis-4.0.10.tar.gz
Enter the directory to compile and install
#cd redis-4.0.10
#make PREFIX=/usr/local/redis4
Note: make test requires tcl 8.5 or newer, so run this first: yum install tcl -y
#make test
#make install PREFIX=/usr/local/redis4
If compilation fails with an error that the gcc command cannot be found:
yum install -y gcc epel-release jemalloc-devel
cd deps/
make hiredis jemalloc linenoise lua
cd ..
make PREFIX=/usr/local/redis4
echo $?
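Once make install has completed, a quick sanity check on the installed binary (using the PREFIX from this guide):
/usr/local/redis4/bin/redis-server --version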
Create directories (config file directory, data directory, log directory)
[root@node-1 redis-4.0.10]# mkdir -p /usr/local/redis4/conf /database/redis4 /var/log/redis4
#Create an instance directory
[root@node-1 redis-4.0.10]# mkdir -p /database/redis4/redis_6379
Modify the configuration file
[root@node-1 redis-4.0.10]# cp redis.conf /usr/local/redis4/conf/
[root@node-1 redis-4.0.10]# vim /usr/local/redis4/conf/redis.conf
1. daemonize yes                              # run in the background: yes/no
2. pidfile /var/run/redis_6379.pid            # pid file path
3. port 6379                                  # instance port
4. bind 192.168.0.142                         # listen address
5. logfile "/var/log/redis4/redis_6379.log"   # log file path
6. dbfilename dump.rdb                        # dump file name
7. dir /database/redis4/redis_6379/           # data storage directory
8. appendonly yes                             # enable AOF persistence: yes/no
start the service
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis.conf
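To confirm the instance is up, a quick check (IP taken from this guide's plan; it should return PONG):
/usr/local/redis4/bin/redis-cli -h 192.168.0.142 -p 6379 ping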
Cluster deployment
The following takes 192.168.0.142 as an example; the same steps must be repeated on the other two nodes.
Prepare the environment
1. Add epel source
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
2. Install the ruby environment
yum -y install ruby ruby-devel rubygems rpm-build
gem install redis -v 3.3.5
If an error reports that the ruby version is too low, see the solution in the Redis problem summary.
Note: the redis library installed via ruby gem must not be version 4.0, otherwise reshard will report an error when migrating slots.
If gem install redis -v 3.3.5 produces no response, install the gem manually and re-run it; refer to: https://blog.csdn.net/wangshuminjava/article/details/80284810
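A quick check that the right gem version is in place:
gem list redis    # should list redis (3.3.5)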
3. Create a data storage directory
mkdir -p /database/redis4/redis_26379
4. Create a configuration file directory
cd /usr/local/redis4/conf
mkdir redis_cluster/{6379,26379} -p
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf    # remember to change the port
Edit the configuration files redis_6379.conf and redis_26379.conf and add the settings below (remember to change the port; the dir, logfile, and pidfile paths must have their port changed as well). A full sketch follows the list.
cluster-enabled yes
cluster-config-file nodes6379.conf <-- Note the port modification
cluster-node-timeout 10000
cluster-require-full-coverage no
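For reference, a minimal sketch of what redis_cluster/6379/redis_6379.conf might contain after these edits (values taken from this guide's single-node config plus the cluster settings above; the 26379 instance swaps the port everywhere):
bind 192.168.0.142
port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
logfile "/var/log/redis4/redis_6379.log"
dir /database/redis4/redis_6379/
appendonly yes
cluster-enabled yes
cluster-config-file nodes6379.conf
cluster-node-timeout 10000
cluster-require-full-coverage no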
5. Set the environment variable PATH
cp -a /root/redis-4.0.10/src/redis-trib.rb /usr/local/redis4/bin/
vim /etc/profile
PATH=$PATH:/usr/local/redis4/bin
source /etc/profile
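A quick check that the tool is now on the PATH:
which redis-trib.rb    # should print /usr/local/redis4/bin/redis-trib.rb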
start all nodes
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf
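Before creating the cluster it is worth confirming that every instance is up (repeat per node; cluster_state:fail is expected until the cluster is created):
ps -ef | grep redis-server
redis-cli -h 192.168.0.142 -p 6379 cluster info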
Manage and create the cluster with the redis-trib.rb cluster management tool that ships with redis (run this on 142 only):
redis-trib.rb create --replicas 1 192.168.0.142:6379 192.168.0.143:6379 192.168.0.144:6379 192.168.0.144:26379 192.168.0.143:26379 192.168.0.142:26379
The --replicas parameter specifies how many slave nodes each master in the cluster gets; here it is set to 1.
Can I set the above configuration? (type 'yes' to accept): yes    # interactive prompt in the middle asking you to confirm the node layout; answer yes
.....
.....
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
All 16384 slots are allocated and the cluster is created successfully. Note: the node addresses given to redis-trib.rb must be nodes that hold no slots/data, otherwise it refuses to create the cluster.
Check cluster status
[root@node-1 conf]# redis-trib.rb check 192.168.0.142:6379    # any node can be specified
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
View cluster information
[root@node-1 conf]# redis-trib.rb info 192.168.0.142:6379    # any node works
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 5461 slots | 1 slaves.
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 5462 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
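The same state can also be read without redis-trib.rb, directly via redis-cli:
redis-cli -h 192.168.0.142 -p 6379 cluster info    # expect cluster_state:ok and cluster_slots_assigned:16384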
add node
Add master node
1. Repeat the environment-preparation steps for the new instance, taking care to change the port.
2. Join the node to the cluster.
Method 1: redis-trib.rb add-node 192.168.0.142:6380 192.168.0.142:6379    (new master node ip:port, then any existing node's ip:port)
Method 2: connect to any node:
[root@node-1 6381]# redis-cli -c -h 192.168.0.142 -p 6379
192.168.0.142:6379> cluster meet 192.168.0.142 6380
OK
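A quick way to confirm the new node joined:
redis-cli -h 192.168.0.142 -p 6379 cluster nodes | grep 6380    # the new node should appear as a master with no slots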
Allocate slots to the new node
Before allocation
# get the node IDs first
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596789801413 0 connected
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596789800000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596789800000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596789796000 7 connected 0-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596789802416 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596789800411 6 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596789799408 3 connected 11672-16383
#6380 is still empty
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 6961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
start allocating
[root@node-1 conf]# redis-trib.rb reshard 192.168.0.142:6380
>>> Performing Cluster Check (using node 192.168.0.142:6380)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots: (0 slots) master
0 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:11672-16383 (4712 slots) master
1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:6212-10922 (4711 slots) master
1 additional replica(s)
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-6211,10923-11671 (6961 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000    # first prompt: how many slots to allocate to the new node
What is the receiving node ID? 2e9f699fde48fcfbc566a8f14d21be85c66dc062 #ID of the new node
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Source node #2:done
# 'all' redistributes slots from all masters;
# alternatively, list the IDs of the masters to take slots from, one per line, ending with done
Ready to move 2000 slots.
Source nodes:
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-6211,10923-11671 (6961 slots) master
1 additional replica(s)
Destination node:
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots: (0 slots) master
0 additional replica(s)
Resharding plan:
Moving slot 0 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Moving slot 1 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Moving slot 2 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
..............
..............
Do you want to proceed with the proposed reshard plan (yes/no)? yes    # confirm to proceed with the slot migration
..............
................
Assignment complete
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
add slave node
before adding
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
add node
Repeat the environment-preparation steps to create the new slave node instance (here port 26380).
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596793698059 8 connected 0-1999
#add node
Command format: redis-trib.rb add-node --slave --master-id <master node id> <new node ip:port> <existing cluster node ip:port>
[root@node-1 conf]# redis-trib.rb add-node --slave --master-id 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:26380 192.168.0.142:6379
>>> Adding node 192.168.0.142:26380 to cluster 192.168.0.142:6379
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:2000-6211,10923-11671 (4961 slots) master
1 additional replica(s)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots:0-1999 (2000 slots) master
0 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:6212-10922 (4711 slots) master
1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:11672-16383 (4712 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.142:26380 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.0.142:6380.
[OK] New node added correctly.
add complete
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
delete node
delete master node
1. First use reshard to move all of the master's slots away (here all the slots of the master being deleted are migrated to a single node):
redis-trib.rb reshard 192.168.0.142:6379
...
(interactive migration process, as in the reshard example above)
...
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 6712 slots | 2 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
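redis-trib.rb also accepts the reshard parameters on the command line, which skips the interactive prompts; a sketch using the node IDs from this example (--from is the master being emptied, --to the receiving master):
redis-trib.rb reshard --from 2e9f699fde48fcfbc566a8f14d21be85c66dc062 --to 4311abf4f943795c0d117babb714b27b8ed1a80e --slots 2000 --yes 192.168.0.142:6379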
2. Delete the node itself
Command format: redis-trib.rb del-node <any cluster node ip:port> <id of the node to delete>
[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:6380 2e9f699fde48fcfbc566a8f14d21be85c66dc062
>>> Removing node 2e9f699fde48fcfbc566a8f14d21be85c66dc062 from cluster 192.168.0.142:6380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797414173 7 connected
33e74c38f9ff08c725702ba0024b916e3f944a20 192.168.0.142:26380@36380 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797411000 9 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797411166 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797406000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797413170 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797412167 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797410000 9 connected 0-1999 11672-16383
delete slave node
[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:26380 33e74c38f9ff08c725702ba0024b916e3f944a20
>>> Removing node 33e74c38f9ff08c725702ba0024b916e3f944a20 from cluster 192.168.0.142:26380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383
failover
Command: CLUSTER FAILOVER. A manual failover must be initiated on the slave node that is to be promoted.
eg:
192.168.0.144:6380> CLUSTER failover
OK
before transfer
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383
start transfer
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 26379
192.168.0.142:26379>
192.168.0.142:26379> cluster failover
OK
192.168.0.142:26379>
transfer complete
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797862201 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797861198 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797852000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797864205 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 master - 0 1596797861198 10 connected 0-1999 11672-16383
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 slave 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 0 1596797863203 10 connected
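After the failover, the new roles can be double-checked from the promoted node:
redis-cli -h 192.168.0.142 -p 26379 info replication    # role:master is expected after the promotion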