Redis learning notes: master-slave replication, the sentinel mechanism, clusters, and Spring Boot integration with a Redis cluster for distributed session management

These notes are based on a video series by a Bilibili (station B) uploader.
The article is fairly long; if needed, use the table of contents to jump straight to the part you want.
The notes also draw on many other authors' articles; thanks to all of them.

1. Basic knowledge

1. Memory-based key-value database
2. Written in C, with client APIs for many languages; roughly 110,000 SETs and 81,000 GETs per second
3. Supports data persistence
4. Values can be strings, hashes, lists, sets, and sorted sets

1.1 usage scenario

  1. Fetching the latest N items
  2. Leaderboards, taking the top N items (e.g. the top 10 by popularity)
  3. Precise expiration times
  4. Counters
  5. Real-time systems, anti-spam systems
  6. pub/sub, for building real-time messaging systems
  7. Building message queues
  8. Caching

1.2 application of different types of Value in the Internet

  • String: caching, rate limiting, counters, distributed locks, distributed sessions
  • Hash: storing user information, user home-page visit counts, combined queries
  • List: timelines (e.g. a Weibo follow feed), simple queues
  • Set: likes, dislikes, tags, friend relationships
  • ZSet: leaderboards

1.3 common operation instructions

Accessing redis from the command line:
redis-cli.exe -h 127.0.0.1 -p 6379 (if the redis service is deployed locally, the -h and -p parameters can be omitted)

select 0 selects database 0 (the first library)

move mykey 1 moves the key from the current database to database 1; if the key already exists in the target database, the move fails


key

keys * lists all keys
flushdb clears the current library
randomkey returns a random key
type key returns the type of the value stored at the key

set key1 value1 sets a key
get key1 gets the value of a key
mset key1 value1 key2 value2 key3 value3 sets key-value pairs in batch
mget key1 key2 key3 gets values in batch
del key1 deletes a key
exists key checks whether a key exists
expire key 10 sets a 10-second expiration on the key
pexpire key 1000 sets a 1000-millisecond expiration on the key
persist key removes the expiration from the key so it never expires
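
For example, a quick redis-cli session tying a few of these together (the key name is illustrative; ttl shows the remaining lifetime):

    127.0.0.1:6379> set user:1 alice
    OK
    127.0.0.1:6379> expire user:1 10
    (integer) 1
    127.0.0.1:6379> ttl user:1
    (integer) 10
    127.0.0.1:6379> persist user:1
    (integer) 1
    127.0.0.1:6379> ttl user:1
    (integer) -1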

string

getrange name 0 -1 string slicing; returns an empty string if nothing falls in the range
getset name new_cxx sets a new value and returns the old one
mset key1 value1 key2 value2 batch set
mget key1 key2 batch get
setnx key value sets only if the key does not exist (set if not exists)
setex key seconds value sets a value with an expiration time
setrange key index value overwrites the value starting at index
incr age increments by 1
incrby age 10 increments by 10
decr age decrements by 1
decrby age 10 decrements by 10
incrbyfloat key value increments or decrements by a floating-point number
append key value appends to the string
strlen key returns the string length
getbit/setbit/bitcount/bitop bit operations

hash

hset myhash name cxx sets a field
hget myhash name gets a field
hmset myhash name cxx age 25 note "i am notes" sets several fields at once
hmget myhash name age note gets several fields at once
hgetall myhash gets all fields and values
hexists myhash name checks whether a field exists
hsetnx myhash score 100 sets a field only if it does not exist
hincrby myhash id 1 increments a field
hdel myhash name deletes a field
hkeys myhash returns only the field names
hvals myhash returns only the values
hlen myhash returns the number of fields

list

lpush mylist a b c pushes on the left
rpush mylist x y z pushes on the right
lrange mylist 0 -1 returns the elements in the given range
lpop mylist pops an element from the left
rpop mylist pops an element from the right
llen mylist returns the list length
lrem mylist count value removes up to count occurrences of value
lindex mylist 2 returns the element at the given index
lset mylist 2 n sets the element at the index (the index must exist and stay within bounds)
ltrim mylist 0 4 trims the list, keeping only the given range
linsert mylist before a value inserts before the first match, scanning left to right
linsert mylist after a value inserts after the first match
rpoplpush mylist mylist2 pops from one list and pushes onto another

lpushx mylist value pushes only if the list already exists; otherwise the push is cancelled

set (members are not repeatable)

sadd myset redis adds a member to the set
smembers myset lists the members
srem myset set1 removes the given member
sismember myset set1 checks whether a member is in the set (returns 0 or 1)
scard key_name returns the number of members
sdiff | sinter | sunion set operations: difference | intersection | union
srandmember myset returns a random member without removing it
spop myset pops a member (sets are unordered, so the popped member is random, and it is removed from the set)

smove sourceset destset member moves a member from the source set to the destination set (both must be sets)

zset (sorted set; members are not repeatable)

zadd zset 1 one
zadd zset 2 two
zadd zset 3 three
zincrby zset 1 one increases a member's score
zscore zset two returns a member's score
zrange zset 0 -1 withscores lists all members in ascending order (withscores also prints the scores)

zrevrange zset 0 -1 withscores lists all members in descending order (withscores also prints the scores)

zrangebyscore zset 10 25 withscores returns the members within a score range
zrangebyscore zset 10 25 withscores limit 1 2 paging
zrevrangebyscore zset 25 10 withscores returns the members within a score range, descending (the max score comes first)
zcard zset returns the number of members
zcount zset 10 25 returns the number of members within the score range
zrem zset one two removes one or more members
zremrangebyrank zset 0 1 removes members by rank range
zremrangebyscore zset 0 1 removes members by score range
zrank zset member returns a member's rank in ascending order (index starts at 0)
zrevrank zset member returns a member's rank in descending order
zinterstore stores the intersection of several zsets in a destination key
zunionstore rank:last_week 7 rank:20150323 rank:20150324 rank:20150325 weights 1 1 1 1 1 1 1 stores the weighted union of 7 zsets

Sort:
sort mylist sorts the list numerically
sort mylist alpha desc limit 0 2 sorts alphabetically, descending, with paging
sort mylist by it:* desc sorts by external keys (the by pattern)
sort mylist by it:* desc get it:* the get pattern returns the values of the external keys instead of the list elements
sort mylist by it:* desc get it:* store sorted the store parameter saves the result of the sort query as a list
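
A worked example of the by/get/store pattern (all key names here are illustrative):

    rpush mylist 1 2 3
    mset weight_1 30 weight_2 10 weight_3 20
    mset obj_1 a obj_2 b obj_3 c
    # sort the ids in mylist by the weight_* keys, returning the matching obj_* values
    sort mylist by weight_* get obj_*                    # returns b, c, a
    # store the result as a plain list named sorted:objs
    sort mylist by weight_* get obj_* store sorted:objs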

Subscription and publication:
Subscribe to a channel: subscribe chat1
Publish a message: publish chat1 "hello ni hao"
View channels: pubsub channels
View the number of subscribers of a channel: pubsub numsub chat1
Unsubscribe: unsubscribe chat1, punsubscribe java*
Subscribe to a group of channels by pattern: psubscribe java*

redis transactions:
Properties: isolation, atomicity
Steps: start the transaction (multi), queue the commands, then commit the transaction (exec)
multi // start the transaction
sadd myset a b c
sadd myset e f g
lpush mylist aa bb cc
lpush mylist dd ff gg
exec // commit the transaction and run the queued commands

Server management

dump.rdb
appendonly.aof
//bgrewriteaof asynchronously rewrites the AOF (append-only file),
//creating an optimized, smaller version of the current AOF file

//bgsave asynchronously saves the data to disk in the background, creating a dump.rdb file in the current directory
//save synchronously saves the data to disk; it blocks the main process, so other clients cannot connect

//client kill closes a client connection
//client list lists all clients

//Set a name for the client
client setname myclient1
client getname

config get port
//config rewrite rewrites the redis configuration file

RDB snapshot triggers in the configuration file:
save 900 1
save 300 10
save 60 10000

AOF backup settings: appendonly yes enables persistence; appendfsync everysec syncs to disk once per second

Command summary: bgsave asynchronously saves data to disk (snapshot save); lastsave returns the unix timestamp of the last successful save to disk; shutdown synchronously saves the data and shuts down the redis server; bgrewriteaof compresses the AOF file

2. Persistence mechanism

2.1 Snapshot mechanism

  1. A snapshot writes all data at a point in time to the hard disk. This is redis's default persistence method; the saved file ends in .rdb, so this method is also called the RDB method.


2. Snapshot generation method:

  • Client: the BGSAVE and SAVE commands

    1. When the client issues BGSAVE (background save), redis calls fork to create a child process; the child writes the snapshot to disk while the parent continues handling command requests. This means that under BGSAVE, redis command requests are not blocked.

    About fork: when a process creates a child process, the operating system creates a copy of the process. On unix-like systems this is optimized with copy-on-write: at first the parent and child share the same memory pages, and a page is only copied when one of them writes to it.

    2. When the SAVE command is used, the redis server executes the snapshot in the main process and responds to nothing else until the snapshot completes.

  • Automatic triggering via server configuration (configured in redis.conf)

    1. Server configuration mode, automatically triggered when conditions are met

      # BGSAVE is triggered automatically whenever any one of the following conditions is met
      save 900 1
      save 300 10
      save 60 10000
      
    2. The server receives the client shutdown command

      After receiving a shutdown instruction, the server calls save to take a snapshot, and does not respond to other commands while doing so.

2.2 AOF Append Only File

In the default configuration AOF is off. To turn it on, find the appendonly option in the redis.conf configuration file and set it to yes:

appendonly yes

Synchronization frequency:

  • always [not recommended]

    Every write command is synchronized to the hard disk.

    This synchronization mode minimizes data loss, but hard-disk I/O is limited and can become the redis bottleneck: a mechanical hard disk syncs roughly 200 commands per second, while a solid-state drive can reach millions. However, continuously writing tiny amounts of data can cause write amplification and drastically shorten an SSD's lifespan, so SSD users should use this mode with caution.

  • everysec [recommended]

    Syncs once per second, explicitly flushing that second's write commands to disk.

    Performance is almost indistinguishable from running without syncing, and at most about one second of writes is lost if the system crashes.

  • no [not recommended]

    redis performs no synchronization itself; the operating system decides when to flush to disk.

The two persistence methods can be enabled or disabled independently. If both are enabled, redis prefers the AOF file when restoring data on startup, because AOF is relatively safer.

Although AOF is the safer persistence method, the AOF file accumulates redundant records, which makes it grow larger and larger.

2.3 AOF rewriting mechanism

Rewriting reason: the AOF file is constantly redundant, resulting in the file is too large and difficult to maintain.

Rewriting purpose: reduce redundant instructions and reduce the volume of AOF files.

Overridden settings:

  • Client command execution

    The BGREWRITEAOF command rewrites the AOF file in the background without blocking redis commands

  • Automatically triggered by server configuration mode

    Configure auto-aof-rewrite-percentage and auto-aof-rewrite-min-size in the redis.conf configuration file. When AOF persistence is enabled, an AOF rewrite is triggered automatically according to these settings:

    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    # With these settings, a rewrite is triggered automatically once the AOF file exceeds 64mb and has grown by 100% since the last rewrite
    

Rewriting principle: the rewrite does not read or parse the old AOF file; instead, redis generates a new AOF file from the current in-memory data and replaces the old file with it.

3. Java operation Redis

3.1 environmental preparation

    <!--Import the jedis dependency-->
    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>3.0.0</version>
    </dependency>

3.2 operating with Jedis objects

        //Create the jedis client object
        Jedis jedis = new Jedis("localhost", 6379);
        //Select a library; library 0 is used by default
        jedis.select(0);
        //Get all keys in redis
//        Set<String> keys = jedis.keys("*");
//        keys.forEach(key -> System.out.println("key: " + key));
        //Clear all keys
//        jedis.flushAll();
        //Delete a key
//        Long name = jedis.del("name");
//        System.out.println(name);
        //Set a timeout
//        Long age = jedis.expire("addr", 10);
//        System.out.println(age);
        //Get a random key
//        String s = jedis.randomKey();
//        System.out.println(s);
        //Rename a key
//        String rename = jedis.rename("age", "newage");
//        System.out.println(rename);
        //View the value type
        String addr = jedis.type("class");
        System.out.println(addr);
        //Release the connection
        jedis.close();
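
Jedis mirrors the native command names almost one to one, so the command reference in section 1.3 maps directly onto method calls. A minimal sketch of a few value-type operations (key names are illustrative):

        //Sketch: string, hash and list operations with Jedis
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.set("greeting", "hello");
        jedis.append("greeting", " redis");           // string append
        jedis.hset("user:1", "name", "alice");        // hash field
        jedis.lpush("queue", "job1", "job2");         // list push
        System.out.println(jedis.get("greeting"));    // hello redis
        System.out.println(jedis.hget("user:1", "name"));
        jedis.close();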

4. Spring Boot integrates redis

Spring Boot Data Redis provides RedisTemplate and StringRedisTemplate. StringRedisTemplate is a subclass of RedisTemplate, and the two expose essentially the same methods; the difference is that the key and value generics of RedisTemplate are Object, while those of StringRedisTemplate are String.

This means RedisTemplate can read and write any object that can be serialized and deserialized, while StringRedisTemplate only supports String operations.

4.1 import dependency

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
            <version>2.3.4.RELEASE</version>
        </dependency>

4.2 configuration parameters

spring.redis.host=localhost
spring.redis.port=6379
spring.redis.database=0
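
If you also want connection pooling (with commons-pool2 on the classpath), Spring Boot 2.x exposes pool properties as well; the property names below are standard, but the values are illustrative rather than tuned recommendations:

spring.redis.lettuce.pool.max-active=8
spring.redis.lettuce.pool.max-idle=8
spring.redis.lettuce.pool.min-idle=0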

4.3 building test classes

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.util.Set;

@SpringBootTest(classes = SpringBootRedisApplication.class)
public class TestRedis {
    @Autowired
    RedisTemplate redisTemplate;

    @Autowired
    StringRedisTemplate stringRedisTemplate;

    @BeforeEach
    public void setRedisTemplate(){
        this.redisTemplate.setKeySerializer(new StringRedisSerializer());
        this.redisTemplate.setValueSerializer(new StringRedisSerializer());
    }
    @Test
    public void test(){
        redisTemplate.opsForValue().set("fromSpringBootTest","hello redis");
        String fromSpringBootTest = (String) redisTemplate.opsForValue().get("fromSpringBootTest");
        System.out.println(fromSpringBootTest);
        Boolean age = redisTemplate.hasKey("addr");
        System.out.println(age);
        Set<String> keys = stringRedisTemplate.keys("*");
        keys.forEach(key -> System.out.println("key:" + key + ",value:" + stringRedisTemplate.opsForValue().get(key)));
    }
}

Two problems encountered during testing:

  1. When starting the test method, IDEA reports: failed to resolve org.junit.platform:junit-platform-launcher

    Solution: add the corresponding dependency to the pom:

    <dependency>
        <groupId>org.junit.platform</groupId>
        <artifactId>junit-platform-launcher</artifactId>
        <scope>test</scope>
    </dependency>
    
  2. When using RedisTemplate to write to redis, the key is written with a garbled binary prefix

    Solution: this happens because no KeySerializer was specified for redisTemplate, so it falls back to JDK serialization (JdkSerializationRedisSerializer), which writes raw serialized bytes instead of a plain string key. Specify a string serializer explicitly:

   this.redisTemplate.setKeySerializer(new StringRedisSerializer());

Note that if you operate on Hash types, you also need to set the hashKey serialization scheme:

   this.redisTemplate.setHashKeySerializer(new StringRedisSerializer());
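
With both serializers in place, hash access works as expected; a short illustrative snippet (key and field names are made up):

   //Sketch: hash access through RedisTemplate once key and hashKey serializers are set
   redisTemplate.opsForHash().put("user:1", "name", "alice");
   Object name = redisTemplate.opsForHash().get("user:1", "name");
   System.out.println(name); // alice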

4.4 binding API

Bound API: using the binding API to operate redis in Spring Boot is a friendlier way to work with a single key.

Explanation: if you need to perform several operations on the same key, binding the key once simplifies the code.

//Before binding
redisTemplate.opsForValue().set("name","huangwuping");
redisTemplate.opsForValue().append("name","is a kind person");
String name = (String) redisTemplate.opsForValue().get("name");
System.out.println(name);

//After binding
BoundValueOperations name1 = redisTemplate.boundValueOps("name");
name1.set("pingwuhuang");
name1.append("is a good person");
String object = (String) name1.get();
System.out.println(object);

summary

  • For data with both key and value of String type, you can use StringRedisTemplate

  • When using RedisTemplate to store a custom object, the object must be serializable

  • When using String as key and HashKey in RedisTemplate, the serialization scheme should be specified. StringRedisSerializer is recommended

  • In the case of multiple operations on a key, the binding api can be used to simplify the code

5. Redis application scenario

5.1 Using the mybatis local cache and redis to implement a distributed cache

Mybatis's local cache: the default implementation is PerpetualCache, which implements the Cache interface and internally uses Java's built-in HashMap, realizing the cache through HashMap get and put operations keyed by CacheKey objects, which adapt to more complex caching scenarios.

To enable the local cache, simply add a single <cache/> tag to the mapper XML file:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="im.hwp.spring_boot_redis_cache.DAO.UserDao">
    <cache/>
    <select id="findAll" resultType="User">
        select name,age,birthday,addr from user
    </select>
</mapper>
  1. Customize a RedisCache class implementing the Cache interface, and implement its putObject, getObject and clear methods:
   package im.hwp.spring_boot_redis_cache.cache;
   
   import im.hwp.spring_boot_redis_cache.utils.ApplicationContextUtils;
   import org.apache.ibatis.cache.Cache;
   import org.springframework.beans.factory.annotation.Autowired;
   import org.springframework.data.redis.core.RedisTemplate;
   import org.springframework.data.redis.serializer.StringRedisSerializer;
   import java.util.concurrent.locks.ReadWriteLock;
   
   public class RedisCache implements Cache {
   
       private final String id;
       //A constructor taking the cache id must exist (mybatis passes the mapper namespace as the id)
       public RedisCache(String id){
           System.out.println("=====id:" + id + " =======");
           this.id = id;
       }
       //Returns the unique ID of the cache
       @Override
       public String getId() {
           return this.id;
       }
   
       //Put in cache
       @Override
       public void putObject(Object key, Object value) {
           RedisTemplate redisTemplate = (RedisTemplate) ApplicationContextUtils.getBean("redisTemplate");
           redisTemplate.setKeySerializer(new StringRedisSerializer());
           redisTemplate.setHashKeySerializer(new StringRedisSerializer());
           redisTemplate.opsForHash().put(id.toString(),key.toString(),value);
       }
   
       @Override
       public Object getObject(Object key) {
           RedisTemplate redisTemplate = (RedisTemplate) ApplicationContextUtils.getBean("redisTemplate");
           Object value = redisTemplate.opsForHash().get(id.toString(), key.toString());
           return value;
       }
   	//This method is a reserved method of mybatis and may be used in subsequent versions
       @Override
       public Object removeObject(Object o) {
           return null;
       }
    	//clear is implemented because inserts, deletes and updates would otherwise leave stale entries in the redis cache.
       //Therefore, on any insert, delete or update, the cached entries for this namespace are cleared.
       @Override
       public void clear() {
           //When any one of them is added, deleted or modified, the redis cache needs to be emptied
           RedisTemplate redistemplate = (RedisTemplate) ApplicationContextUtils.getBean("redisTemplate");
           redistemplate.setKeySerializer(new StringRedisSerializer());
           redistemplate.setHashKeySerializer(new StringRedisSerializer());
           redistemplate.delete(id.toString());
           System.out.println("eliminate id:" + id.toString() + " The corresponding cache has been executed");
   
       }
   
       @Override
       public int getSize() {
           return 0;
       }
   
       @Override
       public ReadWriteLock getReadWriteLock() {
           return null;
       }
   }
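
The RedisCache class above fetches the RedisTemplate bean through an ApplicationContextUtils helper that the notes import but never show. A minimal sketch of such a helper (the package matches the import above; the exact original implementation may differ):

   package im.hwp.spring_boot_redis_cache.utils;

   import org.springframework.beans.BeansException;
   import org.springframework.context.ApplicationContext;
   import org.springframework.context.ApplicationContextAware;
   import org.springframework.stereotype.Component;

   //Hypothetical helper: caches the Spring context so that objects not managed
   //by Spring (like the mybatis RedisCache) can look up beans by name
   @Component
   public class ApplicationContextUtils implements ApplicationContextAware {

       private static ApplicationContext context;

       @Override
       public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
           context = applicationContext;
       }

       public static Object getBean(String beanName) {
           return context.getBean(beanName);
       }
   }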
  2. In the mapper XML file, set the cache type to the fully qualified class name of the custom RedisCache class:
   <?xml version="1.0" encoding="utf-8" ?>
   <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
   <mapper namespace="im.hwp.spring_boot_redis_cache.DAO.UserDao">
       <cache type="im.hwp.spring_boot_redis_cache.cache.RedisCache"/>
       <select id="findAll" resultType="User">
           select name,age,birthday,addr from user
       </select>
       <select id="findById" parameterType="String" resultType="User">
           select name,age,birthday,addr from user where name = #{name}
       </select>
       <delete id="deleteByName" parameterType="String">
           delete from user where name = #{name}
       </delete>
   </mapper>
  3. Build a test class to exercise the redis cache:
   package im.hwp.spring_boot_redis_cache;
   
   import im.hwp.spring_boot_redis_cache.entity.User;
   import im.hwp.spring_boot_redis_cache.service.UserService;
   import org.junit.jupiter.api.Test;
   import org.springframework.beans.factory.annotation.Autowired;
   import org.springframework.boot.test.context.SpringBootTest;
   import java.util.List;
   
   @SpringBootTest(classes = SpringBootRedisCacheApplication.class)
   public class TestSql {
       @Autowired
       private UserService userService;
   
       @Test
       public void findAll(){
   //        List<User> all = userService.findAll();
   //        all.forEach(user-> System.out.println(user));
   //        System.out.println("= = = = second query = = = =");
   //        all = userService.findAll();
   //        all.forEach(user-> System.out.println(user));
   //
           User user = userService.findById("pingwuhuang");
           System.out.println(user);
           System.out.println("========Second query========");
           user = userService.findById("pingwuhuang");
           System.out.println(user);
           userService.deleteByName("pingwuhuang");
       }
   }

To clearly see the redis cache at work, enable DAO debug logging in application.properties:

   logging.level.im.hwp.spring_boot_redis_cache.DAO=debug
  4. Question:

    When one DAO's cache is cleared automatically, other DAO namespaces are untouched. For example, adding a user clears the cache for UserDao; but if two tables are related, the caches associated with that relationship should be cleared as well.

    Therefore cache-ref is introduced, so that associated caches are cleared together:

   <?xml version="1.0" encoding="utf-8" ?>
   <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
   <mapper namespace="im.hwp.spring_boot_redis_cache.DAO.JobDao">
       <cache-ref namespace="im.hwp.spring_boot_redis_cache.DAO.UserDao"/>
       <select id="findAll" resultType="Job">
           select company,title from job
       </select>
   </mapper>

After cache-ref is introduced, the hash cache key used by JobDao is no longer im.hwp.spring_boot_redis_cache.DAO.JobDao but im.hwp.spring_boot_redis_cache.DAO.UserDao, so clearing either a Job or a User clears the caches associated with both.

Key layout without cache-ref, and after introducing cache-ref: (screenshots omitted)

  5. Cache optimization strategy

    Optimize the keys put into redis: a key should not be too long, so keep the key design brief. The raw mybatis cache key below, for example, is far too long to make a good redis key:

   # key: -609484135:5256359200:im.hwp.spring_boot_redis_cache.DAO.JobDao.findAll:0:2147483647:select company,title from job:SqlSessionFactoryBean

Algorithm: MD5 (a message-digest algorithm)

Characteristics:

  1. MD5 turns input of any length into a 32-character hexadecimal string
  2. Different content virtually never produces the same MD5 digest
  3. The same content always produces the same MD5 digest (stability)

Recommendation: digest the cache key with MD5 when integrating redis with mybatis.
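
One way to apply this inside the putObject/getObject/clear methods above is Spring's DigestUtils; the helper method below is made up for illustration:

   import java.nio.charset.StandardCharsets;
   import org.springframework.util.DigestUtils;

   //Hypothetical helper for RedisCache: shorten the verbose mybatis key to a 32-character digest
   private String keyToMd5(Object key) {
       return DigestUtils.md5DigestAsHex(key.toString().getBytes(StandardCharsets.UTF_8));
   }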

  6. Interview questions

    1) What is cache penetration?

    Definition: a client repeatedly submits queries for data that exists neither in the redis cache nor in the database. Each request checks redis, finds nothing, and therefore sends a query to the database every single time.

    mybatis's mitigation: after the first database query, redis caches the key with a null value, so the next query returns null directly from redis.

    2) What is a cache avalanche?

    Definition: at some moment, a large portion of the system's cached data expires at the same time just as a flood of client requests arrives; with the caches unavailable, the requests all flow to the database, which may block or, in extreme cases, go down.

    This happens because a large business system has many modules that each set a timeout when caching data; if a large amount of cached data is given the same timeout and expires simultaneously, the requests hit the database all at once.

    Solutions:

    1) Store permanently: not recommended. Some data does not need long-term caching, and keeping it in redis forever wastes storage.

    2) Set different timeouts for different business data, chosen according to the user scenarios and needs; a common trick is adding random jitter, as shown in the sketch after this list. In any case, the data of all business modules must not share the same timeout.
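
A sketch of the jitter approach (base TTL, jitter range and key name are illustrative):

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.TimeUnit;

    //Sketch: spread expirations with random jitter to avoid a cache avalanche
    long baseTtlSeconds = 600;
    long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 300); // up to 5 extra minutes
    redisTemplate.expire("some:business:key", baseTtlSeconds + jitterSeconds, TimeUnit.SECONDS);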

5.2 Redis master-slave replication

The master node serves external requests, while any number of slave nodes can only replicate the master's data and provide redundant backup. A slave cannot fail over automatically and become the master on its own.

Principle analysis: redis master-slave replication has three phases: connection establishment -> data synchronization -> command propagation

  1. Connection establishment

    The slave node internally keeps masterhost and masterport fields, holding the master's ip address and port number.

    1) The slave checks whether the master is reachable through the periodic function replicationCron; if so, it opens a socket connection.

    Once the socket connects, the slave attaches a handler dedicated to replication, responsible for the subsequent RDB file reception and command propagation.

    2) After the master accepts the socket connection, the slave acts as one of the clients connected to the master; the subsequent steps run as commands sent from the slave to the master.

    3) Having become a client, the slave first sends a ping to check that the socket is usable and that the master can currently handle requests. A pong reply from the master means the connection is healthy; otherwise the slave reconnects.

    4) If masterauth is set in the slave's configuration file, the slave authenticates to the master by sending the auth command, with the value of masterauth from the slave's configuration file as the argument.

    5) The slave sends its listening port to the master, which stores it in the slave_listening_port field of the corresponding slave entry; it can be viewed on the master with the info Replication command.

  2. Data synchronization

    1) The slave sends psync2 to the master to request data synchronization.

    2) On receiving it, the master runs BGSAVE to generate an RDB file. While the RDB file is being generated the master keeps accepting client commands, so those commands are put into a command buffer.

    3) When the RDB file is ready, the master sends it to the slave over the socket connection.

    4) The slave receives the RDB file, empties its old data, and restores from the RDB file.

    5) When the restore from the RDB file is finished, the slave notifies the master.

    6) The master sends the buffered commands to the slave (in AOF format).

    7) The slave receives them and executes bgrewriteaof to complete the final data synchronization.

    # Supplement: psync1 (Redis 2.8) and psync2 (Redis 4.0)
    # psync1:
    # Before 2.8, any interruption of replication forced the slave into a full resync, which hurt cluster performance.
    # To solve this, redis 2.8 introduced psync1 together with the replication backlog buffer.
    # The backlog buffer is maintained by redis, 1M by default; a master has exactly one backlog buffer, shared by all slaves.
    	1. When replication is interrupted, the slave sends the master runid plus its current replication offset to the master.
    	2. If the runid matches and the offset is still inside the backlog buffer, a partial resync is done; otherwise a full resync is performed.
    -------- But a slave restart or a master failover still triggers a full resync --------
    # psync2:
    # Introduces master_replid1 + offset and master_replid2 + offset
    # master_replid1: a random 41-byte string, generated the same way as a runid
    # master_replid2: initialized to all zeros; stores the replid1 of the previous master; its offset defaults to -1
    Main steps:
    	1. When redis shuts down, it stores the replication info as auxiliary fields (AUX fields) in the RDB file
    	2. When redis starts up, it loads the replication info back into the relevant fields
    	3. On resynchronization, the slave reports its replid and offset; if they match the master's, and the offset is still in the backlog buffer, a partial resync is performed.
    # Caveat
    	When a slave node is restarted, the master may need to enlarge the replication backlog buffer first: the master keeps receiving commands while the slave reboots, so the slave's recorded offset can get pushed out of the buffer, forcing a full resync once the slave is back. Estimate the slave's restart time and the master's write commands per second, and resize the buffer before restarting the instance.
    #	config set can dynamically adjust the repl-backlog-size of redis
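    # Example (values illustrative): enlarge the backlog before restarting a slave,
    # e.g. 60s restart x 5000 writes/s x ~200 bytes/command is roughly 60mb
    config set repl-backlog-size 64mb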
    	
    
  3. Command propagation

    After the full resync completes, this phase forwards every command the master receives on to the slaves, in essentially the same way as AOF.

  4. Set up master-slave replication

    1) Master node profile

    # bind specifies which ip addresses are allowed to connect to this node; multiple ips are separated by spaces
    # 0.0.0.0 means any ip may connect
    bind 0.0.0.0
    

    2) Slave node profile

    bind 0.0.0.0
    # slaveof <master ip> <master port>
    slaveof 127.0.0.1 6379
    
  5. Once the configuration files are ready, start the master and the slave nodes with them.

    Master node startup and slave node startup (console screenshots omitted; see the example below):
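
A hedged example of starting and verifying the pair from the shell (config file names and the slave port are illustrative):

    # start the master and the slave with their respective config files
    redis-server master-6379.conf
    redis-server slave-6380.conf
    # on the slave, confirm the replication link is up
    redis-cli -p 6380 info Replication   # expect role:slave, master_link_status:up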

5.3 sentinel mechanism

  1. Definition

    Sentinel is redis's high-availability scheme: a system of one or more sentinel instances monitors any number of master servers and all of their slaves, and when a master goes offline it automatically promotes one of that master's slaves to be the new master. In short, the sentinel mechanism is a master-slave replication architecture with automatic failover.

  2. Schematic diagram of the sentinel structure (diagram omitted)

  3. Building the sentinel structure

    1) First, create a sentinel.conf on the server that will host the sentinel node:

    # sentinel monitor <master-name> <ip> <port> <quorum>
    sentinel monitor mymaster 127.0.0.1 6379 1
    

    This configuration tells the sentinel node to periodically monitor the master named mymaster at the given ip and port number.

    The quorum value is the number of sentinel votes required to consider the master failed; it is generally set to half the number of sentinel nodes plus 1.

    2) Start the corresponding redis master-slave node group

    3) Start the sentinel node with either of the following commands:

       # Mode 1
       redis-server s:/redis/sentinel/sentinel.conf --sentinel
       # Mode 2 requires a redis sentinel program, which can be copied from the redis source code
       redis-sentinel s:/redis/sentinel/sentinel.conf
    

    These three steps complete the sentinel setup.

  4. How the sentinel mechanism works

    Consider what can go wrong in a plain master-slave architecture: when the master goes down, keeping the system working requires the following manual steps:

    1) Pick a slave node and run slaveof no one on it to promote it from slave to master.

    2) Point the other slave nodes at the new master.

    3) Tell the callers of the redis service about the new master.

    4) When the old master recovers, make it a slave of the new master.

    These steps are cumbersome and error-prone when done by hand, and they cannot guarantee availability or timeliness; the sentinel mechanism solves exactly this kind of problem and simplifies operations.

    • Basic configuration of sentinel

      sentinel monitor mymaster 127.0.0.1 6379 2
      # In addition to the first line, the general format of other configurations is sentinel [option_name] [master_name] [option_value]
      
      sentinel down-after-milliseconds mymaster 60000 
      # Sentinel will send a heartbeat PING to the master to confirm whether the master is alive. If the master does not respond to PONG within a "certain time range" or replies to an error message, the sentinel will subjectively think that the master is no longer available. The down after milliseconds is used to specify the "certain time range". The unit is milliseconds.
      
      sentinel failover-timeout mymaster 180000
      # If a sentinel fails to complete a failover within this many milliseconds, the failover is considered failed and another sentinel will take it over
      
      sentinel parallel-syncs mymaster 1
      # When a failover master-slave handover occurs, this option specifies the maximum number of slaves that can synchronize the new master at the same time. The smaller the number, the longer it takes to complete the master-slave failover. However, if the number is larger, it means that more slaves are unavailable due to master-slave synchronization. You can set this value to 1 to ensure that only one slave is in a state that cannot process command requests at a time.
      
    • Regular monitoring of Sentinels

      1) Every 10 seconds, each sentinel sends the info command to the master and slave nodes to obtain the latest topology. This is why only the master needs to be configured for the sentinel; the slaves are discovered by issuing info to the master.

      2) Every 2 seconds, each sentinel publishes its judgment of the master, plus its own information, to a dedicated channel (__sentinel__:hello) on the redis master-slave nodes. Every sentinel subscribes to this channel to learn about the other sentinels, which is why the sentinel configuration file does not need to list the other sentinels' ips explicitly.

      3) Every 1 second, each sentinel sends a ping to the other master, slave and sentinel nodes as a heartbeat, to judge whether they are working normally.

      SDOWN (subjectively down): if during heartbeat checks a sentinel receives an invalid reply, or no reply within down-after-milliseconds, it subjectively considers the node down. If that node is the master, the sentinel asks the other sentinels for their judgment via the sentinel is-master-down-by-addr command; when the number of sentinels that consider the master subjectively down exceeds the configured quorum, the node is considered objectively down (ODOWN)

5.4 Redis Cluster

  1. Principle analysis

    Architecture diagram (image omitted)

Architecture notes:

  1. Each master node is responsible for storing data, cluster status and the corresponding relationship between slots and nodes.

  2. Master nodes have a failure-detection mechanism: a node is marked as failed when more than half of the nodes in the cluster consider it failed.

    Voting: all master nodes in the cluster take part; if more than half of the masters time out communicating with a given master, that master is considered dead.

  3. All redis nodes are interconnected with each other (PING-PONG mechanism). Redis uses binary protocol to optimize transmission speed and bandwidth.

  4. The client can directly connect with any node (master or slave) in the cluster to operate the cluster.

  5. The cluster maps all 16384 slots evenly across the physical nodes (master nodes).

  6. If a master node goes down, the cluster's automatic failover promotes its slave to master; if it has no slave, the whole cluster becomes unavailable.

    The whole cluster fails when: 1. a master fails and has no slave, or 2. more than half of the cluster's masters are down.

2. Cluster construction

Install Ruby environment dependency (version > 2.3.0 required)

Online installation:

# If the online download cannot be performed, you need to manually download the compressed file from the official website, and then upload it to the server for installation
wget http://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.5.tar.gz
tar zxvf  ruby-2.3.5.tar.gz
cd ruby-2.3.5
./configure  --prefix=/opt/ruby
ln -s /opt/ruby/bin/ruby /usr/bin/ruby
ln -s /opt/ruby/bin/gem /usr/bin/gem
# Finally, check whether the installation is successful
ruby -v 

Install rubygem redis dependency

# If the download cannot be performed, it needs to be manually downloaded and uploaded to the server
wget http://rubygems.org/downloads/redis-3.3.0.gem
gem install -l redis-3.3.0.gem
# An error may be reported when executing this command:
ERROR:  Loading command: install (LoadError)
        cannot load such file -- zlib
ERROR:  While executing gem ... (NoMethodError)
    undefined method `invoke_with_build_args' for nil:NilClass
# The solutions to the above problems are as follows
yum -y install zlib-devel
cd ruby-2.3.5/ext/zlib
ruby ./extconf.rb
make
make install
# Continue gem installation
gem install -l redis-4.2.2.gem

Create a folder per node and copy the configuration file into each:

mkdir 7000 7001 7002 7003 7004 7005 7006
cp redis-5.0.0/redis.conf 7000/redis.conf
cp redis-5.0.0/redis.conf 7001/redis.conf
cp redis-5.0.0/redis.conf 7002/redis.conf
cp redis-5.0.0/redis.conf 7003/redis.conf
cp redis-5.0.0/redis.conf 7004/redis.conf
cp redis-5.0.0/redis.conf 7005/redis.conf
cp redis-5.0.0/redis.conf 7006/redis.conf

After copying, modify the configuration parameters in each file; the node on port 7000 is shown as an example:

port 7000 # Port number
bind 0.0.0.0 # Open external access
appendonly yes #Enable aof mode
appendfilename "appendonly-7000.aof" # The file name plus the port number is used to distinguish
dbfilename dump-7000.rdb # rdb snapshot mechanism file name
daemonize yes # It runs in the background and does not need to keep the front-end window

Start 7 nodes

bin/redis-server 7000/redis.conf
bin/redis-server 7001/redis.conf
bin/redis-server 7002/redis.conf
bin/redis-server 7003/redis.conf
bin/redis-server 7004/redis.conf
bin/redis-server 7005/redis.conf
bin/redis-server 7006/redis.conf

After startup, check that all seven redis processes are running (e.g. with ps -ef | grep redis).

Create cluster

  • First, copy the redis-trib.rb file from the redis source package to the bin directory

    [root@localhost src]# cp redis-trib.rb ../../bin/redis-trib.rb
    
  • Then run the cluster-creation command in the bin directory

    Note: redis-trib.rb is no longer supported from redis 5.x onward; redis-cli can create the cluster directly:

    redis-cli --cluster help
    Cluster Manager Commands:
      create         host1:port1 ... hostN:portN   #Create cluster
                     --cluster-replicas <arg>      #Number of slave nodes
      check          host:port                     #Check cluster
                     --cluster-search-multiple-owners #Check whether a slot is assigned to multiple nodes at the same time
      info           host:port                     #View cluster status
      fix            host:port                     #Repair cluster
                     --cluster-search-multiple-owners #Fixed the duplicate allocation of slots
      reshard        host:port                     #Specify any node of the cluster to migrate slots and re divide into slots
                     --cluster-from <arg>          #Which source nodes need to be migrated from? You can complete the migration from multiple source nodes, separated by commas. You can pass the node id of the node or -- from all directly. In this way, the source node is all nodes of the cluster. If you do not pass this parameter, you will be prompted to enter it during the migration process
                     --cluster-to <arg>            #slot is the node id of the destination node to be migrated. Only one destination node can be filled in. If this parameter is not passed, the user will be prompted to enter it during the migration process
                     --cluster-slots <arg>         #The number of slot s to be migrated. If this parameter is not passed, the user will be prompted for input during the migration process.
                     --cluster-yes                 #Specify the confirmation input during migration
                     --cluster-timeout <arg>       #Set the timeout for the migrate command
                     --cluster-pipeline <arg>      #Define the number of keys taken out by the cluster getkeysinslot command at one time. If it is not transmitted, the default value is 10
                     --cluster-replace             #replace directly to the target node
      rebalance      host:port                                      #Specify any node of the cluster to balance the number of cluster node slot s 
                     --cluster-weight <node1=w1...nodeN=wN>         #Specifies the weight of the cluster node
                     --cluster-use-empty-masters                    #Setting allows the primary node that is not assigned a slot to participate. It is not allowed by default
                     --cluster-timeout <arg>                        #Set the timeout for the migrate command
                     --cluster-simulate                             #Simulate the rebalance operation and will not actually perform the migration operation
                     --cluster-pipeline <arg>                       #Define the number of keys taken out by the cluster getkeysinslot command at one time. The default value is 10
                     --cluster-threshold <arg>                      #If the migrated slot threshold exceeds the threshold, perform rebalance
                     --cluster-replace                              #replace directly to the target node
      add-node       new_host:new_port existing_host:existing_port  #Add a node. Add the new node to the specified cluster. The primary node is added by default
                     --cluster-slave                                #The new node acts as a slave node, and a master node is randomly selected by default
                     --cluster-master-id <arg>                      #Assign a master node to the new node
      del-node       host:port node_id                              #Delete a given node and shut down the node service after success
      call           host:port command arg arg .. arg               #Execute relevant commands on all nodes of the cluster
      set-timeout    host:port milliseconds                         #Set cluster node timeout
      import         host:port                                      #Import external redis data into the cluster
                     --cluster-from <arg>                           #Import the data of the specified instance into the cluster
                     --cluster-copy                                 #Specify copy when migrating
                     --cluster-replace                              #Specify replace when migrating
      help           
    
    For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
    
    # redis-cli --cluster create 
    redis-cli --cluster create 192.168.1.188:7000 192.168.1.188:7001 192.168.1.188:7002 192.168.1.188:7003 192.168.1.188:7004 192.168.1.188:7005 --cluster-replicas 1
    

    After creation, the output lists the slot ranges assigned to each master and the master/slave pairing (screenshot omitted).

  3. Cluster use

    View cluster status – select any node in the cluster

    redis-cli --cluster check 192.168.1.188:7000
    


Connect to the cluster: use redis-cli against any node of the cluster; the -c flag declares cluster mode:

redis-cli -h 192.168.1.188 -p 7000 -c

When a value is accessed, the CRC16 algorithm computes the key's slot (slot = CRC16(key) mod 16384), and the request is redirected to the node that owns that slot.

Restarting the redis cluster preserves the original cluster configuration and data:

# First, kill all redis processes
pkill -9 redis
# Restart each redis node through the configuration file
bin/redis-server 7000/redis.conf
# After each node is started, the cluster state will be restored automatically
bin/redis-cli -c -h 172.31.7.188 -p 7000
172.31.7.188:7000> hset abc aaa bbb
-> Redirected to slot [7638] located at 172.31.7.188:7001
(integer) 1

5.5 Spring Boot integration redis cluster

  1. Import dependency

            <!--Redis related dependencies-->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-data-redis</artifactId>
                <!--exclude lettuce; it kept throwing errors in this setup-->
                <exclusions>
                    <exclusion>
                        <groupId>io.lettuce</groupId>
                        <artifactId>lettuce-core</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <!--use jedis instead, since lettuce kept erroring-->
            <dependency>
                <groupId>redis.clients</groupId>
                <artifactId>jedis</artifactId>
            </dependency>
            <!--spring session redis dependency-->
            <dependency>
                <groupId>org.springframework.session</groupId>
                <artifactId>spring-session-data-redis</artifactId>
            </dependency>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-pool2</artifactId>
            </dependency>
    
  2. Add the @EnableRedisHttpSession annotation on the startup class; this hands session management over to redis:

    @SpringBootApplication
    @EnableRedisHttpSession
    public class RedisSessionManageApplication {
    
        public static void main(String[] args) {
            SpringApplication.run(RedisSessionManageApplication.class, args);
        }
    
    }
    
    
  3. Write configuration file

    server.port=8080
    server.servlet.context-path=/
    
    # Cluster nodes in host:port form, separated by commas
    spring.redis.cluster.nodes=172.31.7.188:7000,172.31.7.188:7001,172.31.7.188:7002,172.31.7.188:7003,172.31.7.188:7004,172.31.7.188:7005
    
  4. Write test class

    @Controller
    @RequestMapping("test")
    public class TestController {
        @RequestMapping("test")
        public void test(HttpServletRequest httpServletRequest,HttpServletResponse response) throws IOException {
            List<String> list = (List<String>) httpServletRequest.getSession().getAttribute("list");
            if(list==null){
                list = new ArrayList<String>();
            }
            list.add("xxxxxxxxx");
            httpServletRequest.getSession().setAttribute("list",list);
            response.getWriter().println("size:" + list.size());
            response.getWriter().print("sessionId:"+httpServletRequest.getSession().getId());
        }
    }
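
    Hitting the endpoint repeatedly shows the list growing while the sessionId stays stable across requests, even across application instances sharing the cluster; for example (illustrative):

    # call the endpoint twice with the same cookie jar
    curl -c cookies.txt -b cookies.txt http://localhost:8080/test/test   # size:1
    curl -c cookies.txt -b cookies.txt http://localhost:8080/test/test   # size:2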
    
  5. Test

    • Session management is handed entirely to redis. Every time you modify an attribute stored in the session, call setAttribute again so the change is flushed to redis.
    • Inspecting the redis nodes shows different sessions stored on different nodes, i.e. distributed session storage.
