Reprint notice: please include a link to the original article when reprinting.
0. Environment preparation
The following is my local environment:
- SpringBoot 2.3.x
- MyBatis 3.x
- Redis 5.x
- A Redis instance set up on the local machine or a server and started successfully; here I use Redis 5.x deployed on an Aliyun student instance
- IDEA 2020.x (Eclipse 2020 also works; the choice of editor does not matter)
1. Pitfalls with newer SpringBoot and IDEA versions (you can skip this chapter and come back if you hit similar errors)
If you are using IDEA 2020 with SpringBoot 2.3.x or above, you may run into the same problems I did:
Problem 1:
SpringBoot fails to start with the error: Property 'configuration' and 'configLocation' can not specified with together
This was the first time I encountered this problem, because I had been configuring MyBatis in the yaml configuration file, as shown in the figure:
In this project, however, I use the SSM style and introduce a mybatis-config.xml global configuration file, so the springboot project fails on startup.
Solution:
Check the application.yaml file. If configuration and config-location do appear together, move the contents of the configuration block into the file pointed to by config-location, restart the project, and the problem is resolved.
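For reference, a minimal application.yaml fragment of the shape that works (the package and file names below are the ones used in this article's project):

```yaml
mybatis:
  mapper-locations: com.haust.redisdemo.mapper/*.xml
  type-aliases-package: com.haust.redisdemo.domain
  # Point at the external MyBatis config file; do NOT also add a
  # mybatis.configuration block here, or startup fails with the error above.
  config-location: classpath:/mybatis-config.xml
```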
Suggestion:
- When integrating MyBatis with SpringBoot, put all MyBatis settings into mybatis-config.xml; this keeps the application.yaml file concise and clear.
Problem 2:
IDEA highlights the @EnableAutoConfiguration annotation in red.
This is just an IDEA inspection warning, not a real error. The solution:
For what @EnableAutoConfiguration does, see the reference article "The role of the @EnableAutoConfiguration annotation".
The same applies to other similar IDEA inspection warnings: press ALT+ENTER on the highlighted code and uncheck the corresponding inspection checkbox.
OK, let's get to the point now!
2. SpringBoot integrates Redis as the secondary cache of Mybatis
Question: what are MyBatis' first-level and second-level caches?
For details, see the article: A brief discussion of the MyBatis first- and second-level caches.
- The first-level cache is SqlSession-scoped: it lives from when the SqlSession is opened until it is closed.
- The second-level cache is the global cache, shared across SqlSessions within the same mapper namespace.
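To make the two scopes concrete, here is a toy, in-memory sketch (this is NOT MyBatis internals, only an illustration of the scoping idea): each session holds its own first-level cache, and a session flushes its entries into the shared second-level cache when it closes.

```java
import java.util.HashMap;
import java.util.Map;

// Shared per-namespace cache: stands in for MyBatis' second-level cache
class NamespaceCache {
    final Map<String, Object> shared = new HashMap<>();
}

// Toy session: its private map stands in for the first-level cache
class ToySqlSession {
    private final Map<String, Object> local = new HashMap<>();
    private final NamespaceCache namespace;

    ToySqlSession(NamespaceCache namespace) { this.namespace = namespace; }

    Object selectOne(String sql) {
        if (local.containsKey(sql)) return local.get(sql);                        // first-level hit
        if (namespace.shared.containsKey(sql)) return namespace.shared.get(sql); // second-level hit
        Object row = "row-for:" + sql; // pretend we queried the database here
        local.put(sql, row);
        return row;
    }

    // On close, entries migrate from the session cache to the shared cache
    void close() {
        namespace.shared.putAll(local);
        local.clear();
    }
}
```

A second session opened after the first one closes can then be served from the shared cache without touching the "database".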
2.1 Database SQL:
```sql
CREATE TABLE `score_flow` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `score` bigint(19) unsigned NOT NULL COMMENT 'user points flow',
  `user_id` int(11) unsigned NOT NULL COMMENT 'user primary key id',
  `user_name` varchar(30) NOT NULL DEFAULT '' COMMENT 'username',
  PRIMARY KEY (`id`),
  KEY `idx_userid` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8mb4;

CREATE TABLE `sys_user` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `user_name` varchar(11) CHARACTER SET utf8mb4 DEFAULT NULL COMMENT 'username',
  `image` varchar(11) CHARACTER SET utf8mb4 DEFAULT NULL COMMENT 'profile picture',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=utf8;

CREATE TABLE `user_score` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `user_id` int(11) unsigned NOT NULL COMMENT 'user id',
  `user_score` bigint(19) unsigned NOT NULL COMMENT 'account credits',
  `name` varchar(30) NOT NULL DEFAULT '' COMMENT 'username',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8;
```
2.2 Springboot related configuration: application.yaml
```yaml
server:
  # port
  port: 8080
  # project context path
  servlet:
    context-path: /demo

spring:
  # ===================== Redis =====================
  redis:
    # Redis database index (default 0)
    database: 0
    # Redis server address
    host: 8.XXXXXX.136
    # Redis server port
    port: 6379
    # Redis password (empty by default)
    password: cspXXXXXX29
    jedis:
      pool:
        # maximum number of connections in the pool (negative = no limit)
        max-active: 8
        # maximum blocking wait time (negative = no limit)
        max-wait: -1
        # maximum idle connections in the pool
        max-idle: 8
        # minimum idle connections in the pool
        min-idle: 0
    # connection timeout (milliseconds)
    timeout: 8000

  # ===================== MySQL =====================
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/test2?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=UTC
    username: root
    password: root
    type: com.alibaba.druid.pool.DruidDataSource
    # Druid pool settings
    minIdle: 5
    maxActive: 100
    initialSize: 10
    maxWait: 60000
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: select 'x'
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 50
    removeAbandoned: true
    # monitoring/statistics filters; without 'stat' the monitoring page cannot count SQL.
    # 'wall' is the firewall filter.
    filters: stat # ,wall,log4j
    # enable mergeSql and slow-SQL logging via connectionProperties
    connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
    # merge monitoring data from multiple DruidDataSources
    useGlobalDataSourceStat: true
    # Druid console login account
    druidLoginName: wjf
    # Druid console login password
    druidPassword: wjf
    # enable the prepared-statement cache
    cachePrepStmts: true

mybatis:
  # mapper XML file scanning
  mapper-locations: com.haust.redisdemo.mapper/*.xml
  # entity class scanning
  type-aliases-package: com.haust.redisdemo.domain
  # location of the global MyBatis config file
  config-location: classpath:/mybatis-config.xml
```
2.3 mybatis-config.xml configuration:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration
        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <settings>
        <!-- Print SQL statements -->
        <setting name="logImpl" value="STDOUT_LOGGING"/>
        <!-- Globally enable or disable the mapper (second-level) cache -->
        <setting name="cacheEnabled" value="true"/>
        <!-- Globally enable or disable lazy loading. When disabled, all associations are loaded eagerly. -->
        <setting name="lazyLoadingEnabled" value="true"/>
        <!-- When enabled, touching any lazy property loads the whole object; otherwise each property loads on demand. -->
        <setting name="aggressiveLazyLoading" value="true"/>
        <!-- Allow a single statement to return multiple result sets (depends on driver). Default: true -->
        <setting name="multipleResultSetsEnabled" value="true"/>
        <!-- Use column labels (aliases) instead of column names (depends on driver). Default: true -->
        <setting name="useColumnLabel" value="true"/>
        <!-- Allow JDBC-generated keys. Requires driver support; if true, forces use of generated keys.
             Some incompatible drivers still work. Default: false -->
        <setting name="useGeneratedKeys" value="false"/>
        <!-- How MyBatis auto-maps columns to fields. NONE: no auto-mapping; PARTIAL: non-nested results; FULL: everything -->
        <setting name="autoMappingBehavior" value="PARTIAL"/>
        <!-- Default executor type. SIMPLE: plain; REUSE: reuses prepared statements; BATCH: reuses statements and batches updates -->
        <setting name="defaultExecutorType" value="SIMPLE"/>
        <!-- Timeout: seconds the driver waits for a database response -->
        <setting name="defaultStatementTimeout" value="25"/>
        <!-- Hint value for the driver's result-set fetch size; can be overridden per query -->
        <setting name="defaultFetchSize" value="100"/>
        <!-- When true, disallows RowBounds paging in nested statements -->
        <setting name="safeRowBoundsEnabled" value="false"/>
        <!-- Map underscore column names to camelCase fields -->
        <setting name="mapUnderscoreToCamelCase" value="true"/>
        <!-- First-level cache scope. SESSION: shared within a SqlSession; STATEMENT: no sharing. Default: SESSION -->
        <setting name="localCacheScope" value="SESSION"/>
        <!-- Default is OTHER; set to NULL to avoid the error Oracle reports when inserting null -->
        <setting name="jdbcTypeForNull" value="NULL"/>
        <setting name="lazyLoadTriggerMethods" value="equals,clone,hashCode,toString"/>
    </settings>
</configuration>
```
2.4 Main startup class
```java
@SpringBootApplication
@EnableAutoConfiguration // redundant: @SpringBootApplication already includes it
@MapperScan("com.haust.redisdemo.mapper")
public class XdRedisDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(XdRedisDemoApplication.class, args);
    }
}
```
2.5 User entity class
```java
/**
 * User entity class
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
@Accessors(chain = true)
public class User implements Serializable { // must implement Serializable!

    /** serialization version number */
    private static final long serialVersionUID = -4415438719697624729L;

    /** user id */
    private String id;

    /** username */
    private String userName;
}
```
2.6 UserMapper.java and UserMapper.xml
```java
/**
 * @Auther: csp1999
 * @Date: 2020/11/17/10:36
 * @Description: UserMapper
 */
@Repository
public interface UserMapper {
    void insert(User user);

    void update(User user);

    void delete(@Param("id") String id);

    User find(@Param("id") String id);

    List<User> query(@Param("userName") String userName);

    void deleteAll();
}
```
```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.haust.redisdemo.mapper.UserMapper">
    <select id="query" resultType="com.haust.redisdemo.domain.User">
        select id, user_name from sys_user where 1=1
        <if test="userName != null">
            and user_name like CONCAT('%', #{userName}, '%')
        </if>
    </select>

    <insert id="insert" parameterType="com.haust.redisdemo.domain.User">
        insert into sys_user(id, user_name) values (#{id}, #{userName})
    </insert>

    <update id="update" parameterType="com.haust.redisdemo.domain.User">
        update sys_user set user_name = #{userName} where id = #{id}
    </update>

    <delete id="delete" parameterType="string">
        delete from sys_user where id = #{id}
    </delete>

    <select id="find" resultType="com.haust.redisdemo.domain.User" parameterType="string">
        select id, user_name from sys_user where id = #{id}
    </select>

    <delete id="deleteAll">
        delete from sys_user
    </delete>
</mapper>
```
2.7 redis operation tool class
This utility class makes it convenient to operate Redis. It is just a wrapper around RedisTemplate; you can also skip the wrapper and use RedisTemplate directly.
```java
/**
 * @Auther: csp1999
 * @Date: 2020/11/17/10:08
 * @Description: Redis operation utility class
 */
@Component
public class RedisUtil {

    @Autowired
    private RedisTemplate redisTemplate;

    private static double size = Math.pow(2, 32);

    /**
     * Write a bit to the cache (8 bits = 1 byte)
     */
    public boolean setBit(String key, long offset, boolean isShow) {
        boolean result = false;
        try {
            ValueOperations<Serializable, Object> operations = redisTemplate.opsForValue();
            operations.setBit(key, offset, isShow);
            result = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    /**
     * Read a bit from the cache
     */
    public boolean getBit(String key, long offset) {
        boolean result = false;
        try {
            ValueOperations<Serializable, Object> operations = redisTemplate.opsForValue();
            result = operations.getBit(key, offset);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    /**
     * Write to the cache
     */
    public boolean set(final String key, Object value) {
        boolean result = false;
        try {
            ValueOperations<Serializable, Object> operations = redisTemplate.opsForValue();
            operations.set(key, value);
            result = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    /**
     * Write to the cache with an expiration time (seconds)
     */
    public boolean set(final String key, Object value, Long expireTime) {
        boolean result = false;
        try {
            ValueOperations<Serializable, Object> operations = redisTemplate.opsForValue();
            operations.set(key, value);
            redisTemplate.expire(key, expireTime, TimeUnit.SECONDS);
            result = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    /**
     * Delete the values for the given keys in batch
     */
    public void remove(final String... keys) {
        for (String key : keys) {
            remove(key);
        }
    }

    /**
     * Delete the value for a key
     */
    public void remove(final String key) {
        if (exists(key)) {
            redisTemplate.delete(key);
        }
    }

    /**
     * Check whether the cache contains a key
     */
    public boolean exists(final String key) {
        return redisTemplate.hasKey(key);
    }

    /**
     * Read from the cache
     */
    public Object get(final String key) {
        ValueOperations<Serializable, Object> operations = redisTemplate.opsForValue();
        return operations.get(key);
    }

    /**
     * Hash put
     */
    public void hmSet(String key, Object hashKey, Object value) {
        HashOperations<String, Object, Object> hash = redisTemplate.opsForHash();
        hash.put(key, hashKey, value);
    }

    /**
     * Hash get
     */
    public Object hmGet(String key, Object hashKey) {
        HashOperations<String, Object, Object> hash = redisTemplate.opsForHash();
        return hash.get(key, hashKey);
    }

    /**
     * List push
     */
    public void lPush(String k, Object v) {
        ListOperations<String, Object> list = redisTemplate.opsForList();
        list.rightPush(k, v);
    }

    /**
     * List range
     */
    public List<Object> lRange(String k, long start, long end) {
        ListOperations<String, Object> list = redisTemplate.opsForList();
        return list.range(k, start, end);
    }

    /**
     * Set add
     */
    public void add(String key, Object value) {
        SetOperations<String, Object> set = redisTemplate.opsForSet();
        set.add(key, value);
    }

    /**
     * Set members
     */
    public Set<Object> setMembers(String key) {
        SetOperations<String, Object> set = redisTemplate.opsForSet();
        return set.members(key);
    }

    /**
     * Sorted-set add
     */
    public void zAdd(String key, Object value, double score) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        zset.add(key, value, score);
    }

    /**
     * Sorted-set range by score
     */
    public Set<Object> rangeByScore(String key, double min, double max) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.rangeByScore(key, min, max);
    }

    // Write a presence bit into redis on first load
    public void saveDataToRedis(String name) {
        double index = Math.abs(name.hashCode() % size);
        long indexLong = new Double(index).longValue();
        boolean availableUsers = setBit("availableUsers", indexLong, true);
    }

    // Read the presence bit written on first load
    public boolean getDataToRedis(String name) {
        double index = Math.abs(name.hashCode() % size);
        long indexLong = new Double(index).longValue();
        return getBit("availableUsers", indexLong);
    }

    /**
     * Sorted-set rank of a value
     */
    public Long zRank(String key, Object value) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.rank(key, value);
    }

    /**
     * Sorted-set range by rank, with scores
     */
    public Set<ZSetOperations.TypedTuple<Object>> zRankWithScore(String key, long start, long end) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.rangeWithScores(key, start, end);
    }

    /**
     * Sorted-set score of a value
     */
    public Double zSetScore(String key, Object value) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.score(key, value);
    }

    /**
     * Increment the score of a sorted-set member
     */
    public void incrementScore(String key, Object value, double score) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        zset.incrementScore(key, value, score);
    }

    /**
     * Sorted-set reverse range by score, with scores
     */
    public Set<ZSetOperations.TypedTuple<Object>> reverseZRankWithScore(String key, long start, long end) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.reverseRangeByScoreWithScores(key, start, end);
    }

    /**
     * Sorted-set reverse range by rank, with scores
     */
    public Set<ZSetOperations.TypedTuple<Object>> reverseZRankWithRank(String key, long start, long end) {
        ZSetOperations<String, Object> zset = redisTemplate.opsForZSet();
        return zset.reverseRangeWithScores(key, start, end);
    }
}
```
Register the RedisTemplate bean in the IOC container in RedisConfig:
```java
/**
 * @Auther: csp1999
 * @Date: 2020/11/14/18:44
 * @Description: Redis related configuration class
 */
@Configuration
//@EnableCaching // enable caching
public class RedisConfig {

    /**
     * Register a redisTemplate bean in the IOC container
     */
    @Bean
    public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, String> redisTemplate = new RedisTemplate<>();
        // wire the RedisConnectionFactory into the template
        redisTemplate.setConnectionFactory(factory);
        return redisTemplate;
    }
}
```
2.8 Using redis cache in UserController
Method one:
```java
/**
 * @Auther: csp1999
 * @Date: 2020/11/17/11:38
 */
@RestController
public class UserController {

    /**
     * Cache key prefix
     */
    private static final String key = "userCache_";

    @Resource
    private UserMapper userMapper;

    @Resource
    private RedisUtil redisUtil;

    /**
     * Method 1 to obtain user information by id:
     * check the redis cache first; on a hit return the cached value, on a miss
     * query the database (and save the result into the cache afterwards).
     * Note: the serialization used when setting and getting the value must be consistent.
     */
    @RequestMapping("/getUserCache")
    public User getUserCache(String id) {
        // step1: try redis first
        User user = (User) redisUtil.get(key + id);
        // step2: on a miss, read the value from the DB
        if (user == null) {
            User userDB = userMapper.find(id);
            System.out.println("fresh value from DB id:" + id);
            // step3: if the DB has the record, refresh the redis value
            if (userDB != null) {
                redisUtil.set(key + id, userDB);
                return userDB;
            }
        }
        return user;
    }
}
```
Assume this user record already exists in the database:
Visit the interface for the first time and check the console output: as the figure shows, on the first query by id the user information is not yet in the redis cache, so the request goes straight to the database.
Next, clear the console output and refresh http://localhost:8080/demo/getUserCache?id=1 to simulate a second visit:
The effect is as shown in the figure: no SQL log is printed this time, so the user information was not fetched from the database but directly from the redis cache.
The benefit of adding a cache: anyone who has studied Redis and MySQL knows that MySQL reads data from disk while Redis reads from memory (fast). With large data volumes and high traffic, a hot backend endpoint does not need to query the database on every call; if the data is already cached, it is served from the cache first, which improves efficiency.
Method two:
SpringBoot provides annotations that simplify redis cache operations:
1. The springboot cache abstraction can be combined with redis, ehcache, and other cache providers:
- @CacheConfig(cacheNames = "userInfoCache"): sets the cache name used by this class's methods; it should be unique within the same Redis instance
- @Cacheable (read): marks cacheable methods, i.e. methods whose results are stored in the cache so that subsequent calls with the same parameters return the cached value without actually executing the method
- @CachePut (update/insert): use when the cache must be updated without skipping method execution; the method always runs and its result is put into the cache (according to the @CachePut options)
- @CacheEvict (delete): removes stale or unused data from the cache; it can evict a single entry or, with allEntries = true, the whole cache
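A hand-rolled sketch of these three semantics (a plain map stands in for Redis, and the counter shows when the "database" is really hit; Spring implements this via proxies, so this is only an illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Simulates what @Cacheable / @CachePut / @CacheEvict do around a service method
class UserCacheDemo {
    private final Map<String, String> cache = new HashMap<>(); // stands in for Redis
    int dbCalls = 0; // counts how often the "database" is actually queried

    private String loadFromDb(String id) {
        dbCalls++;
        return "user-" + id;
    }

    // @Cacheable: return the cached value if present, otherwise run the method and cache its result
    String findById(String id) {
        return cache.computeIfAbsent(id, this::loadFromDb);
    }

    // @CachePut: ALWAYS run the method, then overwrite the cached entry with its return value
    String updateUser(String id) {
        String fresh = loadFromDb(id);
        cache.put(id, fresh);
        return fresh;
    }

    // @CacheEvict: drop the entry so the next read goes back to the database
    void deleteById(String id) {
        cache.remove(id);
    }
}
```

The counter makes the difference visible: a second @Cacheable call does not touch the database, while @CachePut always does.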
2. Integration steps of springboot cache:
1) Introduce pom.xml dependency:
```xml
<!-- springboot cache -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
```
2) Enable the caching annotation in RedisConfig: @EnableCaching

```java
@Configuration
@EnableCaching // enable caching
public class RedisConfig {
```
3) Add SpEL (Spring Expression Language) key expressions to the service methods.
UserService.java:
```java
/**
 * CRUD service for the user table
 */
@Service
// When a method in this class uses the cache, the default cache name is userInfoCache
@CacheConfig(cacheNames = "userInfoCache")
// enable transactions
@Transactional(propagation = Propagation.REQUIRED, readOnly = false, rollbackFor = Exception.class)
public class UserService {

    @Autowired
    private UserMapper userMapper;

    /**
     * There must be a return value so it can be put into the cache.
     * If some fields of the saved object are generated by the database,
     * the object must be re-read before caching, not cached as passed in.
     * When a record is added to the database, the cache is updated as well.
     */
    // #p0 means: use the first parameter as the redis key
    // (#p1 would be the second parameter, and so on; here there is only the user parameter)
    // #p0.id means: use user.id as the redis key
    @CachePut(key = "#p0.id")
    // a return value is required, otherwise nothing is placed in the cache
    public User insertUser(User user) {
        userMapper.insert(user);
        // only a few fields of user may be set; others (such as id) may be generated by the database
        return userMapper.find(user.getId());
    }

    /**
     * Use @CachePut when the cache must be updated without skipping method execution:
     * when the record is updated in the database, the cache is updated in sync.
     */
    @CachePut(key = "#p0.id")
    public User updateUser(User user) {
        userMapper.update(user);
        // the update may touch only a few fields, so re-read the full row from the database
        return userMapper.find(user.getId());
    }

    /**
     * @Cacheable checks the cache first; on a hit the database query is not executed.
     * Uses the springboot cache default configuration.
     */
    @Nullable // mark with @Nullable if NULL may be passed in, @Nonnull otherwise
    @Cacheable(key = "#p0")
    public User findById(String id) {
        System.err.println("Get user object for id=" + id + " from the database");
        Assert.notNull(id, "id must not be null");
        return userMapper.find(id);
    }

    /**
     * Delete the entry with the given id from the userInfoCache cache.
     * When a record is deleted from the database, it is also deleted from the cache.
     */
    @CacheEvict(key = "#p0")
    public void deleteById(String id) {
        userMapper.delete(id);
    }

    /**
     * Clear every entry under the cache name userInfoCache (see the class-level annotation).
     * If the database operation fails, the cache is not cleared.
     */
    @CacheEvict(allEntries = true)
    public void deleteAll() {
        userMapper.deleteAll();
    }
}
```
Use UserService in UserController to operate redis cache:
```java
/**
 * Method 2 to obtain user information by id:
 * UserService carries the springboot cache annotations.
 */
@RequestMapping("/getByCache")
public User getByCache(String id) {
    User user = userService.findById(id);
    return user;
}
```
As you can see, method two simplifies the code of method one!
Method three:
Question: what problems remain with springboot cache?
- First: the generated keys are too simple (e.g. userCache::3), which easily causes conflicts.
- Second: no expiration time can be set; by default entries never expire (with a lot of data, never-expiring keys will exhaust memory).
- Third: the serialization method needs configuring; the default is JDK serialization.
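To see why the missing TTL matters, here is a minimal sketch of a TTL-aware map (illustration only; Redis implements expiry natively via EXPIRE, this just shows the idea). The clock is passed in explicitly so the behavior is deterministic:

```java
import java.util.HashMap;
import java.util.Map;

// Cache whose entries carry an expiry timestamp; expired entries are
// dropped lazily on read, so stale data cannot accumulate forever.
class TtlCache {
    private static class Entry {
        final Object value;
        final long expiresAtMillis;
        Entry(Object value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    void put(String key, Object value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry(value, nowMillis + ttlMillis));
    }

    Object get(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (nowMillis >= e.expiresAtMillis) { // past the TTL: lazily expire
            store.remove(key);
            return null;
        }
        return e.value;
    }
}
```

A plain map has no such timestamp, which is exactly the default springboot cache behavior being criticized here.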
Solution:
Customize springboot cache:
1) A custom KeyGenerator: solves the problem that the default generated key (e.g. userCache::3) is too simple and easily collides;
2) A custom cacheManager with cache expiration times: solves the problem that springboot cache has no expiration time by default (entries never expire);
3) A custom serialization method using Jackson or Gson (Jackson here): replaces JDK serialization, the springboot cache default. Why change it? Because the default serialization may not handle values such as dates and nulls well and can produce garbled cache contents.
Steps:
1. Add configuration in RedisConfig:
```java
/**
 * Custom KeyGenerator: fixes the problem that the default springboot cache key
 * is too simple (e.g. userCache::3) and can easily collide.
 */
@Bean
public KeyGenerator simpleKeyGenerator() {
    // o: target object; method: intercepted method; objects: method arguments
    return (o, method, objects) -> {
        /*
         * Build a key that is unique by construction:
         * class name + method name + arguments
         * e.g. UserInfoList::UserService.findByIdTtl[1]
         */
        StringBuilder stringBuilder = new StringBuilder();
        stringBuilder.append(o.getClass().getSimpleName());
        stringBuilder.append(".");
        stringBuilder.append(method.getName());
        stringBuilder.append("[");
        for (Object obj : objects) {
            stringBuilder.append(obj.toString());
        }
        stringBuilder.append("]");
        return stringBuilder.toString();
    };
}

/**
 * Set cache expiration times.
 */
@Bean
public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
    return new RedisCacheManager(
            RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory),
            // default strategy for caches with no explicit configuration: 600s TTL
            this.getRedisCacheConfigurationWithTtl(600),
            // per-cache strategies for the configured cache names
            this.getRedisCacheConfigurationMap()
    );
}

// Map of cache name -> cache configuration (expiration-time policy)
private Map<String, RedisCacheConfiguration> getRedisCacheConfigurationMap() {
    Map<String, RedisCacheConfiguration> redisCacheConfigurationMap = new HashMap<>();
    // cache name UserInfoList: 100s TTL
    redisCacheConfigurationMap.put("UserInfoList", this.getRedisCacheConfigurationWithTtl(100));
    // cache name UserInfoListAnother: 18000s TTL == 5h
    redisCacheConfigurationMap.put("UserInfoListAnother", this.getRedisCacheConfigurationWithTtl(18000));
    return redisCacheConfigurationMap;
}

// Cache configuration with Jackson value serialization and the given TTL
private RedisCacheConfiguration getRedisCacheConfigurationWithTtl(Integer seconds) {
    Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
    ObjectMapper om = new ObjectMapper();
    om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
    om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
    jackson2JsonRedisSerializer.setObjectMapper(om);

    RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
    redisCacheConfiguration = redisCacheConfiguration.serializeValuesWith(
            RedisSerializationContext
                    .SerializationPair
                    .fromSerializer(jackson2JsonRedisSerializer)
    ).entryTtl(Duration.ofSeconds(seconds));
    return redisCacheConfiguration;
}
```
Add the method in UserService.java:
```java
/**
 * @Cacheable checks the cache first; on a hit the method body is not executed.
 * Uses the customized springboot cache configuration.
 */
@Nullable
@Cacheable(value = "UserInfoList", keyGenerator = "simpleKeyGenerator")
public User findByIdTtl(String id) {
    // log
    System.err.println("Get user object for id=" + id + " from the database");
    Assert.notNull(id, "id must not be null");
    return userMapper.find(id);
}
```
Use in controller:
```java
/**
 * Method 3 to obtain user information by id.
 * Uses the customized springboot cache configuration:
 * - entries have an expiration-time policy
 * - custom key format: UserInfoList::UserService.findByIdTtl[1]
 * - values are serialized with jackson
 */
@RequestMapping(value = "/getExpire", method = RequestMethod.GET)
public User findByIdTtl(String id) {
    User user = new User();
    try {
        user = userService.findByIdTtl(id);
    } catch (Exception e) {
        System.err.println(e.getMessage());
    }
    return user;
}
```
Test access to this interface:
When data is returned, the backend has read it from the database or the cache. Now look at the lifetime (TTL) of the corresponding key in the redis cache:
When the key in the redis cache expires:
We visit the interface again to see the effect:
As the figure shows, the redis cache no longer holds the queried user at this point, so the data is fetched from the database again!
5. Summary
If the article is helpful to you, please like or follow and support, thank you~