Contents
Process of deploying two-level cache
Cache user's popular post list
Optimize the Service layer DiscussPostService
Complete the pressure test with JMeter
Add listener (aggregate report)
Login credentials are not suitable for local caching: in a distributed environment a user's requests may be routed to server A and then to server B, and because the credentials are tied to the user, every server must be able to see them. A distributed cache performs slightly worse than a local cache, mainly because of network overhead. Data queried from the database should be written back to the cache so that database access is avoided as much as possible. Redis works across servers and can cache data of any type.
Process of deploying two-level cache
Check the local cache first. On a miss, access Redis; if Redis also misses, query the database, then write the newly loaded data back to both Redis and the local cache. With this two-level cache, the probability of hitting the database drops sharply and site performance improves greatly. Cache eviction strategies are based on time, frequency, and space.
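The read path above can be sketched in plain Java. This is a minimal illustration only: the three maps stand in for the real Caffeine cache, Redis, and the database, and all names are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

public class TwoLevelCacheDemo {
    // stand-ins for the real stores; names are illustrative only
    static final Map<String, String> localCache = new HashMap<>();
    static final Map<String, String> redis = new HashMap<>();
    static final Map<String, String> database = new HashMap<>(Map.of("post:1", "hello"));

    static String get(String key) {
        String v = localCache.get(key);           // 1. check the local cache first
        if (v != null) return v;
        v = redis.get(key);                       // 2. on a miss, check Redis
        if (v == null) {
            v = database.get(key);                // 3. on a second miss, hit the database
            if (v != null) redis.put(key, v);     // write the loaded value back to Redis...
        }
        if (v != null) localCache.put(key, v);    // ...and to the local cache
        return v;
    }

    public static void main(String[] args) {
        System.out.println(get("post:1"));                    // misses both caches, loads from "DB"
        System.out.println(localCache.containsKey("post:1")); // now cached locally
        System.out.println(redis.containsKey("post:1"));      // and in "Redis"
    }
}
```

Subsequent calls to `get("post:1")` return from the local cache without touching the lower levels.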
Cache user's popular post list
Data suited to caching: data that changes infrequently.
Add the dependency
```xml
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.8.4</version>
</dependency>
```
Configure relevant parameters
Set the number of cached posts (it does not need to be large; users rarely browse past the first few pages) and the cache expiration time (periodic eviction). The eviction mechanism takes two forms: active eviction (when the data changes, clear the cache entry and reload it from the database on the next read) and periodic, time-based eviction.
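Active eviction can be sketched with a plain map: on every write, drop the stale cache entry so the next read reloads from the database. The maps and names here are stand-ins invented for the illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class ActiveEvictionDemo {
    static final Map<Integer, String> cache = new HashMap<>();
    static final Map<Integer, String> db = new HashMap<>();

    static String read(int id) {
        // load from the "database" only on a cache miss
        return cache.computeIfAbsent(id, db::get);
    }

    static void update(int id, String value) {
        db.put(id, value);
        cache.remove(id); // active eviction: drop the stale entry
    }

    public static void main(String[] args) {
        db.put(1, "v1");
        System.out.println(read(1)); // loaded from the DB, now cached
        update(1, "v2");             // write + evict
        System.out.println(read(1)); // reloaded after eviction
    }
}
```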
```properties
# caffeine
caffeine.posts.max-size=15
caffeine.posts.expire-seconds=180
```
Optimize the Service layer DiscussPostService
Core interface Cache:
Two sub interfaces:
- LoadingCache: a synchronous cache; when multiple threads request a missing key at the same time, one thread loads the value while the others queue up and wait for it.
- AsyncCache: an asynchronous cache that supports concurrent access to data.
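The "one thread loads, the others wait" behavior of LoadingCache can be illustrated with the standard library alone. This sketch uses `ConcurrentHashMap.computeIfAbsent` (which also guarantees the loader runs at most once per key) as a stand-in; the counter and names are invented for the demo.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadOnceDemo {
    static final AtomicInteger dbHits = new AtomicInteger();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static String load(String key) {
        dbHits.incrementAndGet(); // count simulated database queries
        return "value-for-" + key;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            // all 8 threads ask for the same missing key at once
            pool.submit(() -> cache.computeIfAbsent("hot-key", LoadOnceDemo::load));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // concurrent callers for the same key block while one computes,
        // so the loader runs exactly once
        System.out.println("db hits = " + dbHits.get());
    }
}
```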
Declare two caches: one for the popular post list and one for the total number of posts.
```java
// post list cache
private LoadingCache<String, List<DiscussPost>> postListCache;

// total post count cache
private LoadingCache<Integer, Integer> postRowCache;

// bind the two properties configured above
@Value("${caffeine.posts.max-size}")
private int maxSize;

@Value("${caffeine.posts.expire-seconds}")
private int expireSeconds;
```
Where the cache check is needed: the cache is not used everywhere. It applies only when userId is 0 (visiting the home page rather than one user's own posts) and orderMode is 1 (viewing popular posts).
```java
public List<DiscussPost> findDiscussPosts(int userId, int offset, int limit, int orderMode) {
    // use the cache only for the home page's popular post list;
    // when users view their own posts (userId != 0), skip the cache
    if (userId == 0 && orderMode == 1) {
        return postListCache.get(offset + ":" + limit);
    }
    logger.info("load post list from DB");
    return discussPostMapper.selectDiscussPosts(userId, offset, limit, orderMode);
}

public int findDiscussPostRows(int userId) {
    if (userId == 0) {
        return postRowCache.get(userId);
    }
    logger.info("load post rows from DB");
    return discussPostMapper.selectDiscussPostRows(userId);
}
```
Cache initialization
```java
// runs once, after dependency injection
@PostConstruct
public void init() {
    // initialize the post list cache
    postListCache = Caffeine.newBuilder()
            .maximumSize(maxSize)
            .expireAfterWrite(expireSeconds, TimeUnit.SECONDS)
            .build(new CacheLoader<String, List<DiscussPost>>() {
                @Nullable
                @Override
                public List<DiscussPost> load(@NonNull String key) throws Exception {
                    if (key == null || key.length() == 0) {
                        throw new IllegalArgumentException("Parameter error");
                    }
                    String[] params = key.split(":");
                    if (params == null || params.length != 2) {
                        throw new IllegalArgumentException("Parameter error");
                    }
                    int offset = Integer.valueOf(params[0]);
                    int limit = Integer.valueOf(params[1]);

                    // a second-level cache (Redis) lookup could go here
                    logger.info("load post list from DB");
                    return discussPostMapper.selectDiscussPosts(0, offset, limit, 1);
                }
            });
    // initialize the post count cache
    postRowCache = Caffeine.newBuilder()
            .maximumSize(maxSize)
            .expireAfterWrite(expireSeconds, TimeUnit.SECONDS)
            .build(new CacheLoader<Integer, Integer>() {
                @Nullable
                @Override
                public Integer load(@NonNull Integer key) throws Exception {
                    logger.info("load post rows from DB");
                    return discussPostMapper.selectDiscussPostRows(key);
                }
            });
}
```
Complete the pressure test with JMeter
Create thread group
Simulate 100 users (create 100 threads) and run continuously for 60 s; the test stops automatically after 60 seconds.
Add sampler
Set the request parameter orderMode to 1 (popular posts).
Add a Uniform Random Timer
Without a timer, each thread hammers the server continuously as soon as it starts (which can easily overwhelm the server). The timer inserts random intervals between requests to simulate natural user behavior.
Add listener (aggregate report)
View test results through aggregate reports
Throughput: how many requests can the server process per second.
If the number of threads is small, the server handles requests while relatively idle. To find the performance bottleneck, keep increasing the thread count until throughput starts to fall: the bottleneck is reached when adding threads decreases throughput. Test the effect of the cache on this basis.
After caching is enabled
Throughput improves by roughly 18 to 19 times.