spring cache - (cache penetration, cache breakdown, cache avalanche, hot data)

Note: this article is a little long, so I suggest downloading the source code and following along. (Bookmark it if you find it useful! Please credit the source when reposting, thank you!)

Code download: https://gitee.com/hong99/spring/issues/I1N1DF

Background

This follows on from the previous article, <spring cache - distributed cache>.

About JMeter

JMeter is an Apache project: open-source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing web applications but has since been extended to other test functions.

Official website: https://jmeter.apache.org/

use: https://jmeter.apache.org/usermanual/index.html

What problems can caching cause?

Distributed caching improves system performance very effectively, but it can introduce the following problems.

What is hot data (a hot key)?

Suddenly, hundreds of thousands or even millions of requests hit the same Redis key at the same time. The traffic is extremely concentrated and runs into the bandwidth ceiling, and in the end the Redis server goes down.

How do you find hot keys?

    1. Through business knowledge: for planned activities, predict in advance which keys are likely to become hot;

    2. Through collection: for example, AOP + agent monitoring that gathers data from the client and the data layer and uses it for prediction;

    3. Split large keys: for example, a large hash key can be split into several smaller keys instead of piling everything into one key;

Solutions:

    1. Do not set an expiry on important keys; otherwise, expiry can cause cache breakdown and send traffic straight to the db;

    2. Build a multi-level cache, for example Redis for the distributed layer plus Guava for the local layer to speed things up, with an appropriate LFU eviction policy (see the sketch after this list);

    3. When necessary, degrade or rate-limit less important services; (refer to the following)
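
To make option 2 concrete, here is a minimal two-level cache sketch. It assumes the redisCacheManager wrapper used elsewhere in this article (written here with the type name RedisCacheManager, which may differ from the real class) and uses Guava for the local layer; Guava's CacheBuilder evicts by size and recency rather than true LFU, so a library such as Caffeine would be needed for a real LFU policy.


import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class TwoLevelCache {

    //Local layer: small and short-lived, so hot keys are served without touching Redis
    private final Cache<String, String> localCache = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(30, TimeUnit.SECONDS)
            .build();

    private final RedisCacheManager redisCacheManager; //assumed wrapper type from this article's project

    public TwoLevelCache(RedisCacheManager redisCacheManager) {
        this.redisCacheManager = redisCacheManager;
    }

    public String get(String key) {
        //1. local cache first: absorbs hot-key traffic locally
        String value = localCache.getIfPresent(key);
        if (value != null) {
            return value;
        }
        //2. distributed cache second; refill the local layer on a hit
        value = (String) redisCacheManager.get(key);
        if (value != null) {
            localCache.put(key, value);
        }
        return value;
    }
}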

Reference article:

    https://help.aliyun.com/document_detail/101108.html

Conclusion: if you use Alibaba Cloud or another cloud provider, you can basically see the relevant metrics in the console. For a sudden flood of tens of millions of requests to a hot key, it is usually best to handle it together with the business: add SMS or monitoring alarms on the frequently-hit paths so problems can be handled and fed back promptly, and combine this with the rate limiting and service degradation covered below.

What is cache penetration

Queries are made for data that does not exist, so they miss the cache and hit the database every time; when the QPS reaches tens of thousands or even a million, the database is brought down directly.

Simulating cache penetration

Use JMeter to spin up 10,000 users sending requests over 60 seconds.

    com.hong.spring.controller.UserController#findById


/**
 *
 * Function Description: query by id
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/8/31 17:29
 */
@RequestMapping("findById/{id}")
public DataResponse<User> findById(@PathVariable("id")Integer id){
    if(null==id){
        return DataResponse.BuildFailResponse("Parameter cannot be empty!");
    }
    try {
        //Define a thread pool object with 20 concurrent threads
        ExecutorService service = Executors.newFixedThreadPool(20);
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                userService.findById(id);
            }
        };
        for (int i = 0; i < 10000; i++) {
            //Submit the thread and execute it concurrently
            service.submit(runnable);
        }
        return userService.findById(id);
    }catch (Exception e){
        log.error("findById->Query failed{}",e);
        return DataResponse.BuildFailResponse("Error in query. Please try again!");
    }
}

jmeter configuration

result

The process got stuck and could not return anything; Redis went down directly, and then everything collapsed with the CPU maxed out.

Solutions

1. Cache null values (a fuller sketch follows the snippet below);

Note: when the data for this id is later inserted, you must delete (or refresh) the old cached entry for it, otherwise the cache will be inconsistent with the database.

redisCacheManager.set("user_"+id, JSONObject.toJSONString(user));

2. Limit the number of requests per second per IP through nginx;

Reference: https://www.cnblogs.com/aoniboy/p/4730354.html  https://www.cnblogs.com/my8100/p/8057804.html

3. Use a Bloom filter;

com.hong.spring.controller.UserController


//Guava imports needed: com.google.common.hash.BloomFilter, com.google.common.hash.Funnels
//Create a Bloom filter for Integer ids: about 1500 expected insertions, 1% false-positive rate
BloomFilter<Integer> filter = BloomFilter.create(
        Funnels.integerFunnel(),
        1500,
        0.01);
//Instance initializer: preload the ids that actually exist (here 0..43)
{
    for(int i=0;i<44;i++){
        filter.put(i);
    }
}

/**
 *
 * Function Description: query by id
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/8/31 17:29
 */
@RequestMapping("findById2/{id}")
public DataResponse<User> findById2(@PathVariable("id")Integer id){
    if(null==id){
        return DataResponse.BuildFailResponse("Parameter cannot be empty!");
    }
    try {
        //mightContain: false means the id definitely does not exist, true means it probably exists
        log.info("Include this id: "+filter.mightContain(id));
        //If the Bloom filter says the id definitely does not exist, return without touching the cache or db
        if(!filter.mightContain(id)){
            return DataResponse.BuildFailResponse("The data does not exist");
        }
        return userService.findById(id);
    }catch (Exception e){
        log.error("findById->Query failed{}",e);
        return DataResponse.BuildFailResponse("Error in query. Please try again!");
    }
}

result

With the same jmeter configuration, all requests now succeed.

Summary: a Bloom filter is a probabilistic data structure whose time and space efficiency far exceed ordinary approaches. It performs extremely well and can effectively prevent cache penetration. This article uses the Guava implementation, but Redis-based and other implementations exist as well; you can extend the idea yourself, or get in touch with me privately to discuss it.
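
As one possible Redis-backed variant (not used elsewhere in this article, so treat the dependency and address as assumptions), Redisson provides an RBloomFilter that stores the filter in Redis, letting several service instances share it:


import org.redisson.Redisson;
import org.redisson.api.RBloomFilter;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedisBloomFilterDemo {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); //assumed local Redis
        RedissonClient redisson = Redisson.create(config);

        RBloomFilter<Integer> filter = redisson.getBloomFilter("user_id_filter");
        filter.tryInit(1500, 0.01);          //expected insertions, false-positive rate

        for (int i = 0; i < 44; i++) {       //preload the ids that exist, as in the Guava demo
            filter.add(i);
        }
        System.out.println(filter.contains(10));   //true
        System.out.println(filter.contains(9999)); //false (definitely absent)

        redisson.shutdown();
    }
}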

Disadvantages of Bloom filters:

False positives: an element reported as present may not actually exist (only "absent" answers are certain).

Difficult deletion: elements cannot easily be removed from a standard Bloom filter.

Reference article:

https://juejin.im/post/6844903832577654797

https://blog.csdn.net/Revivedsun/article/details/94992323

https://bbs.huaweicloud.com/blogs/136683

What is cache breakdown

Under high concurrency, a large number of requests query the same key at the same moment. Because that key has just expired, every request falls through to the database and the service goes down. This is cache breakdown.

Simulating cache breakdown

com.hong.spring.service.IUserService#findById2


/**
 *
 * Function Description: query by id (CACHE breakdown)
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/9/2 14:40
 */
DataResponse<User> findById2(Integer id);

com.hong.spring.service.impl.UserServiceImpl#findById2


@Override
public DataResponse <User> findById2(Integer id) {
    //log.info("cache breakdown, enter database query");
    if(null==id){
        return DataResponse.BuildFailResponse("Required parameters cannot be empty!");
    }
    User user=null;
    if(redisCacheManager.hasKey("user2_"+id)){
        log.info("Query cache has value");
        String userStr = (String)redisCacheManager.get("user2_" + id);
        if(null!=userStr && !StringUtils.isEmpty(userStr)){
            user = JSONObject.parseObject(userStr, User.class);
        }
    }

    if(null==user){
        log.info("Query database!");
        user = userMapper.findById(id);
        if(null!=user){
            redisCacheManager.set("user2_"+id, JSONObject.toJSONString(user),3);
        }
    }

    return DataResponse.BuildSuccessResponse(user);
}

junit test

com.hong.spring.service.UserServiceTest


// Total requests
public static int clientTotal = 50000;
// Number of threads executing concurrently
public static int threadTotal = 200;
public static int count = 0;

com.hong.spring.service.UserServiceTest#findByUser2


/**
 *
 * Function Description: simulate cache breakdown
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/9/2 14:45
 */
@Test
public void findByUser2() throws InterruptedException {
    //Cache data
    userService.findById2(2);

    ExecutorService executorService = Executors.newCachedThreadPool();
    //Semaphore, used here to control the number of concurrent threads
    final Semaphore semaphore = new Semaphore(threadTotal);
    //CountDownLatch: each task counts down once, so the main thread can wait for all of them
    final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
    for (int i = 0; i < clientTotal ; i++) {
        executorService.execute(() -> {
            try {
                userService.findById2(2);
                //Acquire a permit: at most threadTotal (200) threads pass at once,
                //the rest block here until a permit is released.
                semaphore.acquire();
                add();
                //Release license
                semaphore.release();
            } catch (Exception e) {
                //log.error("exception", e);
                e.printStackTrace();
            }
            //One task finished: count the latch down by one
            countDownLatch.countDown();
        });
    }
    countDownLatch.await();//block until the latch reaches 0, then continue
    executorService.shutdown();
    log.info("count:{}", count);
}

private static void add() {
    count++;
}

Result: the service hangs almost instantly.

The log shows that the first query is normal and the result is put into the cache.

Then, once the cache expires, every request goes to the db (very scary). A normal db can handle roughly 3,000-5,000 requests, but I fired 50,000.

jmeter simulation test

Configuration (50000 times in 5 seconds)

result

At the beginning there was no problem; later the database started throwing exceptions directly: timeouts, too many connections, and so on.

Solutions

1. Do not set an expiry on key (hot) keys; delete or update them explicitly from the business code instead;

2. Add a local cache (consistency has to be considered): when the Redis entry expires, the local cache can absorb the first wave of traffic;

You can refer to: spring cache - Local

3. Add a lock: with Redis you can use a setnx-based distributed lock, while on a single machine an ordinary lock (synchronized, Lock) is enough - a single-machine sketch follows below.
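
A minimal single-machine sketch of option 3, assuming the same redisCacheManager/userMapper fields as this article's service class (the method and field names here are made up): double-checked locking with synchronized so only one thread rebuilds the cache when the key expires.


private final Object rebuildLock = new Object();

public User findByIdWithLocalLock(Integer id) {
    String key = "user2_" + id;
    String cached = (String) redisCacheManager.get(key);
    if (cached != null && !cached.isEmpty()) {
        return JSONObject.parseObject(cached, User.class);
    }
    synchronized (rebuildLock) {
        //re-check after acquiring the lock: another thread may have refilled the cache already
        cached = (String) redisCacheManager.get(key);
        if (cached != null && !cached.isEmpty()) {
            return JSONObject.parseObject(cached, User.class);
        }
        User user = userMapper.findById(id);
        if (user != null) {
            redisCacheManager.set(key, JSONObject.toJSONString(user), 3);
        }
        return user;
    }
}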

Redis distributed lock (manual implementation)

com.hong.spring.service.impl.UserServiceImpl, updated to add the lock


@Override
public DataResponse <User> findById2(Integer id) {
    //log.info("cache breakdown, enter database query");
    if(null==id){
        return DataResponse.BuildFailResponse("Required parameters cannot be empty!");
    }
    String name = Thread.currentThread().getName();
    String key  = "user2_" + id;
    String lockName=key+".lock";
    //NOTE: a production lock should also carry an expiry and a unique owner value,
    //so a crashed holder cannot deadlock others and only the owner can release it (see the sketch after this method)
    Boolean lock = redisCacheManager.setNx(lockName, lockName);
    int count=0;
    log.info("thread {} lock state {}",name,lock);
    User user=null;
    try {
        //Retry up to 10 times, sleeping 100 milliseconds between attempts
        while (!lock && count<10){
            Thread.sleep(100);
            count++;
            log.info("thread {} acquiring lock, attempt {}",name,count);
            lock =redisCacheManager.setNx(lockName,lockName);
        }
        if(!lock && count==10){
            log.info("Still no lock after 10 attempts, fall back to business handling....");
            return DataResponse.BuildFailResponse("Frequent requests, please try again!");
        }
        if(redisCacheManager.hasKey("user2_"+id)){
            log.info("thread {} query cache has value",name);
            String userStr = (String)redisCacheManager.get("user2_" + id);
            if(null!=userStr && !StringUtils.isEmpty(userStr)){
                user = JSONObject.parseObject(userStr, User.class);
            }
        }
        if(null==user){
            log.info("thread {} got the lock! Start querying database",name);
            user = userMapper.findById(id);
            if(null!=user){
                log.info("thread {} put values in cache",name);
                redisCacheManager.set("user2_"+id, JSONObject.toJSONString(user),3);
            }
        }
    }catch (Exception e){
        log.error("exception {}",e);
    }finally {
        log.info("thread {} unlocked successfully!",name);
        redisCacheManager.unLock(lockName);
    }
    return DataResponse.BuildSuccessResponse(user);
}
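
As flagged in the NOTE above, this demo lock never expires and is released unconditionally in finally, so a crashed holder can block everyone and a non-holder can delete someone else's lock. A minimal sketch of a safer variant, assuming Spring Data Redis is on the classpath (the SimpleRedisLock class name is made up and is not part of this article's project):


import java.time.Duration;
import java.util.Collections;
import java.util.UUID;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class SimpleRedisLock {

    private final StringRedisTemplate redisTemplate;

    public SimpleRedisLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    /** SET key token NX EX ttl: returns the token on success, or null if the lock is already held. */
    public String tryLock(String lockName, Duration ttl) {
        String token = UUID.randomUUID().toString();
        Boolean ok = redisTemplate.opsForValue().setIfAbsent(lockName, token, ttl);
        return Boolean.TRUE.equals(ok) ? token : null;
    }

    /** Release only if we still own the lock (get-compare-delete done atomically in a Lua script). */
    public boolean unlock(String lockName, String token) {
        String script =
            "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
        Long deleted = redisTemplate.execute(
                new DefaultRedisScript<>(script, Long.class),
                Collections.singletonList(lockName),
                token);
        return deleted != null && deleted == 1L;
    }
}

tryLock returns a token that must be passed back to unlock, so only the thread that actually acquired the lock can release it, and the TTL prevents a dead holder from blocking everyone forever.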

jmeter configuration (10 requests per second)

result

From the results, the database is queried only once; all other requests are served from the cache.


18:33:28.492 [http-nio-8081-exec-6] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-6 Lock state true
18:33:28.492 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-4 Lock state false
18:33:28.497 [http-nio-8081-exec-6] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-6 Start locking! Start querying database
18:33:28.564 [http-nio-8081-exec-5] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-5 Lock state false
18:33:28.576 [http-nio-8081-exec-6] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-6 Put values in cache
18:33:28.593 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-4 Acquire lock for the first time
18:33:28.603 [http-nio-8081-exec-6] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-6 Unlocked successfully!
18:33:28.664 [http-nio-8081-exec-5] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-5 Acquire lock for the first time
18:33:28.665 [http-nio-8081-exec-5] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-5 Query cache has value
18:33:28.676 [http-nio-8081-exec-7] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-7 Lock state false
18:33:28.684 [http-nio-8081-exec-5] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-5 Unlocked successfully!
18:33:28.694 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-4 Acquire lock for the second time
18:33:28.695 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-4 Query cache has value
18:33:28.695 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-4 Unlocked successfully!
18:33:28.777 [http-nio-8081-exec-7] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-7 Acquire lock for the first time
18:33:28.778 [http-nio-8081-exec-7] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-7 Query cache has value
18:33:28.779 [http-nio-8081-exec-7] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-7 Unlocked successfully!
18:33:28.786 [http-nio-8081-exec-8] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-8 Lock state true
18:33:28.787 [http-nio-8081-exec-8] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-8 Query cache has value
18:33:28.787 [http-nio-8081-exec-8] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-8 Unlocked successfully!
18:33:28.898 [http-nio-8081-exec-9] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-9 Lock state true
18:33:28.898 [http-nio-8081-exec-9] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-9 Query cache has value
18:33:28.899 [http-nio-8081-exec-9] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-9 Unlocked successfully!
18:33:29.010 [http-nio-8081-exec-10] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-10 Lock state true
18:33:29.011 [http-nio-8081-exec-10] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-10 Query cache has value
18:33:29.012 [http-nio-8081-exec-10] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-10 Unlocked successfully!
18:33:29.119 [http-nio-8081-exec-1] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-1 Lock state true
18:33:29.119 [http-nio-8081-exec-1] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-1 Query cache has value
18:33:29.120 [http-nio-8081-exec-1] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-1 Unlocked successfully!
18:33:29.231 [http-nio-8081-exec-2] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-2 Lock state true
18:33:29.231 [http-nio-8081-exec-2] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-2 Query cache has value
18:33:29.232 [http-nio-8081-exec-2] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-2 Unlocked successfully!
18:33:29.340 [http-nio-8081-exec-3] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-3 Lock state true
18:33:29.341 [http-nio-8081-exec-3] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-3 Query cache has value
18:33:29.342 [http-nio-8081-exec-3] INFO  com.hong.spring.service.impl.UserServiceImpl - thread  http-nio-8081-exec-3 Unlocked successfully!

Summary: cache breakdown usually comes from not thinking through the expiry time, so at the moment a heavily-used key expires all the traffic hits the db, which in serious cases brings the db down directly. So when using a cache, take concurrent scenarios into account and lock the key code paths consistently.

Reference article:

Local lock solution: https://www.jianshu.com/p/87896241343c

Distributed lock:

https://jinzhihong.github.io/2019/08/12/%E6%B7%B1%E5%85%A5%E6%B5%85%E5%87%BA-Redis-%E5%88%86%E5%B8%83%E5%BC%8F%E9%94%81%E5%8E%9F%E7%90%86%E4%B8%8E%E5%AE%9E%E7%8E%B0-%E4%B8%80/

What is cache avalanche

A large-scale cache failure happens all at once, i.e. a large number of keys expire at the same time, so every request falls through to the db and the db is crushed instantly.

Simulating a cache avalanche

Code implementation

New: com.hong.spring.service.IUserService#findById3


/**
 *
 * Function Description: query by id (CACHE avalanche)
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/9/3 10:49
 */
DataResponse<User> findById3(Integer id);
Newly added: com.hong.spring.service.impl.UserServiceImpl#findById3

@Override
public DataResponse <User> findById3(Integer id) {
    if(null==id){
        return DataResponse.BuildFailResponse("Required parameters cannot be empty!");
    }

    User user;
    if(redisCacheManager.hasKey("user_" + id)){
        String userStr = (String)redisCacheManager.get("user_" + id);
        if(null!=userStr && !StringUtils.isEmpty(userStr)){
            user = JSONObject.parseObject(userStr, User.class);
        }
    }

    //Note: user is overwritten by the db query below, so in this simulation every request still hits the db
    user = userMapper.findById(id);
    if(null!=user){
        //every key gets the same fixed TTL, so keys written together expire together - the avalanche condition
        redisCacheManager.set("user_"+id, JSONObject.toJSONString(user),5000);
    }

    return DataResponse.BuildSuccessResponse(user);
}

junit test:

com.hong.spring.service.UserServiceTest#findByUser3


@Test
public void findByUser3() throws InterruptedException {
    //Cache data
    userService.findById3(10);
    userService.findById3(11);

    ExecutorService executorService = Executors.newCachedThreadPool();
    //Semaphore, used here to control the number of concurrent threads
    final Semaphore semaphore = new Semaphore(threadTotal);
    //CountDownLatch: each task counts down once, so the main thread can wait for all of them
    final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
    for (int i = 0; i < clientTotal ; i++) {
        executorService.execute(() -> {
            try {
                userService.findById3(10);
                userService.findById3(11);
                //Acquire a permit: at most threadTotal (200) threads pass at once,
                //the rest block here until a permit is released.
                semaphore.acquire();
                add();
                //Release license
                semaphore.release();
            } catch (Exception e) {
                //log.error("exception", e);
                e.printStackTrace();
            }
            //One task finished: count the latch down by one
            countDownLatch.countDown();
        });
    }
    countDownLatch.await();//block until the latch reaches 0, then continue
    executorService.shutdown();
    log.info("count:{}", count);
}

result

Redis times out, and the db goes down directly.

Configure a JMeter stress-test simulation

Database configuration (remember to configure it back or restart it ~)

set global max_connections=3;

result

Requests hang for a long time and the service is effectively unavailable.

Tomcat starts returning 500; with every request stuck waiting, the service is as good as down.

After keeping the load on for a while, the db itself had not crashed, but the services were all seriously unavailable - imagine this happening in production... In the end MySQL went down as well.

Solutions

1. Avoid a large number of keys expiring at the same time: add a random offset to each expiry time, ideally spreading keys out by several minutes;


//random offset 0-99; in practice add it to a base TTL (see the sketch below) so keys drift apart instead of all expiring at once
Integer random = Integer.valueOf(RandomStringUtils.randomNumeric(2));
redisCacheManager.set("user_"+id, JSONObject.toJSONString(user),random);

2. Circuit breaking, degradation and rate limiting;

Degradation: large companies such as Taobao and JD generally have degradation plans, for example temporarily hiding after-sales or other non-critical features to reduce traffic; of course this has to be designed at the business level.

Rate limiting: two approaches are covered here, nginx and Sentinel.

nginx: Demo

Download: http://nginx.org/en/download.html

Basic tutorial: https://www.runoob.com/w3cnote/nginx-setup-intro.html

Official website: http://nginx.org/

nginx configuration (start services on ports 8081 and 8082 and forward to them through nginx on port 80; only one request per second per IP is allowed, anything above that goes straight to the prompt!)


#user  nobody;
worker_processes  1;


events {
    worker_connections  1024;
}


http {
  # at most 1 request per second per client IP (10m shared zone keyed by the binary remote address)
  limit_req_zone $binary_remote_addr zone=allips:10m rate=1r/s;
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
  upstream tomcat_cluster{
     server localhost:8081;
     server localhost:8082;
  }

    server {
        listen       80;
        server_name  localhost;
        location / {
      #limit_conn one 1;
            limit_req zone=allips burst=1 nodelay;
            proxy_pass http://tomcat_cluster; 
            #Set proxy headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
      #proxy_connect_timeout 1s;
            #proxy_read_timeout 36000s;
            #proxy_send_timeout 36000s; 
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    error_page 500 502 503 504 /503;
    location = /503 {
      default_type application/json;    
      add_header Content-Type 'text/html; charset=utf-8';
      return 200 '{"code":0,"msg":"Please try again during peak hours..."}';
        }
    location /status {  
      stub_status on;     #enable nginx's stub_status statistics page.
      access_log on;     #set to off to disable access logging for this location.
      #auth_basic "status";                 #auth_basic is nginx's basic authentication mechanism.
      #auth_basic_user_file conf/htpasswd;  #path to the password file.
    }
    }

}

Request: http://localhost/

{"code":0,"msg":"Please try again during peak hours..."}

Nginx summary: nginx handles this kind of interception easily. Once a client exceeds the per-second limit it immediately gets the canned response back, which effectively blocks malicious requests; it can also be configured to blacklist an IP after it hits the limit a certain number of times. Nginx is usually used as the first layer to filter out some DDoS-style traffic, and it can of course also be deployed as a cluster; there is no need to try that here, it can be consolidated and improved later.

Sentinel: Demo

Annotation-based implementation

Add the jar to pom.xml:

<!--introduce Sentinel-->
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-core</artifactId>
    <version>1.8.0</version>
</dependency>

com.hong.spring.config.AopConfiguration

#Configure the initial flow-control rules

package com.hong.spring.config;

import com.alibaba.csp.sentinel.annotation.aspectj.SentinelResourceAspect;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PostConstruct;
import java.util.ArrayList;
import java.util.List;

/**
 *
 * Function Description: support the use of sentinel
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/9/4 18:28
 */
@Configuration
public class AopConfiguration {

    @Bean
    public SentinelResourceAspect sentinelResourceAspect() {

        return new SentinelResourceAspect();
    }

    @PostConstruct
    private void initRules() throws Exception {
        FlowRule rule1 = new FlowRule();
        rule1.setResource("findById5");//must match the value of @SentinelResource
        rule1.setGrade(RuleConstant.FLOW_GRADE_QPS);  // Rule type: limit by QPS
        rule1.setCount(1);   // allow at most 1 call per second
        List<FlowRule> rules = new ArrayList<>();
        rules.add(rule1);
        // Load control rules into Sentinel
        FlowRuleManager.loadRules(rules);
    }
}

The following is added to applicationContext-mybatis_plus_redis_cache.xml:

<!-- This configuration should be configured in component-scan in the future -->

<aop:aspectj-autoproxy proxy-target-class="true" />

com.hong.spring.service.impl.UserServiceImpl#handlerById

#Configure the resource name and the rate-limit fallback.


@Override
@SentinelResource(value = "findById5",blockHandler  = "findById5Fallback")
public DataResponse <User> handlerById(Integer id) {
    if(null==id){
        return DataResponse.BuildFailResponse("Required parameters cannot be empty!");
    }
    User user;
    String userStr = (String)redisCacheManager.get("user_" + id);
    if(null==userStr || StringUtils.isEmpty(userStr)){
      user = userMapper.findById(id);
        if(null!=user){
            redisCacheManager.set("user_"+id, JSONObject.toJSONString(user));
        }
    }else{
        user = JSONObject.parseObject(userStr, User.class);
    }

    return DataResponse.BuildSuccessResponse(user);
}

#The fallback result returned when the rate limit is hit


public static DataResponse findById5Fallback(Integer id, BlockException ex){
    log.info("It's current limiting!");
    return DataResponse.BuildFailResponse("Please try again during peak hours...");
}

com.hong.spring.controller.UserController#findById5


/**
 *
 * Function Description: pass sentinel test
 *
 * @param:
 * @return:
 * @auther: csh
 * @date: 2020/9/4 16:01
 */
@RequestMapping("findById5/{id}")
public DataResponse<User> findById5(@PathVariable("id")Integer id){
    if(null==id){
        return DataResponse.BuildFailResponse("Parameter cannot be empty!");
    }
    try {
        return userService.handlerById(id);
    }catch (Exception e){
        log.error("findById->Query failed{}",e);
        return DataResponse.BuildFailResponse("Error in query. Please try again!");
    }
}

Then start the project and request the address twice in a row:

http://localhost:8081/user/findById5/4

Result (though this approach is quite intrusive in the code!)

00:15:28.974 [http-nio-8081-exec-1] INFO  com.hong.spring.controller.IndexController - home page
00:15:29.007 [http-nio-8081-exec-3] INFO  com.hong.spring.controller.IndexController - home page
00:30:57.715 [http-nio-8081-exec-4] INFO  com.hong.spring.service.impl.UserServiceImpl - It's current limiting!

Console mode

Download package: https://github.com/alibaba/Sentinel/releases

Or use the copy located in the project directory

Then execute:


java -Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -Dproject.name=sentinel-dashboard -jar sentinel-dashboard-1.8.0.jar

After it starts up

pom.xml: add the new jars


<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-transport-simple-http</artifactId>
    <version>${sentinel.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-spring-webmvc-adapter</artifactId>
    <version>${sentinel.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-annotation-aspectj</artifactId>
    <version>${sentinel.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-web-servlet</artifactId>
    <version>${sentinel.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-transport-common</artifactId>
    <version>${sentinel.version}</version>
</dependency>

Configure Tomcat: add the following JVM options to the Tomcat startup parameters

-Dcsp.sentinel.dashboard.server=127.0.0.1:9000
-Dproject.name=spring-8081

Then, after it is running, open the console at http://localhost:9000/. On the first visit you need to log in; the username and password are both sentinel.

You can see spring-8081 listed - the same name we gave our Tomcat instance!

Configuration

result

Reference article:

Official website: https://github.com/alibaba/Sentinel/wiki

To sum up, Sentinel is a very powerful tool with a lot of functionality. It helps developers protect the stability of microservices from multiple dimensions: degradation, rate limiting, traffic shaping, circuit breaking, system load protection, hotspot protection and so on. In short, it is powerful and easy to use; this section only touches the tip of the iceberg, and we will study it further later.

3. Key caches can be set to never expire and be updated synchronously whenever the underlying data changes (same as above) - a sketch follows this list;

4. As with cache breakdown, add a distributed lock to improve things;
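
A minimal sketch of option 3, with assumptions: updateUser and userMapper.update are illustrative names (this article's mapper only shows findById), and User is assumed to expose getId().


public DataResponse<User> updateUser(User user) {
    int rows = userMapper.update(user);     //assumed mapper method, not shown in this article
    if (rows > 0) {
        //no TTL: the key never expires, so it must be refreshed on every write to stay consistent
        redisCacheManager.set("user_" + user.getId(), JSONObject.toJSONString(user));
    }
    return DataResponse.BuildSuccessResponse(user);
}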

Finally

Cache penetration, cache breakdown, cache avalanche and hot keys are still common problems in the industry. Many systems never see that much traffic and the developers have not thought these cases through, which leads to very serious incidents, so for anything handling large volumes it is advisable to adopt a suitable scheme in advance. Of course, the schemes here are only references; components like nginx, Sentinel and Redis should really run as clusters, otherwise the service may stay up while nginx itself becomes the single point and high availability is still not achieved... Please keep following for later posts that consolidate and improve these cluster setups. Thank you!

(if you have any questions, please leave a message or add QQ: 718231657)

Reference article:

        https://blog.csdn.net/zeb_perfect/article/details/54135506

        https://juejin.im/post/6844903807797690376

        https://blog.csdn.net/zhengzhaoyang122/article/details/82184029

        https://cloud.tencent.com/developer/article/1058203
