# Why does Redis use an approximate LRU algorithm to evict data instead of real LRU?

In "What if the Redis cache is full?" we learned that once the Redis cache fills up, eviction policies can delete data to make room for new data.

The eviction policies fall into two groups:

Keys with an expiration time set

The volatile-ttl, volatile-random, volatile-lru and volatile-lfu policies only evict keys that have an expiration time set.

All keys

The allkeys-lru, allkeys-random and allkeys-lfu policies can evict any key when memory runs low, regardless of whether an expiration time is set.

This means a key can be deleted even before it expires. Conversely, once a key's expiration time has passed, it will be deleted anyway, even if the eviction policy never selects it.
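For reference, the policy is chosen with the `maxmemory-policy` directive in redis.conf, alongside the `maxmemory` limit; a minimal sketch (the values below are illustrative, not recommendations):

```
# illustrative values only
maxmemory 100mb
maxmemory-policy volatile-lru
```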

volatile-ttl and volatile-random are straightforward, so we will focus on volatile-lru and volatile-lfu, which involve the LRU and LFU algorithms.

Today, Code Brother will walk you through Redis's LRU algorithm.

## Approximate LRU algorithm

What is the LRU algorithm?

LRU stands for Least Recently Used. As the name suggests, it evicts the data that has gone unused for the longest time.

The core idea is: "if data has been accessed recently, it is likely to be accessed again in the future".

We organize all the data into a linked list:

• MRU: the head of the linked list, holding the most recently accessed data;
• LRU: the tail of the linked list, holding the least recently used data.

As you can see, LRU inserts and updates data at the head of the list and deletes data from the tail.

Accessed data is moved to the MRU end, and the entries in front of it shift back one position accordingly.

Can a singly linked list be used?

With a singly linked list, deleting a node requires an O(n) scan to find its predecessor, whereas a doubly linked list completes the deletion in O(1).
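A minimal sketch of why the doubly linked list matters: with both `pre` and `next` pointers, unlinking an interior node touches only its two neighbors. The `Dll` class below is illustrative only.

```java
public class Dll {
    public static class Node {
        int value;
        Node pre, next;
        Node(int value) { this.value = value; }
    }

    /** Unlink an interior node in O(1) using its neighbor pointers. */
    public static void remove(Node node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }

    /** Build the list 1 <-> 2 <-> 3 and return its head. */
    public static Node sample() {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b; b.pre = a;
        b.next = c; c.pre = b;
        return a;
    }
}
```

With a singly linked list, `remove` would first have to walk from the head to find `node.pre`, which is exactly the O(n) cost the doubly linked list avoids.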

Does Redis use this LRU algorithm to manage all cached data?

No. The LRU algorithm needs a linked list to manage all the data, which costs a lot of extra space.

In addition, when a large number of nodes are accessed, the linked list nodes must be moved frequently, which reduces Redis's performance.

Therefore, Redis simplifies the algorithm. Redis's LRU is not a true LRU: Redis samples a small number of keys and evicts the sampled key that has gone unaccessed the longest.

This means Redis cannot guarantee that it evicts the least recently accessed key in the whole database.

An important property of Redis's LRU algorithm is that the sample size can be tuned to adjust its accuracy, bringing it closer to true LRU while avoiding the memory cost, because each eviction samples only a few keys rather than scanning all the data.

The configuration is as follows:

```
maxmemory-samples 50
```
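To build intuition for how sampling approximates LRU, here is a toy Java sketch (not Redis source; all class and method names are invented): each key records a last-access timestamp, and eviction picks the oldest key among a random sample. When the sample covers the whole keyspace, it degenerates to exact LRU.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class SampledLru {
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();

    /** Record that `key` was accessed at logical time `now`. */
    public void touch(String key, long now) {
        lastAccess.put(key, now);
    }

    /** Pick an eviction victim among a random sample of keys. */
    public String pickVictim(int sampleSize) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        Collections.shuffle(keys, random);
        String victim = null;
        long oldest = Long.MAX_VALUE;
        // scan only the sample, not the whole keyspace
        for (String key : keys.subList(0, Math.min(sampleSize, keys.size()))) {
            long t = lastAccess.get(key);
            if (t < oldest) {
                oldest = t;
                victim = key;
            }
        }
        return victim;
    }
}
```

A larger `sampleSize` makes the victim more likely to be the globally oldest key, at the cost of scanning more entries per eviction; this is the same trade-off `maxmemory-samples` controls.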

### Operating principle

Recall that the redisObject data structure has an lru field, which records the timestamp of each object's last access.

```c
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    /* LRU time (relative to global lru_clock) or
     * LFU data (least significant 8 bits frequency
     * and most significant 16 bits access time).
     */
    unsigned lru:LRU_BITS;
    int refcount;
    void *ptr;
} robj;
```
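To make the struct comment concrete, here is a hypothetical Java sketch of how a 24-bit field (LRU_BITS) can pack LFU data: the most significant 16 bits hold an access time in minutes and the least significant 8 bits hold the frequency counter. The class and method names are made up for illustration.

```java
public class LfuField {
    static final int LRU_BITS = 24; // width of the bit field in redisObject

    /** Pack a 16-bit minute timestamp and an 8-bit counter into 24 bits. */
    static int pack(int minutes, int counter) {
        return ((minutes & 0xFFFF) << 8) | (counter & 0xFF);
    }

    /** Most significant 16 bits: last access time in minutes. */
    static int minutes(int lru) {
        return (lru >>> 8) & 0xFFFF;
    }

    /** Least significant 8 bits: logarithmic access frequency. */
    static int counter(int lru) {
        return lru & 0xFF;
    }
}
```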

The first time Redis evicts data, it randomly selects N entries, puts them into a candidate pool, and evicts the entry with the lowest lru value.

When data needs to be evicted again, new entries are sampled into the candidate pool created the first time, but with one admission rule: an entry's lru value must be smaller than the smallest lru value currently in the pool.

Once the number of newly admitted entries reaches the value set by maxmemory-samples, the entry with the smallest lru value in the pool is evicted.

This greatly reduces the number of entries that must be tracked, and no linked list nodes have to be moved on every access, which greatly improves performance.

## Implementing an LRU Cache in Java

Java's LinkedHashMap already provides everything we need; we can build on it either by composition or by inheritance. Code Brother uses composition.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> {
    private final Map<K, V> map;
    private final int cacheSize;

    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        // accessOrder = true: iteration order follows access order (LRU first)
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // evict the eldest entry once the cache size is exceeded
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    public V get(K key) {
        return map.get(key);
    }

    public void put(K key, V value) {
        map.put(key, value);
    }
}
```

The key point is the third constructor parameter of LinkedHashMap, accessOrder: setting it to true makes LinkedHashMap maintain its entries in access order.

In addition, override removeEldestEntry(): when it returns true, the entry that has gone unaccessed the longest is removed, which implements the eviction.
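A quick usage sketch of the accessOrder behavior (the capacity of 2 is arbitrary): after key 1 is re-accessed, inserting key 3 evicts key 2, the least recently used entry.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static Map<Integer, String> run() {
        Map<Integer, String> cache = new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > 2; // capacity of 2
            }
        };
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // key 1 becomes most recently used; key 2 is now eldest
        cache.put(3, "c"); // exceeds capacity, so key 2 is evicted
        return cache;
    }
}
```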

#### Implementing it ourselves

The code below is adapted from LeetCode 146 (LRU Cache) and includes comments.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * The least recently used element sits at the head of the list;
 * the element just added or accessed sits at the tail.
 */
class LRUCache {
    class Node {
        int key, value;
        Node pre, next;

        Node(int key, int value) {
            this.key = key;
            this.value = value;
            pre = this;
            next = this;
        }
    }

    private final int capacity;             // capacity of the LRU cache
    private final Node dummy;               // sentinel: dummy.next is the head (LRU end), dummy.pre is the tail (MRU end)
    private final Map<Integer, Node> cache; // key -> doubly linked list node

    public LRUCache(int capacity) {
        this.capacity = capacity;
        dummy = new Node(0, 0);
        cache = new ConcurrentHashMap<>();
    }

    public int get(int key) {
        Node node = cache.get(key);
        if (node == null) return -1;
        // move the accessed node to the tail (most recently used)
        remove(node);
        addToTail(node);
        return node.value;
    }

    public void put(int key, int value) {
        Node node = cache.get(key);
        if (node == null) {
            if (cache.size() >= capacity) {
                // evict the least recently used node at the head
                cache.remove(dummy.next.key);
                remove(dummy.next);
            }
        } else {
            cache.remove(node.key);
            remove(node);
        }
        node = new Node(key, value);
        cache.put(key, node);
        addToTail(node);
    }

    /**
     * Append a new node at the tail of the linked list
     *
     * @param node New node
     */
    private void addToTail(Node node) {
        dummy.pre.next = node;
        node.pre = dummy.pre;
        node.next = dummy;
        dummy.pre = node;
    }

    /**
     * Delete the node from the doubly linked list
     *
     * @param node Node to delete
     */
    private void remove(Node node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }
}
```

Don't be stingy with praise; when someone does well, give them positive feedback. That said, pay less attention to voting with likes and more to voting with actions.

Whether someone is truly good is judged not by how many people praise them online, but by how many are willing to deal with them: to tip, to pay, to place an order.

Praise is cheap; those willing to deal with you show real trust and support.

Code Brother has written more than 23 Redis articles, given away plenty of books, and received a lot of praise and a modest amount of tips. Thanks to every reader who has ever liked my work.

I'm "Code Brother"; you can call me Liangzi. If the article is good, please give it a like. See you in the next article, on the LFU algorithm.

## References

https://redis.io/docs/manual/eviction/

http://antirez.com/news/109

https://time.geekbang.org/column/article/294640

https://halfrost.com/lru_lfu_interview/

https://blog.csdn.net/csdlwzy/article/details/95635083

Posted by digioz on Wed, 11 May 2022 13:45:52 +0300