Eviction policies
The maxmemory-policy configuration directive specifies the behavior Redis takes when the maxmemory limit is reached.
There are several policies to choose from:
- noeviction: New values aren't saved when the memory limit is reached; write commands return an error instead. When a database uses replication, this applies to the primary database.
- allkeys-lru: Keeps the most recently used keys while discarding the least recently used (LRU) keys.
- allkeys-lfu: Keeps frequently used keys while discarding the least frequently used (LFU) keys.
- volatile-lru: Removes the least recently used keys among those with the expire field set to true.
- volatile-lfu: Removes the least frequently used keys among those with the expire field set to true.
- allkeys-random: Removes keys at random to make room for new data.
- volatile-random: Removes keys at random among those with the expire field set to true.
- volatile-ttl: Removes keys with the expire field set to true and the shortest remaining time-to-live (TTL).
If no keys to evict fit the conditions, the policies volatile-lru, volatile-lfu, volatile-random, and volatile-ttl function like no eviction.
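As an illustrative sketch only (not Redis' actual implementation), the candidate-selection logic behind these policies, including the volatile-* fallback to noeviction behavior, might look like this; the function name and the per-key metadata layout are assumptions made for this example:

```python
import random

def pick_eviction_victim(keys, policy):
    """Pick a key to evict under a given maxmemory-policy.

    keys maps each key name to a dict with:
      "last_access": monotonically increasing access stamp
      "freq":        approximate access count
      "ttl":         remaining time-to-live in seconds, or None if no expire
    Returns the key to evict, or None (behave like noeviction).
    """
    if policy == "noeviction":
        return None
    # volatile-* policies only consider keys that have an expire set
    if policy.startswith("volatile"):
        pool = {k: v for k, v in keys.items() if v["ttl"] is not None}
    else:
        pool = dict(keys)
    if not pool:  # no eligible keys: volatile-* degrade to noeviction
        return None
    if policy.endswith("lru"):
        return min(pool, key=lambda k: pool[k]["last_access"])
    if policy.endswith("lfu"):
        return min(pool, key=lambda k: pool[k]["freq"])
    if policy.endswith("ttl"):
        return min(pool, key=lambda k: pool[k]["ttl"])
    if policy.endswith("random"):
        return random.choice(list(pool))
    raise ValueError(f"unknown policy: {policy}")
```

Note how the empty pool check is exactly the fallback described above: a volatile-* policy with no expiring keys behaves like noeviction.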
The proper eviction strategy depends on your application's access pattern; however, you can change the policy at runtime and use the Redis INFO output to track cache hits and misses to fine-tune your setup.
As a general rule of thumb,
- When you predict a power-law distribution in the popularity of your requests, use the allkeys-lru policy. You anticipate that a subset of items will be used significantly more frequently than the remainder. If you're unsure, this is a solid option.
- If you have cyclic access where all the keys are scanned continually or expect the distribution to be uniform, use the allkeys-random option.
- If you wish to give Redis hints about which cache items are good candidates for expiration by setting varied TTL values when creating your cache objects, use the volatile-ttl policy.
- When you wish to use a single instance for both caching and a set of persistent keys, the volatile-lru and volatile-random policies come in handy. However, in most cases it is preferable to run two separate Redis instances instead.
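A quick way to check whether your workload looks power-law distributed is to simulate (or sample) accesses and measure how concentrated they are. The following toy simulation is not tied to Redis itself; the Zipf-like 1/rank weighting is an assumption chosen for illustration:

```python
import random

def simulate_accesses(num_keys=100, num_requests=10_000, seed=42):
    """Draw requests from a Zipf-like (power-law) popularity distribution:
    the key at rank r is chosen with weight proportional to 1/r."""
    rng = random.Random(seed)
    keys = [f"key:{i}" for i in range(1, num_keys + 1)]
    weights = [1 / rank for rank in range(1, num_keys + 1)]
    hits = {}
    for _ in range(num_requests):
        key = rng.choices(keys, weights=weights)[0]
        hits[key] = hits.get(key, 0) + 1
    return hits

hits = simulate_accesses()
top10 = sorted(hits.values(), reverse=True)[:10]
share = sum(top10) / sum(hits.values())
# Under a power law, the 10 hottest keys out of 100 take a large share
# of all requests -- exactly the situation where allkeys-lru shines.
```

If the hottest few percent of keys account for most of the traffic, allkeys-lru is a good fit; if the shares come out roughly uniform, allkeys-random may serve you better.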
It's also worth noting that setting an expire value on a key costs memory. Therefore, adopting a policy like allkeys-lru is more memory efficient, because keys can be evicted under memory pressure without an expire being set at all.
How the eviction process works
It is critical to comprehend how the eviction procedure works:
- A client issues a new command, which results in the addition of extra data.
- Redis checks memory usage, and if it exceeds the maxmemory limit, it evicts keys according to the policy.
- A new command is executed, and so forth.
As a result, we continually cross the memory limit and then evict keys to get back under it.
If a command uses a lot of memory for a long time (for example, storing a large set intersection into a new key), the memory limit can be exceeded by a significant amount.
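The loop above can be sketched with a toy cache. This is a deliberately simplified model (the class name is invented for this example, and memory is modeled as one "unit" per key rather than real bytes):

```python
class TinyLRUCache:
    """Toy cache mimicking the maxmemory loop: every write may push usage
    over the limit, after which keys are evicted until we're back under it."""

    def __init__(self, maxmemory):
        self.maxmemory = maxmemory  # capacity in keys, standing in for bytes
        self.data = {}
        self.last_access = {}
        self.clock = 0

    def set(self, key, value):
        self.data[key] = value                   # 1. the write lands first...
        self._touch(key)
        while len(self.data) > self.maxmemory:   # 2. ...then usage is checked
            victim = min(self.last_access, key=self.last_access.get)
            del self.data[victim]                # 3. evict LRU keys until
            del self.last_access[victim]         #    we're under the limit

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None

    def _touch(self, key):
        self.clock += 1
        self.last_access[key] = self.clock
```

The write happening before the eviction check is the point: a single command that allocates a lot at once can overshoot the limit before any eviction runs.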
Approximated LRU algorithm
The Redis LRU algorithm is not an exact implementation. This means Redis cannot select the best candidate for eviction, i.e., the key that has gone the longest without being accessed. Instead, it approximates LRU by sampling a small number of keys and evicting the best candidate (the one with the oldest access time) among them.
However, since Redis 3.0, the algorithm has been enhanced to also maintain a pool of good eviction candidates. This improved its performance, allowing it to approximate the behavior of a true LRU algorithm more closely.
What's unique about the Redis LRU algorithm is that you can fine-tune its precision by adjusting the number of keys sampled for each eviction. The following configuration directive controls this parameter:
maxmemory-samples 5
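A minimal sketch of the sampling idea follows (the function name is invented for this example, and the candidate pool that Redis 3.0 adds on top of plain sampling is not modeled here):

```python
import random

def approx_lru_victim(last_access, maxmemory_samples=5, rng=random):
    """Pick an eviction victim the approximate way: sample a handful of
    keys and evict the one with the oldest access time among them.

    last_access maps key -> monotonically increasing access stamp."""
    keys = list(last_access)
    sample = rng.sample(keys, min(maxmemory_samples, len(keys)))
    return min(sample, key=last_access.get)
```

With a sample of 5 out of thousands of keys, the chosen victim is rarely the single oldest key, but it is almost always an old one, which is usually good enough.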
Redis does not employ a pure LRU implementation because it would consume more memory. However, for a typical application, the approximation behaves almost the same. The following graph compares Redis' LRU approximation with true LRU.
The test that produced the graphs above filled a Redis server with several keys. The keys were accessed in order, from the first to the last, so under an LRU policy the first keys are the best candidates for eviction. Later, 50% more keys were added, forcing half of the existing keys to be evicted.
Three types of dots may be seen in the graphs, producing three unique bands.
- The light grey band represents objects that were evicted.
- The grey band represents objects that were not evicted.
- The green band represents objects that were added.
In a theoretical LRU implementation, we expect the first half of the old keys to be expired.
As you can see, Redis 3.0 performs better with 5 samples than Redis 2.8. However, Redis 2.8 still retains most of the most recently accessed objects. With a sample size of 10, Redis 3.0's approximation comes very close to the theoretical performance of a true LRU algorithm.
It's important to remember that LRU is merely a model for predicting the likelihood that a given key will be accessed in the future. Furthermore, if your data access pattern resembles a power law, most accesses will fall in the set of keys that the LRU approximation handles well.
In simulations, the difference between true LRU and the Redis approximation was low or non-existent when using a power-law access pattern.
To approximate true LRU more closely, you can increase the sample size to 10 at the cost of slightly more CPU utilization, and check whether this makes a difference in your cache miss rate.
It's quite easy to experiment with different sample sizes using the CONFIG SET maxmemory-samples <count> command.
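One way to build intuition for the sample-size trade-off is to measure how often sampling picks the exact LRU victim. The simulation below is an assumption-laden toy (function name, uniform random access stamps, and trial counts are all invented for illustration), not a benchmark of Redis:

```python
import random

def agreement_with_true_lru(num_keys=100, samples=5, trials=2000, seed=7):
    """Fraction of evictions where random sampling picks the exact LRU
    victim. With uniform sampling this tends toward samples/num_keys."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        last_access = {i: rng.random() for i in range(num_keys)}
        true_victim = min(last_access, key=last_access.get)
        sample = rng.sample(list(last_access), samples)
        if min(sample, key=last_access.get) == true_victim:
            hits += 1
    return hits / trials

# Larger sample sizes track true LRU more faithfully,
# at the cost of more CPU work per eviction.
```

Picking the exact oldest key is a stricter criterion than Redis needs in practice (evicting any sufficiently old key is fine), but the trend with sample size is the same.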
The new LFU mode
The Least Frequently Used eviction mode is available starting with Redis 4.0. For some workloads, this mode may perform better (give a better hit/miss ratio). In LFU mode, Redis attempts to track how frequently items are accessed, evicting those that are rarely used. This means the keys you use frequently have a better chance of staying in memory.
The following policies can be used to configure the LFU mode:
- volatile-lfu: Evicts keys with an expire set, using approximated LFU.
- allkeys-lfu: Evicts any key, using approximated LFU.
LFU is similar to LRU in that it is approximated: it uses a probabilistic counter, known as a Morris counter, to estimate object access frequency using only a few bits per object, combined with a decay period so the counter diminishes over time. We don't want keys that were frequently accessed in the past to be treated as frequently accessed forever, so the decay lets the algorithm adapt to changes in the access pattern.
Eviction candidates are then sampled from that information in the same way LRU does (as described in the preceding section of this article).
Unlike LRU, however, LFU has several tunable parameters: for example, how quickly should a frequently accessed item lose rank if it is no longer accessed? The range of the Morris counters can also be tweaked to better fit the algorithm to specific use cases.
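The probabilistic increment can be sketched as follows. This is modeled on the logarithmic counter scheme Redis describes for LFU, but the constants, function name, and exact formula here are illustrative assumptions, not Redis' source code:

```python
import random

LFU_INIT_VAL = 5  # new keys start mid-range so they aren't evicted instantly

def lfu_incr(counter, lfu_log_factor=10, rng=random):
    """Probabilistically bump an 8-bit frequency counter.
    The higher the counter already is, the less likely it grows, so the
    counter saturates logarithmically with the real access count."""
    if counter >= 255:
        return counter
    base = max(counter - LFU_INIT_VAL, 0)
    if rng.random() < 1.0 / (base * lfu_log_factor + 1):
        counter += 1
    return counter

counter = LFU_INIT_VAL
for _ in range(100_000):
    counter = lfu_incr(counter)
# After ~100k hits the counter sits far below 100k: it only
# encodes the order of magnitude of the access frequency.
```

This is why a few bits per object suffice: the counter records roughly how popular a key is, not exactly how many times it was touched.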
Redis is set up to do the following by default:
- Saturate the counter at around one million requests.
- Decay the counter every minute.
Those defaults are reasonable values that have been evaluated experimentally, but you may want to tune these configuration parameters to find the best settings for your workload.
The example redis.conf file in the source distribution has instructions on tuning these parameters. In a nutshell, they are:
- lfu-log-factor 10
- lfu-decay-time 1
The decay time is self-explanatory: it is the number of minutes that must elapse for a counter to be decayed when it is sampled and found to be older than that value. The special value 0 means the counter decays every time it happens to be scanned, which is rarely useful.
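The decay side can be sketched in a few lines. Again, this is an illustrative model of the behavior described above (the function name and the decrement-per-period rule are assumptions for this example):

```python
def lfu_decay(counter, minutes_idle, lfu_decay_time=1):
    """Decay a frequency counter based on how long the key sat idle.
    The counter drops by one for each elapsed decay period;
    lfu_decay_time=0 models 'decay on every scan'."""
    if lfu_decay_time == 0:
        return max(counter - 1, 0)
    periods = minutes_idle // lfu_decay_time
    return max(counter - periods, 0)
```

Together with the probabilistic increment, this gives the algorithm its adaptivity: a once-hot key that goes cold loses its counter over time and becomes eligible for eviction.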
Frequently Asked Questions
- What is Redis?
Redis is an open-source key-value data store and cache. It is also known as a data structure server because keys can hold hashes, sets, lists, sorted sets, and strings.
- What language is Redis written in?
Redis is written in ANSI C. It is widely used as a caching and session management solution, storing data under unique keys.
- What does Redis stand for?
Redis stands for REmote DIctionary Server.
- What are the ways to interact with Redis?
Once the server is installed, you can use the client provided by the Redis installation by opening a command prompt and typing: redis-cli
- What is Redis used for?
Redis is a key-value store that can be used as a NoSQL database or as an in-memory cache to boost performance by serving data from system memory.
Conclusion
In this article, we have learned about key eviction in Redis. We hope this blog helps you understand the concept of eviction in Redis, and if you would like to learn more, check out our other blogs on Redis, MongoDB, databases, and operational databases.
Attempt our Online Mock Test Series on Coding Ninjas Studio now!
Happy Coding!