Table of contents
1. Introduction
2. Implementation of LRU cache
2.1. Data structures needed to implement LRU
2.2. Algorithm
2.3. Example
3. Implementation-1
4. Implementation-2
4.1. Output
5. Frequently Asked Questions
5.1. What does LRU stand for?
5.2. What is the LRU algorithm for Cache?
6. Conclusion
Last Updated: Mar 27, 2024

LRU Cache Implementation

Introduction

An operating system is responsible for running more than one process at a time, so it must manage memory efficiently. There are various page-replacement techniques, like FIFO, LIFO, and LRU, to accomplish this.

 

Least Recently Used (LRU) is a widely used technique. It arranges data in order of use, making it easy to see which items haven’t been used for the longest time. Identifying such items is a drawback of earlier algorithms like FIFO and LIFO.

 

You must have seen that in shops, items in higher demand are placed in the outer showcases. Similarly, the pages that are accessed frequently are kept in the LRU cache. 

 

In this article, we’ll discuss the implementation of the LRU cache. Before that, you can revise your concepts of memory management techniques in OS.

Implementation of LRU cache

There is no particular data structure that implements the LRU algorithm by itself. We need a combination of available data structures to implement this technique.

Data structures needed to implement LRU

 

An LRU cache can be implemented by combining these two data structures: 

 

  1. A doubly linked list implementation of a queue: the most recently used item will be at the head of the list and the least recently used item at the tail. 
  2. A hash set: to store references to the keys currently present in the cache.


Algorithm

A process is divided into pages, and those pages are stored in memory frames. Every time a process executes, its pages are brought into the LRU cache.
 

If a page needed by the processor is not present in the LRU cache, we bring it in: we add a new node at the front of the queue and record the page’s reference in the hash set. If the LRU cache (the doubly linked list) is full, we must first evict the least recently used page, at the tail, to make room for the required one.
 

If the page is already present somewhere in the cache, we have to move it to the front. A short sketch of this logic is given below, followed by a worked example.
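
A minimal sketch of this hit/miss logic follows. The names (LruSketch, access, queue, inCache, capacity) are illustrative, not part of the original article; the complete program appears in Implementation-1 below.

import java.util.*;

//Sketch only: illustrative names; the full program is Implementation-1 below
class LruSketch
{
	private Deque<Integer> queue = new LinkedList<>(); //front = most recently used
	private Set<Integer> inCache = new HashSet<>();    //pages currently in the cache
	private final int capacity = 3;                    //number of frames (assumed)

	void access(int page)
	{
		if (!inCache.contains(page)) {          //miss
			if (queue.size() == capacity) {     //cache full: evict the LRU page
				inCache.remove(queue.removeLast());
			}
		} else {                                //hit: unlink from its current position
			queue.remove(page);                 //O(N) scan; Implementation-2 avoids this
		}
		queue.addFirst(page);                   //page becomes the most recently used
		inCache.add(page);
	}
}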

Example

 

Initially, there are no pages in the memory. The cache is implemented as a queue, so insertion and deletion take place as shown in the diagram below:

[Diagram: pages are inserted at the front of the queue and evicted from the rear]

The processing of the reference string using the given LRU cache is shown below.

[Figures 1-3: step-by-step processing of the reference string through the LRU cache]
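
Since the figures are not reproduced here, the following is a brief textual trace. It assumes a 3-frame cache and the reference string 1, 2, 3, 4, 1, 2 (the same sequence fed to the cache in Implementation-1 below); the front of the cache holds the most recently used page.

Access 1: miss          → cache [1]
Access 2: miss          → cache [2, 1]
Access 3: miss          → cache [3, 2, 1]
Access 4: miss, evict 1 → cache [4, 3, 2]
Access 1: miss, evict 2 → cache [1, 4, 3]
Access 2: miss, evict 3 → cache [2, 1, 4]

On a miss with a full cache, the least recently used page (at the rear) is evicted; on a hit, the page is simply moved to the front.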



Implementation-1

The implementation of the above algorithm is given below. 

 

  • We’ve used Java’s LinkedList through its Deque interface, taking advantage of the fact that it internally models a doubly linked queue/deque. 
  • We’ve used a HashSet to store references of pages in the cache. 
     
import java.util.*;

public class LRU_Cache
{
	//Doubly linked list implementation of a queue
	private Deque<Integer> LRU_Queue;

	//Hash set to store references of the keys in the cache
	private HashSet<Integer> hashSet;

	//Total number of frames in the LRU cache
	private final int TOTAL_FRAMES;

	//Constructor for initialization
	LRU_Cache(int capacity)
	{
		LRU_Queue = new LinkedList<>();
		hashSet = new HashSet<>();
		TOTAL_FRAMES = capacity;
	}

	//Calling pages one by one
	public void call(int page)
	{
		//If the page is not present in the LRU cache
		if (!hashSet.contains(page))
		{
			//If the cache is full, evict the least recently used page
			if (LRU_Queue.size() == TOTAL_FRAMES)
			{
				int last = LRU_Queue.removeLast();
				hashSet.remove(last);
			}
		}
		//The page is present: unlink it from its current position
		else {
			LRU_Queue.remove(page);
		}
		//Insert the page at the front of the cache (most recently used)
		LRU_Queue.push(page);
		//Record the page's reference in the hash set
		hashSet.add(page);
	}

	public void display()
	{
		Iterator<Integer> itr = LRU_Queue.iterator();
		System.out.println("The LRU Cache contains:");
		while (itr.hasNext())
		{
			System.out.println("|_" + itr.next() + "_|");
		}
	}

	public static void main(String[] args) {
		LRU_Cache cache = new LRU_Cache(3);
		cache.call(1);
		cache.call(2);
		cache.call(3);
		cache.call(4);
		cache.call(1);
		cache.call(2);
		cache.display();
		cache.call(5);
		cache.call(1);
		cache.call(2);
		cache.call(4);
		cache.call(3);
		cache.display();
	}
}

 

Output

The LRU Cache contains:

|_2_|

|_1_|

|_4_|

The LRU Cache contains:

|_3_|

|_4_|

|_2_|

Time Complexity

The time complexity of the above approach depends on insertion and deletion in the linked list.

  • Adding an element at the beginning takes O(1) time. 
  • Removing an arbitrary element takes O(N) time, since the list must be scanned to find the page. This is the overhead Implementation-2 below avoids.

Space Complexity

The space complexity of this approach is O(N), where N is the total number of frames in the cache. 


Implementation-2

In this approach, we use a hash map from keys to doubly linked nodes. Because the map gives direct access to a page’s node, the node can be unlinked and moved to the front in O(1) time, which makes this approach more efficient than the first one. 
 

import java.util.*;

class Cache_Node {
	int key;
	int value;
	Cache_Node prev_node;
	Cache_Node next_node;

	public Cache_Node(int key,  int value)
	{
		this.key = key;
		this.value = value;
	}
}

//A class to implement LRU Cache
class LRUCache {
	private HashMap<Integer, Cache_Node> map;
	private int number_of_frames, count;
	private Cache_Node head, tail;

	public LRUCache(int number_of_frames)
	{
		this.number_of_frames = number_of_frames;
		map =  new HashMap<>();
		head = tail = null ;
		count =  0;
	}

	public void deleteNode(Cache_Node node)
	{
		Cache_Node prev = node.prev_node;
		Cache_Node next = node.next_node;
		if (prev != null) {
			prev.next_node = next;
		} else {
			//The node was the head
			head = next;
		}

		if (next != null) {
			next.prev_node = prev;
		} else {
			//The node was the tail
			tail = prev;
		}

		//Clear the stale links of the unlinked node
		node.prev_node = node.next_node = null;
	}

	public void addToHead(Cache_Node node)
	{
		node.prev_node = null;
		node.next_node = head;
		if (head != null) {
			head.prev_node = node;
		}
		head = node;
		if (tail == null) {
			//First node in the list
			tail = node;
		}
	}

	public int get(int key)
	{
		Cache_Node node = map.get(key);
		if (node != null) {
			int result = node.value;
			//Move the accessed node to the front
			deleteNode(node);
			addToHead(node);
			return result;
		}
		return -1;
	}

	public void set(int key, int value)
	{
		Cache_Node node = map.get(key);
		if (node != null) {
			//Key already cached: update its value and move it to the front
			node.value = value;
			deleteNode(node);
			addToHead(node);
		}
		else {
			node = new Cache_Node(key, value);
			map.put(key, node);
			if (count < number_of_frames) {
				count++;
			}
			else {
				//Cache full: evict the least recently used node (the tail)
				Cache_Node lru = tail;
				deleteNode(lru);
				map.remove(lru.key);
			}
			addToHead(node);
		}
	}

	public void printCache() {
		Cache_Node temp = head;
		while (temp != null) {
			System.out.println("| " + temp.key +  ": " + temp.value +  " |");
			temp = temp.next_node;
		}
	}
}

public class LRU_Test {
	public static void main(String[] args)
	{
//Creates a cache with three frames
		LRUCache cache =  new LRUCache(3);

		cache.set(1,  10);
		cache.set(2,  20);
		cache.set(3,  30);
		System.out.println("Current sequence in cache");
		System.out.println(" -------");
		cache.printCache();
		System.out.println(" -------");
		System.out.println("Value for the key: 2 is " + cache.get(2));
		cache.set(4,  40);
		System.out.println("Current sequence in cache");
		System.out.println(" -------");
		cache.printCache();
		System.out.println(" -------");
		System.out.println("Value for the key: 1 is " + cache.get(1));
		System.out.println("Value for the key: 3 is " + cache.get(3));
		System.out.println("Value for the key: 4 is " + cache.get(4));

		System.out.println("Current sequence in cache");
		System.out.println(" -------");
		cache.printCache();
		System.out.println(" -------");
	}
}

 

Output

Current sequence in cache
-------
| 3: 30 |
| 2: 20 |
| 1: 10 |
-------
Value for the key: 2 is 20
Current sequence in cache
-------
| 4: 40 |
| 2: 20 |
| 3: 30 |
-------
Value for the key: 1 is -1
Value for the key: 3 is 30
Value for the key: 4 is 40
Current sequence in cache
-------
| 4: 40 |
| 3: 30 |
| 2: 20 |
-------

Time Complexity 

The hash map makes the get operation O(1). The doubly linked list makes adding and removing nodes O(1) as well, so both get and set run in constant time.

 

Space Complexity

The space complexity of this approach is O(N), where N is the total number of nodes in the cache.
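
As an aside (not part of the implementations above), Java’s standard library can express the same idea very compactly: a LinkedHashMap constructed with accessOrder = true keeps its entries in usage order, and overriding removeEldestEntry turns it into an LRU cache. A minimal sketch, with an illustrative class name:

import java.util.LinkedHashMap;
import java.util.Map;

//A minimal LRU cache built on LinkedHashMap in access-order mode
class SimpleLRU<K, V> extends LinkedHashMap<K, V>
{
	private final int capacity;

	SimpleLRU(int capacity)
	{
		//accessOrder = true orders entries by most recent access
		super(16, 0.75f, true);
		this.capacity = capacity;
	}

	@Override
	protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
	{
		//Evict the least recently used entry once the capacity is exceeded
		return size() > capacity;
	}
}

For instance, new SimpleLRU<Integer, Integer>(3) behaves like the three-frame caches above: get and put run in O(1), and the least recently used entry is evicted automatically on overflow.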

 

Now, let’s move on to the frequently asked questions on this topic. 


Frequently Asked Questions

What does LRU stand for?

LRU stands for Least Recently Used. It is a technique for deciding which elements to keep in the cache: the element that has gone unused the longest is the first to be evicted.

What is the LRU algorithm for Cache?

This caching method keeps recently used items near the top of the cache: whenever an item is accessed, LRU moves it to the top. When the cache limit is reached, the items that have been accessed least recently are deleted, starting from the bottom of the cache.

Conclusion

In this article, we’ve learned about the implementation of the LRU cache, which is an important topic in OS. If you find it difficult to understand, we recommend you first go through memory management techniques in OS.

You can also prepare the top Operating System Interview Questions and their answers asked by leading tech companies here.


Also check out some of the Guided Paths on topics such as Data Structure and Algorithms, Competitive Programming, Operating Systems, Computer Networks, DBMS, System Design, Basics of Java, etc. as well as some Contests, Test Series, Interview Bundles, and some Interview Experiences curated by top Industry Experts only on Coding Ninjas Studio.

Happy Learning!
