Implementation-1
The implementation of the above algorithm is given below.
- We've used Java's LinkedList through the Deque interface, since it is a doubly linked list that supports O(1) insertion and removal at both ends, which is exactly what the queue of pages needs.
- We've used a HashSet to store references of the pages currently present in the cache, so membership checks take O(1) time.
import java.util.*;

public class LRU_Cache
{
    //Doubly linked list implementation of a queue
    private Deque<Integer> LRU_Queue;
    //Hash set to store references of the keys in the cache
    private HashSet<Integer> hashSet;
    //Total number of frames in the LRU Cache
    private final int TOTAL_FRAMES;

    //Constructor for initialization
    LRU_Cache(int capacity)
    {
        LRU_Queue = new LinkedList<>();
        hashSet = new HashSet<>();
        TOTAL_FRAMES = capacity;
    }

    //Calling pages one by one
    public void call(int page)
    {
        //If the page is not present in the LRU Cache
        if (!hashSet.contains(page))
        {
            //If the cache is full, evict the least recently used page
            if (LRU_Queue.size() == TOTAL_FRAMES)
            {
                //Finding and removing the last page
                int last = LRU_Queue.removeLast();
                hashSet.remove(last);
            }
        }
        //The page is present in the cache, so remove it from its current position
        else
        {
            LRU_Queue.remove(page);
        }
        //Inserting that page at the front of the cache
        LRU_Queue.push(page);
        //Updating the reference in the hash set
        hashSet.add(page);
    }

    public void display()
    {
        Iterator<Integer> itr = LRU_Queue.iterator();
        System.out.println("The LRU Cache contains:");
        while (itr.hasNext())
        {
            System.out.println("|_" + itr.next() + "_|");
        }
    }

    public static void main(String[] args)
    {
        LRU_Cache cache = new LRU_Cache(3);
        cache.call(1);
        cache.call(2);
        cache.call(3);
        cache.call(4);
        cache.call(1);
        cache.call(2);
        cache.display();
        cache.call(5);
        cache.call(1);
        cache.call(2);
        cache.call(4);
        cache.call(3);
        cache.display();
    }
}
Output
The LRU Cache contains:
|_2_|
|_1_|
|_4_|
The LRU Cache contains:
|_3_|
|_4_|
|_2_|
Time Complexity
The time complexity of this approach is determined by the operations on the linked list.
- Adding a page at the front of the queue (push) and evicting one from the back (removeLast) each take O(1) time.
- On a cache hit, LRU_Queue.remove(page) has to scan the list to locate the page, which takes O(N) time.
So each call runs in O(N) time in the worst case, where N is the number of frames.
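To make the difference concrete, here is a small standalone sketch (an assumed demo class, not part of the article's LRU_Cache) showing which deque operations are cheap and which one forces a linear scan:
import java.util.*;

//Assumed illustration of the operation costs behind Implementation-1
public class Deque_Cost_Demo {
    public static void main(String[] args) {
        Deque<Integer> queue = new LinkedList<>();
        for (int page = 1; page <= 5; page++) {
            queue.push(page);              //O(1): insert at the front (most recently used)
        }
        queue.removeLast();                //O(1): evict from the back (least recently used)
        queue.remove(Integer.valueOf(3));  //O(N): must walk the list to find page 3 on a hit
        System.out.println(queue);         //Prints [5, 4, 2]
    }
}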
Space Complexity
The space complexity of this approach is O(N), where N is the total number of frames in the cache.
Also see, Difference Between HashMap and HashTable in Java
Implementation-2
In this approach, we use a hash map that maps keys to doubly linked list nodes. The hash map gives O(1) lookups and the doubly linked list gives O(1) reordering, which makes this approach more efficient than the first one.
import java.util.*;

//A node of the doubly linked list holding one key-value pair
class Cache_Node {
    int key;
    int value;
    Cache_Node prev_node;
    Cache_Node next_node;

    public Cache_Node(int key, int value)
    {
        this.key = key;
        this.value = value;
    }
}

//A class to implement LRU Cache
class LRUCache {
    private HashMap<Integer, Cache_Node> map;
    private int number_of_frames, count;
    private Cache_Node head, tail;

    public LRUCache(int number_of_frames)
    {
        this.number_of_frames = number_of_frames;
        map = new HashMap<>();
        head = tail = null;
        count = 0;
    }

    //Unlinks a node from the doubly linked list, updating head and tail when needed
    public void deleteNode(Cache_Node node)
    {
        Cache_Node prev = node.prev_node;
        Cache_Node next = node.next_node;
        if (prev != null) {
            prev.next_node = next;
        } else {
            head = next;
        }
        if (next != null) {
            next.prev_node = prev;
        } else {
            tail = prev;
        }
        node.prev_node = null;
        node.next_node = null;
    }

    //Inserts a node at the head (most recently used position)
    public void addToHead(Cache_Node node)
    {
        if (head == null) {
            head = tail = node;
            return;
        }
        node.next_node = head;
        head.prev_node = node;
        head = node;
    }

    //Returns the value for the key and marks it as most recently used, or -1 if absent
    public int get(int key)
    {
        if (map.get(key) != null) {
            Cache_Node node = map.get(key);
            int result = node.value;
            deleteNode(node);
            addToHead(node);
            return result;
        }
        return -1;
    }

    //Inserts or updates a key, evicting the least recently used node if the cache is full
    public void set(int key, int value)
    {
        if (map.get(key) != null) {
            //Key already present: update its value and move it to the head
            Cache_Node node = map.get(key);
            node.value = value;
            deleteNode(node);
            addToHead(node);
        }
        else {
            Cache_Node node = new Cache_Node(key, value);
            map.put(key, node);
            if (count < number_of_frames) {
                count++;
                addToHead(node);
            }
            else {
                //Cache is full: evict the least recently used node from the tail
                Cache_Node last = tail;
                deleteNode(last);
                map.remove(last.key);
                addToHead(node);
            }
        }
    }

    //Prints the cache contents from most to least recently used
    public void printCache() {
        Cache_Node temp = head;
        while (temp != null) {
            System.out.println("| " + temp.key + ": " + temp.value + " |");
            temp = temp.next_node;
        }
    }
}
public class LRU_Test {
    public static void main(String[] args)
    {
        //Creates a cache with three frames
        LRUCache cache = new LRUCache(3);
        cache.set(1, 10);
        cache.set(2, 20);
        cache.set(3, 30);
        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
        System.out.println("Value for the key: 2 is " + cache.get(2));
        cache.set(4, 40);
        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
        System.out.println("Value for the key: 1 is " + cache.get(1));
        System.out.println("Value for the key: 3 is " + cache.get(3));
        System.out.println("Value for the key: 4 is " + cache.get(4));
        System.out.println("Current sequence in cache");
        System.out.println(" -------");
        cache.printCache();
        System.out.println(" -------");
    }
}
Output
Current sequence in cache
-------
| 3: 30 |
| 2: 20 |
| 1: 10 |
-------
Value for the key: 2 is 20
Current sequence in cache
-------
| 4: 40 |
| 2: 20 |
| 3: 30 |
-------
Value for the key: 1 is -1
Value for the key: 3 is 30
Value for the key: 4 is 40
Current sequence in cache
-------
| 4: 40 |
| 3: 30 |
| 2: 20 |
-------
Time Complexity
The hash map makes the get operation O(1), since a key maps directly to its node. The doubly linked list makes adding a node at the head and removing a node O(1), so both get and set run in O(1) time.
Space Complexity
The space complexity of this approach is O(N), where N is the total number of nodes in the cache.
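As an aside, a similar O(1) behaviour can also be sketched with Java's built-in LinkedHashMap, which keeps entries in access order when constructed with accessOrder set to true. The class below is a minimal, assumed sketch (the name LinkedHashMapLRU is ours, not part of the implementations above), not a replacement for the implementation shown:
import java.util.*;

//A minimal LRU cache sketch on top of LinkedHashMap's access-order mode
public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashMapLRU(int capacity) {
        super(capacity, 0.75f, true); //accessOrder = true keeps entries in LRU order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; //Evict the least recently used entry on overflow
    }

    public static void main(String[] args) {
        LinkedHashMapLRU<Integer, Integer> cache = new LinkedHashMapLRU<>(3);
        cache.put(1, 10);
        cache.put(2, 20);
        cache.put(3, 30);
        cache.get(1);      //Key 1 becomes the most recently used
        cache.put(4, 40);  //Key 2 (least recently used) is evicted
        System.out.println(cache.keySet()); //Prints [3, 1, 4], least to most recently used
    }
}
Internally, LinkedHashMap relies on the same idea as Implementation-2: a hash table combined with a doubly linked list of entries.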
Now, let’s move on to the frequently asked questions on this topic.
Check out this problem - Queue Implementation
Frequently Asked Questions
What does LRU stand for?
LRU stands for Least Recently Used. It is a cache replacement policy that decides which element to evict when the cache is full: the one that has not been used for the longest time.
What is the LRU algorithm for Cache?
This caching method keeps recently used items near the top of the cache. Whenever an item is accessed or inserted, the LRU policy moves it to the top of the cache. When the cache limit is reached, the items that have been accessed least recently are removed from the cache, starting at the bottom.
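For example, using the LRUCache class from Implementation-2 with a capacity of two, the key that has gone unused the longest is the one that gets evicted (the wrapper class LRU_FAQ_Demo below is only for illustration):
//Assumes the LRUCache class from Implementation-2 is available in the same package
public class LRU_FAQ_Demo {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.set(1, 10);                 //Cache (most recent first): 1
        cache.set(2, 20);                 //Cache: 2, 1
        cache.get(1);                     //Key 1 becomes the most recently used -> Cache: 1, 2
        cache.set(3, 30);                 //Cache is full, so the least recently used key 2 is evicted -> Cache: 3, 1
        System.out.println(cache.get(2)); //Prints -1, key 2 is no longer in the cache
        System.out.println(cache.get(1)); //Prints 10
    }
}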
Conclusion
In this article, we've learned about the implementation of the LRU Cache, which is an important topic in operating systems. If you find it difficult to follow, we recommend first going through memory management techniques in OS.
You can also prepare with the top Operating System Interview Questions and answers asked by leading tech companies here.
Also check out some of the Guided Paths on topics such as Data Structure and Algorithms, Competitive Programming, Operating Systems, Computer Networks, DBMS, System Design, Basics of Java, etc. as well as some Contests, Test Series, Interview Bundles, and some Interview Experiences curated by top Industry Experts only on Coding Ninjas Studio.
Happy Learning!