Code360 powered by Coding Ninjas X Naukri.com
Table of contents
1. Introduction
2. Page Replacement Algorithms
3. What is LRU?
4. Example of LRU Algorithm
5. Algorithm of LRU
6. Implementation of LRU Algorithm in Java
7. Pseudocode of LRU Algorithm
8. Implementation of LRU Algorithm in OS
9. Advantages of LRU Page Replacement Algorithm
10. Disadvantages of LRU Page Replacement Algorithm
11. Frequently Asked Questions
  11.1. What is a page fault?
  11.2. Why LRU is better than FIFO?
  11.3. What happens when the page fault rate becomes too high or too low?
  11.4. What is the goal of the page replacement algorithms?
  11.5. How do FIFO and LRU page replacement algorithms compare?
12. Conclusion
Last Updated: Mar 28, 2024
Easy

LRU Page Replacement Algorithm

Introduction

As we are aware, RAM offers high performance but comes at a high cost even for a small amount of storage. Hard disks, on the other hand, are cheap but quite slow.

To work around this limitation, some algorithms are designed so that there is no need to expand the size of RAM.

Imagine you have a small 2 GB RAM, but your computer is running a large program that requires 3 GB to function.

Instead of upgrading your RAM, the LRU algorithm manages the existing RAM's storage capacity by replacing the least recently used pages with newly accessed pages, ensuring that the most critical data is always available in memory for quick access.

Isn’t it amazing? 

This article mainly focuses on explaining the concept of the LRU Page Replacement Algorithm in detail. 

So, grab your seat and let us get started with our new recipe today. 

Page Replacement Algorithms 

Page replacement is a common memory management technique. The concept is straightforward, i.e., the main memory (RAM) cannot hold all of the referenced pages at once. As a result, whenever a new page is referenced, an existing page in the main memory gets replaced by the newly requested page.

There are two terms that are important to understand: 

  • Page Fault: A page fault occurs when a running program tries to access a page of memory that is not present in the main memory (RAM). Remember the term main memory here. 
     
  • Page Hit: On the other hand, if the page already exists in the main memory (RAM), that condition is called Page Hit. 
     

Any page replacement algorithm works by replacing an existing page with the newly desired page so as to minimize the number of page faults.

These are three common types of Page Replacement Algorithms:

  • First In First Out (FIFO) Page Replacement Algorithm
     
  • LRU Page Replacement Algorithm
     
  • Optimal Page Replacement Algorithm
     

Let us start with our topic for today, i.e., the LRU Page Replacement Algorithm.


What is LRU? 

In the section above, we discussed that every Page Replacement Algorithm works by replacing pages with new ones. The interesting part is understanding how each algorithm chooses which page to replace.

So, LRU stands for Least Recently Used. As the name suggests, it is based on the concept that if any page fault occurs, it replaces the least recently used page with the newly desired one. 

The LRU algorithm keeps track of the order in which pages are accessed and maintains a list of pages in memory, ordered from most recently accessed to least recently accessed. 

When a new page needs to be loaded into memory, and there is no free space, the algorithm scans the list from the beginning and replaces the page that was accessed least recently.

In other words, any page that has been inactive for an extended period of time is more likely to remain inactive. As a result, it is preferable to replace that page.

The LRU algorithm works on the principle of locality of reference, which states that programs tend to access a small portion of memory at any given time. Therefore, pages that were accessed recently are more likely to be accessed again in the near future than pages that were accessed a long time ago.

Let us take an example to understand the algorithm in a better way. 

Example of LRU Algorithm 

Consider the following reference string:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2

Given that there are a total of 4 memory slots (frames) available, all initially empty.

Let us start with the first page: 7. Since all slots in memory are initially empty, inserting the first few pages will cause page faults, i.e., the pages are not present in the main memory (RAM).

Memory: [7]

While inserting the next page, 0, another page fault occurs.

Memory: [7, 0]

Next: 1, again a page fault, as the page is not present in main memory.

Memory: [7, 0, 1]

Next: 2, again a page fault.

Memory: [7, 0, 1, 2]

Next is 0. Since it is already present in memory, there is no need to replace any page. This condition is known as a Page Hit.

Memory: [7, 0, 1, 2] (page 0 becomes the most recently used)

Next is 3. Since there is no free slot in memory, LRU comes into the picture. According to LRU, the least recently used page gets replaced by the new one. If you go back and check, 7 is the least recently used page, hence we replace it with 3.

Memory: [3, 0, 1, 2]

Next is 0. It is already present in memory; hence, a Page Hit.

Memory: [3, 0, 1, 2]

Next is 4. It is neither present in memory nor is any slot vacant, hence LRU comes into the picture again.

Think: which page will get replaced? The least recently used page here is 1.

Memory: [3, 0, 4, 2]

Next up is 2, and yes, you are right: it is a Page Hit.

Memory: [3, 0, 4, 2]

You can play around with the rest of the pages.  

The final result will look something like this: 

Final memory: [3, 0, 1, 2]

Total no. of Page Faults: 7

Total no. of Page Hits: 8
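Before moving to the full algorithm, the walkthrough above can be replayed in code. The sketch below (class name `LRUTrace` is my own) runs the same reference string with 4 frames and prints the memory contents after each reference, marking hits and faults:

```java
import java.util.ArrayList;
import java.util.List;

public class LRUTrace {

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2};
        int frames = 4;

        // index 0 holds the least recently used page,
        // the last index holds the most recently used page
        List<Integer> memory = new ArrayList<>();
        int faults = 0;

        for (int page : refs) {
            // List.remove(Object) returns true if the page was present
            boolean hit = memory.remove(Integer.valueOf(page));
            if (!hit) {
                faults++;
                if (memory.size() == frames) {
                    memory.remove(0); // evict the least recently used page
                }
            }
            memory.add(page); // the page is now the most recently used
            System.out.println("Ref " + page + (hit ? " (hit)  " : " (fault)")
                    + " -> " + memory);
        }
        System.out.println("Faults: " + faults + ", Hits: " + (refs.length - faults));
        // prints "Faults: 7, Hits: 8"
    }
}
```

Note that the printed lists here show recency order (least to most recently used), not fixed slot positions, so they will differ from the slot diagrams above while representing the same set of pages.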

Let us now discuss the algorithm for solving the LRU.


Algorithm of LRU

You can follow this algorithm to implement the LRU Page Replacement Algorithm: 

Step 1: Define the size of the page table (number of pages that can be held in memory at once).

Step 2: Initialize two variables: page faults and page hits, both initially set to 0.

Step 3: Initialize an empty list to hold the pages currently in memory.

Step 4: Iterate through the reference string.

Step 5: For each page request, check if the page is already in the list of pages currently in memory:

  • If the page is in memory, remove it from its current position in the list and append it to the end of the list (since it is now the most recently used page).
     
  • If the page is not in memory, add it to the end of the list if there is space (i.e., the list is not yet full); otherwise, remove the least recently used page (i.e., the first page in the list) and append the new page to the end of the list.


Step 6: If the page was not found in memory in Step 5, increment the page fault counter.

Step 7: After all page requests have been processed, calculate the number of page hits (total requests minus page faults).

Step 8: Finally, print the number of page faults and page hits to the console.

Implementation of LRU Algorithm in Java

  • Java

Java

/* Java program to implement 
  LRU Page Replacement Algorithm
*/
import java.util.ArrayList;
import java.util.List;

public class LRUPageReplacement {

   public static void main(String[] args) {
      
       // page faults
       int faults = 0;
      
       // page hits
       int hits = 0;
      
       // defining the table size
       int tableSize = 4;
      
       // reference string
       int[] referenceString = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2};
      
       /* the list that stores the current pages in memory
          and where page replacement is executed.
       */
       List<Integer> pages = new ArrayList<>();


       // iterating through the ref string
       for (int page : referenceString) {
          
           /* check if the page already exists in the list,
              if yes, we remove the page and add it
              to the end of the list.
           */
           if (pages.contains(page)) {
              
               // removing
               pages.remove(Integer.valueOf(page));
              
               // appending
               pages.add(page);
           }
           // if page is not there
           else {
               /* we first check the length of the page list.
                  if still space left in the list, we first fill it
               */
               if (pages.size() < tableSize) {
                   pages.add(page);
                  
               } else {
                   /* if the page list is filled. We remove the
                      first page.
                   */
                   pages.remove(0);
                   // and then we append the page to the end of the list.
                   pages.add(page);
               }
               // Increment 1 in Page faults
               faults += 1;
           }
       }
       // calculate page hits
       hits = referenceString.length - faults;
      
       System.out.println("Total page faults = " + faults);
       System.out.println("Total page Hits = " + hits);
      
   }
}


Output 

Total page faults = 7
Total page Hits = 8

Pseudocode of LRU Algorithm

Here is a simple pseudocode representation of the Least Recently Used (LRU) Cache algorithm:

  1. Initialize an empty cache (consider using a data structure that allows quick insertion, deletion and lookup, such as a hashmap or dictionary).
  2. When a request comes in:
    1. If the item is found in the cache:
      1. Remove the item from its current position.
      2. Add the item to the top (most recent position) of the cache.
    2. If the item is not found in the cache:
      1. If the cache is full:
        1. Remove the item from the bottom (least recently used position) of the cache.
      2. Add the item to the top (most recent position) of the cache.
  3. Repeat step 2 for all incoming requests.

Implementation of LRU Algorithm in OS

There are various ways to implement the LRU algorithm in an operating system. One method is to store the pages in a linked list: the most recently used page is at the head of the list, while the least recently used page is at the tail. Whenever a page is added or accessed, it is moved to the head of the list. When a page needs to be evicted, the page at the tail of the linked list is removed.

Another approach is to use a hash table that maps page numbers to page frames. A new entry is added to the hash table each time a page is brought into memory. Whenever a page is accessed, its last_used field is updated with the current time. When a page needs to be evicted, the page with the oldest last_used value is removed.

A linked list and a hash table can also be combined to implement the LRU method: the hash table provides fast lookup of pages by page number, while the linked list keeps the pages ordered by recency of use.
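In Java, this hash table plus linked list combination is available out of the box as `java.util.LinkedHashMap` with access order enabled. The sketch below (the `LRUCache` class name and the sample page numbers are my own choices for illustration) keeps at most four pages and evicts the least recently used one automatically:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true: iteration order runs from least
        // recently accessed to most recently accessed entry
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the
        // least recently used entry once capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRUCache<Integer, String> frames = new LRUCache<>(4);
        for (int page : new int[]{7, 0, 1, 2, 0, 3}) {
            frames.put(page, "page-" + page);
        }
        // Re-referencing 0 made it recent, so inserting 3 evicted page 7
        System.out.println(frames.keySet()); // prints [1, 2, 0, 3]
    }
}
```

This mirrors the first six references of the worked example above: after 7, 0, 1, 2, a hit on 0, and an insert of 3, page 7 is the one evicted.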

Advantages of LRU Page Replacement Algorithm

Reduces the number of page faults: The main advantage of the LRU algorithm is that, for workloads exhibiting locality of reference, it keeps the page fault count low, often close to that of the theoretically optimal algorithm.

Optimizes memory usage: The LRU algorithm keeps the pages that are most frequently accessed in memory, which optimizes memory usage and ensures that frequently used pages are available quickly.

Simple implementation: The LRU algorithm is easy to understand and implement. It requires only a linked list or a priority queue to keep track of the order in which pages are accessed.

Disadvantages of LRU Page Replacement Algorithm

Computationally Expensive: It can be computationally expensive to maintain the access history of each page, especially in systems with a large number of pages. 

Performance Issue: It may perform poorly in cases where the access patterns of a program do not follow the principle of locality of reference.

Limited scalability: The LRU algorithm may not be scalable for large-scale systems because it requires maintaining a linked list or priority queue that can become prohibitively large.

Frequently Asked Questions

What is a page fault?

A page fault is a type of error that occurs in the computer memory when a program accesses a page (block) of data that is not currently present in the main memory or RAM.

Why LRU is better than FIFO?

For page replacement in memory management, LRU (Least Recently Used) is typically preferable to FIFO (First-In-First-Out) because LRU considers recent usage trends and preserves more relevant pages. In contrast, FIFO may evict valuable pages too soon, especially under dynamic workloads.
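The difference can be seen on a concrete reference string. The sketch below (method and class names are my own) counts faults under both policies for the string 3, 2, 1, 3, 4, 3 with 3 frames, where LRU's awareness of the recent hit on page 3 saves it one fault:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class FifoVsLru {

    // FIFO: evict pages strictly in arrival order; hits do not change the queue
    static int fifoFaults(int[] refs, int frames) {
        Deque<Integer> queue = new ArrayDeque<>();
        int faults = 0;
        for (int page : refs) {
            if (!queue.contains(page)) {
                faults++;
                if (queue.size() == frames) queue.removeFirst(); // oldest arrival
                queue.addLast(page);
            }
        }
        return faults;
    }

    // LRU: every access (hit or miss) moves the page to the most-recent end
    static int lruFaults(int[] refs, int frames) {
        List<Integer> memory = new ArrayList<>();
        int faults = 0;
        for (int page : refs) {
            boolean hit = memory.remove(Integer.valueOf(page));
            if (!hit) {
                faults++;
                if (memory.size() == frames) memory.remove(0); // least recent
            }
            memory.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {3, 2, 1, 3, 4, 3};
        System.out.println("FIFO faults: " + fifoFaults(refs, 3)); // prints 5
        System.out.println("LRU  faults: " + lruFaults(refs, 3));  // prints 4
    }
}
```

When page 4 arrives, FIFO evicts 3 (the oldest arrival) even though it was just used, and then faults again on the final reference to 3; LRU evicts 2 instead and scores a hit.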

What happens when the page fault rate becomes too high or too low?

If the page fault rate is too high, it indicates that the process has too few frames allocated to it. If the page fault rate falls below a lower limit, frames can be taken away from the process; similarly, if the page fault rate exceeds an upper limit, more frames can be allocated to the process.

What is the goal of the page replacement algorithms?

Page replacement algorithms are a vital part of virtual memory management. They help the operating system decide which memory page should be moved out to make space for the required page. The long-run objective of all page replacement algorithms is to minimize the number of page faults.

How do FIFO and LRU page replacement algorithms compare?

FIFO (First-In-First-Out) and LRU (Least Recently Used) are both page replacement algorithms used in OS memory management. FIFO always replaces the oldest page in memory, while LRU replaces the least recently used page, taking actual usage patterns into account.

Conclusion

Kudos, you made it this far. To summarise, we discussed the LRU Page Replacement Algorithm, which works on the principle of locality of reference, walked through its algorithm and an implementation in Java, and finally covered its advantages and disadvantages.

Recommended Reading:


Do check out the Interview Guide for Product Based Companies as well as some of the Popular Interview Problems from top companies like Amazon, Adobe, Google, Uber, Microsoft, etc., on Coding Ninjas Studio.

Also, check out some of the Guided Paths on topics such as Data Structures and Algorithms, Competitive Programming, Operating Systems, Computer Networks, DBMS, and System Design, as well as some Contests, Test Series, Interview Bundles, and Interview Experiences curated by top industry experts, only on Coding Ninjas Studio.
