Code360 powered by Coding Ninjas
Last Updated: Jul 15, 2024

Critical Section in OS

Author Divyansh Jain

Concurrent access to shared resources can cause unexpected or misleading outcomes in parallel programming, so the areas of a program where a shared resource is used must be protected against concurrent access. This protected region of code is called the critical section, or critical region.


Concurrent access to or modification of the same data may lead to data inconsistency. Ensuring consistency therefore requires mechanisms that let cooperating processes run smoothly. Cooperating processes are those that can affect, or be affected by, the execution of other processes.

Also see: Multiprogramming vs Multitasking, Open Source Operating System

What is a Critical Section Problem in OS?

The critical-section problem in operating systems is the starting point for our consideration of process synchronization. Consider a system with n processes (P0, P1, …, Pn-1). Every process has a critical section of code in which it may change common variables, update a table, write a file, and so on. When one process is running in its critical section, no other process is permitted to run in its critical section.

The term "critical section" refers to a code segment that is accessed by several processes. The critical section includes shared variables or resources that must be synchronized in order to ensure data consistency.

A critical section arises wherever a program accesses a shared resource, which can be any resource in a computer: a CPU, a data structure, an I/O device, or a memory location.

Two functions control entry to and exit from the critical section: the wait() function controls entry, and the signal() function controls exit.

The following schematic depicts the critical section:

[Figure: Critical Section]

The following are the four most important parts of the critical section:

  • Entry Section: This part of the code decides whether a process may enter its critical section.
  • Critical Section: This section enables a single process at a time to access and modify a shared variable or resource.
  • Exit Section: Through this section, other processes waiting in the Entry Section are allowed to enter their critical sections. It also ensures that a process that has finished executing is removed from the critical section.
  • Remainder Section: The Remainder Section refers to the rest of the code that isn't in the Critical, Entry, or Exit sections.

Solutions to the Critical Section Problem

Any proposed synchronization technique dealing with the critical section problem must fulfill the following requirements:

1. Mutual Exclusion

The system must make certain that:

  • The processes have mutually exclusive access to the critical section.
  • At any one time, only one process is present inside the critical section.
    A mutex (mutual exclusion object) is a form of binary semaphore used to manage access to a shared resource. To prevent prolonged priority inversion, it typically features a priority inheritance mechanism.

2. Progress

If no process is executing in its critical section and one or more processes wish to enter, then only the processes not executing in their remainder sections may participate in deciding which one enters next, and this decision cannot be postponed indefinitely.

3. Bounded Waiting

After a process has requested entry to its critical section, there must be a bound on the number of times other processes are allowed to enter their critical sections before that request is granted. Once the bound is reached, the system must let the waiting process enter its critical section, so no process waits indefinitely.

Note: Mutual Exclusion and Progress are the essential requirements, and every synchronization mechanism must meet them. Bounded Waiting is an optional criterion; however, it should be met where feasible.

In Process Synchronization, the critical section plays an essential part in resolving the problem. The following are the key approaches with respect to solving the critical section problem:

Peterson’s Solution

This is an extensively employed software-based solution to the critical section problem. As the name suggests, it was devised by the computer scientist Gary L. Peterson.

While one process is in its critical section, this approach allows the other process to execute only its remaining code, and vice versa. This ensures that only one process executes in the critical section at any given moment.

All three requirements are preserved in this solution:

  • Mutual exclusion: only one process has access to the critical section at any one moment.
  • Progress: a process that is not in the critical section cannot prevent other processes from entering it.
  • Bounded waiting: every process is given a fair chance to enter the critical section.



FLAG[i] = true                              // Pi announces it wants to enter
while ( (count != i) AND (CS is not free) ) { wait; }
// --- critical section ---
FLAG[i] = false                             // Pi has finished
count = j;   // hand the turn to another process Pj waiting for the CS



  • Suppose there are N processes (P1, P2, ..., PN), and each must enter the critical section at some point.
  • A FLAG[] array of size N is kept, with every entry initialized to false. Whenever a process needs to reach the critical section, it sets its flag to true; for example, if process Pi wishes to enter, it sets FLAG[i] = TRUE.
  • The count variable holds the number of the process that is currently allowed to enter the CS.
  • On exiting, the process that was in the critical section sets count to the number of another process from the list of ready processes.
  • Example: when count equals 2, P2 enters the critical section, and while exiting it sets count to 3, so P3 leaves its wait loop.
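The classic two-process form of Peterson's algorithm can be sketched in Python as follows. This is an illustrative sketch, not production code: it leans on CPython's global interpreter lock to approximate the sequentially consistent memory the algorithm assumes, and real programs should use threading.Lock instead. The names worker, flag, turn, and counter are ours, not from the article.

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # switch threads often so busy-waits stay short

flag = [False, False]  # flag[i] is True while process i wants to enter
turn = 0               # whose turn it is to yield when both want in
counter = 0            # shared resource updated inside the critical section

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # --- entry section ---
        flag[i] = True
        turn = other                          # politely offer the turn away
        while flag[other] and turn == other:  # spin while the other is inside
            pass
        # --- critical section ---
        counter += 1
        # --- exit section ---
        flag[i] = False
        # --- remainder section: nothing else to do ---

t0 = threading.Thread(target=worker, args=(0, 200))
t1 = threading.Thread(target=worker, args=(1, 200))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 400: every increment happened under mutual exclusion
```

Note how the entry section sets the process's own flag before giving the turn away; reversing those two writes would break mutual exclusion.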

Synchronization Hardware

For critical section code, several systems include hardware assistance. If we could prevent interruptions from occurring while a shared variable or resource is being updated, we could easily address the critical section problem in a single-processor system.

In this way, we could be certain that the current sequence of instructions would run in order without being preempted. Unfortunately, this method is not feasible in a multiprocessor system.

In a multiprocessor environment, disabling interrupts can take a long time, since the message must be sent to all processors. This message transmission lag delays the entry of threads into the critical section and reduces system efficiency.
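The hardware assistance mentioned above is commonly a test-and-set instruction, which atomically reads a flag's old value and sets it to True. Python exposes no such instruction, so the sketch below simulates it with Lock.acquire(blocking=False); TASLock and worker are our own illustrative names, under that stated assumption.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # keep the busy-wait loops short under the GIL

class TASLock:
    """Spinlock built on a (simulated) atomic test-and-set."""

    def __init__(self):
        self._flag = threading.Lock()  # stands in for the hardware flag

    def test_and_set(self):
        # Atomically set the flag and return its OLD value:
        # False means this thread just claimed the lock.
        return not self._flag.acquire(blocking=False)

    def lock(self):
        while self.test_and_set():  # spin while the old value was True
            pass

    def unlock(self):
        self._flag.release()

counter = 0
spin = TASLock()

def worker(n):
    global counter
    for _ in range(n):
        spin.lock()
        counter += 1  # critical section
        spin.unlock()

threads = [threading.Thread(target=worker, args=(500,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000
```

On real hardware the spinning itself is what makes test-and-set locks cheap for short critical sections but wasteful for long ones.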

Mutex Locks

Since synchronization hardware is not available on every platform, mutex locks were introduced as a purely software-based method.

In this method, a lock is acquired over the critical resources used inside the critical section in the entry section of the code. This lock is released in the exit section.
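With Python's threading.Lock as the mutex, the entry and exit sections reduce to acquiring and releasing the lock. This is a minimal sketch; balance and deposit are hypothetical names of our own.

```python
import threading

balance = 0
mutex = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with mutex:       # entry section: acquire the lock
            balance += 1  # critical section
        # exit section: the `with` block releases the lock

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 400000 -- no updates were lost
```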

Semaphore Solution

A semaphore is a non-negative integer variable shared across threads, and it is another solution to the critical section problem. A thread waiting on a semaphore can be signaled, and thereby woken, by another thread.

For process synchronization, it employs two atomic operations: wait() and signal().
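A sketch with Python's threading.Semaphore, where acquire() plays the role of wait() and release() plays the role of signal(). The inside/max_inside counters are our own instrumentation, added to show that at most two threads are ever in the protected region at once.

```python
import threading

sem = threading.Semaphore(2)  # allow at most 2 threads inside at a time
inside = 0
max_inside = 0
guard = threading.Lock()      # protects the two counters above

def use_resource():
    global inside, max_inside
    sem.acquire()             # wait(): decrement; block if the count is zero
    with guard:
        inside += 1
        max_inside = max(max_inside, inside)
    # ... use the shared resource here ...
    with guard:
        inside -= 1
    sem.release()             # signal(): increment; wake one waiting thread

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_inside)  # never exceeds 2
```

A semaphore initialized to 1 behaves like a mutex (a binary semaphore); larger initial counts ration access to a pool of identical resources.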


Uses of Critical section

Critical sections in data structures

In simple parallel programs, conflicting read-write variables can be split between threads, with a copy in each thread. Data structures such as linked lists, trees, and hash tables, however, contain interconnected data variables that cannot be divided across threads, making parallelism difficult to implement.

To work around this, one technique is to keep the complete data structure inside a single critical section and handle only one operation at a time. Another option is to lock only the node currently in use, so that other operations do not touch the same node. Either way, using a critical section ensures that the code produces the intended results.
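The first technique — keeping the whole structure inside one critical section — can be sketched with a coarse-grained lock. LockedList is a hypothetical class of our own, not a standard API:

```python
import threading

class LockedList:
    """A list whose every operation is one coarse-grained critical section."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def append(self, x):
        with self._lock:  # the whole structure is locked per operation
            self._items.append(x)

    def __len__(self):
        with self._lock:
            return len(self._items)

shared = LockedList()

def producer():
    for i in range(1000):
        shared.append(i)

threads = [threading.Thread(target=producer) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared))  # 4000
```

Coarse-grained locking is simple and correct but serializes all operations; per-node locking trades that simplicity for more parallelism.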

Critical sections in computer networking

Critical sections are also required in computer networking. Data reaching a network socket may not arrive in an ordered fashion. Suppose program X on a computer needs to take data from the socket, reorder it, and check it for errors. While this program is working on the data, no other application should access the same socket. The socket's data is therefore protected by a critical section, giving program X exclusive use of it.

Kernel-level critical sections

Kernel-level critical sections typically block thread and process migration between processors, and prevent preemption by interrupts and by other processes and threads. Critical sections are often nested, and nesting allows multiple critical sections to be entered and exited at little cost.

Because a critical section operates only on the processor on which it is entered, synchronization is needed only within the executing processor. This enables critical sections to be entered and exited at almost no cost: no inter-processor synchronization is required, only instruction stream synchronization, which most CPUs provide simply by halting the current execution state.

Using critical sections as a long-term locking primitive is not recommended: kernel-level critical sections are the basis of the software lockout problem.


Strategies For Avoiding Problems

  • Mutual Exclusion: Ensure only one process can be in the critical section at a time. This is often achieved using synchronization mechanisms like mutexes, semaphores, or monitors.
  • Progress: If no process is in the critical section and others are waiting, the next process should be allowed to enter promptly to prevent starvation.
  • Bounded Waiting: There should be a limit on how long a process can wait to enter the critical section to avoid indefinite blocking.

Examples of critical sections in real-world applications

  • Banking Transactions: Updating account balances to prevent overdrafts or double-spending requires exclusive access to account data (critical section).
  • Inventory Management Systems: When multiple users update stock levels concurrently, a critical section ensures accurate inventory by preventing race conditions.
  • Traffic Light Controllers: The logic for changing traffic light signals to maintain orderly flow benefits from critical sections to avoid conflicting instructions.

Advantages of Critical Sections in Process Synchronization

  • Data Integrity: Critical sections guarantee consistent data updates by preventing multiple processes from modifying shared resources simultaneously.
  • Race Condition Prevention: They eliminate race conditions where the outcome depends on unpredictable timing of concurrent access, leading to incorrect results.
  • Orderly Execution: Critical sections establish a well-defined order for process execution within the shared resource access region.

Disadvantages of Critical Sections in Process Synchronization

  • Overhead: Implementing critical sections introduces synchronization overhead, which can impact performance, especially in highly concurrent systems.
  • Priority Inversion: A higher-priority process can be blocked while a lower-priority process holds the lock on the critical section, leading to priority inversion.
  • Deadlocks: If processes hold locks on resources waiting for each other, a deadlock can occur, halting all involved processes.
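The deadlock risk in the last point can be avoided by always acquiring locks in a fixed global order. Below is a sketch using a hypothetical two-account transfer; Account, transfer, and ordering by id are our own illustration, not a standard API.

```python
import threading

class Account:
    def __init__(self, acct_id, balance):
        self.id = acct_id
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire locks in a fixed global order (by id) so that two
    # concurrent transfers can never wait on each other in a cycle.
    first, second = sorted((src, dst), key=lambda acc: acc.id)
    with first.lock:
        with second.lock:  # nested critical section
            src.balance -= amount
            dst.balance += amount

a = Account(1, 100)
b = Account(2, 100)

t1 = threading.Thread(target=lambda: [transfer(a, b, 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer(b, a, 1) for _ in range(1000)])
t1.start(); t2.start()
t1.join(); t2.join()
print(a.balance + b.balance)  # 200: money conserved, and no deadlock occurred
```

Without the sorted() ordering, one thread could lock a while the other locks b, and each would then wait forever for the other's lock.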

Frequently Asked Questions

How to solve critical section problems using the bakery algorithm?

In brief, each process takes a numbered ticket and the lowest number is served first. Ticket numbers need not be unique; ties are broken by process ID.
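A Python sketch of the bakery algorithm for three threads follows. Like the Peterson sketch, it relies on the GIL for the memory ordering that real hardware would need fences for, and choosing, number, and worker are our own names.

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # keep the busy-waits responsive

N = 3
choosing = [False] * N  # choosing[i]: thread i is picking a ticket
number = [0] * N        # number[i]: thread i's ticket (0 = not waiting)
counter = 0             # shared resource

def worker(i, iters):
    global counter
    for _ in range(iters):
        # entry: take a ticket one larger than any outstanding ticket
        choosing[i] = True
        number[i] = 1 + max(number)
        choosing[i] = False
        for j in range(N):
            while choosing[j]:            # wait while j is mid-ticket
                pass
            # serve lower tickets first; break ties by thread id
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass
        counter += 1       # critical section
        number[i] = 0      # exit: discard the ticket

threads = [threading.Thread(target=worker, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 150
```

The choosing flags make the non-atomic ticket-taking safe: no thread compares tickets with a neighbor that is still in the middle of picking one.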

What are the two approaches to handling critical section problems?

Preemptive and non-preemptive kernels. A preemptive kernel permits a process to be preempted while in kernel mode. A non-preemptive kernel does not: a kernel-mode process continues to run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

What can the critical section be defined by?

A code segment where shared resources are accessed, requiring exclusive execution by one process at a time.

What is race condition and critical section in OS?

A race condition is an unpredictable outcome caused by concurrent access to shared resources without proper synchronization; the critical section is the code segment where that shared access occurs and must be protected.

What are the primary requirements of Synchronizing mechanisms?

Mutual Exclusion and Progress are the primary requirements that must necessarily be fulfilled, while Bounded Waiting is considered an optional requirement.


To summarize, we discussed every aspect of the critical section problem in the execution of processes: the various solutions to the problem, its uses, and the three criteria for process synchronization. To understand Operating Systems better, you can go through many more articles on our platform.

Recommended Readings: 

Do check out The Interview guide for Product Based Companies as well as some of the Popular Interview Problems from Top companies like Amazon, Adobe, Google, Uber, Microsoft, etc. on Code360.

Also check out some of the Guided Paths on topics such as Data Structure and Algorithms, Competitive Programming, Operating Systems, Computer Networks, DBMS, System Design, etc. as well as some Contests, Test Series, Interview Bundles, and some Interview Experiences curated by top Industry Experts.

Happy Learning Ninja :)