Table of contents
1. Introduction
2. What is multiprocessor scheduling?
3. Types of Multiprocessor Scheduling in OS
  3.1. Processor Affinity
    3.1.1. Soft Affinity
    3.1.2. Hard Affinity
  3.2. Load Balancing
    3.2.1. Push Migration
    3.2.2. Pull Migration
  3.3. Multicore Processors
    3.3.1. Coarse-grained multithreading
    3.3.2. Fine-grained multithreading
  3.4. Symmetric Multiprocessor
  3.5. Asymmetric Multiprocessing
4. Virtualization and Threading
5. Advantages of Multiple-Processor Scheduling
6. Disadvantages of Multiple-Processor Scheduling
7. Frequently Asked Questions
  7.1. What is multiprocessor scheduling in operating system?
  7.2. What are the three types of processor scheduling?
  7.3. What is multi core scheduling?
  7.4. What is the function of processor scheduling?
  7.5. Why scheduling is important in OS?
8. Conclusion
Last Updated: Mar 29, 2024

Multiple Processors Scheduling in Operating System


Introduction


Multiprocessor scheduling involves multiple CPUs, which makes load sharing possible. Load sharing is the balancing of work across several processors. It is more complex than single-processor scheduling.

In this article, we will learn about multiple-processor scheduling in operating systems, the types of multiple-processor scheduling, how it works, and more.

Recommended Topic: FCFS Scheduling Algorithm

What is multiprocessor scheduling?

Multiprocessor scheduling is the art of managing multiple CPUs (Central Processing Units) in a computer system. When you have more than one processor, the operating system needs a way to decide:

  • Which processes should run on which CPUs
  • For how long each process should be allowed to run on a CPU before being swapped out for another

The goal of multiprocessor scheduling is to maximize the overall performance and efficiency of the system. This means keeping all the CPUs busy with useful work while ensuring fair allocation of resources to different processes.
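
To make this concrete, here is a minimal sketch in Python (the function name burn and the work sizes are made up for illustration). The program only creates one CPU-bound worker per available CPU; it is the operating system's multiprocessor scheduler that decides which core each worker actually runs on and for how long.

```python
# A minimal sketch: the program creates one worker process per CPU, and the
# OS multiprocessor scheduler decides where and when each worker runs.
import multiprocessing as mp
import os

def burn(n: int) -> int:
    # A small CPU-bound task that the scheduler can place on any core.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cpus = mp.cpu_count()
    print(f"CPUs visible to this program: {cpus}")
    # One worker per CPU; the OS shares the load across the cores.
    with mp.Pool(processes=cpus) as pool:
        results = pool.map(burn, [2_000_000] * cpus)
    print(f"Parent PID {os.getpid()} collected {len(results)} results")
```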

Types of Multiprocessor Scheduling in OS


There are two types of multiple-processor scheduling:

  1. Symmetric Multiprocessing: In symmetric multiprocessing, all processors are self-scheduling: the scheduler on each processor examines the ready queue and selects a process to execute on its own, as illustrated in the toy sketch after this list. Each processor runs a copy of the operating system, and the processors communicate with one another as needed. If one of the processors goes down, the rest of the system keeps working.
     
  2. Asymmetric Multiprocessing: In asymmetric multiprocessing, all I/O operations and scheduling decisions are made by a single processor called the master server, while the remaining processors execute only user code. This approach reduces the need for data sharing.
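
As a rough illustration of the symmetric case, the toy Python sketch below uses threads to stand in for processors and a queue.Queue to stand in for the shared ready queue; it is a simulation of the idea, not how a real kernel scheduler is written. Each "processor" schedules itself by pulling the next process from the common ready queue, whereas in the asymmetric design only the master would do the picking.

```python
# Toy simulation of symmetric (self-scheduling) processors; threads stand in
# for CPUs and a shared queue stands in for the ready queue.
import queue
import threading

ready_queue = queue.Queue()
for pid in range(6):
    ready_queue.put(f"process-{pid}")

def symmetric_cpu(cpu_id: int) -> None:
    # Each CPU picks its own next process from the shared ready queue.
    while True:
        try:
            proc = ready_queue.get_nowait()
        except queue.Empty:
            return                      # nothing left to run
        print(f"CPU {cpu_id} runs {proc}")

cpus = [threading.Thread(target=symmetric_cpu, args=(i,)) for i in range(2)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()
```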

Processor Affinity 

Processor affinity is the tendency of a process to keep running on the processor on which it is currently executing (or on which it last ran).

Let's try to understand the concept of Processor Affinity.

  • When a process runs on a processor, the data it accessed most recently is kept in that processor's cache memory. Subsequent memory accesses by the process are then often satisfied from the cache.
  • However, if the process is migrated to another processor, the contents of the first processor's cache must be invalidated, and the second processor's cache has to be repopulated.
  • To avoid the cost of invalidating and repopulating cache memory, migration of a process from one processor to another is avoided whenever possible.

 

There are two types of affinity:

Soft Affinity

In soft affinity, the operating system tries to keep a process running on the same processor but does not guarantee it.

Hard Affinity

In hard affinity, the process can specify the subset of processors on which it may run, i.e., the process is allowed to run only on those processors.
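
On Linux, hard affinity can be set through the sched_setaffinity system call, which Python exposes as os.sched_setaffinity. Here is a minimal sketch; the CPU set {0, 1} is just an example and assumes the machine has at least two CPUs.

```python
# Minimal hard-affinity sketch (Linux only): pin the current process to a
# subset of CPUs, so the scheduler may place it only on those CPUs.
import os

pid = 0  # 0 means "the calling process"
print("Allowed CPUs before:", os.sched_getaffinity(pid))

# Restrict this process to CPUs 0 and 1 (assumes the machine has >= 2 CPUs).
os.sched_setaffinity(pid, {0, 1})
print("Allowed CPUs after: ", os.sched_getaffinity(pid))
```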

Load Balancing

Load balancing means distributing the workload so that all processors in a symmetric multiprocessor system carry an even share of the work.

  • In a multiprocessor system, all processors may not have the same workload. Some may have a long ready queue, while others may be sitting idle. 
  • Load balancing is needed only on systems where each processor has its own private queue of processes eligible to execute; with a single common ready queue, an idle processor simply takes the next process from that queue.
  • On SMP (symmetric multiprocessing) systems, it is essential to keep the workload balanced among all processors to fully exploit the benefit of having more than one processor; otherwise, one or more processors will sit idle while others are overloaded, with lists of processes still awaiting the CPU.

There are two approaches to load balancing:

Push Migration

In push migration, a specific task periodically checks the load on each processor. If it finds an imbalance, it evens out the load by pushing processes from overloaded processors to less busy or idle ones.

Pull Migration

Pull Migration happens when an idle processor pulls a waiting task from a busy processor for its execution.
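
The toy Python sketch below illustrates both ideas, with plain lists standing in for per-processor run queues; the queue contents and CPU numbers are made up, and a real kernel balancer would also weigh priorities, affinity, and cache considerations.

```python
# Toy illustration of load balancing; a dict of lists stands in for per-CPU
# run queues. Not kernel code.

def push_migration(queues: dict) -> None:
    # A periodic balancer task pushes work from the most loaded CPU to the
    # least loaded one until the queues are roughly even.
    while max(map(len, queues.values())) - min(map(len, queues.values())) > 1:
        busiest = max(queues, key=lambda c: len(queues[c]))
        idlest = min(queues, key=lambda c: len(queues[c]))
        queues[idlest].append(queues[busiest].pop())

def pull_migration(queues: dict, idle_cpu: int) -> None:
    # An idle CPU pulls (steals) one waiting task from the busiest CPU.
    busiest = max(queues, key=lambda c: len(queues[c]))
    if queues[busiest]:
        queues[idle_cpu].append(queues[busiest].pop())

queues = {0: ["P1", "P2", "P3", "P4"], 1: []}
push_migration(queues)
print("after push:", queues)        # e.g. {0: ['P1', 'P2'], 1: ['P4', 'P3']}

queues = {0: ["P5", "P6"], 1: []}
pull_migration(queues, idle_cpu=1)  # CPU 1 is idle and steals one task
print("after pull:", queues)        # e.g. {0: ['P5'], 1: ['P6']}
```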

Multicore Processors 

In multicore processors, multiple processor cores are placed on the same physical chip.

  • Symmetric multiprocessing systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip. However, multicore processors can complicate the scheduling problem.
  • When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available. This situation is known as a memory stall.
  • A stall occurs for various reasons, such as a cache miss (accessing data that is not in the cache memory). In these cases, the processor can spend up to fifty percent of its time waiting for data to become available from memory.
  • To solve this problem, recent hardware designs implement multithreaded processor cores in which two or more hardware threads are assigned to each core. If one thread stalls while waiting for memory, the core can switch to another thread.

 

There are two methods to multithread a processor:

Coarse-grained multithreading

In this method, when a long-latency event such as a memory stall occurs, the processor switches to another thread to begin execution. The cost of switching is high because the instruction pipeline must be flushed before the other thread can begin execution.

Fine-grained multithreading

In this method, the processor switches between threads at a much finer level, typically at the boundary of an instruction cycle. The cost of switching between threads is much lower than in coarse-grained multithreading.

Symmetric Multiprocessor

Symmetric Multiprocessing is a computer architecture design that provides multiple identical processors, or CPUs, that share the same memory and are equally capable of executing tasks. In an SMP system, each processor has equal access to all the resources and performs tasks independently, with no processor having a specific role or hierarchy.

Key Characteristics:

  • Multiple CPUs: An SMP system typically includes two or more CPUs, which work together to enhance processing power and throughput.
     
  • Shared Memory: All CPUs in an SMP system share a common memory space, allowing for seamless data exchange and communication.
     
  • Load Balancing: SMP systems distribute tasks among processors efficiently, ensuring balanced workloads and optimal resource utilization.
     
  • Synchronization: SMP systems employ synchronization mechanisms to prevent conflicts when multiple processors access shared resources simultaneously.
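
The last two points can be made concrete with a small Python sketch: two worker processes (which the OS is free to run on different CPUs) increment a counter held in shared memory, and a lock provides the synchronization that keeps the updates from interfering. The counts and names here are arbitrary.

```python
# Sketch of shared memory plus synchronization between two workers that may
# run on different CPUs of an SMP machine.
import multiprocessing as mp

def add_many(counter, lock, times: int) -> None:
    for _ in range(times):
        with lock:                  # serialize access to the shared value
            counter.value += 1      # += on shared memory is not atomic

if __name__ == "__main__":
    counter = mp.Value("i", 0)      # an int living in shared memory
    lock = mp.Lock()
    workers = [mp.Process(target=add_many, args=(counter, lock, 100_000))
               for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)            # 200000, thanks to the lock
```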

Asymmetric Multiprocessing

Asymmetric Multiprocessing is a computer architecture design that employs multiple processors, or CPUs, with different capabilities, roles, or hierarchical levels within the system. Unlike Symmetric Multiprocessing (SMP), where all CPUs are equal and share resources equally, AMP systems have CPUs with distinct functions and responsibilities.

Virtualization and Threading 

Virtualization is the process of running multiple operating systems on a single computer system, which allows even a single CPU to appear as a multiprocessor. This is achieved by having a host operating system and one or more guest operating systems.

  • Different applications run on different guest operating systems without interfering with one another.
  • A virtual machine is a virtual environment that functions as a virtual computer with its own CPU, network interface, memory, and storage, created on a physical hardware system.
  • In a time-sharing OS, each time slice may be about 100 ms (milliseconds) to give users a reasonable response time. On a virtual machine, however, receiving those 100 ms of CPU may take far longer, perhaps 1 second or more, which results in a poor response time for users logged into the virtual machine.
  • Since the virtual operating systems receive only a fraction of the available CPU cycles, the clocks in virtual machines may be incorrect, because their timers take longer to trigger than they would on dedicated CPUs.

 

The concept of threading allows multiple threads of control to execute within the same process. This is commonly referred to as multithreading, and threads are also called lightweight processes. Switching between threads of the same process is less expensive than switching between separate processes because the threads share the process's address space and resources.
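
Here is a minimal threading sketch in Python (the worker function and names are illustrative): several threads run within one process and append to a list that lives in the shared address space, so nothing needs to be copied between them; a lock guards the shared list precisely because the threads share memory.

```python
# Minimal multithreading sketch: threads of one process share its memory.
import threading

shared_results = []                 # lives in the process's shared address space
lock = threading.Lock()

def worker(name: str) -> None:
    value = sum(range(100_000))     # some work done by this thread
    with lock:                      # shared memory, so guard the list
        shared_results.append((name, value))

threads = [threading.Thread(target=worker, args=(f"thread-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(shared_results)} results collected in one shared address space")
```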

Now that you have an idea of multiple-processor scheduling, let us look at its advantages and disadvantages before closing with some FAQs.

Advantages of Multiple-Processor Scheduling

  • Efficient Multitasking: Improved multitasking capabilities with the ability to handle multiple tasks simultaneously.
     
  • Resource Utilization: Maximizes the use of system resources, optimizing efficiency.
     
  • Fault Tolerance: Redundant processors can provide system reliability in case of failures.
     
  • Scalability: Easy scalability by adding more processors to meet growing demands.

Disadvantages of Multiple-Processor Scheduling

  • Increased Complexity: Managing multiple processors adds complexity to scheduling and resource allocation.
     
  • Synchronization Overhead: Coordinating tasks among processors can introduce overhead.
     
  • Software Compatibility: Some software may not be optimized for multiprocessor systems.
     
  • Cost: Multiprocessor systems can be more expensive to build and maintain.

Frequently Asked Questions

What is multiprocessor scheduling in operating system?

Multiple-processor scheduling, also known as multiprocessor scheduling, is the design of the scheduling mechanism for a system that contains several processors. In it, multiple CPUs split the workload (load sharing) to enable the concurrent execution of multiple processes.

What are the three types of processor scheduling?

Process scheduling manages which jobs enter memory (long-term), shuffles them between memory and storage (medium-term), and picks which ready job runs on the CPU (short-term).

What is multi core scheduling?

Multi-core scheduling manages tasks across multiple CPUs for efficient execution.

What is the function of processor scheduling?

Processor scheduling allocates CPU time to processes, ensuring efficient resource use and fair process execution.

Why scheduling is important in OS?

Scheduling in OS is crucial for managing processes effectively, optimizing CPU utilization, and ensuring fair resource allocation.

Conclusion

In this article, we have extensively discussed Multiple-Processor Scheduling. We have discussed types of Multiple-Processor Scheduling, Process affinity, Load balancing, Multicore Processors, Virtualization, and Threading.

After reading about the properties of Multiple-Processor Scheduling, are you not feeling excited to read and explore more articles on operating systems? Don't worry; Coding Ninjas has you covered.

Also check out some of the Guided Paths on topics such as Data Structures and Algorithms, DBMS, System Design, etc., as well as some Contests, Test Series, Interview Bundles, and some Interview Experiences curated by top Industry Experts only on Coding Ninjas Studio.

 

Happy Learning, Ninja!🥷✨
