Introduction
Concurrency in software engineering can be defined as the execution of multiple sequences of instructions at the same time. From the operating system's end, it appears as multiple process threads running in parallel. These threads may communicate with each other through message passing or through shared memory.
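As a minimal sketch of the shared-memory style (names such as shared_msg are illustrative, not from any particular API), the C program below starts a worker thread that deposits a message into a buffer shared with the main thread, with a mutex guarding the shared state:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Illustrative shared state visible to both threads. */
static char shared_msg[64];
static int msg_ready = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    pthread_mutex_lock(&lock);            /* protect the shared buffer */
    strcpy(shared_msg, "hello from the worker thread");
    msg_ready = 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);                /* wait for the worker to finish */

    pthread_mutex_lock(&lock);
    if (msg_ready)
        printf("main thread read: %s\n", shared_msg);
    pthread_mutex_unlock(&lock);
    return 0;
}
```

Compiled with `gcc -pthread`, the main thread reliably sees the worker's message because the mutex and the join order the two threads' accesses to the shared memory.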

Even though concurrency facilitates better performance and utilisation of resources, it can also introduce errors, because system resources are shared so extensively.
However, even with these drawbacks, concurrency in an OS allows multiple applications to run simultaneously, which generally makes up for the potential optimisation and allocation errors. There can also be challenging situations such as deadlocks, where sub-systems or units wait for resources to become free, or wait on each other to finish.
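As a deliberately broken sketch of that situation (the thread and lock names are hypothetical), each thread below acquires one lock and then waits forever for the lock held by the other, so the program never terminates:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg) {
    pthread_mutex_lock(&lock_a);   /* holds A ... */
    sleep(1);                      /* give worker2 time to grab B */
    pthread_mutex_lock(&lock_b);   /* ... and waits forever for B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker2(void *arg) {
    pthread_mutex_lock(&lock_b);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits forever for A */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);        /* never returns: the threads are deadlocked */
    pthread_join(t2, NULL);
    return 0;
}
```

Acquiring locks in one agreed global order (always lock_a before lock_b, in this sketch) is the standard way to rule out such a cycle.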
Let us understand why concurrency is important and what benefits it offers, while also touching on the problems and issues it can cause.
Principles
When discussing the principles of concurrency, it helps to first look at a few examples. Overlapped and interleaved execution, for instance, are both forms of concurrency. Overlapping in operating systems refers to I/O units or devices working in parallel with the CPU, so that data transfers overlap with other CPU work. This leads to better CPU utilisation and smoother data transfer.
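One way to observe this overlap from user space (a sketch only, with sleep standing in for a real device transfer) is to let one thread perform the "I/O" while the main thread keeps computing:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Simulated I/O: in a real system this would be a blocking read/write. */
static void *io_task(void *arg) {
    puts("I/O thread: transfer started");
    sleep(2);                       /* the device works independently of the CPU */
    puts("I/O thread: transfer finished");
    return NULL;
}

int main(void) {
    pthread_t io;
    pthread_create(&io, NULL, io_task, NULL);

    /* CPU-bound work proceeds while the "device" is busy. */
    unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;
    printf("main thread: computed sum = %lu\n", sum);

    pthread_join(io, NULL);         /* wait for the overlapped I/O to complete */
    return 0;
}
```

The total runtime is roughly the longer of the two activities rather than their sum, which is the whole point of overlapping.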
Interleaving, on the other hand, greatly improves the performance of storage and data-access operations: logically sequential data is arranged in physically non-sequential sectors so the device can keep pace with the processor. We can see concurrency all around us now, especially since most systems have shifted to multi-core processors that enable true parallel processing.
This fundamentally allows multiple processes or threads that access the same sector, the same declared variable or the same region of memory to run at the same time. It even enables actions such as one thread writing a file while another reads it, or one unit being used for two different purposes. However, we cannot predict the completion time of each process, so we need algorithms that estimate the time concurrent processes will take and then decide on an execution sequence to implement.
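This unpredictability is easy to demonstrate. In the sketch below (the counter and function names are illustrative), two threads increment a shared counter without any synchronisation; because their read-modify-write steps interleave differently on every run, the printed total usually varies from run to run:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared and deliberately unprotected */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* races with the other thread's updates */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but typically prints less, and a different value each run. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Protecting the increment with a mutex, as in the earlier sketch, restores a predictable result at the cost of serialising the updates.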
For instance, an I/O device can perform a two-way function, with each direction taking a different amount of time. If this turnaround time (TAT) can be determined, we can prepare the operating system or the program to combine the two functions in the most effective way. Take printers, for example: input arrives much faster than output can be produced, so a well-defined buffer is required to keep both processes running simultaneously. With proper planning, one can avoid common concurrency issues.
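That "well-defined buffer" is essentially the classic bounded-buffer pattern. A condition-variable sketch (all names hypothetical, with printf standing in for the slow output side) might look like this:

```c
#include <pthread.h>
#include <stdio.h>

#define CAPACITY 4

static int buffer[CAPACITY];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    for (int i = 1; i <= 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == CAPACITY)            /* fast side blocks when buffer is full */
            pthread_cond_wait(&not_full, &lock);
        buffer[in] = i;
        in = (in + 1) % CAPACITY;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 1; i <= 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                   /* slow side blocks when buffer is empty */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[out];
        out = (out + 1) % CAPACITY;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);       /* stands in for the slow output device */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

The buffer decouples the two speeds: the producer runs ahead until the buffer fills, and the consumer drains it at its own pace without either side busy-waiting.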
Here are a few factors that determine how long concurrent processes take to finish:
- The activities of the other running processes
- How the operating system handles interrupts, overlapping and resource starvation
- The scheduling policy of the OS and its default prioritisation