Concurrency:
Concurrency in operating systems refers to the ability of an OS to manage and execute multiple
tasks or processes simultaneously. It allows multiple tasks to overlap in execution, giving the
appearance of parallelism even on single-core processors. Concurrency is achieved through
various techniques such as multitasking, multithreading, and multiprocessing. Multitasking
involves the execution of multiple tasks by rapidly switching between them. Each task gets a
time slot, and the OS switches between them so quickly that it seems as if they are running
simultaneously.
Multithreading takes advantage of modern processors with multiple cores. It allows different
threads of a process to run on separate cores, enabling true parallelism within a single process.
Multiprocessing goes a step further by distributing multiple processes across multiple physical
processors or cores, achieving parallel execution at a higher level.
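As a concrete illustration of multithreading, here is a minimal sketch using POSIX threads in C; the function name worker and the thread labels are illustrative choices, not part of any standard API. Two threads are created within one process: on a multi-core machine they may run truly in parallel, while on a single core the scheduler interleaves them.

#include <pthread.h>
#include <stdio.h>

/* Each thread prints a few lines. On a multi-core machine the two threads
   may run in parallel on separate cores; on a single core the scheduler
   rapidly switches between them, giving the appearance of parallelism. */
void *worker(void *arg) {
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++)
        printf("%s: iteration %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-A");
    pthread_create(&t2, NULL, worker, "thread-B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}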
The need for concurrent execution arises from the desire to utilize computer resources
efficiently. Here are some key reasons why concurrent execution is essential:
1.Resource Utilization:
Concurrency ensures that the CPU, memory, and other resources are used optimally. Without
concurrency, a CPU might remain idle while waiting for I/O operations to complete, leading to
inefficient resource utilization.
2.Responsiveness:
Concurrent systems are more responsive. Users can interact with multiple applications
simultaneously, and the OS can switch between them quickly, providing a smoother user
experience.
3.Throughput:
Concurrency increases the overall throughput of the system. Multiple tasks can progress
simultaneously, allowing more work to be done in a given time frame.
4.Real-Time Processing:
Certain applications, such as multimedia playback and gaming, require real-time processing.
Concurrency ensures that these applications can run without interruptions, delivering a seamless
experience.
Principles of Concurrency in Operating Systems
To effectively implement concurrency, OS designers adhere to several key principles:
1.Process Isolation:
Each process should have its own memory space and resources to prevent interference between
processes. This isolation is critical to maintain system stability.
2.Synchronization:
Concurrency introduces the possibility of data races and conflicts. Synchronization mechanisms
like locks, semaphores, and mutexes are used to coordinate access to shared resources and ensure
data consistency.
3.Deadlock Avoidance:
OSs implement algorithms to detect and avoid deadlock situations where processes are stuck
waiting for resources indefinitely. Deadlocks can halt the entire system.
Problems in Concurrency
While concurrency offers numerous benefits, it also introduces a range of challenges and
problems:
1.Race Conditions:
Race conditions occur when multiple threads or processes access shared resources simultaneously without proper synchronization. In the absence of synchronization mechanisms, race conditions can lead to unpredictable behavior and data corruption. This can result in data inconsistencies, application crashes, or even security vulnerabilities if sensitive data is involved (a minimal code sketch of a race condition appears after this list).
2.Deadlocks:
A deadlock arises when two or more processes or threads become unable to progress as they are
mutually waiting for resources that are currently held by each other. This situation can bring the
entire system to a standstill, causing disruptions and frustration for users.
3.Priority Inversion:
Priority inversion occurs when a lower-priority task temporarily holds a resource that a higher-
priority task needs. This can lead to delays in the execution of high-priority tasks, reducing
system efficiency and responsiveness.
4.Resource Starvation:
Resource starvation occurs when some processes are unable to obtain the resources they need,
leading to poor performance and responsiveness for those processes. This can happen if the OS
does not manage resource allocation effectively or if certain processes monopolize resources.
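To make the race condition problem concrete, the following minimal C sketch has two POSIX threads increment a shared counter with no synchronization; the variable name counter and the iteration count of 1000000 are arbitrary illustrative choices. Because counter++ is a read-modify-write sequence, updates from the two threads can interleave and be lost.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                  /* shared resource */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* unsynchronized read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 2000000, usually less */
    return 0;
}

Running this typically prints a total well below 2000000, which is exactly the unpredictable behavior described above.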
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process
system to ensure that they access shared resources in a controlled and predictable manner. It aims
to resolve the problem of race conditions and other synchronization issues in a concurrent
system.
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data
due to concurrent access. To achieve this, various synchronization techniques such as
semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity,
and to avoid the risk of deadlocks and other synchronization problems. Process synchronization
is an important aspect of modern operating systems, and it plays a crucial role in ensuring the
correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of
other processes.
Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises with cooperative processes because resources are shared among cooperative processes.
Advantages of Process Synchronization
Ensures data consistency and integrity
Avoids race conditions
Prevents inconsistent data due to concurrent access
Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
Adds overhead to the system
Can lead to performance degradation
Increases the complexity of the system
Can cause deadlocks if not implemented properly.
Inter process communication
Inter-process communication (IPC) is one of the key mechanisms an operating system uses to let processes share information. IPC helps processes communicate with each other without having to go through user-level routines or interfaces, and it allows different parts of a program to access shared data and files without causing conflicts among them. In inter-process communication, messages are exchanged between two or more processes, which can be on the same computer or on different computers. IPC lets different programs run in parallel, share data, and communicate with each other. It is important for two reasons: first, it speeds up the execution of tasks, and second, it ensures that cooperating tasks run correctly and in the intended order. Two common approaches, message passing and shared memory, are described below.
1. Message Passing
One important way processes communicate is via message passing. When two or more processes participate in inter-process communication, each process sends messages to the others via the kernel. For example, Process A sends a message "M" to the OS kernel, and that message is then read by Process B. A communication link is required between the two processes for successful message exchange, and there are several ways to create such links.
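As a minimal sketch of message passing through the kernel, the following C program (assuming a POSIX system; the single-character message "M" mirrors the example above) uses a pipe as the communication link between a parent acting as "Process A" and its child acting as "Process B".

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                          /* the kernel-managed communication link */

    if (fork() == 0) {                 /* child acts as "Process B" */
        char buf[16];
        close(fd[1]);                  /* B only reads */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("B received: %s\n", buf);
        _exit(0);
    }

    close(fd[0]);                      /* parent acts as "Process A" and only writes */
    write(fd[1], "M", 1);              /* send the message "M" via the kernel */
    close(fd[1]);
    wait(NULL);
    return 0;
}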
2. Shared memory
Shared memory is a region of memory established and shared by two or more processes. Access to this region must be synchronized by the participating processes themselves. Two processes, say A and B, can set up a shared memory segment and exchange data through this shared memory area. Shared memory is important for these reasons:
It is a way of passing data between processes.
It is much faster than message passing, since data does not have to be copied through the kernel for every exchange.
It allows two or more processes to share the same copy of the data.
Suppose process A wants to communicate with process B. Each process attaches the shared memory segment to its address space; process A then writes a message into the shared memory, and process B reads that message from it. The processes themselves are responsible for synchronization, so that both do not write to the same location at the same time.
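A minimal shared-memory sketch in C, assuming a POSIX system that supports MAP_ANONYMOUS: the parent ("Process A") writes a message into a memory region mapped as shared before fork(), and the child ("Process B") reads it. The sleep() call is only a crude stand-in for real synchronization; a proper program would use a semaphore.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* A memory region shared by parent and child (set up before fork). */
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (fork() == 0) {                 /* child acts as "Process B" */
        sleep(1);                      /* crude stand-in for real synchronization */
        printf("B read: %s\n", shm);
        _exit(0);
    }

    strcpy(shm, "hello from A");       /* parent ("Process A") writes the message */
    wait(NULL);
    munmap(shm, 4096);
    return 0;
}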
Critical Section:
The critical section refers to a specific part of a program where shared resources are accessed,
and concurrent execution may lead to conflicts or inconsistencies. It is essential for the operating
system to provide mechanisms like locks and semaphores to ensure proper synchronization and
mutual exclusion in the critical section. These safeguards prevent concurrent processes from
interfering with each other, maintaining the integrity of shared resources.
When more than one process accesses or modifies a shared resource at the same time, the final value of that resource is determined by whichever process happens to write last. This is called a race condition. Consider an example of two processes, p1 and p2, and let value=3 be a variable present in the shared resource. If both p1 and p2 read value=3 and each increments it, both may write back 4, so one of the updates is lost; the outcome depends entirely on how the two accesses interleave.
The critical section problem is to make sure that only one process is in a critical section at a time. When a process is in the critical section, no other process is allowed to enter the critical section. This eliminates the race condition.
To effectively address the Critical Section Problem in operating systems, any solution must meet
three key requirements:
1. Mutual Exclusion: This means that when one process is executing within its critical
section, no other process should be allowed to enter its own critical section. This ensures
that shared resources are accessed by only one process at a time, preventing conflicts and
data corruption.
2. Progress: When no process is currently executing in its critical section, and there is a
process that wishes to enter its critical section, it should not be kept waiting indefinitely.
The system should enable processes to make progress, ensuring that they eventually get a
chance to access their critical sections.
3. Bounded Waiting: There must be a limit on the number of times a process can execute
in its critical section after another process has requested access to its critical section but
before that request is granted. This ensures fairness and prevents any process from being
starved of critical section access.
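One classic software solution that satisfies all three requirements for two processes is Peterson's algorithm, sketched below in C-style code. This is an illustrative sketch only: on modern processors the plain shared variables would additionally need atomic types or memory barriers to behave as intended.

/* Peterson's algorithm for two processes (i is 0 or 1). Illustrative sketch:
   on modern CPUs these plain variables would need atomic operations or
   memory barriers to behave reliably. */
int flag[2] = {0, 0};   /* flag[i] = 1 means process i wants to enter */
int turn = 0;           /* whose turn it is to defer to */

void enter_region(int i) {
    int other = 1 - i;
    flag[i] = 1;             /* announce intent (mutual exclusion) */
    turn = other;            /* give the other process priority (bounded waiting) */
    while (flag[other] && turn == other)
        ;                    /* wait only while the other both wants in and has priority */
}

void leave_region(int i) {
    flag[i] = 0;             /* allow the other process to make progress */
}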
In computer programming, a mutual exclusion (mutex) is a program object that prevents
multiple threads from accessing the same shared resource simultaneously. A shared
resource in this context is a code element with a critical section, the part of the code that
should not be executed by more than one thread at a time. For example, a critical section
might update a global variable, modify a table in a database or write a file to a
network server. In such cases, access to the shared resource must be controlled to prevent corruption of the data or errors in the program itself.
Mutual Exclusion:
In a multithreaded program, a mutex is a mechanism used to ensure that multiple
concurrent threads do not try to execute a critical section of code simultaneously. If a
mutex is not applied, the program might be subject to a race condition, a situation in
which multiple threads try to access a shared resource at the same time. When this
happens, unintended results can occur, such as data being read or written incorrectly or
the program misbehaving or crashing.
To understand how a mutex works, consider a multithreaded program that contains a method for updating a variable used as a counter. The method, which contains the critical section, reads the variable's current value, increments that value by 1, and writes the new value back to memory. If two threads run the method one at a time in consecutive order, the first thread adds 1 to the original value (1 in this case), producing 2, and the second thread then produces 3, as expected. Without a mutex, however, both threads may read the value 1 at the same time, each compute 2, and each write 2 back, so one of the increments is lost.
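A minimal sketch of this counter protected by a POSIX mutex is shown below; the names counter, lock, and increment_counter are illustrative. The lock guarantees that the read-increment-write sequence is executed by only one thread at a time.

#include <pthread.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Only one thread at a time executes the read-increment-write sequence,
   so no update is lost. */
void increment_counter(void) {
    pthread_mutex_lock(&lock);
    counter++;                    /* the critical section */
    pthread_mutex_unlock(&lock);
}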
Semaphores
A semaphore is an integer variable that is used primarily to solve the critical section problem by combining two atomic operations, wait and signal, for process synchronization.
Types of Semaphores
Semaphores are of the following types:
Binary Semaphore:
Mutex lock is another name for a binary semaphore. It can take only two possible values, 0 and 1, and its value is set to 1 by default. It is used by multiple processes to coordinate entry into their critical sections.
Counting Semaphore:
A counting semaphore's value can range over an unrestricted domain (any non-negative integer). It is used to control access to a resource that has multiple instances.
The definitions of wait and signal are given below:
Wait
The wait operation waits (busy-waits in this definition) until the value of its argument S becomes positive, and then decrements S by one.
wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
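For a concrete counting-semaphore example, the sketch below uses POSIX semaphores in C; the initial count of 3 (modeling a resource with three instances) and the five worker threads are arbitrary illustrative choices. Here sem_wait corresponds to wait and sem_post to signal, so at most three threads are inside the guarded region at once.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t slots;   /* counting semaphore modeling a resource with 3 instances */

void *use_resource(void *arg) {
    sem_wait(&slots);              /* wait(): take an instance, block if none free */
    printf("thread %ld is using one instance of the resource\n", (long)arg);
    sem_post(&slots);              /* signal(): release the instance */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);        /* initial count 3: three instances available */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}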
Pros of Semaphores
Given below are some of the pros of semaphores:
Semaphores allow only one process at a time into the critical section and so adhere closely to the mutual exclusion principle. They are also more efficient than many other synchronization methods.
When semaphores are implemented with a blocking queue rather than the busy-wait definition above, processor time is not wasted repeatedly checking whether a process may enter its critical section, so there is no resource wastage due to busy waiting.
Semaphores are implemented in the microkernel's machine-independent code. They are, therefore, machine-independent.
Cons of Semaphores
The following are some of the downsides of semaphores:
Because semaphore operations are easy to misuse, the wait and signal actions must be performed in the proper order to avoid deadlocks.
Semaphores become impractical at a large scale because they reduce modularity: wait and signal calls scattered through the code prevent the system from keeping an organized layout.
Semaphores can cause priority inversion, with a low-priority process getting access to the critical section first and a high-priority process getting access only afterwards.
Deadlock
In operating system, deadlock refers to a state where two or more processes are unable to
proceed because each is waiting for a resource that another process holds. It is a situation where
a group of processes becomes permanently blocked, preventing any further progress. Deadlocks
can occur when processes compete for limited resources, such as memory, devices, or CPU time,
and they are unable to release the resources they hold until they acquire the resources they are
waiting for.
Deadlocks are characterized by a circular dependency of processes, where each process is
waiting for a resource that is held by another process in the cycle. This creates a state of impasse,
where none of the processes can proceed, resulting in a system deadlock.
When a deadlock occurs, all of the processes involved are stuck and unable to make progress.
This can lead to poor system performance and potentially cause the system to become
unresponsive. To prevent deadlocks, an operating system can use various techniques such as lock
order, timeout-based mechanisms, and deadlock detection and recovery algorithms. Deadlock
prevention methods are designed to ensure that a set of conditions known as the "necessary
conditions for deadlock" can never be met. Deadlock detection and recovery methods, on the
other hand, monitor the state of the system and detect when a deadlock has occurred, and then
take action to resolve the deadlock.
Necessary Condition For Deadlock
There are certain conditions that must hold for a deadlock to occur; simply not getting resources on time does not by itself create a deadlock in an OS. There are four necessary conditions for deadlock in an OS, explained below:
1. Mutual Exclusion: Two or more processes cannot use the same resource simultaneously; a resource can be used by only one process at a time. In summary, at least one resource must be non-sharable for a deadlock to occur.
2. Hold and Wait: A process is holding at least one resource while waiting to acquire another resource that is held by some other process.
3. No Preemption: Resources cannot be forcibly taken away from a process; a process must release its resources voluntarily after it has finished with them.
4. Circular Wait: There must be a circular chain of processes, where each process is waiting for a resource held by the next process in the chain. For example, suppose there are two processes, P1 and P2: P1 holds resource R1 and is waiting for resource R2, while P2 holds R2 and is waiting for R1. Neither process can finish and release what it holds, so both wait forever; this is a circular wait.
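The circular-wait scenario above can be reproduced with two threads and two mutexes, as in the following C sketch; the names p1, p2, r1, and r2 mirror the example and are illustrative. If p1 acquires r1 and p2 acquires r2 before either takes its second lock, both block forever.

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *p1(void *arg) {              /* P1 holds R1 and then waits for R2 */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);       /* blocks forever if P2 already holds R2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *p2(void *arg) {              /* P2 holds R2 and then waits for R1 */
    pthread_mutex_lock(&r2);
    pthread_mutex_lock(&r1);       /* circular wait: all four conditions hold */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, p1, NULL);
    pthread_create(&b, NULL, p2, NULL);
    pthread_join(a, NULL);         /* with unlucky timing, this never returns */
    pthread_join(b, NULL);
    return 0;
}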
Methods for Handling Deadlock
We need to handle deadlocks because they can create serious problems. Several methods for handling deadlock are available, and we discuss each of them in this section.
1. Deadlock Prevention: Deadlock prevention is a technique used by operating systems to stop deadlocks from occurring by ensuring that at least one of the necessary conditions for deadlock can never hold. This can be achieved through various methods such as imposing a lock ordering, timeout-based mechanisms, and constraints on resource allocation. Deadlock prevention is considered a more effective approach than deadlock detection and recovery, as it eliminates the possibility of deadlocks altogether. However, it can be difficult to implement in practice, and it may not always be possible or practical to prevent deadlocks completely. This technique can also be very costly, so most organizations do not use it unless making the system deadlock-free is a top priority.
2. Deadlock Avoidance: Deadlock avoidance is a technique in which the operating system grants a resource request only if doing so keeps the system in a safe state, so that a deadlock is never reached. This requires knowing the processes' resource requirements before allocation, so the system can check whether a particular allocation could lead to a deadlock; the Banker's Algorithm is the classic example, and a minimal safety-check sketch in its spirit appears after this list. Like prevention, avoidance eliminates the possibility of deadlocks, but it can be difficult to implement in practice and may not always be possible or practical, because complete advance knowledge of resource needs is rarely available.
3. Deadlock detection and recovery: Deadlock detection and recovery is a technique used by
operating systems to detect and resolve deadlocks when they occur. This can be achieved
through various methods such as monitoring the state of the system, using algorithms to detect
when a deadlock has occurred, and taking action to resolve it. Deadlock detection and recovery is considered a less effective approach than deadlock prevention, as it can only resolve deadlocks after they have occurred. However, it is useful in cases where it is not possible or practical to prevent deadlocks altogether.
4. Deadlock Ignorance: Deadlock ignorance is a technique used by operating systems where the
operating system does not take any steps to prevent, detect or recover from deadlocks. It is based
on the assumption that deadlocks will occur infrequently, and that their impact on the system will
be minimal. This approach is generally considered to be less effective than deadlock prevention
or detection and recovery, as it can lead to poor system performance and potentially cause the
system to become unresponsive. It is generally used in systems where the cost of implementing
deadlock prevention or detection and recovery mechanisms is greater than the expected cost of
deadlocks. However, it is not recommended for systems where deadlocks could lead to serious
consequences.
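As promised in the deadlock-avoidance item above, here is a minimal safety-check sketch in the spirit of the Banker's Algorithm, written in C; the array sizes (3 processes, 2 resource types) and the function name is_safe are illustrative assumptions.

#include <stdbool.h>

#define P 3   /* number of processes (illustrative) */
#define R 2   /* number of resource types (illustrative) */

/* Returns true if the system is in a safe state, i.e. there is some order
   in which every process can obtain its remaining needs, finish, and
   release what it holds. This is the safety check at the heart of the
   Banker's Algorithm. */
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++)
        work[r] = available[r];

    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p])
                continue;
            bool can_finish = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {                      /* let p finish, reclaim its resources */
                for (int r = 0; r < R; r++)
                    work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;                          /* no process can finish: unsafe */
    }
    return true;
}

A resource request would be granted only if, after tentatively updating available, alloc, and need, is_safe still returns true; otherwise the requesting process is made to wait.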
Advantages of Deadlock in Operating System
1. Deadlock can be used to prevent low-priority processes from hogging system resources.
2. Deadlock can be used to prevent processes from accessing resources that they are not authorized
to access.
3. Deadlock can be used to prevent processes from accessing resources that are in use by other
processes.
Disadvantages of Deadlock in Operating System
1. Deadlock can cause a system to become unresponsive, resulting in poor system performance.
2. Deadlock can lead to wasted system resources as processes are stuck waiting for resources that
they can never acquire.
3. Deadlock can cause processes to become stuck in an infinite loop, which can cause the system to
crash.
4. Deadlock can be difficult to detect and resolve, which can make it hard to troubleshoot and fix
problems.
5. Deadlock handling can be costly to implement, as it requires additional resources and programming to prevent, detect and recover from deadlocks.
6. Deadlock prevention and detection mechanisms can also introduce additional overheads and
complexity to the system, which can affect its overall performance.
Difference between Starvation and Deadlock
Deadlock and starvation are often considered the same, but they are not; they differ from each other in the following ways:
Deadlock is a situation where two or more processes are unable to proceed because each is waiting for one of the others to release a resource; starvation is a situation where a process is unable to acquire the resources it needs to continue execution.
Deadlock is caused by a circular wait, where each process is waiting for a resource held by the next process in the chain; starvation is caused by a lack of resources or by a system that is unable to allocate resources fairly.
Deadlock results in all processes involved being stuck and unable to make progress; starvation results in a single process being unable to make progress.
Deadlock is a state where the processes are blocked and unable to move forward; starvation is a state where a process is not getting the resources it needs to make progress.
Deadlock detection and recovery mechanisms are used to detect and resolve deadlocks; starvation prevention mechanisms are used to ensure that resources are allocated fairly and that no process is left waiting indefinitely.
Deadlock is a problem that occurs in operating systems and distributed systems; starvation can occur in any system where resources are shared among multiple processes.
Deadlock Detection and Recovery
Deadlock detection and recovery is the process of detecting and resolving deadlocks in an
operating system. A deadlock occurs when two or more processes are blocked, waiting for
each other to release the resources they need. This can lead to a system-wide stall, where no
process can make progress.
There are two main approaches to deadlock detection and recovery:
1. Prevention: The operating system takes steps to prevent deadlocks from occurring by
ensuring that the system is always in a safe state, where deadlocks cannot occur. This is
achieved through resource allocation algorithms such as the Banker’s Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect and
resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are used to
identify deadlocks, and recovery algorithms, such as the Rollback and Abort algorithm, are
used to resolve them. The recovery algorithm releases the resources held by one or more
processes, allowing the system to continue to make progress.
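For single-instance resources, the Wait-For Graph check mentioned above amounts to detecting a cycle in a directed graph. The following C sketch (the adjacency matrix wait_for and the size N are illustrative assumptions) uses depth-first search to report whether such a cycle, and therefore a deadlock, exists; recovery would then abort or roll back one process on the cycle and release its resources.

#include <stdbool.h>

#define N 4   /* number of processes (illustrative) */

/* wait_for[i][j] is true when process i waits for a resource held by
   process j. With single-instance resources, a cycle in this graph
   means a deadlock. */
bool wait_for[N][N];

static bool dfs(int p, bool visited[N], bool on_stack[N]) {
    visited[p] = true;
    on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!wait_for[p][q])
            continue;
        if (on_stack[q])
            return true;                           /* back edge: cycle found */
        if (!visited[q] && dfs(q, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

bool deadlock_detected(void) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p, visited, on_stack))
            return true;
    return false;
}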
Difference Between Prevention and Detection/Recovery: Prevention aims to avoid
deadlocks altogether by carefully managing resource allocation, while detection and recovery
aim to identify and resolve deadlocks that have already occurred.
Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of deadlock
detection and recovery approach depends on the specific requirements of the system and the
trade-offs between performance, complexity, and risk tolerance. The operating system must
balance these factors to ensure that deadlocks are effectively detected and resolved.