Unit 3:
Inter Process Communication:
Inter-process communication (IPC) in an OS is the way by which multiple processes communicate with
each other to share data and information among themselves.
IPC is a set of techniques for the exchange of information between two or more processes.
The communicating processes may all be running on one computer, or on two or more computers
connected by a network.
Inter-process communication thus provides a mechanism to exchange data and information
across multiple processes, which might be on a single computer or on multiple computers connected
by a network. This is essential for many tasks, such as:
Sharing data
Coordinating activities
Managing resources
Achieving modularity
Synchronization in Inter Process Communication
Synchronization in Inter Process Communication (IPC) is the process of ensuring that multiple
processes are coordinated and do not interfere with each other. This is important because
processes can share data and resources, and if they are not synchronized, they can overwrite
each other's data or cause other problems.
There are a number of different mechanisms that can be used to synchronize processes,
including:
• Mutual exclusion: This is a mechanism that ensures that only one process can access a
shared resource at a time. This is typically implemented using a lock, which is a data
structure that indicates whether a resource is available.
• Condition variables: These are variables that can be used to wait for a certain condition
to be met. For example, a process could wait for a shared resource to become available
before using it.
• Barriers: These are synchronization points that all processes must reach before they can
proceed. This can be used to ensure that all processes have completed a certain task
before moving on to the next task.
• Semaphores: These are counters that track how many units of a shared resource are
available. They can be used to limit the number of processes that access a resource at
the same time.
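To make the first mechanism above (mutual exclusion) concrete, here is a minimal sketch using a POSIX mutex. It uses threads rather than separate processes for brevity; the same locking idea applies across processes (for example, with named semaphores). All names are illustrative.

/* Two threads increment a shared counter; the mutex ensures only one
   thread is inside the critical section at a time. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter critical section */
        counter++;                   /* access the shared resource */
        pthread_mutex_unlock(&lock); /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the lock */
    return 0;
}

Without the lock/unlock pair, the two threads could interleave their read-modify-write steps and lose increments, which is exactly the kind of interference synchronization prevents.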
Here are some examples of how synchronization is used in IPC:
• In a database, multiple processes may need to access the same data. Synchronization is
used to ensure that only one process can write to the data at a time, and that other
processes do not read the data while it is being written.
• In a web server, multiple processes may need to handle requests from clients.
Synchronization is used to ensure that only one process handles a request at a time, and
that other processes do not interfere with the request.
• In a distributed system, multiple processes may need to communicate with each other.
Synchronization is used to ensure that messages are sent and received in the correct
order, and that processes do not take actions based on outdated information.
Why Inter Process Communication (IPC) is Required?
Inter-process communication (IPC) is required for a number of reasons:
• Sharing data: IPC allows processes to share data with each other. This is essential for
many tasks, such as sharing files, databases, and other resources.
• Coordinating activities: IPC allows processes to coordinate their activities. This is
essential for tasks such as distributed computing, where multiple processes are working
together to solve a problem.
• Managing resources: IPC allows processes to manage resources such as memory,
devices, and files. This is essential for ensuring that resources are used efficiently.
• Achieving modularity: IPC allows processes to be developed and maintained
independently of each other. This makes it easier to develop and maintain large and
complex software systems.
• Flexibility: IPC allows processes to run on different hosts or nodes in a network,
providing greater flexibility and scalability in large and complex systems.
Approaches for Inter-Process Communication
Pipes
• Pipes are a simple IPC mechanism that allows two processes to communicate
with each other.
• A pipe is a half-duplex method (one-way communication) used for IPC between two related
processes.
• One process writes data to the pipe, and the other process reads data from the pipe.
• It is like filling a bucket with water from a tap: the filling process writes
into the pipe, and the reading process retrieves data from it.
• Pipes can be either named or anonymous, depending on whether they have a unique
name or not.
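Below is a minimal sketch of an anonymous pipe between two related processes on a POSIX system; error handling is trimmed and the message text is illustrative.

/* The parent writes into the pipe; the child reads from it. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    pipe(fd);
    if (fork() == 0) {             /* child: the reading process */
        close(fd[1]);              /* child only reads */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
    } else {                       /* parent: the writing process */
        close(fd[0]);              /* parent only writes */
        write(fd[1], "hello", 5);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}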
Shared Memory
• Shared memory is a region of memory that is accessible to multiple processes. This
allows processes to communicate with each other by reading and writing data from the
shared memory region.
• Shared memory is a fast and efficient way for processes to communicate, but it can be
difficult to use if the processes are not carefully synchronized.
• There are two main types of shared memory:
• Anonymous shared memory: Anonymous shared memory is not associated with
any file or other system object. It is created by the operating system and is only
accessible to the processes that created it.
• Mapped shared memory: Mapped shared memory is associated with a file or
other system object. It is created by mapping a file into the address space of one
or more processes.
Multiple processes can access a common shared memory region. Processes communicate by one
process writing changes to the region and the others reading them. Once the region has been
set up, exchanging data through shared memory does not require the kernel to be involved in
each transfer, which is what makes it fast.
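A minimal sketch of mapped shared memory using the POSIX shm_open/mmap calls follows; the object name "/demo_shm" is illustrative, and the parent's wait() stands in for the real synchronization discussed above.

/* Parent and child share one integer through a mapped region.
   Link with -lrt on some systems. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(int));            /* size the region */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0); /* map into the address space */

    if (fork() == 0) {                     /* child writes to the region */
        *shared = 42;
        return 0;
    }
    wait(NULL);                            /* crude synchronization */
    printf("parent sees: %d\n", *shared);  /* prints 42 */
    shm_unlink("/demo_shm");               /* remove the object */
    return 0;
}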
Message Passing
• Message passing is a method of Inter Process Communication in OS. It involves the
exchange of messages between processes, where each process sends and receives
messages to coordinate its activities and exchange data with other processes.
• Processes can communicate without any shared variables, therefore it can be used in a
distributed environment on a network.
• In message passing, each process has a unique identifier, known as a process ID, and
messages are sent from one process to another using this identifier. When a process
sends a message, it specifies the recipient process ID and the contents of the message,
and the operating system is responsible for delivering the message to the recipient
process. The recipient process can then retrieve the contents of the message and
respond, if necessary.
• Message passing implemented over shared memory has a number of advantages over other
IPC mechanisms. First, it is very fast, as messages are simply copied between the
processes' address spaces. Second, it is very flexible, as any type of data can be
exchanged between processes. Third, it is relatively easy to implement, as it requires
little special support from the operating system beyond the shared region itself.
• However, message passing over shared memory also has some disadvantages. First, it can
be difficult to ensure that messages are delivered in the correct order. Second, it can be
difficult to manage the size of the message queue. Third, it can be difficult to port to
other platforms, as the implementation of shared memory varies from one operating
system to another.
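As one concrete illustration of message passing (one of several possible realizations, not the only one), the sketch below uses a Unix-domain socket pair: each send() is a discrete message, and no variables are shared between the processes.

#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv); /* datagrams keep message boundaries */

    if (fork() == 0) {                      /* child: the receiving process */
        char msg[64];
        ssize_t n = recv(sv[1], msg, sizeof(msg) - 1, 0);
        msg[n] = '\0';
        printf("received: %s\n", msg);
        return 0;
    }
    send(sv[0], "ping", 4, 0);              /* parent: the sending process */
    wait(NULL);
    return 0;
}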
Message Queues
• Message queues are a more advanced form of pipes.
• They allow processes to send messages to each other, and they can be used to
communicate between processes that are not running on the same machine.
• Message queues are a good choice for communication between processes that need to
be decoupled from each other.
• In Message Queue IPC, each message has a priority associated with it, and messages are
retrieved from the queue in order of their priority. This allows processes to prioritize the
delivery of important messages and ensures that critical messages are not blocked by
less important messages in the queue.
• Message Queue IPC provides a flexible and scalable method of communication between
processes, as messages can be sent and received asynchronously, allowing processes to
continue executing while they wait for messages to arrive.
• The main disadvantage of Message Queue IPC is that it can introduce additional
overhead, as messages must be copied between address spaces, and the queue must be
managed by the operating system to ensure that it remains synchronized and consistent
across all processes.
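One concrete realization of priority-ordered message queues is the POSIX message queue API, sketched below; the queue name "/demo_mq" and the priority values are illustrative. Link with -lrt on Linux.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(q, "low", 4, 1);      /* priority 1 */
    mq_send(q, "high", 5, 9);     /* priority 9: delivered first */

    char buf[64];
    unsigned prio;
    mq_receive(q, buf, sizeof(buf), &prio);
    printf("got \"%s\" (priority %u)\n", buf, prio); /* prints "high" */

    mq_close(q);
    mq_unlink("/demo_mq");
    return 0;
}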
Messages are stored in a linked list inside the OS kernel, and each message queue is identified
by a unique "message queue identifier".
Direct Communication
In direct communication, processes that want to communicate must explicitly name the sender or receiver.
A pair of communicating processes must have one link between them.
A link (generally bi-directional) is established between every pair of communicating processes.
In direct communication, the sender process must know the identifier of the receiver process in
order to send a message to it. This identifier can be a process ID, a port number, or some other
unique identifier. Once the sender process has the identifier of the receiver process, it can send
a message to it directly.
The main advantage of direct communication is that it provides a simple and direct way for
processes to communicate, as processes can access each other’s data directly without the need
for intermediate communication mechanisms.
Indirect Communication
Indirect communication in IPC is a method of communication in which processes do not
explicitly name the sender or receiver of the communication. Instead, processes communicate
through a shared medium such as a message queue or mailbox.
Each pair of communicating processes has a shared mailbox.
A link (uni-directional or bi-directional) is established between pairs of processes.
The sender process puts a message in the port or mailbox of the receiver process, and the
receiver process takes out (or deletes) the data from the mailbox.
The sender and receiver processes do not need to know each other's identifiers in order to
communicate with each other.
The main advantage of indirect communication is that it provides a more flexible and scalable
way for processes to communicate, as processes do not need to have direct access to each
other’s data.
FIFO
FIFO (First In First Out), also called a named pipe, is an IPC channel that guarantees that data
is delivered in the order in which it was sent.
It involves the use of a FIFO buffer, which acts as a queue for exchanging data between
processes.
FIFOs are used to communicate between two processes that are not related.
In the FIFO method, one process writes data to the FIFO buffer, and another process reads the
data from the buffer in the order in which it was written.
It is a full-duplex method: Process P1 is able to communicate with Process P2, and vice versa.
The main advantage of the FIFO method is that it provides a simple way for processes to
communicate, as data is exchanged sequentially, and there is no need for processes to
coordinate their access to the FIFO buffer.
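A minimal sketch of FIFO-based communication on a POSIX system follows; the path "/tmp/demo_fifo" is illustrative. The same program acts as writer or reader depending on its argument, so two unrelated invocations (e.g., in two terminals) can communicate.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                   /* create the FIFO if absent */

    if (argc > 1 && strcmp(argv[1], "writer") == 0) {
        int fd = open(path, O_WRONLY);    /* blocks until a reader opens */
        write(fd, "hello via FIFO", 14);
        close(fd);
    } else {                              /* default role: reader */
        char buf[64];
        int fd = open(path, O_RDONLY);    /* blocks until a writer opens */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("read: %s\n", buf);
        close(fd);
    }
    return 0;
}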
Deadlock: Deadlock System Model
A deadlock is a situation or condition in which a group of processes are blocked forever because
each process is holding some resources and waiting for other resources that are held by another
process in the group.
The deadlock system model in an OS uses a resource allocation graph (or Wait-For Graph) to
represent processes and resources, showing which process holds which resource and which
process is requesting which resource.
Conditions for Deadlock (Coffman Conditions)
For a deadlock to occur, all four of these conditions must hold simultaneously:
Mutual Exclusion: A resource can only be used by one process at a time; at least one
resource must be held in a non-shareable mode.
Hold and Wait: A process must hold at least one resource while also waiting for other
resources that are currently held by other processes.
No Preemption: A resource cannot be forcibly taken from a process once it's been granted; the
process must release it voluntarily.
Circular Wait: where a set of processes are waiting for each other in a circular chain. A set of
waiting processes {P0, P1, ..., Pn} must exist where P0 is waiting for a resource held by P1, P1 is
waiting for P2, and so on, with Pn waiting for a resource held by P0.
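The sketch below shows how the four conditions combine in practice: two threads take two locks in opposite order. Each lock is exclusive (mutual exclusion) and cannot be forcibly taken back (no preemption); each thread holds one lock while requesting the other (hold and wait); and each waits for the other (circular wait). Thread-based for brevity, and the program may hang by design.

/* Compile with -pthread. May deadlock deliberately. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    pthread_mutex_lock(&A);
    sleep(1);                  /* widen the race window */
    pthread_mutex_lock(&B);    /* waits forever if t2 holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&B);    /* opposite order to t1 */
    sleep(1);
    pthread_mutex_lock(&A);    /* waits forever if t1 holds A */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);     /* never returns once the deadlock forms */
    pthread_join(y, NULL);
    return 0;
}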
Deadlock Prevention
Deadlock prevention is a strategy used in computer systems to ensure that different processes
can run smoothly without getting stuck waiting for each other forever.
Deadlock prevention means placing restrictions on resource requests so that deadlocks cannot
occur.
We can prevent a deadlock by eliminating any one of the following four conditions:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
1. Eliminate Mutual Exclusion
Some resources, like a printer, are inherently non-sharable, so this condition is difficult to break.
However, sharable resources like read-only files can be accessed by multiple processes at the
same time.
For non-sharable resources, prevention through this method is not possible.
2. Eliminate Hold and Wait
Hold and wait is a condition in which a process holds one resource while simultaneously waiting
for another resource that is being held by a different process. The process cannot continue until
it gets all the required resources.
There are two ways to eliminate hold and wait:
• By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and
Resource2.
• By eliminating hold: The process has to release all resources it is currently holding before
making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting
Resource1.
3. Eliminate No Preemption
No preemption means resources can’t be taken away once allocated. To eliminate this condition:
• Processes must release resources voluntarily: A process gives up resources once it
finishes using them.
• Avoid partial allocation: If a process requests resources that are unavailable, it must
release all currently held resources and wait until all required resources are free.
4. Eliminate Circular Wait
Circular wait happens when processes form a cycle, each waiting for a resource held by the
next. To prevent this:
• Impose a strict ordering on resources.
• Assign each resource a unique number.
• Processes can only request resources in increasing order of their numbers.
• This prevents cycles, as no process can go backwards in numbering.
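A minimal sketch of this ordering rule, assuming two pthread mutexes numbered 1 and 2: because every thread acquires the lower-numbered lock first, no thread ever waits for a lower-numbered lock while holding a higher-numbered one, so no cycle can form.

#include <pthread.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER; /* resource #1 */
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER; /* resource #2 */

/* Every thread follows the same rule: lock1 before lock2. */
void use_both(void)
{
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);
    /* ... work with both resources ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
}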
Deadlock Avoidance
Deadlock avoidance is used by the operating system to avoid deadlock in the system.
Each process must declare in advance the maximum resources it will need, so that the
operating system can simulate the allocation of available resources to the requesting
processes and check whether it is possible to satisfy the requirements of all processes.
Safe State and Unsafe State
A safe state is a system state in which there exists at least one sequence of resource
allocations (a safe sequence) that allows every process to run to completion. As long as
the system remains in safe states, deadlock cannot occur.
Conversely, an unsafe state is a system state for which no such sequence can be
guaranteed. The successful completion of all processes is not assured, and a deadlock
may occur, although it is not certain.
Deadlock Avoidance Solutions
Deadlock Avoidance can be solved by two different algorithms:
Resource allocation Graph
Banker's Algorithm
Deadlock Avoidance Algorithms
• When resource categories have only single instances of their resources, Resource-
Allocation Graph Algorithm is used.
• In this algorithm, a cycle is a necessary and sufficient condition for deadlock.
• When resource categories have multiple instances of their resources, Banker's Algorithm
is used. In this algorithm, a cycle is a necessary but not a sufficient condition for
deadlock.
Resource-Allocation Graph Algorithm
Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance.
It is a directed graph that represents the processes in the system, the resources available, and
the relationships between them.
The RAG has two types of edges: request edges and assignment edges.
A request edge represents a request by a process for a resource,
while an assignment edge represents the assignment of a resource to a process.
To determine whether the system is in a safe state or not, the RAG is analyzed to check for
cycles. If there is a cycle in the graph, it means that the system is in an unsafe state, and
granting a resource request can lead to a deadlock. In contrast, if there are no cycles in the
graph, it means that the system is in a safe state, and resource allocation can proceed without
causing a deadlock.
Banker's Algorithm
The banker's algorithm is a deadlock avoidance algorithm used in operating systems. It was
proposed by Edsger Dijkstra in 1965.
The banker's algorithm works on the principle of ensuring that the system has enough resources
to allocate to each process so that the system never enters a deadlock state. It works by keeping
track of the total number of resources available in the system and the number of resources
allocated to each process.
The Banker's algorithm uses a matrix called the "allocation matrix" to keep track of the
resources allocated to each process, and a "request matrix" to keep track of the resources
requested by each process.
It also uses a "need matrix" to represent the resources that each process still needs to complete
its execution.
To determine if a request can be granted, the algorithm checks if there is enough available
resources to satisfy the request, and then checks if granting the request will still result in a safe
state. If the request can be granted safely, the algorithm grants the resources and updates the
allocation matrix, request matrix, and need matrix accordingly. If the request cannot be granted
safely, the process must wait until sufficient resources become available.
1. Initialize the system
• Define the number of processes and resource types.
• Define the total number of available resources for each resource type.
• Create a matrix called the "allocation matrix" to represent the current resource
allocation for each process.
• Create a matrix called the "need matrix" to represent the remaining resource needs for
each process.
2. Define a request
• A process requests a certain number of resources of a particular type.
3. Check if the request can be granted
• Check if the requested resources are available.
• If the requested resources are not available, the process must wait.
• If the requested resources are available, go to the next step.
4. Check if the system is in a safe state
• Simulate the allocation of the requested resources to the process.
• Check if this allocation results in a safe state, meaning there is a sequence of allocations
that can satisfy all processes without leading to a deadlock.
• If the state is safe, grant the request by updating the allocation matrix and the need
matrix.
• If the state is not safe, do not grant the request and let the process wait.
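A compact sketch of the safety check in step 4 is given below; the matrix sizes and contents are illustrative inputs, not data taken from this section. The function looks for some process whose Need fits within Work, lets it finish, reclaims its Allocation, and repeats until everyone finishes (safe) or no one can proceed (unsafe).

#include <stdbool.h>
#include <stdio.h>

#define N 3  /* processes      (illustrative) */
#define M 2  /* resource types (illustrative) */

bool is_safe(int alloc[N][M], int need[N][M], int avail[M])
{
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;                 /* Need[i] <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)   /* reclaim its allocation */
                    work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;        /* no safe sequence exists */
    }
    return true;                              /* all processes can finish */
}

int main(void)
{
    int alloc[N][M] = { {0,1}, {2,0}, {1,1} };
    int need[N][M]  = { {3,2}, {1,2}, {0,1} };
    int avail[M]    = { 2, 1 };
    printf(is_safe(alloc, need, avail) ? "safe\n" : "unsafe\n");
    return 0;
}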
Deadlock Detection in Operating System
• Deadlock Detection is the mechanism of detecting and resolving deadlocks in an
operating system.
• A deadlock occurs when two or more processes are blocked, waiting for each other to
release the resources they need.
• Deadlock detection is the process of identifying when processes are stuck waiting for
resources held by other processes.
• The deadlock detection algorithm is of two types:
• Wait-for-Graph Algorithm (single instance)
• Banker's Algorithm (multiple instance), also called the resource-request algorithm
• Wait-for-Graph Algorithm
• It is a variant of the Resource Allocation graph. In this algorithm, we only have processes
as vertices in the graph. If the Wait-for-Graph contains a cycle then we can say the
system is in a Deadlock state. We need to remove resources while converting from
Resource Allocation Graph to Wait-for-Graph.
• Resource Allocation Graph: Contains Processes and Resources.
• Wait-for-Graph: Contains only Processes after removing the Resources while conversion
from Resource Allocation Graph.
Algorithm
• Below are the steps to follow:
• Step 1: Take the first process (Pi) from the resource allocation graph and check the path
through which it is requesting a resource (Ri); start the Wait-for-Graph with that particular
process.
• Step 2: Draw an edge in the Wait-for-Graph directly from the current process (Pi) to the
next process (Pj) that holds the resource Pi is waiting for, with no resource node in
between. Then, from that next process (Pj), find the resource (Rj) it is waiting for and the
process (Pk) that holds it, and continue in the same way.
• Step 3: Repeat Step 2 for all the processes.
• Step 4: After completion of all processes, if we find a closed-loop cycle then the system
is in a deadlock state, and deadlock is detected.
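A minimal sketch of this check as code: collapse the graph to direct process-to-process "waits-for" edges and run a depth-first search; revisiting a vertex that is already on the current path means a cycle, i.e. a deadlock. The edge data below encodes the four-process cycle of Example 1 that follows.

#include <stdbool.h>
#include <stdio.h>

#define NP 4

/* waits[i][j] = true if Pi waits for a resource held by Pj:
   P1 -> P2 -> P3 -> P4 -> P1 (indices 0..3). */
static bool waits[NP][NP] = {
    [0][1] = true, [1][2] = true, [2][3] = true, [3][0] = true
};

static bool on_path[NP], visited[NP];

static bool dfs(int i)
{
    if (on_path[i]) return true;   /* back on the current path: cycle */
    if (visited[i]) return false;
    visited[i] = on_path[i] = true;
    for (int j = 0; j < NP; j++)
        if (waits[i][j] && dfs(j)) return true;
    on_path[i] = false;
    return false;
}

int main(void)
{
    for (int i = 0; i < NP; i++)
        if (dfs(i)) { printf("deadlock detected\n"); return 0; }
    printf("no deadlock\n");
    return 0;
}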
Example 1:
Consider a Resource Allocation Graph with 4 processes P1, P2, P3, P4 and 4 resources
R1, R2, R3, R4. Find whether there is a deadlock in the graph using the Wait-for-Graph-based
deadlock detection algorithm.
Step 1: First take Process P1, which is waiting for Resource R1; resource R1 is acquired
by Process P2. Start a Wait-for-Graph for the above Resource Allocation Graph.
Step 2: Now we can observe that there is a path from P1 to P2, as P1 is waiting for R1,
which has been acquired by P2. After removing resource R1, the graph looks like this.
Step 3: From P2 we can observe a path from P2 to P3, as P2 is waiting for R4, which is
acquired by P3. So make a path from P2 to P3; after removing resource R4, the graph
looks like this.
Step 4: From P3 we find a path to P4, as P3 is waiting for R3, which is acquired by P4.
After removing R3, the graph looks like this.
Step 5: Here we can see that Process P4 is waiting for R2, which is acquired by P1. So
finally the Wait-for-Graph is as follows:
Finally, in this graph we find a cycle, as Process P4 leads back to Process P1, which is
the starting point (i.e., it is a closed loop).
So, according to the algorithm, if we find a closed loop then the system is in a
deadlock state. So here we can say the system is in a deadlock state.
Example 2:
Now consider another Resource Allocation Graph with 4 processes P1, P2, P3, P4 and 3
resources R1, R2, R3. Find whether there is a deadlock in the graph using the
Wait-for-Graph-based deadlock detection algorithm.
Step 1: First take Process P1, which is waiting for Resource R1; resource R1 is acquired
by Process P2. Start a Wait-for-Graph for the above Resource Allocation Graph.
Step 2: Now we can observe that there is a path from P1 to P2 and also from P1 to P4, as
P1 is waiting for R1, which has been acquired by P2, and P1 is also waiting for R2, which
is acquired by P4. After removing resources R1 and R2, the graph looks like this.
Step 3: From P2 we can observe a path from P2 to P3, as P2 is waiting for R3, which is
acquired by P3. So make a path from P2 to P3; after removing resource R3, the graph
looks like this.
Step 4: Here we can see that Process P4 is waiting for R3, which is acquired by P3.
So finally, after removing resource R3, the Wait-for-Graph looks like this.
In this graph, we do not find a cycle, as no process leads back to the starting point (i.e.,
there is no closed loop). According to the algorithm, if we find a closed loop then the
system is in a deadlock state; but here we did not find any closed loop, so the system is
not in a deadlock state. The system is in a safe state.
Note: In Example 2, even though it may look like a loop, no process reaches the first
process (the starting point) again, so there is no closed loop.
Resource Request Algorithm for Deadlock Detection
Ex: Problem Setup
Processes = 2 (P0, P1)
Resources = 2 types (R0, R1)
Allocation Matrix (resources already held)
P0 : 1 2
P1 : 2 1
Request Matrix (still needed to finish)
P0 : 2 2
P1 : 3 3
Available Resources
3 3
Problem Analysis:
Step 1: Initialization
• Work = (3,3) (copy of Available)
• Finish = [False, False] (no process finished yet)
Step 2: Check if any process can finish
Condition: if Request[i] ≤ Work, then process i can finish and release its allocation.
• P0 Request = (2,2)
Compare with Work (3,3): Yes, because (2 ≤ 3, 2 ≤ 3)
→ P0 can run and finish.
• After finishing, release its Allocation (1,2).
• New Work = (3,3) + (1,2) = (4,5)
• Finish[0] = True
• P1 Request = (3,3)
Compare with new Work (4,5): Yes, because (3 ≤ 4, 3 ≤ 5)
→ P1 can also run and finish.
• After finishing, release its Allocation (2,1).
• New Work = (4,5) + (2,1) = (6,6)
• Finish[1] = True
Step 3: Result
• Both processes finished successfully.
• Finish = [True, True]
• System is NOT in Deadlock.
Explanation in Words
1. The system starts with 3 units of R0 and 3 units of R1.
2. P0’s request (2,2) can be satisfied immediately, so it runs and releases resources.
3. After P0 finishes, more resources are available (4,5).
4. Now P1’s request (3,3) can also be satisfied, so it finishes too.
5. Since both processes can eventually complete, no deadlock occurs.
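The same Work/Finish loop can be written as a short program. The sketch below uses exactly the Allocation, Request, and Available data from this example; unlike the Banker's safety check, it compares Work against current Requests rather than maximum Needs.

#include <stdbool.h>
#include <stdio.h>

#define N 2  /* processes */
#define M 2  /* resource types */

int main(void)
{
    int alloc[N][M]   = { {1,2}, {2,1} };  /* Allocation matrix */
    int request[N][M] = { {2,2}, {3,3} };  /* Request matrix    */
    int work[M]       = { 3, 3 };          /* copy of Available */
    bool finish[N]    = { false, false };

    for (bool progressed = true; progressed; ) {
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;              /* Request[i] <= Work ? */
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {                    /* Pi can finish; reclaim */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)            /* any unfinished Pi is deadlocked */
        if (!finish[i]) { printf("P%d is deadlocked\n", i); return 0; }
    printf("system is NOT in deadlock\n"); /* matches the result above */
    return 0;
}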
Deadlock Recovery
Deadlock recovery is a critical process that is initiated after a deadlock has been detected in a
computer system.
Deadlock recovery is the process of resolving a deadlock after it has already occurred. Unlike
deadlock prevention (which avoids deadlocks in advance) or deadlock avoidance (which
carefully allocates resources to avoid unsafe states), recovery accepts that deadlocks may occur
and focuses on fixing them.
There are four primary methods of deadlock recovery: process termination, resource pre-
emption, priority inversion, and rollback.
Process Termination
Process termination is a simple method for resolving deadlocks.
In this method, the operating system identifies the processes involved in the deadlock and
terminates one or more processes.
This frees up the resources held by the terminated processes, which can be used by the
remaining processes to continue their execution.
Resource Pre-emption
Resource pre-emption is a more complex method for resolving deadlocks.
In this method, the operating system identifies the resources involved in the deadlock and
selects one or more resources to be pre-empted.
The resources are then taken away from the process holding them and allocated to the waiting
processes.
The pre-empted process is suspended until the required resources become available again.
This method can cause delays in the execution of the pre-empted process and can result in a
suboptimal allocation of resources.
Priority Inversion
Priority inversion is a method for resolving deadlocks in real-time systems.
In this method, the priority of the processes is changed to avoid deadlock situations.
The process holding the required resources is given a higher priority, and the process waiting for
the resources is given a lower priority.
This method can lead to the inversion of priorities, which can cause performance issues and
degrade the performance of the system.
Additionally, this method can also lead to starvation of lower priority processes, as higher
priority processes can keep preempting the resources.
Rollback
Rollback is a method for resolving deadlocks that is commonly used in database systems. In this
method, the system rolls back the transactions of the involved processes to a previous state
where they were not deadlocked. This method requires that the system maintains a log of all
the transactions and the state of the system at different points in time. The system can then roll
back the transactions to the previous state and re-execute them. This method can cause
significant delays in the execution of the transactions and can result in a loss of data.
Deadlock detection involves checking for the presence of deadlocks in the system using
methods such as RAG and WFG.
Deadlock recovery involves breaking the deadlock by releasing the resources held by the
processes in the deadlock using methods such as process termination, resource preemption,
priority inversion, and rollback.