
CS23431 OS Unit - III

The document discusses process synchronization, critical section problems, and various solutions such as Peterson's solution, mutex locks, and semaphores. It also covers classic synchronization problems like the bounded buffer, dining philosophers, and readers-writers problems, along with their respective solutions. Additionally, it addresses issues like deadlocks, starvation, and priority inversion, emphasizing the importance of mutual exclusion and proper resource management in concurrent processes.



PROCESS SYNCHRONIZATION:

Process synchronization is the coordination of cooperating processes that share system resources, such that concurrent access to shared data is controlled and the chance of data inconsistency is minimized.
EXAMPLE:
❖ Consider the producer consumer process that contains a variable called counter.
❖ Counter is incremented every time we add a new item to the buffer and is decremented every time
we remove one item from the buffer.
❖ The code for the producer process is
while (true) {
/* produce an item in next_produced */
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
❖ The code for the consumer process is
while (true) {
while (counter == 0)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
/* consume the item in next_consumed */
}
❖ Suppose that the value of the variable counter is currently 5 and that the producer and consumer
processes concurrently execute the statements “counter++” and “counter--”.
❖ Following the execution of these two statements, the value of the variable counter may be 4, 5, or
6!
❖ The only correct result, though, is counter == 5, which is generated correctly if the producer and
consumer execute separately.
❖ A situation in which several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.
❖ To guard against the race condition we need to ensure that only one process at a time can be
manipulating the variable counter.

CRITICAL SECTION PROBLEM:


❖ The critical-section problem is to design a protocol that the processes can use to cooperate.
❖ Each process has a segment of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on.
❖ When one process is executing in its critical section, no other process is allowed to execute in its
critical section. That is, no two processes are executing in their critical sections at the same time.
❖ The structure of critical section problem is
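The general structure of a typical process Pi in this scheme can be sketched as:

```
do {
    /* entry section */
        /* critical section */
    /* exit section */
        /* remainder section */
} while (true);
```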

❖ Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
❖ The critical section may be followed by an exit section.
❖ The remaining code is the remainder section.
❖ A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter its critical section next.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

SOLUTIONS TO CRITICAL SECTION PROBLEM:

PETERSON’S SOLUTION:
❖ Peterson’s solution is restricted to two processes that alternate execution between their critical
sections and remainder sections.
❖ The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote
the other process.
❖ Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];
❖ The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then
process Pi is allowed to execute in its critical section.
❖ The flag array is used to indicate if a process is ready to enter its critical section. For example, if
flag[i] is true, this value indicates that Pi is ready to enter its critical section.

❖ To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so.
❖ Similarly, to enter the critical section, process Pj first sets flag[j] to true and then sets turn to the value i.
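The structure of process Pi in Peterson's solution can be sketched as:

```
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* busy wait */
        /* critical section */
    flag[i] = false;
        /* remainder section */
} while (true);
```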
❖ The solution is correct and thus ensures the following:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
MUTEX LOCKS:
❖ Mutex locks are used to protect critical regions and thus prevent race conditions
❖ A process must acquire the lock before entering a critical section; it releases the lock when it exits
the critical section.

❖ The acquire()function acquires the lock, and the release() function releases the lock.
❖ A mutex lock has a boolean variable available whose value indicates if the lock is available or
not.
❖ If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
❖ A process that attempts to acquire an unavailable lock is blocked until the lock is released.
❖ The definition of acquire() is as follows:
acquire() {
while (!available)
; /* busy wait */
available = false;
}
❖ The definition of release() is as follows:
release()
{
available = true;
}
❖ The main disadvantage of the implementation given here is that it requires busy waiting.
❖ While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the call to acquire().
❖ This type of mutex lock is also called a spinlock because the process “spins” while waiting for
the lock to become available.
❖ Busy waiting wastes CPU cycles that some other process might be able to use productively.
❖ Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time.

SEMAPHORES:
❖ A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
❖ The wait() operation was originally termed P (to test); signal() was originally called V (to increment).
❖ The definition of wait() is as follows:
wait(S) {
while (S <= 0)
; // busy wait
S--;
}

❖ The definition of signal() is as follows:


signal(S) {
S++;
}
❖ Operating systems often distinguish between counting and binary semaphores.
❖ The value of a counting semaphore can range over an unrestricted domain.
❖ The value of a binary semaphore can range only between 0 and 1. Thus, binary semaphores
behave similarly to mutex locks.
❖ Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances.
❖ In this case the semaphore is initialized to the number of resources available.
❖ Each process that wishes to use a resource performs a wait() operation on the semaphore
❖ When a process releases a resource, it performs a signal() operation.
❖ When the count for the semaphore goes to 0, all resources are being used. After that, processes
that wish to use a resource will block until the count becomes greater than 0.

SEMAPHORE IMPLEMENTATION:
❖ The main disadvantage of the semaphore definition given here is that it requires busy waiting. While one process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.
❖ The mutual exclusion implementation with semaphores is given by
do
{
wait(mutex);
//critical section
signal(mutex);
//remainder section
}while(TRUE);
❖ To overcome the need for busy waiting, we can modify the wait() and signal() operations.
❖ When a process executes the wait() operation and finds that the semaphore value is not positive, it
must wait. However, rather than engaging in busy waiting, the process can block itself.
❖ The block operation places a process into a waiting queue associated with the semaphore.
❖ Then control is transferred to the CPU scheduler, which selects another process to execute.
❖ A process that is blocked, waiting on a semaphore S, should be restarted when some other process
executes a signal() operation.
❖ The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state.
❖ To implement semaphores under this definition, we define a semaphore as follows:
typedef struct {
int value;
struct process *list;
} semaphore;
❖ Each semaphore has an integer value and a list of processes, list. When a process must wait on a semaphore, it is added to this list of processes.
❖ A signal() operation removes one process from the list of waiting processes and awakens that
process.
❖ The wait() semaphore operation can be defined as
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
❖ The signal() semaphore operation can be defined as
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
❖ The block() and wakeup() operations are provided by the operating system as system calls.
DEADLOCKS AND STARVATION:

❖ The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. This situation is called a deadlock.
❖ Example: Consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, that are set to the value 1.
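The standard example can be sketched as:

```
P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
 ...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);
```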

❖ P0 executes wait(S) and P1 executes wait(Q).


❖ When P0 executes wait(Q), it must wait until P1 executes signal(Q).
❖ Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).
❖ Since the signal() operations can never be executed, P0 and P1 are deadlocked.

PRIORITY INVERSION:
❖ Assume we have three processes—L, M, and H—whose priorities follow the order L < M < H.
❖ The process H requires resource R, which is currently being accessed by process L.
❖ Process H would wait for L to finish using resource R. Suppose that process M becomes
runnable, thereby preempting process L.
❖ Indirectly, a process with a lower priority, process M, has affected how long process H must wait for L to release resource R. This problem is known as priority inversion.
❖ Priority-inheritance can solve the problem of priority inversion.
❖ According to this protocol, all processes that are accessing resources needed by a higher-priority
process inherit the higher priority until they are finished with the resources that are requested.
❖ When they are finished, their priorities revert to their original values.

CLASSIC PROBLEMS OF SYNCHRONIZATION:

1. Bounded Buffer problem:


❖ In this problem, the producer process produces data and the consumer process consumes it. The two processes share the following data structures:
int n;
Semaphore mutex = 1;
Semaphore empty = n;
Semaphore full = 0;
❖ Assume that the pool consists of n buffers, each capable of holding one item.
❖ The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized
to the value 1.
❖ The empty and full semaphores count the number of empty and full buffers.
❖ The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
❖ Code for producer process:
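The producer can be sketched in the usual semaphore style:

```
do {
    /* produce an item in next_produced */
    wait(empty);
    wait(mutex);
    /* add next_produced to the buffer */
    signal(mutex);
    signal(full);
} while (true);
```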

❖ Code for Consumer process:
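The consumer mirrors the producer, waiting on full and signaling empty:

```
do {
    wait(full);
    wait(mutex);
    /* remove an item from buffer to next_consumed */
    signal(mutex);
    signal(empty);
    /* consume the item in next_consumed */
} while (true);
```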

2. THE DINING-PHILOSOPHERS PROBLEM:


❖ Consider five philosophers who spend their lives thinking and eating. The philosophers share a
circular table surrounded by five chairs, each belonging to one philosopher.
❖ In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
❖ When a philosopher gets hungry she tries to pick up the two chopsticks that are closest to her.
❖ A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor.

❖ When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
the chopsticks. When she is finished eating, she puts down both chopsticks.
❖ One simple solution is to represent each chopstick with a semaphore.
❖ A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She
releases her chopsticks by executing the signal () operation on the appropriate semaphores.
❖ The shared data is semaphore chopstick [5]; where all the elements of the chopsticks are
initialized to 1.
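The structure of philosopher i can be sketched as follows; note that this simple solution can deadlock if all five philosophers grab their left chopsticks simultaneously, which is what the remedies below address:

```
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    /* think */
} while (true);
```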

❖ Several possible remedies to the deadlock problem are the following:


• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this,
she must pick them up in a critical section).
• Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her left
chopstick and then her right chopstick, whereas an even numbered philosopher picks up her right
chopstick and then her left chopstick.

3. THE READERS WRITERS PROBLEM:


❖ A database is to be shared among several concurrent processes.
❖ Some of these processes may want only to read the database, whereas others may want to update
(that is, to read and write) the database.
❖ We distinguish between these two types of processes by referring to the former as readers and to
the latter as writers.
❖ If two readers access the shared data simultaneously, no effects will result.
❖ However, if a writer and some other process (either a reader or a writer) access the database
simultaneously, problems may occur.
❖ To ensure that these difficulties do not arise, we require that the writers have exclusive access to
the shared database while writing to the database. This synchronization problem is referred to as
the readers–writers problem.
❖ In the solution to the first readers–writers problem, the reader processes share the following data structures:
Semaphore rwmutex = 1;
Semaphore mutex = 1;
int read_count = 0;
❖ The semaphores mutex and rwmutex are initialized to 1; read_count is initialized to 0. The semaphore rwmutex is common to both the reader and writer processes.

❖ Code for writer process:


do {
wait(rwmutex);
...
/* writing is performed */
...
signal(rwmutex);
} while (true);
❖ The mutex semaphore is used to ensure mutual exclusion when the variable read_count is updated.
❖ The read_count variable keeps track of how many processes are currently reading the object.
❖ The semaphore rwmutex functions as a mutual-exclusion semaphore for the writers.
❖ Code for readers process:
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rwmutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rwmutex);
signal(mutex);
} while (true);

MONITORS:
❖ Although semaphores provide a convenient and effective mechanism for process synchronization,
using them incorrectly can result in timing errors that are difficult to detect.
❖ EXAMPLE: Suppose that a process interchanges the order in which the wait() and signal()
operations on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
...
critical section
...
wait(mutex);
❖ In this situation, several processes may be executing in their critical sections simultaneously,
violating the mutual-exclusion requirement.
❖ Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex);
...
critical section
...
wait(mutex);
❖ In this case, a deadlock will occur. To deal with such errors, a fundamental high-level synchronization construct called the monitor type is used.
❖ A monitor type is an abstract data type (ADT) that includes a set of programmer-defined operations that are provided with mutual exclusion within the monitor.
❖ The monitor type also declares the variables whose values define the state of an instance of that
type, along with the bodies of functions that operate on those variables.

❖ Thus, a function defined within a monitor can access only those variables declared locally within
the monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed
by only the local functions.
❖ The monitor construct ensures that only one process at a time is active within the monitor.

❖ Monitors also provide a synchronization mechanism through the condition construct. A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:
condition x, y;
❖ The only operations that can be invoked on a condition variable are wait() and signal().
❖ The operation x.wait(); means that the process invoking this operation is suspended until another
process invokes x.signal();
❖ The x.signal() operation resumes exactly one suspended process
❖ Now suppose that, when the x.signal() operation is invoked by a process P, there exists a
suspended process associated with condition x.
❖ Clearly, if the suspended process Q is allowed to resume its execution, the signaling process P
must wait. Otherwise, both P and Q would be active simultaneously within the monitor.
❖ Note, however, that conceptually both processes can continue with their execution. Two possibilities exist:
1. Signal and wait. P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue. Q either waits until P leaves the monitor or waits for another condition.

Dining-Philosophers Solution Using Monitors:


❖ Recall the dining-philosophers problem: five philosophers share a circular table laid with five single chopsticks. A hungry philosopher needs the two chopsticks closest to her, may pick up only one chopstick at a time, and cannot take a chopstick already held by a neighbor. When she is finished eating, she puts down both chopsticks.
❖ The solution imposes the restriction that a philosopher may pick up her chopsticks only if both of
them are available.

❖ To code this solution, we need to distinguish among three states in which we may find a
philosopher. For this purpose, we introduce the following data structure:
enum {THINKING, HUNGRY, EATING} state[5];
❖ Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating: (state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).
❖ We also declare condition self[5]; this allows philosopher i to delay herself when she is hungry but unable to obtain the chopsticks she needs.
❖ Each philosopher, before starting to eat, must invoke the operation pickup().
❖ This may result in the suspension of the philosopher process.
❖ After the successful completion of the operation, the philosopher may eat. Following this, the
philosopher invokes the putdown() operation.
❖ Thus, philosopher i must invoke the operations pickup() and putdown() in the following
sequence:
DiningPhilosophers.pickup(i);
/* eat */
DiningPhilosophers.putdown(i);
❖ This solution ensures that no two neighbors are eating simultaneously and that no deadlocks will
occur.
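The monitor described above is, in the standard textbook sketch:

```
monitor DiningPhilosophers
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);   /* may unblock the left neighbor */
        test((i + 1) % 5);   /* may unblock the right neighbor */
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```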

DEADLOCK
What is deadlock?
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time,
the process enters a waiting state. Waiting processes may never again change state,
because the resources they have requested are held by other waiting processes. This
situation is called a deadlock.
🡪 A set of processes is in a deadlock state when every process in the set is waiting for an
event that can be caused only by another process in the set. Because all the processes are
waiting, none of them will ever cause any of the events that could wake up any of the
other members of the set, and all the processes continue to wait forever.
Example:
Two processes each want to record a scanned document on a CD. Process A requests
permission to use the scanner and is granted it. Process B is programmed differently and
requests the CD recorder first and is also granted it. Now A asks for the CD recorder, but
the request is denied until B releases it. Unfortunately, instead of releasing the CD
recorder B asks for the scanner. At this point both processes are blocked and will remain
so forever. This situation is called a deadlock.

Resources of a Computer System:


Reusable resource:
Used by one process at a time. Processes obtain resources that they later release for reuse by other processes.
Example:
Processor time, I/O channels, main and secondary memory, files, databases, and
semaphores.
Consumable resources:
Created (produced) and destroyed (consumed) by a process.
Examples:
Interrupts, signals, messages, and information in I/O buffers.
🡪 A process must request a resource before using it, and must release the resource after
using it. A process may request as many resources as it requires to carry out its
designated task.
🡪 Obviously, the number of resources requested may not exceed the total number of
resources available in the system. In other words, a process cannot request three
printers if the system has only two.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:
1. Request: If the request cannot be granted immediately (for example, the resource is
being used by another process), then the requesting process must wait until it can acquire
the resource.
2. Use: The process can operate on the resource (for example, if the resource is a printer, the
process can print on the printer).
3. Release: The process releases the resource.

Deadlock caused by other Processes in the system:


🡪 A set of processes is in a deadlock state when every process in the set is waiting for an
event that can be caused only by another process in the set.
🡪 The events with which we are mainly concerned here are resource acquisition and release.
The resources may be either physical resources (for example, printers, tape drives,
memory space, and CPU cycles) or logical resources (for example files, semaphores, and
monitors).

Example 1: Deadlock involving the same resource type.


To illustrate a deadlock state, we consider a system with three tape drives. Suppose each
of three processes holds one of these tape drives. If each process now requests another tape
drive, the three processes will be in a deadlock state. Each is waiting for the event "tape drive
is released," which can be caused only by one of the other waiting processes. This example
illustrates a deadlock involving the same resource type.

Example 2: Deadlock involving different resource types:


Deadlocks may also involve different resource types. For example, consider a system with one
printer and one tape drive. Suppose that process Pi is holding the tape drive and process Pj is
holding the printer. If Pi requests the printer and Pj requests the tape drive, a deadlock occurs.

Deadlock Characterization:

Necessary Conditions for Deadlock:


A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. Mutual exclusion:
At least one resource must be held in a non-sharable mode; that is, only one process at a
time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait:
A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
3. No preemption:
Resources cannot be preempted; that is, a resource can be released only voluntarily by
the process holding it, after that process has completed its task.
4. Circular wait:
A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting
for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource Allocation Graph:

🡪 Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph.
🡪 This graph consists of a set of vertices V and a set of edges E.

🡪 The set of vertices V is partitioned into two different types:


1. P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
2. R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

Edges:
Request edge:
A directed edge from process Pi to resource type Rj is denoted by Pi 🡪 Rj; it signifies
that process Pi has requested an instance of resource type Rj and is currently waiting for
that resource. A directed edge Pi 🡪 Rj is called a request edge.

Assignment edge:
A directed edge from resource type Rj to process Pi is denoted by Rj 🡪 Pi; it signifies
that an instance of resource type Rj has been allocated to process Pi. A directed edge
Rj 🡪 Pi is called an assignment edge.

Pictorial representation of Resource allocation graph:

🡪 Each process Pi is represented as a circle.
🡪 Each resource type Rj is represented as a square. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the square.
🡪 A request edge Pi 🡪 Rj is drawn from the circle for Pi to the square for Rj; it shows that process Pi has requested an instance of resource type Rj.
🡪 An assignment edge Rj 🡪 Pi is drawn from a dot within the square for Rj to the circle for Pi; it shows that Pi is holding an instance of Rj.
🡪 When process Pi requests an instance of resource type Rj, a request edge is inserted in
the resource-allocation graph.
🡪 When this request can be fulfilled, the request edge is instantaneously transformed to
an assignment edge.
🡪 When the process no longer needs access to the resource it releases the resource, and as
a result the assignment edge is deleted.
Example of a Resource Allocation Graph

This resource-allocation graph shows the following situation.


The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 🡪 R1, P2 🡪 R3, R1 🡪 P2, R2 🡪 P2, R2 🡪 P1, R3 🡪 P3}
Resource instances:
🡪 One instance of resource type R1.
🡪 Two instances of resource type R2.
🡪 One instance of resource type R3.
🡪 Three instances of resource type R4.


Process states:
🡪 Process P1 is holding an instance of resource type R2, and is waiting for an instance
of resource type R1.
🡪 Process P2 is holding an instance of R1 and R2, and is waiting for an instance of
resource type R3.
🡪 Process P3 is holding an instance of R3.

Given the definition of a resource-allocation graph, if the graph contains no cycles, then
no process in the system is deadlocked. If the graph does contain a cycle, then a
deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock
has occurred.
🡪 If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is
deadlocked. In this case, a cycle in the graph is both a necessary and a sufficient
condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily imply
that a deadlock has occurred.
🡪 In this case, a cycle in the graph is a necessary but not a sufficient condition for the
existence of deadlock.
Example: Resource Allocation Graph with A Deadlock.
Consider the above resource allocation graph,
Suppose that process P3 requests an instance of resource type R2. Since no resource
instance is currently available, a request edge P3🡪R2 is added to the graph. This is
depicted here.

This graph shows that two cycles exist in the system:


P1🡪R1🡪P2🡪R3🡪P3🡪R2🡪P1

P2🡪R3🡪P3🡪R2🡪P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held
by process P3. Process P3, in turn, is waiting for either process P1 or process P2 to
release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Example 2: Resource Allocation Graph With A Cycle But No Deadlock
Consider the resource allocation graph,

In this graph we also have the cycle P1 🡪 R1 🡪 P3 🡪 R2 🡪 P1.

🡪 However, there is no deadlock. Observe that process P4 may release its instance of
resource type R2. That resource can then be allocated to P3, breaking the cycle.
🡪 So if a resource-allocation graph does not have a cycle, then the system is not in a
deadlock state. On the other hand, if there is a cycle, then the system may or may not
be in a deadlock state.
METHODS FOR HANDLING DEADLOCKS:

Principally, we can deal with the deadlock problem in one of three ways:
1. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
2. We can allow the system to enter a deadlock state, detect it, and recover.
3. We can ignore the problem altogether, and pretend that deadlocks never occur in the
system. This solution is used by most operating systems, including UNIX.

To ensure that deadlocks never occur, the system can use either deadlock prevention or a
deadlock-avoidance scheme.
🡪 Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold. These methods prevent deadlocks by
constraining how requests for resources can be made.
🡪 Deadlock avoidance, on the other hand, requires that the operating system be
given in advance additional information concerning which resources a process
will request and use during its lifetime. With this additional knowledge, we can
decide for each request whether or not the process should wait. To decide whether
the current request can be satisfied or must be delayed, the system must consider
the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process.
Deadlock detection and recovery: if a system does not employ either a deadlock-
prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In
this environment, the system can provide an algorithm that examines the state of the
system to determine whether a deadlock has occurred, and an algorithm to recover from
the deadlock (if one has indeed occurred).
If a system does not ensure that a deadlock will never occur, and also does not
provide a mechanism for deadlock detection and recovery, then we may arrive at a
situation where the system is in a deadlock state yet has no way of recognizing what has
happened. In this case, the undetected deadlock will result in the deterioration of the
system performance, because resources are being held by processes that cannot run, and
because more and more processes, as they make requests for resources, enter a deadlock
state. Eventually, the system will stop functioning and will need to be restarted
manually. Although this method does not seem to be a viable approach to the deadlock
problem, it is nevertheless used in some operating systems.

DEADLOCK PREVENTION:

A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual Exclusion
2. Hold and Wait
3. No Pre-emption
4. Circular wait
By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
Mutual Exclusion
The mutual-exclusion condition must hold for nonsharable resources.
🡪 For example, a printer cannot be simultaneously shared by several processes.

🡪 Sharable resources, on the other hand, do not require mutually exclusive access, and
thus cannot be involved in a deadlock. Read-only files are a good example of a
sharable resource. If several processes attempt to open a read-only file at the same
time, they can be granted simultaneous access to the file. A process never needs to
wait for a sharable resource.
In general, however, we cannot prevent deadlocks by denying the mutual-exclusion
condition: Some resources are intrinsically nonsharable.

Hold and Wait

To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a process requests a resource, it does not hold any other
resources.
Two Protocols (rule):
1. One protocol that can be used requires each process to request and be allocated all
its resources before it begins execution. We can implement this provision by
requiring that system calls requesting resources for a process precede all other system
calls.
2. Second protocol allows a process to request resources only when the process has
none. A process may request some resources and use them. Before it can request any
additional resources, however, it must release all the resources that it is currently
allocated.

Example:
🡪 To illustrate the difference between these two protocols, we consider a process that
copies data from a tape drive to a disk file, sorts the disk file, and then prints the
results to a printer.
🡪 If all resources must be requested at the beginning of the process, then the process
must initially request the tape drive, disk file, and printer. It will hold the printer
for its entire execution, even though it needs the printer only at the end.
🡪 The second method (Protocol) allows the process to request initially only the tape
drive and disk file. It copies from the tape drive to the disk, and then releases both
the tape drive and the disk file. The process must then again request the disk file
and the printer. After copying the disk file to the printer, it releases these two
resources and terminates.
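The two protocols above can be illustrated with locks. This is a minimal sketch, not code from the text: the resource names (tape, disk, printer) follow the copy-sort-print example, Python's `threading.Lock` stands in for a nonsharable resource, and both function names are hypothetical.

```python
import threading

# Hypothetical nonsharable resources for the tape-to-disk-to-printer example.
tape = threading.Lock()
disk = threading.Lock()
printer = threading.Lock()

def protocol1(log):
    """Protocol 1: request and hold ALL resources before execution begins."""
    with tape, disk, printer:          # printer is held for the whole run
        log.append("copied tape to disk")
        log.append("printed disk file")
    return log

def protocol2(log):
    """Protocol 2: hold only the resources the current phase needs."""
    # Phase 1: copy tape -> disk (needs tape and disk only).
    with tape, disk:
        log.append("copied tape to disk")
    # Everything released; request again only what the next phase needs.
    with disk, printer:
        log.append("printed disk file")
    return log

print(protocol1([]))
print(protocol2([]))
```

Note that protocol 2 reaches the same result while never holding the printer during the copy phase, at the cost of requesting the disk twice.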
These protocols have two main disadvantages.
🡪 First, resource utilization may be low, since many of the resources may be allocated
but unused for a long period. In the example given, for instance, we can release
the tape drive and disk file, and then again request the disk file and printer, only if
we can be sure that our data will remain on the disk file. If we cannot be assured
that they will, then we must request all resources at the beginning for both
protocols.
🡪 Second, starvation is possible. A process that needs several popular resources may
have to wait indefinitely, because at least one of the resources that it needs is
always allocated to some other process.
No Preemption
The third necessary condition is that there be no preemption of resources that have
already been allocated.
To ensure that this condition does not hold, we can use the following protocol.
🡪 If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources
currently being held are preempted.
🡪 In other words, these resources are implicitly released. The preempted resources are
added to the list of resources for which the process is waiting. The process will be
restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
🡪 Alternatively, if a process requests some resources, we first check whether they are
available. If they are, we allocate them. If they are not available, we check whether
they are allocated to some other process that is waiting for additional resources. If
so, we preempt the desired resources from the waiting process and allocate them to
the requesting process. If the resources are not either available or held by a waiting
process, the requesting process must wait.
🡪 While it is waiting, some of its resources may be preempted, but only if another
process requests them. A process can be restarted only when it is allocated the new
resources it is requesting and recovers any resources that were pre-empted while it
was waiting.
🡪 This protocol is often applied to resources whose state can be easily saved and
restored later, such as CPU registers and memory space. It cannot generally be
applied to such resources as printers and tape drives.
Circular Wait
⮚ The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types,
and to require that each process requests resources in an increasing order of
enumeration.
⮚ Let R = {R1, R2, ..., Rn,} be the set of resource types. We assign to each resource type a
unique integer number, which allows us to compare two resources and to determine
whether one precedes another in our ordering.
🡪 Formally, we define a one-to-one function F: R 🡪 N, where N is the set of natural
numbers. For example, if the set of resource types R includes tape drives, disk
drives, and printers, then the function F might be defined as follows:
F (tape drive) = 1,
F (disk drive) = 5,
F (printer) = 12.
Protocols to prevent circular wait:
1. Each process can request resources only in an increasing order of enumeration.
That is, a process can initially request any number of instances of a resource type,
say Ri. After that, the process can request instances of resource type Rj if and only if
F(Rj) > F(Ri).
If several instances of the same resource type are needed, a single request for all
of them must be issued.
For example, using the function defined previously, a process that wants to use
the tape drive and printer at the same time must first request the tape drive and then
request the printer.
2. The second protocol requires that, whenever a process requests an instance of resource type Rj, it
has released any resources Ri such that F(Ri) ≥ F(Rj).
If these two protocols are used, then the circular-wait condition cannot hold.
We can demonstrate this fact by assuming that a circular wait exists (proof by
contradiction). Let the set of processes involved in the circular wait be {P0, P1, ..., Pn},
where Pi is waiting for a resource Ri, which is held by process Pi+1. (Modulo arithmetic is
used on the indexes, so that Pn is waiting for a resource Rn held by P0.)
Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must
have F(Ri) < F(Ri+1) for all i. But this condition means that F(R0) < F(R1) < ... < F(Rn) <
F(R0). By transitivity, F(R0) < F(R0), which is impossible. Therefore, there can be no
circular wait.
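A common practical application of this ordering rule is lock ordering: every piece of code acquires locks in increasing rank, so no cycle of waits can form. The sketch below assumes a hypothetical ranking function F matching the tape/disk/printer example; the helper names are illustrative, not from the text.

```python
import threading

# Assumed ordering function F: each resource type gets a unique rank.
F = {"tape": 1, "disk": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Acquire the named resources in increasing F order, so a circular wait
    among processes using this helper is impossible."""
    ordered = sorted(names, key=lambda n: F[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_all(names):
    """Release in reverse order of acquisition."""
    for n in reversed(names):
        locks[n].release()

# A process that wants the printer and the tape drive at the same time is
# forced to request the tape drive (rank 1) before the printer (rank 12).
held = acquire_in_order("printer", "tape")
print(held)
release_all(held)
```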
DEADLOCK AVOIDANCE:
Deadlock avoidance ensures that the system will never enter an unsafe state.
This method of avoiding deadlocks requires additional information about
how resources are to be requested.
🡪 For example, in a system with one tape drive and one printer, we might be told that
process P will request first the tape drive, and later the printer, before releasing both
resources.
🡪 Process Q, on the other hand, will request first the printer, and then the tape drive.
With this knowledge of the complete sequence of requests and releases for each
process, we can decide for each request whether or not the process should wait.
🡪 Each request requires that the system consider the resources
⮚ currently available,
⮚ the resources currently allocated to each process, and
⮚ The future requests and releases of each process, to decide whether the current
request can be satisfied or must wait to avoid a possible future deadlock.
The various algorithms differ in the amount and type of information required.
🡪 The simplest model requires that each process declare the maximum number of
resources of each type that it may need. Given a priori information about the
maximum number of resources of each type that may be requested for each process,
it is possible to construct an algorithm that ensures that the system will never enter a
deadlock state. This algorithm defines the deadlock-avoidance approach.
🡪 A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that a circular wait condition can never exist.
🡪 The resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
Safe state:
⮚ A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
⮚ System is in safe state if there exists a safe sequence of all processes.
⮚ Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources that Pi can still request can
be satisfied by currently available resources + resources held by all the Pj, with j<i.
⮚ If the resources that process Pi needs are not immediately available, then Pi can wait until
all Pj have finished. When they have finished, Pi can obtain all of its needed resources,
complete its designated task, return its allocated resources, and terminate. When Pi
terminates, Pi+l can obtain its needed resources, and so on. If no such sequence exists,
then the system state is said to be unsafe.
⮚ A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe state.
Not all unsafe states are deadlocks; An unsafe state may lead to a deadlock. As long as
the state is safe, the operating system can avoid unsafe (and deadlock) states.
⮚ In an unsafe state, the operating system cannot prevent processes from requesting
resources such that a deadlock occurs
Safe, Unsafe, and Deadlock States

Example:
To illustrate, we consider a system with 12 magnetic tape drives and 3 processes: P0, P1, and P2.
Process P0 requires 10 tape drives, process P1 may need as many as 4, and process P2 may need
up to 9 tape drives. Suppose that, at time t0, process P0 is holding 5 tape drives, process P1 is
holding 2, and process P2 is holding 2 tape drives. (Thus, there are 3 free tape drives.)

Process    Maximum Needs    Currently Holding
P0         10               5
P1         4                2
P2         9                2

At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety
condition, since process P1 can immediately be allocated all its tape drives and then return
them (the system will then have 5 available tape drives),
Then process P0 can get all its tape drives and return them (the system will then have 10
available tape drives), and
Finally process P2 could get all its tape drives and return them (the system will then have
all 12 tape drives available).
A system may go from a safe state to an unsafe state.
Suppose that, at time t1, process P2 requests and is allocated 1 more tape drive. The
system is no longer in a safe state.
⮚ At this point, only process P1 can be allocated all its tape drives. When it returns
them, the system will have only 4 available tape drives.
⮚ Since process P0 is allocated 5 tape drives, but has a maximum of 10, it may then
request 5 more tape drives. Since they are unavailable, process P0 must wait.
⮚ Similarly, process P2 may request an additional 6 tape drives and have to wait,
resulting in a deadlock.
⮚ Our mistake was in granting the request from process P2 for 1 more tape drive. If
we had made P2 wait until either of the other processes had finished and released
its resources, then we could have avoided the deadlock.
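The tape-drive example can be checked mechanically. The sketch below is a minimal single-resource-type safety check; the function `is_safe` is a hypothetical helper written for illustration, not an algorithm given in the text (the general multi-resource version is the safety algorithm presented later).

```python
def is_safe(available, max_need, alloc):
    """Return a safe sequence of process indices, or None if the state is unsafe.

    available : number of free drives
    max_need  : maximum drives each process may need
    alloc     : drives each process currently holds
    """
    n = len(max_need)
    need = [m - a for m, a in zip(max_need, alloc)]
    finish = [False] * n
    seq = []
    while len(seq) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and need[i] <= available:
                available += alloc[i]   # Pi runs to completion and returns its drives
                finish[i] = True
                seq.append(i)
                progressed = True
        if not progressed:
            return None                 # no process can proceed: unsafe
    return seq

# At time t0: 12 drives total, P0/P1/P2 hold 5/2/2, so 3 are free -> safe.
print(is_safe(3, [10, 4, 9], [5, 2, 2]))
# At time t1, after granting P2 one more drive: only 2 free -> unsafe.
print(is_safe(2, [10, 4, 9], [5, 2, 3]))
```

The first call finds the sequence <P1, P0, P2> from the text; the second returns None, confirming that granting P2 the extra drive left the system unsafe.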
Given the concept of a safe state, we can define avoidance algorithms that ensure that the
system will never deadlock. The idea is simply to ensure that the system will always
remain in a safe state. Initially, the system is in a safe state.
Whenever a process requests a resource that is currently available, the system must
decide whether the resource can be allocated immediately or whether the process must
wait. The request is granted only if the allocation leaves the system in a safe state.
In this scheme, if a process requests a resource that is currently available, it may still
have to wait. Thus, resource utilization may be lower than it would be without a
deadlock- avoidance algorithm.
Resource-Allocation Graph Algorithm
This algorithm is mainly used to avoid deadlock in systems that have only one
instance of each resource type.
Resource Allocation graph contains two edges
1. Request edge Pi 🡪 Rj : process Pi requests a single instance of resource Rj.
2. Assignment edge Rj 🡪 Pi : process Pi holds a single instance of resource Rj.
Variant of the resource-allocation graph can be used for deadlock avoidance.
In addition to the request and assignment edges, we introduce a new type of edge, called
a claim edge.
🡪 A claim edge Pi 🡪 Rj indicates that process Pi may request resource Rj in the future;
it is represented by a dashed line.
🡪 When process Pi requests resource Rj, the claim edge Pi 🡪 Rj is converted to a request
edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj 🡪 Pi
is reconverted to a claim edge Pi 🡪 Rj.
🡪 We note that the resources must be claimed a priori in the system. That is, before
process Pi starts executing, all its claim edges must already appear in the resource-
allocation graph. We can relax this condition by allowing a claim edge Pi 🡪 Rj to be
added to the graph only if all the edges associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi 🡪 Rj to an assignment edge Rj 🡪 Pi does not result in the
formation of a cycle in the resource-allocation graph.

🡪 If no cycle exists, then the allocation of the resource will leave the system in a safe
state.
🡪 If a cycle is found, then the allocation will put the system in an unsafe state.
Therefore, process Pi will have to wait for its requests to be satisfied.
Note that we check for safety by using a cycle-detection algorithm. An algorithm for
detecting a cycle in this graph requires an order of n² operations, where n is the number
of processes in the system.
Example:
To illustrate this algorithm, we consider the following resource-allocation graph,

Resource-Allocation Graph For Deadlock Avoidance


Suppose that P2 requests R2. Although R2 is currently free, we cannot allocate it to P2, since this
action will create a cycle in the graph .This is shown in given below.

Unsafe State In Resource-Allocation Graph

A cycle indicates that the system is in an unsafe state. If P1 requests R2, and P2 requests R1, then
a deadlock will occur.

BANKER'S ALGORITHM: (for deadlock avoidance)

The resource-allocation graph algorithm is not applicable to a resource-allocation system
with multiple instances of each resource type.
Banker’s algorithm is used to avoid deadlock in resource allocation system with multiple
instances of each resource type.
This algorithm is commonly known as the banker's algorithm. The name was chosen
because this algorithm could be used in a banking system to ensure that the bank never
allocates its available cash such that it can no longer satisfy the needs of all its customers.
When a new process enters the system,
🡪 it must declare the maximum number of instances of each resource type that it may
need. This number may not exceed the total number of resources in the system.
🡪 When a user requests a set of resources, the system must determine whether the
allocation of these resources will leave the system in a safe state. If it will, the
resources are allocated; otherwise, the process must wait until some other process
releases enough resources.
Data Structures for the Banker’s Algorithm:
Let n = number of processes, and m = number of resources types

Available: Vector of length m. If available [j] = k, there are k instances of resource type
Rj available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of
resource type Rj.
Allocation: n x m matrix. If Allocation [i,j] = k then Pi is currently allocated k
instances of Rj.
Need: n x m matrix. If Need [i,j] = k, then Pi may need k more instances of Rj to
complete its task.
Need [i,j] = Max [i,j] – Allocation [i,j].
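The Need matrix falls directly out of this definition. A one-line sketch, using the five-process, three-resource snapshot that appears later in this section (the helper name `need_matrix` is illustrative):

```python
def need_matrix(max_, alloc):
    """Need[i][j] = Max[i][j] - Allocation[i][j], element by element."""
    return [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_, alloc)]

# Max and Allocation for processes P0..P4 over resource types A, B, C.
max_  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(need_matrix(max_, alloc))
```

This reproduces the Need matrix computed in the example: P0 needs (7 4 3), P1 (1 2 2), P2 (6 0 0), P3 (0 1 1), P4 (4 3 1).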
Safety Algorithm:
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1. Let Work and Finish be vectors of length m and n, respectively.
Initialize: Work = Available
Finish[i] = false for i = 1, 2, 3, …, n.
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work=Work + Allocationi
Finish[i] =true
go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state

This algorithm may require an order of m × n² operations to decide whether a state is safe.
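The four steps above translate almost line for line into code. A sketch of the safety algorithm, run against the five-process example given below (the function name `safety` is illustrative). Note that step 2 only says "find an i", so the order in which candidates are picked is implementation-dependent: the worked table below finds <P1, P3, P4, P2, P0>, while this forward scan finds <P1, P3, P4, P0, P2>; both are valid safe sequences.

```python
def safety(available, max_, alloc):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(max_), len(available)
    need = [[max_[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work = available[:]                      # step 1: Work = Available
    finish = [False] * n
    seq = []
    while len(seq) < n:
        found = False
        for i in range(n):                   # step 2: unfinished Pi with Needi <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):           # step 3: Pi finishes, returns its allocation
                    work[j] += alloc[i][j]
                finish[i] = True
                seq.append(i)
                found = True
        if not found:
            return None                      # step 4: some Finish[i] false -> unsafe
    return seq

# The five-process, three-resource example (A=10, B=5, C=7 instances).
available = [3, 3, 2]
max_  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(safety(available, max_, alloc))
```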

Resource-Request Algorithm

Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Pretend to allocate requested resources to Pi by modifying the state as
follows: Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;

If the resulting state is safe, the resources are allocated to Pi.

If it is unsafe, Pi must wait, and the old resource-allocation state is restored.
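The three steps of the resource-request algorithm can be sketched as follows. This is an illustrative implementation under the same example data as below; the names `request_resources` and `safe` are hypothetical helpers, and granting mutates the caller's state in place so a denied request can restore it.

```python
def safe(available, max_, alloc):
    """Safety check: True iff every process can finish in some order."""
    n, m = len(max_), len(available)
    need = [[max_[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work, finish, done = available[:], [False] * n, 0
    while True:
        found = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += alloc[i][j]
                finish[i], found = True, True
                done += 1
        if not found:
            return done == n

def request_resources(i, request, available, max_, alloc):
    """Resource-request algorithm for Pi; returns True if the request is granted."""
    m = len(available)
    need = [max_[i][j] - alloc[i][j] for j in range(m)]
    if any(request[j] > need[j] for j in range(m)):          # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):     # step 2: must wait
        return False
    for j in range(m):                                       # step 3: pretend-allocate
        available[j] -= request[j]
        alloc[i][j] += request[j]
    if safe(available, max_, alloc):
        return True
    for j in range(m):                  # unsafe: restore the old state, Pi waits
        available[j] += request[j]
        alloc[i][j] -= request[j]
    return False

available = [3, 3, 2]
max_  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(request_resources(1, [1, 0, 2], available, max_, alloc))  # P1 asks for (1,0,2)
```

As in the worked example, P1's request (1, 0, 2) is granted, leaving Available = (2, 3, 0), while a subsequent request of (0, 2, 0) by P0 would be denied because the pretend-allocation is unsafe.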

Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken:

Process    Allocation    Max      Available
           A B C         A B C    A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Calculate the Need Matrix:
The content of the matrix Need is defined to be Max - Allocation and is
Processes Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
Calculate the safe sequence:
Initially Work = Available = (3 3 2), Finish = [F F F F F]

Process  Needi    Work     Needi ≤ Work?  Work = Work + Allocationi    Finish [P0 P1 P2 P3 P4]
P0       7 4 3    3 3 2    No             —                            F F F F F
P1       1 2 2    3 3 2    Yes            3 3 2 + 2 0 0 = 5 3 2        F T F F F
P2       6 0 0    5 3 2    No             —                            F T F F F
P3       0 1 1    5 3 2    Yes            5 3 2 + 2 1 1 = 7 4 3        F T F T F
P4       4 3 1    7 4 3    Yes            7 4 3 + 0 0 2 = 7 4 5        F T F T T
P2       6 0 0    7 4 5    Yes            7 4 5 + 3 0 2 = 10 4 7       F T T T T
P0       7 4 3    10 4 7   Yes            10 4 7 + 0 1 0 = 10 5 7      T T T T T

For all finish[i] is True so the system is in safe state.


Safe Sequence is < P1, P3, P4, P2, P0>

Suppose now that process P1 requests one additional instance of resource type A and two
instances of resource type C,
So Request1 = (1, 0, 2).
To decide whether this request can be immediately granted, we first check that
Request1 ≤ Available, that is, (1, 0, 2) ≤ (3, 3, 2), which is true.

We then pretend that this request has been fulfilled, and we arrive at the following new state:

Process    Allocation    Need     Available
           A B C         A B C    A B C
P0 0 1 0 7 4 3 2 3 0
P1 3 0 2 0 2 0
P2 3 0 2 6 0 0
P3 2 1 1 0 1 1
P4 0 0 2 4 3 1
Here we use the resource-request algorithm to decide whether the request of P1 can be granted.
Now calculate the safe sequence:

Initially Work = (2 3 0)

Process  Needi    Work     Needi ≤ Work?  Work = Work + Allocationi    Finish [P0 P1 P2 P3 P4]
P0       7 4 3    2 3 0    No             —                            F F F F F
P1       0 2 0    2 3 0    Yes            2 3 0 + 3 0 2 = 5 3 2        F T F F F
P2       6 0 0    5 3 2    No             —                            F T F F F
P3       0 1 1    5 3 2    Yes            5 3 2 + 2 1 1 = 7 4 3        F T F T F
P4       4 3 1    7 4 3    Yes            7 4 3 + 0 0 2 = 7 4 5        F T F T T
P2       6 0 0    7 4 5    Yes            7 4 5 + 3 0 2 = 10 4 7       F T T T T
P0       7 4 3    10 4 7   Yes            10 4 7 + 0 1 0 = 10 5 7      T T T T T

For all finish[i] is TRUE so the system is in safe state.


Safe Sequence is < P1, P3, P4, P2, P0>. So we can immediately grant the request of process P1.

🡪 If the system is in this state, a request for (3, 3, 0) by P4 cannot be granted, since the
resources are not available (Available is (2, 3, 0)).
🡪 A request for (0, 2, 0) by P0 cannot be granted, even though the resources are available,
since the resulting state is unsafe.

Deadlock Detection:
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then a deadlock situation may occur. In this environment, the system must provide:
An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
An algorithm to recover from the deadlock.

Single Instance of Each Resource Type:


If all resources have only a single instance, then we can define a deadlock detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph.
We obtain this graph from the resource-allocation graph by removing the nodes of type
resource and collapsing the appropriate edges.
More precisely, an edge from Pi to Pj in a wait-for graph implies that process Pi is
waiting for process Pj to release a resource that Pi needs.
An edge Pi 🡪 Pj exists in a wait-for graph if and only if the corresponding resource-
allocation graph contains two edges Pi 🡪 Rq and Rq 🡪 Pj for some resource Rq.
For example, see a resource-allocation graph and the corresponding wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for graph contains a
cycle.
To detect deadlocks, the system needs to maintain the wait-for graph and periodically to
invoke an algorithm that searches for a cycle in the graph.
An algorithm to detect a cycle in a graph requires an order of n² operations, where n is
the number of vertices in the graph.
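Cycle detection in a wait-for graph is a standard depth-first search: a back edge to a node still on the DFS stack closes a cycle, and therefore signals a deadlock. A minimal sketch; the graph representation (a dict mapping each process to the processes it waits on) and the name `has_cycle` are illustrative assumptions.

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:       # back edge: cycle => deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# Hypothetical wait-for graphs.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # cycle: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # chain: no deadlock
```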
Resource-Allocation Graph Corresponding wait-for graph

Several Instances of a Resource Type


The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.
The deadlock-detection algorithm that we describe next is applicable to such a system.
The algorithm employs several time-varying data structures that are similar to those used
in the banker's algorithm:
Data Structures used in deadlock-detection algorithm:
🡪 Available: A vector of length m indicates the number of available resources of
each type.
🡪 Allocation: An n x m matrix defines the number of resources of each type currently
allocated to each process.
🡪 Request: An n x m matrix indicates the current request of each process. If Request
[i,j] = k, then process Pi is requesting k more instances of resource type Rj.

Detection Algorithm:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise,
Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state.
Moreover, if Finish[i] == false, then Pi is deadlocked.
The algorithm requires an order of O(m × n²) operations to detect whether the system is in
a deadlocked state.
If Requesti ≤ Work, we can reclaim the resources of process Pi, since we assume that Pi is not
currently involved in a deadlock (its current request can be satisfied by the available resources).

If this assumption turns out to be wrong and a deadlock does form, it will be detected the next
time the deadlock-detection algorithm is invoked.
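The detection algorithm differs from the safety algorithm only in using Request instead of Need, and in its initialization of Finish. A sketch run against the example below (the function name `detect_deadlock` is illustrative):

```python
def detect_deadlock(available, alloc, request):
    """Deadlock-detection algorithm.
    Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(alloc), len(available)
    work = available[:]                                       # step 1(a)
    # Step 1(b): a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in alloc[i]) for i in range(n)]
    changed = True
    while changed:                                            # steps 2-3
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += alloc[i][j]   # optimistically reclaim Pi's resources
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]             # step 4

# Snapshot from the example below: A=7, B=2, C=6 instances in total.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
req1  = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
req2  = [[0, 0, 0], [2, 0, 2], [0, 0, 1], [1, 0, 0], [0, 0, 2]]  # P2 asks for one more C
print(detect_deadlock([0, 0, 0], alloc, req1))  # no deadlock
print(detect_deadlock([0, 0, 0], alloc, req2))  # P1..P4 deadlocked
```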

Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 7 instances, resource type B has 2 instances, and resource type C has 6 instances.
Suppose that, at time T0, we have the following resource-allocation state:

Process    Allocation    Request    Available
           A B C         A B C      A B C
P0         0 1 0         0 0 0      0 0 0
P1         2 0 0         2 0 2
P2         3 0 3         0 0 0
P3         2 1 1         1 0 0
P4         0 0 2         0 0 2

Execute the deadlock-detection algorithm:

Initially Work = (0 0 0), Finish[i] = false for all i.

Process  Requesti  Work     Requesti ≤ Work?  Work = Work + Allocationi    Finish [P0 P1 P2 P3 P4]
P0       0 0 0     0 0 0    Yes               0 0 0 + 0 1 0 = 0 1 0        T F F F F
P1       2 0 2     0 1 0    No                —                            T F F F F
P2       0 0 0     0 1 0    Yes               0 1 0 + 3 0 3 = 3 1 3        T F T F F
P3       1 0 0     3 1 3    Yes               3 1 3 + 2 1 1 = 5 2 4        T F T T F
P1       2 0 2     5 2 4    Yes               5 2 4 + 2 0 0 = 7 2 4        T T T T F
P4       0 0 2     7 2 4    Yes               7 2 4 + 0 0 2 = 7 2 6        T T T T T

Finish[i] = true for all i.

The sequence <P0, P2, P3, P1, P4> completes with Finish[i] = true for all i, so the system
is not in a deadlocked state.
Suppose now that process P2 makes one additional request for an instance of type C. The Request
matrix is modified as follows:

Processes Request
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
Running the detection algorithm again:
Process  Requesti  Work     Requesti ≤ Work?  Work = Work + Allocationi    Finish [P0 P1 P2 P3 P4]
P0       0 0 0     0 0 0    Yes               0 0 0 + 0 1 0 = 0 1 0        T F F F F
P1       2 0 2     0 1 0    No                —                            T F F F F
P2       0 0 1     0 1 0    No                —                            T F F F F
P3       1 0 0     0 1 0    No                —                            T F F F F
P4       0 0 2     0 1 0    No                —                            T F F F F
We can reclaim the resources held by process P0, but there are insufficient resources to
fulfill the other processes' requests.
A deadlock exists, consisting of processes P1, P2, P3, and P4.

Detection-Algorithm Usage:
When should we invoke the detection algorithm? The answer depends on two factors:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be invoked frequently.
🡪 Resources allocated to deadlocked processes will be idle until the deadlock can be
broken.
🡪 In addition, the number of processes involved in the deadlock cycle may grow.
Deadlocks occur only when some process makes a request that cannot be granted
immediately. This request may be the final request that completes a chain of waiting
processes.
🡪 In the extreme, we could invoke the deadlock detection algorithm every time a request
for allocation cannot be granted immediately.
🡪 In this case, we can identify not only the set of processes that is deadlocked, but also the
specific process that "caused" the deadlock.
Invoking the deadlock-detection algorithm for every request may incur a considerable
overhead in computation time.
🡪 A less expensive alternative is simply to invoke the algorithm at less frequent
intervals-for example, once per hour, or whenever CPU utilization drops below 40
percent. (A deadlock eventually cripples system throughput and will cause CPU
utilization to drop.)
🡪 If the detection algorithm is invoked at arbitrary points in time, there may be many
cycles in the resource graph. We would generally not be able to tell which of the many
deadlocked processes "caused" the deadlock.

Recovery from Deadlock


When a detection algorithm determines that a deadlock exists,
🡪 One possibility is to inform the operator that a deadlock has occurred, and to let the
operator deal with the deadlock manually.
🡪 The other possibility is to let the system recover from the deadlock automatically.
There are two options for breaking a deadlock.
1. One solution is simply to abort one or more processes to break the circular
wait.
2. The second option is to preempt some resources from one or more of the
deadlocked processes.
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods,
the system reclaims all resources allocated to the terminated processes.
🡪 Abort all deadlocked processes:
This method clearly will break the deadlock cycle, but at a great expense; these processes
may have computed for a long time, and the results of these partial computations must be
discarded and probably recomputed later.
🡪 Abort one process at a time until the deadlock cycle is eliminated:
This method incurs considerable overhead, since, after each process is aborted, a
deadlock-detection algorithm must be invoked to determine whether any processes are
still deadlocked.
Aborting a process may not be easy.
🡪 If the process was in the midst of updating a file, terminating it will leave that file in an
incorrect state. Similarly,
🡪 if the process was in the midst of printing data on the printer, the system must reset the
printer to a correct state before printing the next job.
If the partial termination method is used, then, given a set of deadlocked processes, we must
determine which process (or processes) should be terminated in an attempt to break the
deadlock. This determination is a policy decision, similar to CPU-scheduling problems.
🡪 The question is basically an economic one; we should abort those processes the
termination of which will incur the minimum cost.
🡪 Unfortunately, the term minimum cost is not a precise one. Many factors may determine
which process is chosen, including:
1. Priority of the process.
2. How long process has computed, and how much longer to completion.
3. Resources the process has used.
4. Resources process needs to complete.
5. How many processes will need to be terminated?
6. Is process interactive or batch?
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some
resources from processes and give these resources to other processes until the deadlock
cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim: Which resources and which processes are to be preempted?
🡪 As in process termination, we must determine the order of pre- emption to
minimize cost.
🡪 Cost factors may include such parameters as the number of resources a
deadlock process is holding, and the amount of time a deadlocked process
has thus far consumed during its execution.
2. Rollback: If we preempt a resource from a process, what should be done with that
process?
🡪 Clearly, it cannot continue with its normal execution; it is missing some
needed resource. We must roll back the process to some safe state, and
restart it from that state.
🡪 Since, in general, it is difficult to determine what a safe state is, the simplest
solution is a total rollback: Abort the process and then restart it.
3. Starvation: How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process?
🡪 In a system where victim selection is based primarily on cost factors, it may
happen that the same process is always picked as a victim. As a result, this
process never completes its designated task, a starvation situation that needs
to be dealt with in any practical system.
🡪 Clearly, we must ensure that a process can be picked as a victim only a (small)
finite number of times.
