Cs23431 Os Unit - III
PROCESS SYNCHRONIZATION:
❖ Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
❖ The critical section may be followed by an exit section.
❖ The remaining code is the remainder section.
❖ A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections
2. Progress. If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter its critical section next.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
PETERSON’S SOLUTION:
❖ Peterson’s solution is restricted to two processes that alternate execution between their critical
sections and remainder sections.
❖ The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote
the other process.
❖ Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];
❖ The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then
process Pi is allowed to execute in its critical section.
❖ The flag array is used to indicate if a process is ready to enter its critical section. For example, if
flag[i] is true, this value indicates that Pi is ready to enter its critical section.
❖ To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j,
thereby asserting that if the other process wishes to enter the critical section, it can do so.
❖ Similarly, to enter the critical section, process Pj first sets flag[j] to true and then sets turn to
the value i.
❖ The solution is correct, and thus ensures the following:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
MUTEX LOCKS:
❖ Mutex locks are used to protect critical regions and thus prevent race conditions
❖ A process must acquire the lock before entering a critical section; it releases the lock when it exits
the critical section.
❖ The acquire() function acquires the lock, and the release() function releases the lock.
❖ A mutex lock has a boolean variable available whose value indicates if the lock is available or
not.
❖ If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
❖ A process that attempts to acquire an unavailable lock is blocked until the lock is released.
❖ The definition of acquire() is as follows:
acquire() {
while (!available)
; /* busy wait */
available = false;
}
❖ The definition of release() is as follows:
release()
{
available = true;
}
❖ The main disadvantage of the implementation given here is that it requires busy waiting.
❖ While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the call to acquire().
❖ This type of mutex lock is also called a spinlock because the process “spins” while waiting for
the lock to become available.
❖ Busy waiting wastes CPU cycles that some other process might be able to use productively.
❖ Spinlocks do have an advantage, however, in that no context switch is required when a process
must wait on a lock, and a context switch may take considerable time
SEMAPHORES:
❖ A semaphore S is an integer variable that is accessed only through two standard atomic
operations: wait() and signal().
❖ The wait() operation was originally termed P (to test); the signal() operation was
originally called V (to increment).
❖ The definition of wait() is as follows:
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
❖ The definition of signal() is as follows:
signal(S) {
S++;
}
SEMAPHORE IMPLEMENTATION:
❖ The main disadvantage of this semaphore definition is that it requires busy waiting. While one
process is in its critical section, any other process that tries to enter its critical section must loop
continuously in the entry code.
❖ The mutual exclusion implementation with semaphores is given by
do
{
wait(mutex);
//critical section
signal(mutex);
//remainder section
}while(TRUE);
❖ To overcome the need for busy waiting, we can modify the wait() and signal() operations.
❖ When a process executes the wait() operation and finds that the semaphore value is not positive, it
must wait. However, rather than engaging in busy waiting, the process can block itself.
❖ The block operation places a process into a waiting queue associated with the semaphore.
❖ Then control is transferred to the CPU scheduler, which selects another process to execute.
❖ A process that is blocked, waiting on a semaphore S, should be restarted when some other process
executes a signal() operation.
❖ The process is restarted by a wakeup() operation, which changes the process from the waiting
state to the ready state.
❖ To implement semaphores under this definition, we define a semaphore as follows:
typedef struct {
int value;
struct process *list;
} semaphore;
❖ Each semaphore has an integer value and a list of processes. When a process must wait on a
semaphore, it is added to the list of processes.
❖ A signal() operation removes one process from the list of waiting processes and awakens that
process.
❖ The wait() semaphore operation can be defined as
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
❖ The signal() semaphore operation can be defined as
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
❖ The block() and wakeup() operations are provided by the operating system as basic system calls.
DEADLOCKS AND STARVATION:
❖ The implementation of a semaphore with a waiting queue may result in a situation where two or
more processes are waiting indefinitely for an event that can be caused only by one of the other
waiting processes. This situation is called a deadlock.
❖ Example: Consider a system consisting of two processes, P0 and P1, each accessing two
semaphores, S and Q, set to the value 1. Suppose P0 executes wait(S) and P1 executes wait(Q);
when P0 then executes wait(Q) and P1 executes wait(S), each process blocks waiting for a
signal() that the other can never execute.
❖ Another related problem is indefinite blocking, or starvation, in which a process waits
indefinitely within the semaphore, for example if processes are removed from the waiting list in
LIFO order.
DINING-PHILOSOPHERS PROBLEM:
❖ Five philosophers sit around a circular table, alternating between thinking and eating; a single
chopstick lies between each pair of neighbors, and a philosopher needs both adjacent chopsticks
to eat.
❖ When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
the chopsticks. When she is finished eating, she puts down both chopsticks.
❖ One simple solution is to represent each chopstick with a semaphore.
❖ A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She
releases her chopsticks by executing the signal () operation on the appropriate semaphores.
❖ The shared data is semaphore chopstick [5]; where all the elements of the chopsticks are
initialized to 1.
MONITORS:
❖ Although semaphores provide a convenient and effective mechanism for process synchronization,
using them incorrectly can result in timing errors that are difficult to detect.
❖ EXAMPLE: Suppose that a process interchanges the order in which the wait() and signal()
operations on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
...
critical section
...
wait(mutex);
❖ In this situation, several processes may be executing in their critical sections simultaneously,
violating the mutual-exclusion requirement.
❖ Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex);
...
critical section
...
wait(mutex);
❖ In this case, a deadlock will occur. To deal with such errors, a fundamental high-level
synchronization construct called the monitor type is used.
❖ A monitor type is an ADT that includes a set of programmer defined operations that are provided
with mutual exclusion within the monitor.
❖ The monitor type also declares the variables whose values define the state of an instance of that
type, along with the bodies of functions that operate on those variables.
❖ Thus, a function defined within a monitor can access only those variables declared locally within
the monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed
by only the local functions.
❖ The monitor construct ensures that only one process at a time is active within the monitor.
DINING-PHILOSOPHERS SOLUTION USING MONITORS:
❖ In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
❖ When a philosopher gets hungry she tries to pick up the two chopsticks that are closest to her.
❖ A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor.
❖ When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
the chopsticks. When she is finished eating, she puts down both chopsticks.
❖ The solution imposes the restriction that a philosopher may pick up her chopsticks only if both of
them are available.
❖ To code this solution, we need to distinguish among three states in which we may find a
philosopher. For this purpose, we introduce the following data structure:
enum {THINKING, HUNGRY, EATING} state[5];
❖ Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating:
(state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).
❖ We also declare condition self[5]; this allows philosopher i to delay herself when she is hungry
but unable to obtain the chopsticks she needs.
❖ Each philosopher, before starting to eat, must invoke the operation pickup().
❖ This may result in the suspension of the philosopher process.
❖ After the successful completion of the operation, the philosopher may eat. Following this, the
philosopher invokes the putdown() operation.
❖ Thus, philosopher i must invoke the operations pickup() and putdown() in the following
sequence:
DiningPhilosophers.pickup(i);
eat
DiningPhilosophers.putdown(i);
❖ This solution ensures that no two neighbors are eating simultaneously and that no deadlocks will
occur.
Unit II
DEADLOCK
What is deadlock?
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time,
the process enters a waiting state. Waiting processes may never again change state,
because the resources they have requested are held by other waiting processes. This
situation is called a deadlock.
🡪 A set of processes is in a deadlock state when every process in the set is waiting for an
event that can be caused only by another process in the set. Because all the processes are
waiting, none of them will ever cause any of the events that could wake up any of the
other members of the set, and all the processes continue to wait forever.
Example:
Two processes each want to record a scanned document on a CD. Process A requests
permission to use the scanner and is granted it. Process B is programmed differently and
requests the CD recorder first and is also granted it. Now A asks for the CD recorder, but
the request is denied until B releases it. Unfortunately, instead of releasing the CD
recorder B asks for the scanner. At this point both processes are blocked and will remain
so forever. This situation is called a deadlock.
Deadlock Characterization:
1. Mutual exclusion:
At least one resource must be held in a non-sharable mode; that is, only one process at a
time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait:
A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
3. No preemption:
Resources cannot be preempted; that is, a resource can be released only voluntarily by
the process holding it, after that process has completed its task.
4. Circular wait:
A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting
for a resource that is held by Pn and Pn is waiting for a resource that is held by P0.
Resource Allocation Graph:
🡪 Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph.
🡪 This graph consists of a set of vertices V and a set of edges E.
Edges:
Request edge:
A directed edge from process Pi to resource type Rj is denoted by Pi 🡪 Rj; it signifies
that process Pi has requested an instance of resource type Rj and is currently waiting for
that resource. A directed edge Pi 🡪 Rj is called a request edge.
Assignment edge:
A directed edge from resource type Rj to process Pi is denoted by Rj 🡪 Pi; it signifies
that an instance of resource type Rj has been allocated to process Pi. A directed edge
Rj 🡪 Pi is called an assignment edge.
[Figure: a request edge Pi 🡪 Rj (process Pi requests an instance of Rj) and an assignment
edge Rj 🡪 Pi (Pi is holding an instance of Rj).]
🡪 When process Pi requests an instance of resource type Rj, a request edge is inserted in
the resource-allocation graph.
🡪 When this request can be fulfilled, the request edge is instantaneously transformed to
an assignment edge.
🡪 When the process no longer needs access to the resource it releases the resource, and as
a result the assignment edge is deleted.
Example of a Resource Allocation Graph:
🡪 One instance of resource type R1.
🡪 Two instances of resource type R2.
🡪 One instance of resource type R3.
Given the definition of a resource-allocation graph, if the graph contains no cycles, then
no process in the system is deadlocked. If the graph does contain a cycle, then a
deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock
has occurred.
🡪 If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is
deadlocked. In this case, a cycle in the graph is both a necessary and a sufficient
condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily imply
that a deadlock has occurred.
🡪 In this case, a cycle in the graph is a necessary but not a sufficient condition for the
existence of deadlock.
Example: Resource Allocation Graph with A Deadlock.
Consider the above resource allocation graph,
Suppose that process P3 requests an instance of resource type R2. Since no resource
instance is currently available, a request edge P3🡪R2 is added to the graph. This is
depicted here.
P2🡪R3🡪P3🡪R2🡪P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held
by process P3. Process P3, on the other hand, is waiting for either process P1 or process P2 to
release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Example 2: Resource Allocation Graph With A Cycle But No Deadlock
Consider the resource allocation graph,
🡪 However, there is no deadlock. Observe that process P4 may release its instance of
resource type R2. That resource can then be allocated to P3, breaking the cycle.
🡪 So if a resource-allocation graph does not have a cycle, then the system is not in a
deadlock state. On the other hand, if there is a cycle, then the system may or may not
be in a deadlock state.
METHODS FOR HANDLING DEADLOCKS:
Principally, we can deal with the deadlock problem in one of three ways:
1. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
2. We can allow the system to enter a deadlock state, detect it, and recover.
3. We can ignore the problem altogether, and pretend that deadlocks never occur in the
system. This solution is used by most operating systems, including UNIX.
To ensure that deadlocks never occur, the system can use either deadlock prevention or a
deadlock-avoidance scheme.
🡪 Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold. These methods prevent deadlocks by
constraining how requests for resources can be made.
🡪 Deadlock avoidance, on the other hand, requires that the operating system be
given in advance additional information concerning which resources a process
will request and use during its lifetime. With this additional knowledge, we can
decide for each request whether or not the process should wait. To decide whether
the current request can be satisfied or must be delayed, the system must consider
the resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process.
Deadlock Detection and Recovery: If a system does not employ either a deadlock-prevention
or a deadlock-avoidance algorithm, then a deadlock situation may occur. In
this environment, the system can provide an algorithm that examines the state of the
system to determine whether a deadlock has occurred, and an algorithm to recover from
the deadlock (if a deadlock has indeed occurred).
If a system does not ensure that a deadlock will never occur, and also does not
provide a mechanism for deadlock detection and recovery, then we may arrive at a
situation where the system is in a deadlock state yet has no way of recognizing what has
happened. In this case, the undetected deadlock will result in the deterioration of the
system performance, because resources are being held by processes that cannot run, and
because more and more processes, as they make requests for resources, enter a deadlock
state. Eventually, the system will stop functioning and will need to be restarted
manually. Although this method does not seem to be a viable approach to the deadlock
problem, it is nevertheless used in some operating systems.
DEADLOCK PREVENTION:
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual Exclusion
2. Hold and Wait
3. No Pre-emption
4. Circular wait
By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
Mutual Exclusion
The mutual-exclusion condition must hold for nonsharable resources.
🡪 For example, a printer cannot be simultaneously shared by several processes.
🡪 Sharable resources, on the other hand, do not require mutually exclusive access, and
thus cannot be involved in a deadlock. Read-only files are a good example of a
sharable resource. If several processes attempt to open a read-only file at the same
time, they can be granted simultaneous access to the file. A process never needs to
wait for a sharable resource.
In general, however, we cannot prevent deadlocks by denying the mutual-exclusion
condition: Some resources are intrinsically nonsharable.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a process requests a resource, it does not hold any other
resources.
Two Protocols (rule):
1. One protocol that can be used requires each process to request and be allocated all
its resources before it begins execution. We can implement this provision by
requiring that system calls requesting resources for a process precede all other system
calls.
2. Second protocol allows a process to request resources only when the process has
none. A process may request some resources and use them. Before it can request any
additional resources, however, it must release all the resources that it is currently
allocated.
Example:
🡪 To illustrate the difference between these two protocols, we consider a process that
copies data from a tape drive to a disk file, sorts the disk file, and then prints the
results to a printer.
🡪 If all resources must be requested at the beginning of the process, then the process
must initially request the tape drive, disk file, and printer. It will hold the printer
for its entire execution, even though it needs the printer only at the end.
🡪 The second method (Protocol) allows the process to request initially only the tape
drive and disk file. It copies from the tape drive to the disk, and then releases both
the tape drive and the disk file. The process must then again request the disk file
and the printer. After copying the disk file to the printer, it releases these two
resources and terminates.
These protocols have two main disadvantages.
🡪 First, resource utilization may be low, since many of the resources may be allocated
but unused for a long period. In the example given, for instance, we can release
the tape drive and disk file, and then again request the disk file and printer, only if
we can be sure that our data will remain on the disk file. If we cannot be assured
that they will, then we must request all resources at the beginning for both
protocols.
🡪 Second, starvation is possible. A process that needs several popular resources may
have to wait indefinitely, because at least one of the resources that it needs is
always allocated to some other process.
No Preemption
The third necessary condition is that there be no preemption of resources that have
already been allocated.
To ensure that this condition does not hold, we can use the following protocol.
🡪 If a process is holding some resources and requests another resource that cannot be
immediately allocated to it (that is, the process must wait), then all resources
currently being held are preempted.
🡪 In other words, these resources are implicitly released. The preempted resources are
added to the list of resources for which the process is waiting. The process will be
restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
🡪 Alternatively, if a process requests some resources, we first check whether they are
available. If they are, we allocate them. If they are not available, we check whether
they are allocated to some other process that is waiting for additional resources. If
so, we preempt the desired resources from the waiting process and allocate them to
the requesting process. If the resources are not either available or held by a waiting
process, the requesting process must wait.
🡪 While it is waiting, some of its resources may be preempted, but only if another
process requests them. A process can be restarted only when it is allocated the new
resources it is requesting and recovers any resources that were pre-empted while it
was waiting.
🡪 This protocol is often applied to resources whose state can be easily saved and
restored later, such as CPU registers and memory space. It cannot generally be
applied to such resources as printers and tape drives.
Circular Wait
⮚ The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types,
and to require that each process requests resources in an increasing order of
enumeration.
⮚ Let R = {R1, R2, ..., Rn} be the set of resource types. We assign to each resource type a
unique integer number, which allows us to compare two resources and to determine
whether one precedes another in our ordering.
🡪 Formally, we define a one-to-one function F: R 🡪 N, where N is the set of natural
numbers. For example, if the set of resource types R includes tape drives, disk
drives, and printers, then the function F might be defined as follows:
F (tape drive) = 1,
F (disk drive) = 5,
F (printer) = 12.
Protocols to prevent circular wait:
1. Each process can request resources only in an increasing order of enumeration.
That is, a process can initially request any number of instances of a resource type,
say Ri. After that, the process can request instances of resource type Rj if and only if
F(Rj) > F(Ri).
If several instances of the same resource type are needed, a single request for all
of them must be issued.
For example, using the function defined previously, a process that wants to use
the tape drive and printer at the same time must first request the tape drive and then
request the printer.
2. The second protocol states that, whenever a process requests an instance of resource type Rj,
it must have released any resources Ri such that F(Ri) ≥ F(Rj).
If these two protocols are used, then the circular-wait condition cannot hold.
We can demonstrate this fact by assuming that a circular wait exists (proof by
contradiction). Let the set of processes involved in the circular wait be {P0, P1, ..., Pn},
where Pi is waiting for a resource Ri, which is held by process Pi+1. (Modulo arithmetic is
used on the indexes, so that Pn is waiting for a resource Rn held by P0.)
Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must
have F(Ri) < F(Ri+1) for all i. But this condition means that F(R0) < F(R1) < ... < F(Rn) <
F(R0). By transitivity, F(R0) < F(R0), which is impossible. Therefore, there can be no
circular wait.
DEADLOCK AVOIDANCE:
Deadlock avoidance ensures that a system will never enter an unsafe state. This method
of avoiding deadlocks requires additional information about how resources are to be
requested.
🡪 For example, in a system with one tape drive and one printer, we might be told that
process P will request first the tape drive, and later the printer, before releasing both
resources.
🡪 Process Q, on the other hand, will request first the printer, and then the tape drive.
With this knowledge of the complete sequence of requests and releases for each
process, we can decide for each request whether or not the process should wait.
🡪 Each request requires that the system consider the resources
⮚ currently available,
⮚ the resources currently allocated to each process, and
⮚ The future requests and releases of each process, to decide whether the current
request can be satisfied or must wait to avoid a possible future deadlock.
The various algorithms differ in the amount and type of information required.
🡪 The simplest model requires that each process declare the maximum number of
resources of each type that it may need. Given a priori information about the
maximum number of resources of each type that may be requested for each process,
it is possible to construct an algorithm that ensures that the system will never enter a
deadlock state. This algorithm defines the deadlock-avoidance approach.
🡪 A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that a circular wait condition can never exist.
🡪 The resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
Safe state:
⮚ A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
⮚ System is in safe state if there exists a safe sequence of all processes.
⮚ Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources that Pi can still request can
be satisfied by currently available resources + resources held by all the Pj, with j<i.
⮚ If the resources that process Pi needs are not immediately available, then Pi can wait until
all Pj have finished. When they have finished, Pi can obtain all of its needed resources,
complete its designated task, return its allocated resources, and terminate. When Pi
terminates, Pi+1 can obtain its needed resources, and so on. If no such sequence exists,
then the system state is said to be unsafe.
⮚ A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe state.
Not all unsafe states are deadlocks; An unsafe state may lead to a deadlock. As long as
the state is safe, the operating system can avoid unsafe (and deadlock) states.
⮚ In an unsafe state, the operating system cannot prevent processes from requesting
resources in such a way that a deadlock occurs.
Safe, Unsafe, Deadlock State
Example:
To illustrate, we consider a system with 12 magnetic tape drives and 3 processes: P0, P1, and P2.
Process P0 requires 10 tape drives, process P1 may need as many as 4, and process P2 may need
up to 9 tape drives. Suppose that, at time t0, process P0 is holding 5 tape drives, process P1 is
holding 2, and process P2 is holding 2 tape drives. (Thus, there are 3 free tape drives.)
At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety
condition, since process P1 can immediately be allocated all its tape drives and then return
them (the system will then have 5 available tape drives),
Then process P0 can get all its tape drives and return them (the system will then have 10
available tape drives), and
Finally process P2 could get all its tape drives and return them (the system will then have
all 12 tape drives available).
A system may go from a safe state to an unsafe state.
Suppose that, at time t1, process P2 requests and is allocated 1 more tape drive. The
system is no longer in a safe state.
⮚ At this point, only process P1 can be allocated all its tape drives. When it returns
them, the system will have only 4 available tape drives.
⮚ Since process P0 is allocated 5 tape drives, but has a maximum of 10, it may then
request 5 more tape drives. Since they are unavailable, process P0 must wait.
⮚ Similarly, process P2 may request an additional 6 tape drives and have to wait,
resulting in a deadlock.
⮚ Our mistake was in granting the request from process P2 for 1 more tape drive. If
we had made P2 wait until either of the other processes had finished and released
its resources, then we could have avoided the deadlock.
Given the concept of a safe state, we can define avoidance algorithms that ensure that the
system will never deadlock. The idea is simply to ensure that the system will always
remain in a safe state. Initially, the system is in a safe state.
Whenever a process requests a resource that is currently available, the system must
decide whether the resource can be allocated immediately or whether the process must
wait. The request is granted only if the allocation leaves the system in a safe state.
In this scheme, if a process requests a resource that is currently available, it may still
have to wait. Thus, resource utilization may be lower than it would be without a
deadlock- avoidance algorithm.
Resource-Allocation Graph Algorithm
This algorithm is mainly used to avoid deadlock in systems that have only one instance
of each resource type.
Resource Allocation graph contains two edges
1. Request edge - Pi 🡪 Rj : process Pi requests a single instance of resource Rj.
2. Assignment edge - Rj 🡪 Pi : process Pi holds a single instance of resource Rj.
A variant of the resource-allocation graph can be used for deadlock avoidance.
In addition to the request and assignment edges, we introduce a new type of edge, called
a claim edge.
🡪 A claim edge Pi 🡪 Rj (drawn as a dashed line) indicates that process Pi may request
resource Rj in the future.
🡪 When process Pi requests resource Rj, the claim edge Pi 🡪 Rj is converted to a request
edge. Similarly, when a resource Rj is released by Pi, the assignment edge Rj 🡪 Pi
is reconverted to a claim edge Pi 🡪 Rj.
🡪 We note that the resources must be claimed a priori in the system. That is, before
process Pi starts executing, all its claim edges must already appear in the resource-
allocation graph. We can relax this condition by allowing a claim edge Pi 🡪 Rj to be
added to the graph only if all the edges associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi 🡪 Rj to an assignment edge Rj 🡪 Pi does not result in
the formation of a cycle in the resource-allocation graph.
🡪 If no cycle exists, then the allocation of the resource will leave the system in a safe
state.
🡪 If a cycle is found, then the allocation will put the system in an unsafe state.
Therefore, process Pi will have to wait for its requests to be satisfied.
Note that we check for safety by using a cycle-detection algorithm. An algorithm for
detecting a cycle in this graph requires an order of n² operations, where n is the number
of processes in the system.
Example:
To illustrate this algorithm, we consider the following resource-allocation graph,
A cycle indicates that the system is in an unsafe state: if P1 requests R2 and P2 requests R1, then a deadlock will occur.
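For single-instance resources, the avoidance check reduces to a cycle test on the graph after tentatively converting the edge for the new request. A minimal Python sketch; the adjacency-list encoding and the particular P/R scenario are illustrative assumptions, not from the notes:

```python
# Cycle check for deadlock avoidance with single-instance resources.
# The graph maps each node (process "Pi" or resource "Rj") to the nodes
# its outgoing edges point to; claim edges that have been converted to
# request/assignment edges are included in the test.

def has_cycle(graph):
    """Depth-first search for a back edge (i.e., a cycle)."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:   # back edge found
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Assignment edge R1 -> P1, request edge P2 -> R1, and P1 still has an
# outstanding (claim) edge to R2. No cycle yet.
graph = {"R1": ["P1"], "P2": ["R1"], "P1": ["R2"], "R2": []}
print(has_cycle(graph))   # False: the current state is safe

# If P2 requests R2 and we tentatively add the assignment edge R2 -> P2,
# the cycle R2 -> P2 -> R1 -> P1 -> R2 appears, so P2 must wait.
graph["R2"] = ["P2"]
print(has_cycle(graph))   # True: granting R2 to P2 would be unsafe
```

The DFS runs in O(V + E); the O(n²) bound quoted above corresponds to an adjacency-matrix formulation of the same test.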
Banker's Algorithm:
The banker's algorithm applies when resource types have multiple instances. It uses the following data structures, where n is the number of processes and m is the number of resource types:
Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n × m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n × m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j].
Safety Algorithm:
The algorithm for finding out whether or not a system is in a safe state can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m × n² operations to decide whether a state is safe.
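The safety algorithm can be sketched in Python as follows; the parameter names mirror the Available, Allocation, and Need structures defined above, and the two-process state at the end is an illustrative assumption:

```python
# Safety algorithm sketch: returns whether the state is safe and, if so,
# one safe sequence of process indices.

def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)          # Step 1: Work = Available
    finish = [False] * n            #         Finish[i] = false for all i
    sequence = []
    progressed = True
    while progressed:               # Steps 2-3: repeatedly pick some Pi with
        progressed = False          # Finish[i] == false and Need_i <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # reclaim Pi's resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence    # Step 4: safe iff every Finish[i] is true

# Illustrative state: one resource type with one free instance; P0 holds one
# instance and may need one more; P1 holds nothing and may need one.
safe, seq = is_safe(available=[1], allocation=[[1], [0]], need=[[1], [1]])
print(safe, seq)   # True [0, 1]
```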
Resource-Request Algorithm
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
If the resulting state is safe, the transaction is completed and Pi is allocated its resources. If the new state is unsafe, Pi must wait for Requesti and the old resource-allocation state is restored.
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken:
Processes Allocation Max   Available
          A B C      A B C A B C
P0        0 1 0      7 5 3 3 3 2
P1        2 0 0      3 2 2
P2        3 0 2      9 0 2
P3        2 1 1      2 2 2
P4        0 0 2      4 3 3
Suppose now that process P1 requests one additional instance of resource type A and two
instances of resource type C,
So Request1 = (1, 0, 2).
To decide whether this request can be immediately granted, we first check that
Request1 ≤ Available, that is, (1, 0, 2) ≤ (3, 3, 2), which is true.
We then pretend that this request has been fulfilled, and we arrive at the following new state:
Processes Allocation Need  Available
          A B C      A B C A B C
P0        0 1 0      7 4 3 2 3 0
P1        3 0 2      0 2 0
P2        3 0 2      6 0 0
P3        2 1 1      0 1 1
P4        0 0 2      4 3 1
Initially Work = Available = (2, 3, 0). Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement, so this request can be granted immediately.
🡪 If the system is in this state, a request for (3, 3, 0) by P4 cannot be granted, since the resources are not available: Available is only (2, 3, 0).
🡪 A request for (0, 2, 0) by P0 cannot be granted either, even though the resources are available, since the resulting state would be unsafe.
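The three outcomes above can be checked in code. This Python sketch combines the resource-request and safety algorithms; the Max and Allocation snapshot is an assumption (the standard five-process example, consistent with the stated totals of 10, 5, and 7 instances and with Available = (3, 3, 2)):

```python
# Banker's algorithm sketch: resource-request check backed by the safety
# algorithm. The snapshot below is assumed, consistent with the example text.

def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # reclaim Pi's resources
                finish[i] = True
                progressed = True
    return all(finish)

def request(pid, req, available, allocation, need):
    m = len(available)
    if any(req[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(req[j] > available[j] for j in range(m)):
        return False                     # step 2: must wait, not available
    for j in range(m):                   # step 3: pretend to allocate
        available[j] -= req[j]
        allocation[pid][j] += req[j]
        need[pid][j] -= req[j]
    if is_safe(available, allocation, need):
        return True                      # safe: the allocation stands
    for j in range(m):                   # unsafe: restore the old state
        available[j] += req[j]
        allocation[pid][j] -= req[j]
        need[pid][j] += req[j]
    return False

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

print(request(1, [1, 0, 2], available, allocation, need))  # True: granted
print(request(4, [3, 3, 0], available, allocation, need))  # False: not available
print(request(0, [0, 2, 0], available, allocation, need))  # False: state would be unsafe
```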
Deadlock Detection:
If a system does not employ either a deadlock-prevention or a deadlock avoidance algorithm,
then a deadlock situation may occur. In this environment, the system must provide:
An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
An algorithm to recover from the deadlock.
Detection Algorithm:
The detection algorithm has the same structure as the safety algorithm, but it uses the Request matrix in place of Need:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available; for i = 0, 1, ..., n − 1, set Finish[i] = false if Allocationi ≠ 0, and Finish[i] = true otherwise.
2. Find an index i such that both Finish[i] == false and Requesti ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, then the system is in a deadlocked state; moreover, each process Pi with Finish[i] == false is deadlocked.
A deadlock that has formed will be detected the next time the deadlock-detection algorithm is invoked.
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource
type A has 7 instances, resource type B has 2 instances, and resource type C has 6 instances.
Suppose that, at time T0, we have the following resource-allocation state:
Processes Allocation Available
          A B C      A B C
P0        0 1 0      0 0 0
P1        2 0 0
P2        3 0 3
P3        2 1 1
P4        0 0 2
Processes Request
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
Finding a safe sequence:
Process Requesti Work  Requesti ≤ Work? Work = Work + Allocationi Finish
P0      0 0 0    0 0 0 yes              0 0 0 + 0 1 0 = 0 1 0     T F F F F
P1      2 0 2    0 1 0 no                                         T F F F F
P2      0 0 1    0 1 0 no                                         T F F F F
P3      1 0 0    0 1 0 no                                         T F F F F
P4      0 0 2    0 1 0 no                                         T F F F F
We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
A deadlock exists, consisting of processes P1, P2, P3, and P4.
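The trace above can be reproduced with a short Python sketch of the detection algorithm. The Request matrix is the one tabulated in the notes; the Allocation matrix and Available vector are assumptions consistent with the stated instance totals (7, 2, 6) and with the reported result:

```python
# Deadlock-detection sketch: like the safety algorithm, but it compares
# Work against Request rather than Need.

def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    # Finish[i] starts true only for processes holding no resources.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # reclaim Pi's resources
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]  # deadlocked processes

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]  # assumed
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 1], [1, 0, 0], [0, 0, 2]]  # from the table
available  = [0, 0, 0]                                                # assumed

print(detect_deadlock(available, allocation, request))   # [1, 2, 3, 4]
```

Only P0's row satisfies Requesti ≤ Work, so after P0's resources are reclaimed no further process can proceed, matching the trace.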
Detection-Algorithm Usage:
When should we invoke the detection algorithm? The answer depends on two factors:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be invoked frequently.
🡪 Resources allocated to deadlocked processes will be idle until the deadlock can be
broken.
🡪 In addition, the number of processes involved in the deadlock cycle may grow.
Deadlocks occur only when some process makes a request that cannot be granted
immediately. This request may be the final request that completes a chain of waiting
processes.
🡪 In the extreme, we could invoke the deadlock detection algorithm every time a request
for allocation cannot be granted immediately.
🡪 In this case, we can identify not only the set of processes that is deadlocked, but also the
specific process that "caused" the deadlock.
Invoking the deadlock-detection algorithm for every request may incur a considerable
overhead in computation time.
🡪 A less expensive alternative is simply to invoke the algorithm at less frequent intervals, for example once per hour, or whenever CPU utilization drops below 40 percent. (A deadlock eventually cripples system throughput and causes CPU utilization to drop.)
🡪 If the detection algorithm is invoked at arbitrary points in time, there may be many cycles in the resource graph. We would generally not be able to tell which of the many deadlocked processes "caused" the deadlock.