OS Unit 2
CPU Scheduling Criteria
1. CPU utilisation
2. Throughput
3. Turnaround time
4. Waiting time
5. Response time
1. CPU utilisation:
➢ The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
➢ CPU utilisation can range from 0 to 100 percent, but in a real system it typically varies from 40 to 90 percent depending on the load on the system.
➢ For example, suppose CPU utilisation is 100% under FCFS, 50% under SJF, and 75% under RR.
Here FCFS is the best scheduling algorithm with respect to CPU utilisation.
CPU utilisation should be maximised: only when the system makes efficient use of the CPU can we say an algorithm is the best by this criterion.
2. Throughput:
Throughput is the number of processes that complete their execution per time unit (the time unit may be 1 sec or 2 sec).
For example, suppose that under FCFS 5 processes complete in 1 sec, under SJF 3 processes complete in 1 sec, and under Round Robin 9 processes complete in 1 sec. Here Round Robin is the best algorithm.
Throughput is maximised when the CPU is kept busy executing processes.
3. Turnaround time:
It is the amount of time taken to execute a particular process: the time difference between the completion of a process and its arrival in the ready queue.
For example, suppose a process is submitted to the ready queue at time t0 and the CPU completes its execution at time t2.
Then TAT = t2 - t0 (TAT = Completion time - Arrival time). Turnaround time should be minimised.
4. Waiting time:
It specifies how much time a process waits in the ready queue before its execution.
Waiting time is the time difference between turnaround time and burst time:
WT = TAT - BT. Waiting time should be minimised.
5. Response time:
It is the amount of time from when a request is submitted until the first response is produced.
Response time (RT) = time the process first gets the CPU - process arrival time.
Response time should be minimised.
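The three formulas can be checked with a few lines of C. A minimal sketch, with made-up values for one process (arrival 2, burst 5, first scheduled at 4, completed at 9):

#include <stdio.h>

/* Minimal sketch: the three scheduling formulas for one process.
   All values are made up for illustration. */
int main(void) {
    int at = 2, bt = 5;        /* arrival time, burst time (ms)            */
    int start = 4, ct = 9;     /* first time on the CPU, completion time   */

    int tat = ct - at;         /* turnaround time = completion - arrival   */
    int wt  = tat - bt;        /* waiting time    = turnaround - burst     */
    int rt  = start - at;      /* response time   = first start - arrival  */

    printf("TAT=%d WT=%d RT=%d\n", tat, wt, rt);   /* TAT=7 WT=2 RT=2 */
    return 0;
}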
Scheduling Algorithms
The main goal of CPU scheduling algorithms is to make sure that the CPU is never idle, i.e., that the OS always has at least one of the processes in the ready queue ready for execution.
The objectives of a scheduling algorithm are as follows:
➢ Maximize CPU utilization: the OS keeps the CPU as busy as possible.
➢ Maximize throughput, and minimize turnaround time, waiting time, and response time.
1. First-Come, First-Served (FCFS) Scheduling
➢ It is the simplest scheduling algorithm; it schedules processes according to their arrival times.
Example 1: Consider a set of processes arriving at time zero ms.
Process   Arrival Time   Burst Time
P1        0              24
P2        0              3
P3        0              3
Here Burst time (BT): time required by the CPU to execute the process.
Arrival time (AT): time at which the process arrives in the ready queue.
Completion time (CT): time at which the process completes its execution.
Gantt chart / timeline chart (Generalized Activity Normalization Time Table): a chart showing a particular schedule, including the start and finish time of each process.
Gantt chart:
| P1 | P2 | P3 |
0    24   27   30

Process   AT   BT   CT   TAT   WT   RT
P1        0    24   24   24    0    0
P2        0    3    27   27    24   24
P3        0    3    30   30    27   27
Calculate the average turnaround time and waiting time:
Average TAT = total TAT of all processes / total no. of processes
            = (24 + 27 + 30)/3 = 81/3 = 27 ms
Average WT  = total WT of all processes / total no. of processes
            = (0 + 24 + 27)/3 = 51/3 = 17 ms
In non-preemptive scheduling algorithms, the waiting time (WT) and response time (RT) are the same.
In preemptive algorithms, the WT and RT are different.
• Now assume the processes arrive in the order P2, P3, P1:
| P2 | P3 | P1 |
0    3    6    30
In this order (P2, P3, P1), P2 and P3 do not wait long for their execution, because the burst times of P2 and P3 are only 3 ms while P1's is 24 ms: AWT = (0 + 3 + 6)/3 = 9/3 = 3 ms.
In the order (P1, P2, P3), P2 and P3 wait a long time for their execution, because P1's burst time of 24 ms comes first: AWT = (0 + 24 + 27)/3 = 51/3 = 17 ms.
Example 2:
Process   Arrival Time   Burst Time
P0        0              3
P1        2              6
P2        4              4
P3        6              5
P4        8              2
Gantt chart:
| P0 | P1 | P2 | P3 | P4 |
0    3    9    13   18   20
Process   AT   BT   CT   TAT   WT   RT
P0        0    3    3    3     0    0 - 0 = 0
P1        2    6    9    7     1    3 - 2 = 1
P2        4    4    13   9     5    9 - 4 = 5
P3        6    5    18   12    7    13 - 6 = 7
P4        8    2    20   12    10   18 - 8 = 10

Avg TAT = (3 + 7 + 9 + 12 + 12)/5 = 43/5 = 8.6 ms
Avg WT  = (0 + 1 + 5 + 7 + 10)/5  = 23/5 = 4.6 ms
Disadvantages:
If a process with a very high burst time arrives first, it is executed first even if processes with shorter burst times are waiting in the ready state. This is called the "convoy effect" problem.
FCFS performs poorly because the average waiting time is high, so it is not suitable for modern computers.
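As a sketch of how these averages are computed, the following C program replays Example 1 (all three processes arrive at time 0) under FCFS. The process data comes from the table above; the code itself is only an illustration:

#include <stdio.h>

/* FCFS sketch over Example 1 (P1, P2, P3 all arrive at time 0). */
int main(void) {
    int bt[] = {24, 3, 3};              /* burst times in arrival order       */
    int n = 3, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        total_wt  += time;              /* WT = start time - arrival (0 here) */
        time      += bt[i];             /* CPU runs the process to completion */
        total_tat += time;              /* TAT = CT - AT, with AT = 0         */
    }
    /* prints avg TAT=27.00 ms, avg WT=17.00 ms, matching the example */
    printf("avg TAT=%.2f ms avg WT=%.2f ms\n", total_tat / n, total_wt / n);
    return 0;
}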
2. Shortest-Job-First (SJF) Scheduling
Here "job" means process.
It is also called Shortest-Job-Next (SJN) and is a non-preemptive scheduling algorithm.
The process with the shortest burst time is scheduled first on the CPU; this is why it is called SJF/SJN.
If two processes have the same burst time, then FCFS is used to break the tie.
It is the best approach to minimise average waiting time.
In this algorithm, the processor (CPU) must know in advance how much time each process will take.
Example 1:
Process   Arrival Time   Burst Time
P1        2              1
P2        1              5
P3        4              1
P4        0              6
P5        2              3
Draw a Gantt chart:
| P4 | P1 | P3 | P5 | P2 |
0    6    7    8    11   16
Here P4 arrives in the ready queue at 0 ms, so it runs first.
Next, P1 and P3 have the same burst time, so we follow FCFS: P1 entered the ready queue before P3, so P1 executes first, then P3.
• Next, P5 has the shortest remaining burst time, i.e. 3 ms.
• Finally P2 executes, because it has the longest burst time, i.e. 5 ms.
Process   AT   BT   CT   TAT   WT   RT
P1        2    1    7    5     4    4
P2        1    5    16   15    10   10
P3        4    1    8    4     3    3
P4        0    6    6    6     0    0
P5        2    3    11   9     6    6
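The schedule above can be reproduced in code. Here is a minimal non-preemptive SJF sketch in C over Example 1's data (the tie between P1 and P3 is broken by arrival time, as in the notes); priority scheduling is the same loop with priority as the comparison key:

#include <stdio.h>

/* Non-preemptive SJF sketch over Example 1's data. At each step the CPU
   picks the arrived, unfinished process with the smallest burst time;
   ties are broken by arrival time (FCFS). */
int main(void) {
    int at[] = {2, 1, 4, 0, 2};          /* arrival times of P1..P5 */
    int bt[] = {1, 5, 1, 6, 3};          /* burst times of P1..P5   */
    int done[5] = {0}, n = 5, finished = 0, time = 0;
    double total_tat = 0, total_wt = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* shortest job among arrived ones */
            if (!done[i] && at[i] <= time &&
                (pick < 0 || bt[i] < bt[pick] ||
                 (bt[i] == bt[pick] && at[i] < at[pick])))
                pick = i;
        if (pick < 0) { time++; continue; }  /* CPU idle: nobody arrived yet */
        time += bt[pick];                    /* run the pick to completion   */
        done[pick] = 1;
        finished++;
        total_tat += time - at[pick];        /* TAT = CT - AT */
        total_wt  += time - at[pick] - bt[pick];
    }
    /* prints avg TAT=7.8 ms, avg WT=4.6 ms, matching the table above */
    printf("avg TAT=%.1f ms avg WT=%.1f ms\n", total_tat / n, total_wt / n);
    return 0;
}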
Advantages:
Maximum throughput.
Minimum average waiting and turnaround time, since the shortest processes execute first.
Better average response time than FCFS.
Disadvantages:
There is a chance of the starvation problem: if short processes keep arriving in the ready queue and executing, a long process can be blocked for an indefinite time; this is called starvation.
The convoy effect problem can also occur while a long process holds the CPU.
3. Priority Scheduling:
Priority scheduling exists in both non-preemptive and preemptive forms.
A priority is assigned to each process; the process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first-come, first-served basis.
Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Example 1 (preemptive; here a lower number means higher priority):
Process   Arrival Time   Priority   Burst Time
P0        0              5          10
P1        1              4          6
P2        3              2          2
P3        5              0          4
The running process is preempted whenever a higher-priority process arrives: P0's remaining burst drops from 10 to 9 when P1 arrives, P1's drops from 6 to 4 when P2 arrives, and P2 runs its full 2 ms before P3 arrives.
Gantt chart:
| P0 | P1 | P2 | P3 | P1 | P0 |
0    1    3    5    9    13   22
Find CT, TAT, WT, RT, average TAT, and average WT.
• Example 2 (non-preemptive; all processes arrive at time 0):
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Gantt chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
4. Round Robin (RR) Scheduling
Round Robin is a preemptive algorithm: each process runs for at most one time quantum, then goes to the back of the ready queue.
Example 1 (time quantum = 2 ms, as the Gantt chart shows):
Process   Arrival Time   Burst Time
P1        0              4
P2        1              5
P3        2              2
P4        3              1
P5        4              6
P6        6              3
Ready queue (order in which the CPU is granted):
P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5
Gantt chart:
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P6 | P5 | P2 | P6 | P5 |
0    2    4    6    8    9    11   13   15   17   18   19   21
Find CT, TAT, WT, RT, average TAT, and average WT.
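A minimal C sketch of this Round Robin example (time quantum 2 ms, data from the table above) that reproduces the Gantt chart order and computes the requested averages:

#include <stdio.h>

#define N  6
#define TQ 2   /* time quantum in ms */

int main(void) {
    /* P1..P6, already sorted by arrival time */
    int at[N] = {0, 1, 2, 3, 4, 6};
    int bt[N] = {4, 5, 2, 1, 6, 3};
    int rem[N], queue[32], head = 0, tail = 0, next = 0, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];
    while (next < N && at[next] <= time) queue[tail++] = next++;

    while (head < tail) {
        int p = queue[head++];
        int run = rem[p] < TQ ? rem[p] : TQ;    /* run one quantum or less */
        time += run;
        rem[p] -= run;
        /* processes that arrived meanwhile join before the preempted one */
        while (next < N && at[next] <= time) queue[tail++] = next++;
        if (rem[p] > 0)
            queue[tail++] = p;                  /* back of the ready queue */
        else {
            total_tat += time - at[p];          /* TAT = CT - AT */
            total_wt  += time - at[p] - bt[p];  /* WT = TAT - BT */
        }
    }
    /* prints avg TAT=10.83 ms, avg WT=7.33 ms for this data */
    printf("avg TAT=%.2f ms avg WT=%.2f ms\n", total_tat / N, total_wt / N);
    return 0;
}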
Practice Example 2:
Process   Arrival Time   Burst Time
P1        0              8
P2        0              2
P3        0              7
P4        0              3
P5        0              5
Advantages:
No convoy effect.
No starvation.
Every process is treated with the same priority.
Each process gets a fair share of the CPU.
Easy and simple to implement.
Disadvantages:
A lot of CPU time is lost to the large number of context switches.
5. Shortest Remaining Time First (SRTF)
The Shortest Remaining Time First (SRTF) scheduling algorithm is the preemptive version of the Shortest Job First (SJF) algorithm.
Example 1:
Process   Arrival Time   Burst Time
P1        2              1
P2        1              5
P3        4              1
P4        0              6
P5        2              3
(The working notes 6/5/4 and 3/2 track the remaining burst times of P4 and P5 as they are preempted.)
Ready queue (arrival order): P4, P2, P1, P5, P3
Gantt chart:
| P4 | P4 | P1 | P5 | P3 | P5 | P4 | P2 |
0    1    2    3    4    5    7    11   16
At time 0 ms only P4 is in the ready queue, so P4 runs. At 1 ms P2 arrives; P4's remaining time (6 - 1 = 5 ms) equals P2's burst time (5 ms), and since P4 arrived in the ready queue first, P4 continues running.
At 2 ms, processes P1 and P5 enter the ready queue. P1 has the lowest remaining time (1 ms), so P1 is executed.
At 3 ms P5 starts its execution, but it is stopped at 4 ms, because process P3 enters the ready queue with a lower remaining time (1 ms). So now P3 is executed.
After all the processes have arrived in the ready queue, no further preemption checks are needed:
Next P5 is executed, because it has the lowest remaining time (2 ms).
Next P4 is executed, because it has the lowest remaining time (4 ms). Finally P2 is executed (5 ms).
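The same example can be replayed in C. A per-millisecond SRTF sketch (ties broken by arrival time, matching the explanation above):

#include <stdio.h>

/* SRTF sketch: every millisecond, run the arrived process with the least
   remaining time; an earlier arrival breaks ties, as in the example. */
int main(void) {
    int at[] = {2, 1, 4, 0, 2};          /* arrival times of P1..P5 */
    int bt[] = {1, 5, 1, 6, 3};          /* burst times of P1..P5   */
    int rem[5], n = 5, left = n;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) rem[i] = bt[i];

    for (int time = 0; left > 0; time++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (rem[i] > 0 && at[i] <= time &&
                (pick < 0 || rem[i] < rem[pick] ||
                 (rem[i] == rem[pick] && at[i] < at[pick])))
                pick = i;
        if (pick < 0) continue;           /* CPU idle this ms */
        if (--rem[pick] == 0) {           /* pick finished in this ms */
            left--;
            total_tat += time + 1 - at[pick];
            total_wt  += time + 1 - at[pick] - bt[pick];
        }
    }
    /* prints avg TAT=6.6 ms, avg WT=3.4 ms for this data */
    printf("avg TAT=%.1f ms avg WT=%.1f ms\n", total_tat / n, total_wt / n);
    return 0;
}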
Practice example:
Process   Arrival Time   Burst Time
P2        1              4
P3        2              9
P4        3              5
6. Multilevel Queue Scheduling
In this algorithm, the ready queue is partitioned into several separate ready queues.
A multilevel queue is suitable when the processes can be classified into groups.
Each process is permanently assigned to one queue, based on some property such as priority.
Each queue then has its own scheduling algorithm.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The process scheduler then alternately selects jobs from each queue and assigns them to the CPU according to the algorithm assigned to that queue.
Multilevel Feedback Queue Scheduling:
In Multilevel Feedback Queue Scheduling (MFQS), the processes can move between the queues.
This movement helps avoid starvation: a process waiting too long in a low-priority queue can be moved to a high-priority queue, while a process taking too much CPU time is shifted to a lower-priority queue.
MFQS is a flexible scheduling algorithm.
It allows the different processes to move between queues.
It prevents CPU starvation.
It is the most complex scheduling algorithm.
Multiple-Processor Scheduling
A multiprocessor is a system that has more than one processor but shares the same memory, bus, and input/output devices.
The bus connects the processors to the RAM, to the I/O devices, and to all the other components of the computer.
A multiprocessor system keeps working even if one processor goes down: the rest of the system continues.
In multiple-processor scheduling, more than one processor (CPU) shares the load so that processes execute smoothly.
Approaches to Multiple Processor Scheduling
There are two approaches to multiple processor scheduling in the operating system: Symmetric
Multiprocessing and Asymmetric Multiprocessing.
1. Symmetric Multiprocessing (SMP):
In SMP, a system has more than one processor; each processor schedules itself, memory is shared among all the processors, and the input/output system is also shared. The whole system is connected by a single bus.
(Figure: SMP system)
In SMP, each processor is self-scheduling and all processors are treated equally.
Tasks of the OS are performed by the individual processors themselves.
All processors communicate with one another through shared memory.
In SMP, each processor takes a process from the common ready queue.
SMP systems are costlier, and these systems are complex to design.
2. Asymmetric Multiprocessing (AMP):
It is a multiprocessor interconnected system where the multiple interconnected processors are not all treated equally.
In this, one processor works as the master and the other processors work as slaves.
All the processors are managed by a single processor, which is called the master server.
The master server runs the operating-system processes and the slave servers run the user processes.
The memory and I/O devices are shared among all the processors, and all the processors are connected to a common bus.
This design is simple and reduces data sharing, which is why the system is called asymmetric multiprocessing.
AMP systems are easier to design.
(Figure: AMP system)
Processor Affinity:
A process has an affinity for the processor on which it runs.
That is, the system tries to avoid migrating a process from one processor to another and instead keeps the process running on the same processor. This is known as processor affinity.
There are two types of processor affinity:
1. Soft affinity: The system tries to keep a process running on the same processor but does not guarantee it; hence some processes may migrate between processors. This is called soft affinity.
2. Hard affinity: Hard affinity allows a process to specify a subset of processors on which it may run, so the process cannot migrate to processors outside that set. This is called hard affinity.
Some Linux systems implement soft affinity and also provide system calls like sched_setaffinity() that support hard affinity.
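A minimal Linux-specific sketch of hard affinity using sched_setaffinity(); error handling is kept minimal and the choice of CPU 0 is arbitrary:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Pin the calling process to CPU 0 (Linux-specific hard affinity). */
int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                 /* start with an empty CPU mask */
    CPU_SET(0, &set);               /* allow only CPU 0             */

    if (sched_setaffinity(0, sizeof(set), &set) == -1) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d now runs only on CPU 0\n", getpid());
    return 0;
}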
Load Balancing:
An important goal in a multiprocessor system is to balance the load between processors, so that one processor is not sitting idle while another is overloaded.
Load balancing keeps the workload evenly distributed across all processors in an SMP system.
Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute.
There are two general approaches to load balancing: push migration and pull migration.
1. Push Migration:
In push migration, a task routinely checks the load on each processor. If it finds an imbalance, it evenly
distributes the load on each processor by moving the processes from overloaded to idle or less busy
processors.
2. Pull Migration:
Pull Migration occurs when an idle processor pulls a waiting task from a busy processor for its execution.
System Call Interface for Process Management: fork, exit, wait, waitpid, exec
1. The fork() System Call:
➢ fork() is one of the most special and useful system calls in Linux/Unix systems.
➢ The fork() system call is used to create a new process, called the child process, which runs concurrently with the parent process.
➢ Alternatively, the parent process can wait until the child process completes its execution; only once the child terminates does the parent resume.
➢ The child process gets copies of the parent's program counter, registers, and open files.
fork() takes no parameters and returns an integer value. The different values returned by fork() are:
Negative value: creation of the child process was unsuccessful.
Zero: returned to the newly created child process.
Positive value: returned to the parent; this value is the process ID (PID) of the newly created child process.
Here the parent PID and child PID are different.
The returned process ID is of type pid_t, defined in sys/types.h. Normally, the process ID is an integer.
A process can use the function getpid() to retrieve the process ID assigned to it.
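A minimal sketch showing the three return values of fork():

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid < 0) {                       /* negative: creation failed          */
        perror("fork");
    } else if (pid == 0) {               /* zero: we are in the child          */
        printf("child: pid=%d parent=%d\n", getpid(), getppid());
    } else {                             /* positive: child's PID, in parent   */
        printf("parent: pid=%d child=%d\n", getpid(), pid);
    }
    return 0;
}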
2. The exit() System Call
exit() is a function/system call that is used to terminate a process.
This call indicates that execution is complete, which matters especially in a multi-threaded environment.
After the exit() system call, all the resources used by the process are reclaimed by the operating system and the process is then terminated. The call _Exit() is equivalent to _exit().
Synopsis:
#include <unistd.h>
void _exit(int status);

#include <stdlib.h>
void _Exit(int status);
3. The wait() System Call
As in the case of fork(), a child process is created and executes while the parent process is suspended until the child finishes.
The parent suspends itself by issuing the wait() system call.
After the child process ends its execution, the parent process resumes.
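A minimal sketch combining fork(), exit(), and wait(): the parent suspends until the child terminates, then reads its exit status (the status value 42 is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        printf("child running\n");
        exit(42);                        /* child terminates with status 42 */
    }
    int status;
    wait(&status);                       /* parent suspends until the child exits */
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));
    return 0;
}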
4. The exec() System Call
The exec() system call runs a program by replacing the current process image with a new process image.
The process itself remains the same (same PID), but the new program replaces its code, heap data, stack data, etc.
It runs the new program from its entry point by loading it into the current process's address space.
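A minimal sketch using one member of the exec() family, execlp(); the choice of the "ls -l" program is arbitrary:

#include <stdio.h>
#include <unistd.h>

/* Replace this process image with the "ls" program.
   If execlp() succeeds, nothing after it ever runs. */
int main(void) {
    execlp("ls", "ls", "-l", (char *)NULL);
    perror("execlp");                    /* reached only if exec failed */
    return 1;
}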
5. waitpid() System Call
The waitpid() system call suspends execution of the current process until a child specified by pid
argument has changed state.
By default, waitpid() waits only for terminated children, but this behaviour is modifiable via the options
argument.
Syntax of waitpid() :
pid_t waitpid(pid_t pid, int *status, int options);
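A minimal usage sketch: the parent waits for one specific child by PID rather than for any child (the child's sleep and return value are arbitrary):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        sleep(1);                         /* child: pretend to do some work */
        return 7;
    }
    int status;
    waitpid(pid, &status, 0);             /* block until *this* child changes state */
    if (WIFEXITED(status))
        printf("child %d exited with %d\n", pid, WEXITSTATUS(status));
    return 0;
}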
Deadlock Characterization
Necessary Conditions:
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion. At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait. A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
We emphasize that all four conditions must hold for a deadlock to occur. The circular-wait
condition implies the hold-and-wait condition, so the four conditions are not completely
independent.
Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.
• Pictorially we represent each process Pi as a circle and each resource type Rj as a rectangle.
• Since resource type Rj may have more than one instance, we represent each such instance as a dot
within the rectangle.
• Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle.
• When process Pi requests an instance of resource type Rj, a request edge is inserted in the
resource-allocation graph.
• When this request can be fulfilled, the request edge is instantaneously transformed to an
assignment edge. When the process no longer needs access to the resource, it releases the
resource; as a result, the assignment edge is deleted.
• If a resource category contains more than one instance, then the presence of a cycle in the resource-
allocation graph indicates the possibility of a deadlock, but does not guarantee one.
• In one such example, however, there is no deadlock: observe that process P4 may release its instance of resource type R2. That resource can then be allocated to P3, breaking the cycle.
• In summary, if a resource-allocation graph does not have a cycle, then the system is not in a
deadlocked state.
• If there is a cycle, then the system may or may not be in a deadlocked state.
• This observation is important when we deal with the deadlock problem.
Deadlock Prevention
❖ Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot hold. We elaborate on this approach by examining each of the four necessary conditions separately.
1. Mutual Exclusion:
The mutual-exclusion condition must hold for non-sharable resources.
For example, a printer cannot be simultaneously shared by several processes.
Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be
involved in a deadlock.
Read-only files are a good example of a sharable resource. If several processes attempt to open a
read-only file at the same time, they can be granted simultaneous access to the file.
A process never needs to wait for a sharable resource.
In general, however, we cannot prevent Deadlocks by denying the mutual-exclusion condition,
because some resources are intrinsically non-sharable.
2. Hold and Wait:
To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a process requests a resource, it does not hold any other resources.
One protocol that can be used requires each process to request and be allocated all its resources
before it begins execution.
An alternative protocol allows a process to request resources only when it has none. A process may
request some resources and use them.
Before it can request any additional resources, however, it must release all the resources that it is
currently allocated.
Example: To illustrate the difference between these two protocols, we consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.
• First method:
• If all resources must be requested at the beginning of the process, then the process must
initially request the DVD drive, disk file, and printer.
• It will hold the printer for its entire execution, even though it needs the printer only at the end.
• The second method allows the process to request initially only the DVD drive and disk file.
• It copies from the DVD drive to the disk and then releases both the DVD drive and the disk
file.
• The process must then again request the disk file and the printer. After copying the disk file to
the printer, it releases these two resources and terminates.
Both these protocols have two main disadvantages.
• First, resources may be allocated but unused for a long period. In the example we can release
the DVD drive and disk file, and then again request the disk file and printer, only if we can be
sure that our data will remain on the disk file. Otherwise, we must request all resources at the
beginning for both protocols.
• Second, starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some other
process.
3. No Preemption:
The third necessary condition for deadlocks is that there be no preemption of resources that have
already been allocated. To ensure that this condition does not hold, we can use the following
protocol.
If a process is holding some resources and requests another resource that cannot be immediately
allocated to it (that is, the process must wait), then all resources the process is currently holding are
preempted.
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not, we check whether they are allocated to some other
process that is waiting for additional resources.
If so, we preempt the desired resources from the waiting process and allocate them to the
requesting process.
If the resources are neither available nor held by a waiting process, the requesting process must
wait. While it is waiting, some of its resources may be preempted, but only if another process
requests them.
A process can be restarted only when it is allocated the new resources it is requesting and recovers
any resources that were preempted while it was waiting.
4. Circular Wait:
The fourth and final condition for deadlocks is the circular-wait condition: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.
Deadlock Avoidance:
The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this a priori information, it is possible to construct an algorithm that ensures that the system will never enter a deadlocked state. Such an algorithm defines the deadlock-avoidance approach.
I. Safe State:
• A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock.
• A sequence of processes <P1, P2, ... , Pn> is a safe sequence for the current allocation state if, for
each Pi, the resource requests that Pi can still make can be satisfied by the currently available
resources plus the resources held by all Pj, with j < i.
• In this situation, if the resources that Pi needs are not immediately available, then Pi can wait until
all Pj have finished.
• When they have finished, Pi can obtain all of its needed resources, complete its designated task, return its allocated resources, and terminate.
• If no such sequence exists, then the system state is said to be unsafe.
• A safe state is not a deadlocked state. Not all unsafe states are deadlocks, however.
• An unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid
unsafe (and deadlocked) states.
• In an unsafe state, the operating system cannot prevent processes from requesting resources in
such a way that a deadlock occurs.
II. Resource-Request Algorithm (banker's algorithm):
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the old resource-allocation state is restored.
Illustrative Example:
To illustrate the use of the banker's algorithm, consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type A has ten instances, resource type B has five instances, and resource type C has seven instances.
Suppose that, at time T0, the following snapshot of the system has been taken:

          Allocation   Max     Available
          A B C        A B C   A B C
P0        0 1 0        7 5 3   3 3 2
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3
We claim that the system is currently in a safe state. Indeed, the Sequence <P1, P3, P4, P0, P2>
satisfies the safety criteria.
Suppose now that process P1 requests one additional instance of resource type A and two instances
of resource type C, so Request1 = (1,0,2).
To decide whether this request can be immediately granted, we first check that Request1
≤ Available, that is, that (1,0,2) ≤ (3,3,2), which is true.
We then pretend that this request has been fulfilled, and we arrive at the following new state:
          Allocation   Need    Available
          A B C        A B C   A B C
P0        0 1 0        7 4 3   2 3 0
P1        3 0 2        0 2 0
P2        3 0 2        6 0 0
P3        2 1 1        0 1 1
P4        0 0 2        4 3 1
We must determine whether this new system state is safe. To do so, we execute our safety
algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Hence,
we can immediately grant the request of process P1.
Note, however, that when the system is in this state, a request for (3,3,0) by P4 cannot be granted, since the resources are not available.
i.e. we first check that Request4 ≤ Available, that is, that (3,3,0) ≤ (2,3,0), which is false.
Furthermore, a request for (0,2,0) by P0 cannot be granted, even though the resources are
available, since the resulting state is unsafe.
i.e we first check that Request0 ≤ Available, that is, that (0,2,0) ≤ (2,3,0), which is true.
But we don’t have a safety sequence if we grant the resources for P0 immediately.
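The safety check used above can be written in a few lines of C. This sketch runs on the post-request state (Allocation, Need, and Available = (2,3,0)); note that scanning processes in index order happens to find the safe sequence <P1, P3, P0, P2, P4>, which differs from the one quoted above but is equally valid:

#include <stdio.h>

/* Safety-algorithm sketch on the snapshot above (5 processes, 3 resources).
   Work starts at Available; a process can finish when Need <= Work. */
#define P 5
#define R 3

int main(void) {
    int alloc[P][R] = {{0,1,0},{3,0,2},{3,0,2},{2,1,1},{0,0,2}};
    int need[P][R]  = {{7,4,3},{0,2,0},{6,0,0},{0,1,1},{4,3,1}};
    int work[R] = {2,3,0};                 /* Available after granting Request1 */
    int finish[P] = {0};

    printf("safe sequence:");
    for (int count = 0; count < P; count++) {
        int found = -1;
        for (int i = 0; i < P && found < 0; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) ok = 0;   /* Pi cannot finish yet */
            if (ok) found = i;
        }
        if (found < 0) { printf(" -> UNSAFE\n"); return 1; }
        for (int j = 0; j < R; j++)        /* Pi finishes, releases resources */
            work[j] += alloc[found][j];
        finish[found] = 1;
        printf(" P%d", found);
    }
    printf(" (state is safe)\n");
    return 0;
}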
Example 2: Consider the following snapshot of a system:
          Allocation   Max
          A B C D      A B C D
P0        3 0 1 4      5 1 1 7
P1        2 2 1 0      3 2 1 1
P2        3 1 2 1      3 3 2 1
P3        0 5 1 0      4 6 1 2
P4        4 2 1 2      6 3 2 5
Using the banker’s algorithm, determine whether or not each of the following
states is unsafe. If the state is safe, illustrate the order in which the processes
may complete. Otherwise, illustrate why the state is
unsafe.
a. Available = (0, 3, 0, 1)
b. Available = (1, 0, 0, 2)
Example 3: Consider the following snapshot of a system:
          Allocation   Max
          A B C D      A B C D
P1        3 1 2 1      5 2 5 2
P2        2 1 0 3      2 3 1 6
P3        1 3 1 2      1 4 2 4
P4        1 4 3 2      3 6 6 5
b. If a request from process P1 arrives for (1, 1, 0, 0), can the request be granted immediately?
c. If a request from process P4 arrives for (0, 0, 2, 0), can the request be granted immediately?
Deadlock Detection:
You may wonder why we reclaim the resources of process Pi (in step 3) as soon as we
determine that Requesti ≤ Work (in step 2b). We know that Pi is currently not involved in a
deadlock (since Requesti ≤ Work). Thus, we take an optimistic attitude and assume that Pi will
require no more resources to complete its task; it will thus soon return all currently allocated
resources to the system. If our assumption is incorrect, a deadlock may occur later. That
deadlock will be detected the next time the deadlock-detection algorithm is invoked.
To illustrate this algorithm, we consider a system with five processes P0 through P4 and three resource
types A, B, and C. Resource type A has seven instances, resource type B has two instances, and resource
type C has six instances. Suppose that, at time T0, we have the following resource-allocation state:
          Allocation   Request   Available
          A B C        A B C     A B C
P0        0 1 0        0 0 0     0 0 0
P1        2 0 0        2 0 2
P2        3 0 3        0 0 0
P3        2 1 1        1 0 0
P4        0 0 2        0 0 2
We claim that the system is not in a deadlocked state. Indeed, if we execute our algorithm, we will find that the sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for all i.
Suppose now that process P2 makes one additional request for an instance of type C. The Request matrix is modified as follows:
          Request
          A B C
P0        0 0 0
P1        2 0 2
P2        0 0 1
P3        1 0 0
P4        0 0 2
We claim that the system is now deadlocked. Although we can reclaim the resources held by
process P0, the number of available resources is not sufficient to fulfill the requests of the other
processes. Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.
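The detection algorithm is the safety algorithm with Request in place of Need. A minimal C sketch on the deadlocked state above, which reports P1 through P4 as deadlocked:

#include <stdio.h>

/* Detection sketch: a process is runnable when Request <= Work; any process
   never marked finished is part of the deadlock. */
#define P 5
#define R 3

int main(void) {
    int alloc[P][R]   = {{0,1,0},{2,0,0},{3,0,3},{2,1,1},{0,0,2}};
    int request[P][R] = {{0,0,0},{2,0,2},{0,0,1},{1,0,0},{0,0,2}};
    int work[R] = {0,0,0}, finish[P] = {0};

    for (int changed = 1; changed; ) {       /* sweep until no more progress */
        changed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (request[i][j] > work[j]) ok = 0;
            if (ok) {                        /* assume Pi completes and releases */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = 1;
                changed = 1;
            }
        }
    }
    for (int i = 0; i < P; i++)
        if (!finish[i]) printf("P%d is deadlocked\n", i);
    return 0;
}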
Detection-Algorithm Usage:
When should we invoke the detection algorithm? The answer depends on two factors:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be invoked frequently. Resources
allocated to deadlocked processes will be idle until the deadlock can be broken. In addition, the
number of processes involved in the deadlock cycle may grow.
Recovery from Deadlock:
When a detection algorithm determines that a deadlock exists, the system must recover from it. There are two options for breaking a deadlock:
One is simply to abort one or more processes to break the circular wait.
The other is to preempt some resources from one or more of the deadlocked processes.
1. Process Termination:
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated processes.
a) Abort all deadlocked processes. This method clearly will break the deadlock cycle, but at
great expense. The deadlocked processes may have computed for a long time, and the results of these
partial computations must be discarded and probably will have to be recomputed later.
b) Abort one process at a time until the deadlock cycle is eliminated. This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be
invoked to determine whether any processes are still deadlocked.
2. Resource Preemption:
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? We must determine the order of preemption so as to minimize cost.
2. Rollback. Since, in general, it is difficult to determine what a safe state is, the simplest solution is a total rollback: abort the process and then restart it. Although it is more effective to roll back the process only as far as necessary to break the deadlock, this method requires the system to keep more information about the state of all running processes.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we guarantee that
resources will not always be preempted from the same process?
• In a system where victim selection is based primarily on cost factors, it may happen that the same
process is always picked as a victim.
• As a result, this process never completes its designated task, a starvation situation any practical
system must address.
• Clearly, we must ensure that a process can be picked as a victim only a (small) finite number of
times.
• The most common solution is to include the number of rollbacks in the cost factor.