Unit 2: Operating Systems
Process States:
When a process executes, it passes through different states. These states may differ across
operating systems.
In general, a process can have one of the following five states at a time: Start (new), Ready, Running, Waiting, and Terminated (exit).
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID) and keeps all the information needed to keep track of a process, including:
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process privileges: Required to allow/disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: The address of the next instruction to be executed for the process.
6. CPU registers: The CPU registers the process uses while in the running state.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory management information: Includes page table, memory limits, and segment table information, depending on the memory system used by the operating system.
9. Accounting information: Includes the amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O status information: Includes the list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems; a simplified PCB holds the fields listed above.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
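To make the structure concrete, a PCB can be pictured as a record with one field per item above. The following C sketch is illustrative only; the field names, types, and sizes are assumptions, not those of any particular operating system.

/* A simplified Process Control Block (illustrative sketch). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;              /* process ID                        */
    proc_state_t  state;            /* current process state             */
    int           priority;         /* CPU scheduling information        */
    unsigned long program_counter;  /* address of the next instruction   */
    unsigned long registers[16];    /* saved CPU register contents       */
    void         *page_table;       /* memory management information     */
    unsigned long cpu_time_used;    /* accounting information            */
    int           io_devices[8];    /* I/O status information            */
    struct pcb   *next;             /* link used by scheduling queues    */
} pcb_t;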
Process Scheduling
Definition
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using time
multiplexing.
What are Scheduling Queues?
All processes, upon entering into the system, are stored in the Job Queue.
Processes in the Ready state are placed in the Ready Queue.
Processes waiting for a device to become available are placed in Device
Queues. There are unique device queues available for each I/O device.
A new process is initially put in the Ready queue. It waits in the ready queue until it
is selected for execution (or dispatched). Once the process is assigned to the CPU and
is executing, one of several events can occur:
The process could issue an I/O request, and then be placed in the I/O queue.
The process could create a new sub-process and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the
ready state, and is then put back in the ready queue. A process continues this cycle
until it terminates, at which time it is removed from all queues and has its PCB and
resources deallocated.
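Each of these queues can be modeled as a FIFO linked list of PCBs. A minimal sketch, reusing the illustrative pcb_t record from above:

#include <stddef.h>   /* for NULL */

/* A FIFO queue of PCBs (illustrative sketch). */
typedef struct {
    pcb_t *head;   /* next process to be dispatched */
    pcb_t *tail;   /* where newly arriving PCBs go  */
} queue_t;

void enqueue(queue_t *q, pcb_t *p) {   /* insert at the tail */
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

pcb_t *dequeue(queue_t *q) {           /* remove from the head */
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

Moving a process between the ready queue and a device queue is then just a dequeue from one list and an enqueue onto the other.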
Schedulers:
Schedulers are special system software which handle process scheduling in various
ways.
Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler.
A long-term scheduler determines which programs are admitted to the system for
processing.
It selects processes from the queue and loads them into memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
It also controls the degree of multiprogramming.
If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system.
On some systems, the long-term scheduler may be absent or minimal.
Time-sharing operating systems have no long-term scheduler.
When a process changes state from new to ready, the long-term scheduler is in use.
Short Term Scheduler:
It is also called the CPU scheduler.
Its main objective is to increase system performance in accordance with the chosen
set of criteria.
It effects the change of a process from the ready state to the running state.
The CPU scheduler selects a process from among the processes that are ready to
execute and allocates the CPU to it.
The short-term scheduler decides which process to execute next; the dispatcher then
hands the CPU to the selected process. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping.
It removes the processes from the memory.
It reduces the degree of multiprogramming.
The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
In this condition, to remove the process from memory and make space for other
processes, the suspended process is moved to the secondary storage.
This process is called swapping, and the process is said to be swapped out or rolled
out.
Swapping may be necessary to improve the process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Its speed is lesser than the short-term scheduler. | Its speed is the fastest among the three. | Its speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.
Context Switch:
A context switch is the mechanism to store and restore the state or context of the
CPU in the Process Control Block so that a process execution can be resumed from the
same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single
CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from one process to another, the state of the
currently running process is stored into its process control block.
After this, the state for the process to run next is loaded from its own PCB and used
to set the PC, registers, etc.
At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state
must be saved and restored.
To reduce context-switching time, some hardware systems employ two or more sets
of processor registers.
When the process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
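In code, a context switch is just this bookkeeping performed around an architecture-specific register save and restore. The sketch below is pseudo-C: save_cpu_state and load_cpu_state are hypothetical helpers standing in for the assembly routines a real kernel would use.

/* Illustrative sketch only - real context switches are written in
 * assembly, since C cannot portably save and restore CPU registers. */
void context_switch(pcb_t *curr, pcb_t *next) {
    save_cpu_state(curr);    /* hypothetical: dump PC, registers, etc. into curr's PCB */
    curr->state = READY;     /* or WAITING, depending on why we switched */

    load_cpu_state(next);    /* hypothetical: restore PC, registers from next's PCB */
    next->state = RUNNING;   /* next now resumes exactly where it left off */
}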
Inter-process Communication
Processes executing concurrently in the operating system might be either
independent processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes
executing in the system; it is cooperating if it can.
Inter Process Communication (IPC) is a mechanism by which one process communicates
with another process. This usually occurs within a single system.
Communication can be of two types −
Between related processes initiating from only one process, such as parent and
child processes.
Between unrelated processes, or two or more different processes.
Processes can communicate with each other in the following two ways:
Shared Memory
Message passing
Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes.
This is done so that the processes can communicate with each other.
All POSIX systems, as well as Windows operating systems, use shared memory.
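As a sketch of how a POSIX process typically sets this up (the object name "/demo_shm" and the message are illustrative; error checking is omitted for brevity):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create (or open) a named shared-memory object and give it a size. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* Map the object into this process's address space. */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* Any other process that maps "/demo_shm" sees this write. */
    strcpy(mem, "hello from process p1");

    munmap(mem, 4096);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the object once both sides are done */
    return 0;
}

A second process would call shm_open and mmap with the same name to read the data; on older Linux toolchains this is linked with -lrt.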
Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other.
Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for inter-process communication and are used
by most operating systems.
If two processes p1 and p2 want to communicate with each other, they proceed
as follows:
Establish a communication link (if a link already exists, there is no need to
establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
send(message, destination) or send(message)
receive(message, host) or receive(message)
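POSIX message queues provide exactly these primitives. A minimal single-process sketch (the queue name "/demo_mq" is illustrative, and error checking is omitted):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    /* p1 and p2 would each open the same named queue; the queue is the link. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    /* send(message, destination): the named queue identifies the destination. */
    mq_send(mq, "ping", 5, 0);

    /* receive(message): blocks until a message is available. */
    char buf[128];                       /* must hold at least mq_msgsize bytes */
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("got: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}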
What is Thread?
A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, system registers
which hold its current working variables, and a stack which contains the execution
history.
A thread shares with its peer threads information such as the code segment, the data
segment, and open files.
When one thread alters a code segment memory item, all other threads see the change.
A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism.
Threads represent a software approach to improving the performance of the operating
system by reducing overhead; in this sense a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control.
Threads have been successfully used in implementing network servers and web
servers.
They also provide a suitable foundation for parallel execution of applications on
shared memory multiprocessors.
The following figure shows the working of a single-threaded and a multithreaded
process.
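The sharing described above can be demonstrated with POSIX threads: both threads below execute the same code and update the same global variable, while each keeps its own stack and program counter. The counts are illustrative.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;   /* data segment: visible to every thread in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize access to the shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* prints 200000 */
    return 0;
}

Compile with -pthread; without the mutex, the two threads would race on the shared counter.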
Advantages of User-Level Threads
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages of User-Level Threads
In a typical operating system, most system calls are blocking, so a blocking system
call by one thread blocks the entire process.
A multithreaded application cannot take advantage of multiprocessing.
Many to One Model
Many-to-one model maps many user level threads to one Kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a
blocking system call, the entire process is blocked. Only one thread can access
the Kernel at a time, so multiple threads are unable to run in parallel on
multiprocessors.
If the user-level thread library is implemented on an operating system whose kernel
does not support threads, all threads follow the many-to-one model.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads.
This model provides more concurrency than the many-to-one model. It also allows another
thread to run when a thread makes a blocking system call. It supports multiple threads
executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one
model.
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
3 | User-level thread is generic and can run on any operating system. | Kernel-level thread is specific to the operating system.
CPU scheduling is a process that allows one process to use the CPU while the execution of
another process is on hold (in the waiting state) due to the unavailability of a resource
such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the
system efficient, fast, and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term scheduler
(or CPU scheduler). The scheduler selects from among the processes in memory that are ready
to execute and allocates the CPU to one of them.
Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher
is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program from where it
left off last time.
The dispatcher should be as fast as possible, given that it is invoked during every process
switch. The time taken by the dispatcher to stop one process and start another is known
as the Dispatch Latency.
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O request
or invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on
completion of I/O).
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if one
exists in the ready queue) must be selected for execution. There is a choice, however, in
circumstances 2 and 3.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.
1. Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.
This scheduling method is used by Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.
It is the only method that can be used on certain hardware platforms, because it does not
require the special hardware (for example, a timer) needed for preemptive scheduling.
In non-preemptive scheduling, the scheduler does not interrupt a process running on the CPU
in the middle of its execution. Instead, it waits until the process completes its CPU burst,
and only then allocates the CPU to another process.
Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its
basic non-preemptive form) scheduling and Priority (non-preemptive version) scheduling, etc.
2. Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary
to run a task with a higher priority before another task, even though that task is currently
running. The running task is therefore interrupted for some time and resumed later, when the
higher-priority task has finished its execution.
Thus this type of scheduling is used mainly when a process switches either from the running
state to the ready state or from the waiting state to the ready state. The resources (that is,
CPU cycles) are allocated to a process for a limited amount of time and then taken away; the
process is placed back in the ready queue if it still has CPU burst time remaining. It stays
in the ready queue until it gets its next chance to execute.
Some Algorithms that are based on preemptive scheduling are Round Robin Scheduling (RR),
Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.
There are many different criteria to consider when judging the "best" scheduling algorithm;
they are:
1. CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept
working most of the time (ideally 100% of the time). In a real system, CPU usage
should range from 40% (lightly loaded) to 90% (heavily loaded).
2. Throughput
It is the total number of processes completed per unit of time or, in other words, the total
amount of work done in a unit of time. This may range from 10 per second to 1 per hour,
depending on the specific processes.
3. Turnaround Time
It is the amount of time taken to execute a particular process, i.e., the interval from the
time of submission of the process to the time of its completion (wall-clock time).
4. Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire
control of the CPU.
5. Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get
into the CPU.
6. Response Time
It is the amount of time from when a request is submitted until the first response is produced.
Remember, it is the time until the first response, not the completion of process execution
(the final response).
In general, CPU utilization and throughput are maximized and the other factors are minimized
for proper optimization.
FCFS Scheduling
Some terms are used while solving CPU-scheduling problems; for conceptual clarity, they are
discussed as follows −
Arrival time (AT) − Arrival time is the time at which the process arrives in the ready
queue.
Burst time (BT) or CPU time of the process − Burst time is the amount of CPU time a
particular process requires to complete its execution.
Completion time (CT) − Completion time is the time at which the process has been
terminated.
Turn-around time (TAT) − The total time from arrival time to completion time is
known as turn-around time: TAT = Completion time (CT) − Arrival time (AT), or
equivalently TAT = Burst time (BT) + Waiting time (WT).
Waiting time (WT) − Waiting time is the time the process waits for its CPU
allocation while other processes are executing: WT = Turn-around time (TAT) −
Burst time (BT).
Response time (RT) − Response time is the time at which the CPU is allocated to a
particular process for the first time.
In the case of non-preemptive scheduling, the waiting time and response time are
generally the same.
Gantt chart − A Gantt chart is a visualization that helps in scheduling and managing
particular tasks in a project. It is used while solving scheduling problems to show
how processes are allocated to the CPU under different algorithms.
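These definitions translate directly into code. The sketch below computes CT, TAT, and WT for processes served in FCFS order; the arrival and burst times are made-up illustrative values, not the (omitted) table from Problem 1 below.

#include <stdio.h>

int main(void) {
    /* Illustrative data: processes listed in arrival (FCFS) order. */
    int at[] = {0, 1, 2, 3};   /* arrival times */
    int bt[] = {5, 3, 8, 2};   /* burst times   */
    int n = 4, clock = 0;
    double sum_tat = 0, sum_wt = 0;

    for (int i = 0; i < n; i++) {
        if (clock < at[i]) clock = at[i];   /* CPU idles until the process arrives */
        clock += bt[i];                     /* completion time CT */
        int tat = clock - at[i];            /* TAT = CT - AT      */
        int wt  = tat - bt[i];              /* WT  = TAT - BT     */
        sum_tat += tat;
        sum_wt  += wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, clock, tat, wt);
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}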
Problem 1
Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.
Solution
Gantt chart
For this problem, CT, TAT, WT, and RT are shown in the given table.
Average Waiting time = (9+8+0+4+12)/5 = 33/5 = 6.6 time unit (time unit can be considered
as milliseconds)
Average Turn-around time = (11+14+4+11+16)/5 = 56/5 = 11.2 time unit (time unit can be
considered as milliseconds)
Problem 2
Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.
Solution
Gantt chart
For this problem, CT, TAT, WT, and RT are shown in the given table.
Average Waiting time = (0+0+2+4+8)/5 = 14/5 = 2.8 time unit (time unit can be considered as
milliseconds)
Average Turn-around time = (2+1+5+9+13)/5 = 30/5 = 6 time unit (time unit can be
considered as milliseconds)
*During an idle (inactive) CPU period, no process is scheduled, so the CPU remains unused
for that short time.
Advantages
It is an easy algorithm to implement, since it does not involve any complex logic.
Every task is executed in order of arrival, as FCFS follows a FIFO queue.
FCFS does not give priority to any particular task, so it is fair scheduling.
Disadvantages
FCFS suffers from the convoy effect: if a process with a long burst time arrives first
in the ready queue, processes with shorter burst times get blocked behind it and may be
unable to get the CPU while the long task runs.
If a process with a long burst time comes first in line, the other short-burst-time
processes have to wait a long time, so FCFS is not well suited to time-sharing systems.
Since it is non-preemptive, the CPU is not released before the running process completes
its task execution entirely.
*The convoy effect and starvation sound similar, but there is a slight difference between
them, so it is advised not to treat the two terms as the same.
Conclusion
Although FCFS is simple to implement and understand, it is not good for interactive
systems and is not used in modern operating systems. The convoy effect in FCFS can be
prevented by using preemptive CPU-scheduling algorithms such as Round Robin scheduling.
A CPU scheduling strategy is a procedure that selects one process from the ready queue and
assigns it to the CPU so that it can be executed. There are a number of scheduling algorithms.
In this section, we will learn about Shortest Job First or SJF scheduling algorithm.
In the Shortest Job First scheduling algorithm, the processes are scheduled in ascending order
of their CPU burst times, i.e. the CPU is allocated to the process with the shortest execution
time.
In the non-preemptive version, once a process is assigned to the CPU, it runs to completion.
Here, the short term scheduler is invoked when a process completes its execution or when a
new process(es) arrives in an empty ready queue.
This is the preemptive version of SJF scheduling and is also referred to as the Shortest
Remaining Time First (SRTF) scheduling algorithm. Here, if a short process enters the ready queue while a
longer process is executing, process switch occurs by which the executing process is swapped
out to the ready queue while the newly arrived shorter process starts to execute. Thus the short
term scheduler is invoked either when a new process arrives in the system or an existing
process completes its execution.
In cases where two or more processes have the same burst time, arbitration is done
among these processes on first come first serve basis.
There are both preemptive and non-preemptive versions.
It minimises the average waiting time of the processes.
It may cause starvation of long processes if short processes continue to come in the
system.
Example 1
Suppose that we have a set of four processes that have arrived at the same time in the order P1,
P2, P3 and P4. The burst time in milliseconds of each process is given by the following table −
Process CPU Burst Time in ms
P1 6
P2 10
P3 4
P4 6
Process P3 has the shortest burst time and so it executes first. Then we find that P1 and P4 have
equal burst time of 6ms. Since P1 arrived before, CPU is allocated to P1 and then to P4. Finally
P2 executes. Thus the order of execution is P3, P1, P4, P2, as shown in the following
GANTT chart −
P3 (0–4) | P1 (4–10) | P4 (10–16) | P2 (16–26)
Let us compute the average turnaround time and average waiting time from the above chart.
Average turnaround time = (10 + 26 + 4 + 16) / 4 = 14 ms
Average waiting time = (4 + 16 + 0 + 10) / 4 = 7.5 ms
Example 2
In the previous example, we had assumed that all the processes arrived at the same time, a
situation that is practically rare. Here, we consider the circumstance where the processes
arrive at different times. Suppose we have a set of four processes whose arrival times and CPU
burst times are as follows −
Process Arrival Time in ms CPU Burst Time in ms
P1 0 6
P2 4 10
P3 4 4
P4 8 3
Let us draw the GANTT chart and find the average turnaround time and average waiting time
using non-preemptive SJF algorithm.
GANTT Chart
P1 (0–6) | P3 (6–10) | P4 (10–13) | P2 (13–23)
While drawing the GANTT chart, we will consider which processes have arrived in the system
when the scheduler is invoked. At time 0ms, only P1 is there and so it is assigned to CPU. P1
completes execution at 6ms and at that time P2 and P3 have arrived, but not P4. P3 is assigned
to CPU since it has the shortest burst time among current processes. P3 completes execution at
time 10ms. By that time P4 has arrived and so SJF algorithm is run on the processes P2 and P4.
Hence, we find that the order of execution is P1, P3, P4, P2 as shown in the following GANTT
chart −
Let us calculate the turnaround time of each process and hence the average.
Average turnaround time = (6 + 19 + 6 + 5) / 4 = 9 ms
The waiting time is given by the time that each process waits in the ready queue. For a non-
preemptive scheduling algorithm, the waiting time of each process can simply be calculated
as its start time minus its arrival time −
WTP1 = 0 - 0 = 0 ms
WTP2 = 13 - 4 = 9 ms
WTP3 = 6 - 4 = 2 ms
WTP4 = 10 - 8 = 2 ms
Average waiting time = (0 + 9 + 2 + 2) / 4 = 3.25 ms
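The non-preemptive SJF schedule above can be checked with a short simulation: at every scheduling point, the shortest job that has already arrived is picked, with ties going to the earlier process. With the process set of Example 2 it reproduces the 3.25 ms average waiting time.

#include <stdio.h>

int main(void) {
    /* Process set from Example 2. */
    int at[] = {0, 4, 4, 8};
    int bt[] = {6, 10, 4, 3};
    int n = 4, done[4] = {0}, clock = 0, finished = 0;
    double sum_wt = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* shortest arrived, unfinished job */
            if (!done[i] && at[i] <= clock &&
                (pick < 0 || bt[i] < bt[pick]))
                pick = i;
        if (pick < 0) { clock++; continue; } /* CPU idle: nothing has arrived yet */
        clock += bt[pick];                   /* non-preemptive: run to completion */
        sum_wt += clock - at[pick] - bt[pick];
        done[pick] = 1;
        finished++;
        printf("P%d completes at %d ms\n", pick + 1, clock);
    }
    printf("Average WT = %.2f ms\n", sum_wt / n);   /* 3.25 */
    return 0;
}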
Let us now perform preemptive SJF (SRTF) scheduling on the following processes, draw the
GANTT chart, and find the average turnaround time and average waiting time.
Process Arrival Time in ms CPU Burst Time in ms
P1 0 8
P2 4 10
P3 4 3
P4 10 4
GANTT Chart
P1 (0–4) | P3 (4–7) | P1 (7–11) | P4 (11–15) | P2 (15–25)
Since this is a preemptive scheduling algorithm, the scheduler is invoked when a process
arrives and when it completes execution. The scheduler computes the remaining time of
completion for each of the processes in the system and selects the process having shortest
remaining time left for execution.
Initially, only P1 has arrived and so it is assigned to the CPU. At time 4 ms, P2 and P3
arrive. The scheduler computes the remaining completion times of the processes P1, P2 and
P3 as 4 ms, 10 ms and 3 ms. Since P3 has the shortest time, P1 is preempted by P3. P3
completes execution at 7 ms and the scheduler is invoked. Among the processes in the system,
P1 has the shortest remaining time and so it executes. At time 10 ms, P4 arrives and the
scheduler again computes the remaining times of each process. Since the remaining time of P1
is least, no process switch occurs and P1 continues to execute. In a similar fashion, the
rest of the processes complete execution.
From the GANTT chart, we compute the average turnaround time and the average waiting
time.
Average turnaround time = (11 + 21 + 3 + 5) / 4 = 10 ms
Average waiting time = (3 + 11 + 0 + 1) / 4 = 3.75 ms
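The same process set can be run through a tick-by-tick SRTF simulation: at every time unit the process with the shortest remaining time is chosen, so preemption falls out naturally. With the data above it reproduces the 3.75 ms average waiting time.

#include <stdio.h>

int main(void) {
    /* Process set from the SRTF example above. */
    int at[] = {0, 4, 4, 10};
    int bt[] = {8, 10, 3, 4};
    int n = 4, rem[4], left = n;
    double sum_wt = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    for (int clock = 0; left > 0; clock++) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* shortest remaining time wins */
            if (at[i] <= clock && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) continue;              /* CPU idle this tick */
        if (--rem[pick] == 0) {              /* process just completed */
            left--;
            sum_wt += (clock + 1) - at[pick] - bt[pick];  /* WT = CT - AT - BT */
        }
    }
    printf("Average WT = %.2f ms\n", sum_wt / n);   /* 3.75 */
    return 0;
}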
In both preemptive and non-preemptive methods, the average waiting time is reduced
substantially in SJF when compared to FCFS scheduling.
SJF optimizes turnaround time to a considerable degree.
If execution time of each process is estimated precisely, it promises maximum
throughput.
In situations where there is an incoming stream of processes with short burst times,
longer processes in the system may be waiting in the ready queue indefinitely leading to
starvation.
In preemptive SJF, i.e. SRTF, if all processes arrive at different times and at frequent
intervals, the scheduler may be always working and consequently the processor may be
more engaged in process switching than actual execution of the processes.
Correct estimation of the burst time of a process is complicated. Since the
effectiveness of the algorithm is entirely based on the burst times, an erroneous
estimate may cause inefficient scheduling.
Conclusion
Shortest Job First scheduling can be termed as the optimal scheduling algorithm due to its
theoretical best results. However, the implementation is much more complex and the execution
is more unpredictable than First Come First Serve or Round Robin scheduling.
This scheduling strategy derives its name from the age-old round-robin principle, which
advocated that all participants are entitled to an equal share of assets or opportunities
in a turn-wise manner. In RR scheduling, each process gets an equal time slice (or time
quantum) for which it executes on the CPU in a turn-wise manner. When a process gets its
turn, it executes for the
assigned time slice and then relinquishes the CPU for the next process in queue. If the process
has burst time left, then it is sent to the end of the queue. Processes enter the queue on first
come first serve basis.
Round Robin scheduling is preemptive, which means that a running process can be interrupted
by another process and sent to the ready queue even when it has not completed its entire
execution in CPU. It is a preemptive version of First Come First Serve (FCFS) scheduling
algorithm.
RR is a fair scheduling strategy where all processes get an equal share of the CPU in a
turn-wise manner.
It is a preemptive scheduling method where an executing process must give up its
control of the CPU once its time quantum expires.
Through RR scheduling strategy, none of the processes go into starvation.
It is widely used for its simple working principle.
The performance of RR scheduling is vastly dependent on the chosen time quantum.
Any new process that arrives in the system is inserted at the end of the ready queue in
FCFS manner.
The first process in the queue is removed and assigned to the CPU.
If the required burst time is less than or equal to the time quantum, the process runs to
completion. The scheduler is invoked when the process completes executing to let in the
next process in the ready queue to the CPU.
If the required burst time is more than the time quantum, the process executes up to the
allotted time quantum. Then its PCB (process control block) status is updated and it is
added to the end of the queue. Context switch occurs and the next process in the ready
queue is assigned to the CPU.
The above steps are repeated until there are no more processes in the ready queue.
We can understand the working of the RR scheduling algorithm with the aid of the following
example.
Let us consider a system that has four processes which have arrived at the same time in the
order P1, P2, P3 and P4. The burst time in milliseconds of each process is given by the
following table −
Process CPU Burst Times in ms
P1 8
P2 10
P3 6
P4 4
Let us consider a time quantum of 2 ms and perform RR scheduling on this set. We will draw
the GANTT chart and find the average turnaround time and average waiting time.
GANTT Chart
P1 (0–2) | P2 (2–4) | P3 (4–6) | P4 (6–8) | P1 (8–10) | P2 (10–12) | P3 (12–14) | P4 (14–16) | P1 (16–18) | P2 (18–20) | P3 (20–22) | P1 (22–24) | P2 (24–28)
Average turnaround time = (TATP1 + TATP2 + TATP3 + TATP4) / 4 = (24 + 28 + 22 + 16) / 4 = 22.5 ms
In order to calculate the waiting time of each process, we multiply the time quantum by the
number of time slices the process spent waiting in the ready queue.
Average waiting time = (WTP1 + WTP2 + WTP3 + WTP4) / 4 = (8*2 + 9*2 + 8*2 + 6*2) / 4 = 15.5 ms
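The whole computation can be cross-checked with a small simulation. Since all four processes arrive at time 0, cycling over them in index order visits the ready queue in exactly the RR order.

#include <stdio.h>

int main(void) {
    int bt[] = {8, 10, 6, 4};   /* burst times of P1..P4 from the example */
    int n = 4, q = 2;           /* time quantum of 2 ms                   */
    int rem[4], left = n, clock = 0;
    double sum_tat = 0, sum_wt = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    while (left > 0) {
        for (int i = 0; i < n; i++) {          /* one pass over the ready queue */
            if (rem[i] == 0) continue;         /* already finished: skip        */
            int slice = rem[i] < q ? rem[i] : q;
            clock += slice;                    /* run for a quantum (or less)   */
            rem[i] -= slice;
            if (rem[i] == 0) {                 /* completion time = clock       */
                left--;
                sum_tat += clock;              /* TAT = CT (arrival was 0)      */
                sum_wt  += clock - bt[i];      /* WT  = TAT - BT                */
            }
        }
    }
    printf("Average TAT = %.2f ms, Average WT = %.2f ms\n",
           sum_tat / n, sum_wt / n);           /* 22.50 and 15.50 */
    return 0;
}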
Round Robin scheduling is a fair scheduling algorithm, whereby all processes are given
an equal time quantum for execution.
Starvation of any process caused by indefinite waiting in ready queue is totally
eliminated in RR scheduling.
It does not require any complicated method to calculate the CPU burst time of each
process prior to scheduling.
It is pretty simple to implement and so finds application in a wide range of situations.
The convoy effect does not occur in RR scheduling as it does in First Come First Serve
(FCFS) scheduling.
Conclusion
Round Robin scheduling, if properly implemented, provides one of the simplest and best
solutions to scheduling problems. A number of variations of RR scheduling are being researched
and implemented, in order to avoid the disadvantages of this algorithm. One variant that helps
to provide near perfect time quantum is Dynamic Time Quantum Scheduling. Here, the time
quantum dynamically varies according to the behaviour of the processes in the system. Another
variant, the Selfish Round Robin scheduling assigns priorities to processes and provides more
CPU slices to higher priority processes.
To increase overall system performance, multiple processors or cores are frequently used
in modern computer systems. To make the best use of these resources, however, the operating
system must be able to schedule processes effectively across the various processors.
Multiple processor scheduling involves deciding which processes should be assigned to which
processor or core and how long they should be permitted to run. While ensuring that all
processes are fairly and appropriately prioritized, the objective is to achieve efficient utilization
of the available processors.
Multiple Processor Scheduling
The system's numerous CPUs communicate often and share a common bus, memory, and other
peripherals. As a result, the system is said to be tightly coupled. These systems are
employed whenever large amounts of data need to be processed, and they are mostly used in
satellite systems, weather forecasting, etc.
Multiprocessor systems can be homogeneous (the same CPU) or heterogeneous (various types
of CPUs). Special scheduling restrictions, such as devices coupled to a single CPU through a
private bus, may apply.
Just as no single rule or policy defines the ideal scheduling method for a system with one
processor, there is no ideal scheduling strategy for a system with several CPUs.
Operating systems utilize a range of multiprocessor scheduling algorithms. Among the most
typical types are the following (a small selection sketch follows the list) −
Priority Scheduling − Processes are given levels of priority in this method, and those with
greater priorities are scheduled to run first. This technique might be helpful in systems where
some jobs, like real-time tasks, call for a higher priority.
Shortest job first (SJF) scheduling − This algorithm schedules tasks according to
how long they will take to complete. The shortest job runs first, then the next shortest,
and so on. This technique can be helpful in systems with many quick processes, since it
can shorten the average response time.
Fair-share scheduling − In this technique, the number of processors and the priority of each
process determine how much time is allotted to each. As it ensures that each process receives a
fair share of processing time, this technique might be helpful in systems with a mix of long and
short processes.
Earliest deadline first (EDF) scheduling − Each process in this algorithm is given a deadline,
and the process with the earliest deadline is the one that will execute first. In systems with real-
time activities that have stringent deadlines, this approach can be helpful.
Multilevel feedback queue (MLFQ) scheduling − Processes are given a range of priority levels
and are able to move up or down the priority levels based on their behavior. This strategy might
be useful in systems with a mix of short and long processes.
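As a small sketch of the first idea on multiple CPUs (the task data and the round-robin assignment of idle CPUs are illustrative assumptions): each time a CPU becomes free, it simply takes the highest-priority ready process, with ties broken in FCFS order.

#include <stdio.h>

#define NPROC 5

typedef struct { int pid, priority, done; } task_t;

/* Pick the highest-priority task that is not yet finished (-1 if none). */
int pick_highest(task_t t[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!t[i].done && (best < 0 || t[i].priority > t[best].priority))
            best = i;
    return best;
}

int main(void) {
    /* Illustrative task set: a higher number means a higher priority. */
    task_t t[NPROC] = { {1,2,0}, {2,5,0}, {3,1,0}, {4,5,0}, {5,3,0} };
    int cpu = 0, ncpu = 2, i;

    while ((i = pick_highest(t, NPROC)) >= 0) {
        printf("CPU%d runs P%d (priority %d)\n", cpu, t[i].pid, t[i].priority);
        t[i].done = 1;
        cpu = (cpu + 1) % ncpu;   /* the next free CPU takes the next pick */
    }
    return 0;
}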