
UNIT -2

A process is basically a program in execution, i.e., an instance of a program being executed.


The execution of a process must progress in a sequential fashion.
 A process is not the same as its program code; it is much more than that.
 A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity.
 Attributes held by a process include its hardware state, memory, CPU, etc.

Process memory is divided into four sections for efficient working:


 The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
 The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
 The Stack is used for local variables; space on the stack is reserved for local variables when they are declared.
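The mapping can be loosely illustrated in Python. This is a hedged analogy only: CPython manages memory itself, so the correspondence to the four sections is approximate.

```python
# Text : the compiled bytecode of this module and its functions.
GREETING = "hello"         # Data: module-level name, initialized before main logic

def work():
    local = 3              # Stack: the name 'local' lives in this call's frame
    buf = bytearray(1024)  # Heap: a dynamically allocated object
    return GREETING, local, len(buf)

print(work())              # ('hello', 3, 1024)
```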

Process States:
When a process executes, it passes through different states. These states may differ across operating systems.

In general, a process can be in one of the following five states at a time.

S.N. State & Description

1. Start: This is the initial state, when a process is first started or created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may come into this state after the Start state, or after being interrupted by the scheduler so that the CPU can be assigned to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

Process State Diagram


Process Control Block (PCB):
 A Process Control Block is a data structure maintained by the Operating
System for every process.
 The PCB is identified by an integer process ID (PID).
 A PCB keeps all the information needed to keep track of a process as listed
below in the table –

S.N. Information & Description

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, and so on.
2. Process privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU registers: The various CPU registers whose contents must be saved for the process so that it can resume in the running state.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory management information: Page table, memory limits, and segment table information, depending on the memory system used by the operating system.
9. Accounting information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O status information: A list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB −

Process Control Block (PCB) Diagram

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
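As an illustrative sketch only (real kernels keep this information in a C structure, e.g. task_struct in Linux), the fields in the table above could be modelled as follows:

```python
# Toy PCB mirroring the numbered fields in the table above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                           # 3. unique process ID
    state: str = "new"                                 # 1. ready / running / waiting ...
    parent_pid: int = -1                               # 4. pointer to parent (-1 = none)
    program_counter: int = 0                           # 5. next instruction address
    registers: dict = field(default_factory=dict)      # 6. saved CPU registers
    priority: int = 0                                  # 7. CPU scheduling information
    page_table: dict = field(default_factory=dict)     # 8. memory management information
    cpu_time_used: float = 0.0                         # 9. accounting information
    open_devices: list = field(default_factory=list)   # 10. I/O status information

print(PCB(pid=42, state="ready"))
```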

Process Scheduling
Definition
 Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
 Process scheduling is an essential part of multiprogramming operating systems.
 Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

What are Scheduling Queues?
 All processes, upon entering into the system, are stored in the Job Queue.
 Processes in the Ready state are placed in the Ready Queue.
 Processes waiting for a device to become available are placed in Device
Queues. There are unique device queues available for each I/O device.
A new process is initially put in the Ready queue, where it waits until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
 The process could issue an I/O request, and then be placed in the I/O queue.
 The process could create a new sub-process and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the
ready state, and is then put back in the ready queue. A process continues this cycle
until it terminates, at which time it is removed from all queues and has its PCB and
resources deallocated.
Schedulers:
 Schedulers are special system software which handle process scheduling in various
ways.
 Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler
 It is also called a job scheduler.
 A long-term scheduler determines which programs are admitted to the system for processing.
 It selects processes from the job queue and loads them into memory for execution, making them available for CPU scheduling.
 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.
 It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must equal the average rate at which processes leave the system.
 On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems typically have no long-term scheduler.
 The long-term scheduler is used when a process changes state from new to ready.
Short Term Scheduler:
 It is also called the CPU scheduler.
 Its main objective is to increase system performance in accordance with the chosen set of criteria.
 It carries out the transition of a process from the ready state to the running state.
 The CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
 Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
 Medium-term scheduling is a part of swapping.
 It removes processes from memory, reducing the degree of multiprogramming.
 The medium-term scheduler is in charge of handling swapped-out processes.
 A running process may become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion.
 In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
 This procedure is called swapping, and the process is said to be swapped out or rolled out.
 Swapping may be necessary to improve the process mix.
Comparison among Schedulers

S.N. Long-Term Scheduler / Short-Term Scheduler / Medium-Term Scheduler

1. It is a job scheduler. / It is a CPU scheduler. / It is a process-swapping scheduler.
2. Its speed is less than that of the short-term scheduler. / Its speed is the fastest among the three. / Its speed lies between those of the short-term and long-term schedulers.
3. It controls the degree of multiprogramming. / It provides less control over the degree of multiprogramming. / It reduces the degree of multiprogramming.
4. It is almost absent or minimal in time-sharing systems. / It is also minimal in time-sharing systems. / It is a part of time-sharing systems.
5. It selects processes from the pool and loads them into memory for execution. / It selects from among the processes that are ready to execute. / It can re-introduce a process into memory, so that its execution can be continued.

Context Switch:
 A context switch is the mechanism of storing and restoring the state (context) of a CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time.
 Using this technique, a context switcher enables multiple processes to share a single CPU.
 Context switching is an essential feature of a multitasking operating system.
 When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block.
 After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc.
 At that point, the second process can start executing.

 Context switches are computationally intensive, since register and memory state must be saved and restored.
 To reduce context-switching time, some hardware systems employ two or more sets of processor registers.
 When a process is switched out, the following information is stored for later use:
 Program counter
 Scheduling information
 Base and limit register values
 Currently used registers
 Changed state
 I/O state information
 Accounting information
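The save/restore step can be sketched as follows (PCBs reduced to dictionaries; a real context switch runs in kernel mode and manipulates hardware registers directly):

```python
def context_switch(cpu, current, nxt):
    # 1. Save the state of the running process into its PCB.
    current["pc"] = cpu["pc"]
    current["regs"] = dict(cpu["regs"])
    current["state"] = "ready"
    # 2. Load the state of the next process from its PCB.
    cpu["pc"] = nxt["pc"]
    cpu["regs"] = dict(nxt["regs"])
    nxt["state"] = "running"

cpu = {"pc": 0x1000, "regs": {"ax": 1}}
p1 = {"pc": 0x1000, "regs": {"ax": 1}, "state": "running"}
p2 = {"pc": 0x2000, "regs": {"ax": 9}, "state": "ready"}
context_switch(cpu, p1, p2)         # CPU now resumes P2 from its saved point
print(hex(cpu["pc"]), p1["state"])  # 0x2000 ready
```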

Inter-process Communication
 Processes executing concurrently in the operating system might be either
independent processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes executing in the system; otherwise, it is a cooperating process.
 Inter-Process Communication (IPC) is a mechanism by which one process communicates with another. It usually occurs within a single system.
 Communication can be of two types −
 Between related processes initiating from only one process, such as parent and
child processes.
 Between unrelated processes, or two or more different processes.
 Processes can communicate with each other using these two ways:
 Shared Memory
 Message passing
Shared Memory
 Shared memory is the memory that can be simultaneously accessed by multiple
processes.
 This is done so that the processes can communicate with each other.
 All POSIX systems, as well as Windows operating systems, use shared memory.
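For example, Python's standard library exposes POSIX-style shared memory through multiprocessing.shared_memory (Python 3.8+). A minimal sketch:

```python
# A child process writes into a shared segment; the parent reads the same bytes.
from multiprocessing import Process, shared_memory

def writer(name):
    shm = shared_memory.SharedMemory(name=name)  # attach to the existing segment
    shm.buf[:5] = b"hello"                       # write directly, no copying
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    child = Process(target=writer, args=(shm.name,))
    child.start()
    child.join()
    print(bytes(shm.buf[:5]))                    # b'hello', written by the child
    shm.close()
    shm.unlink()                                 # release the segment
```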
Message Queue
 Multiple processes can read and write data to the message queue without being
connected to each other.
 Messages are stored in the queue until their recipient retrieves them.
 Message queues are quite useful for inter-process communication and are used
by most operating systems.
 If two processes P1 and P2 want to communicate with each other, they proceed as follows:
 Establish a communication link (if a link already exists, there is no need to establish it again).
 Start exchanging messages using basic primitives.
 We need at least two primitives:
send(message, destination) or send(message)
receive(message, host) or receive(message)
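A minimal sketch of these primitives using multiprocessing.Queue from Python's standard library, where the queue plays the role of the communication link:

```python
from multiprocessing import Process, Queue

def receiver(q):
    msg = q.get()                  # receive(message): blocks until a message arrives
    print("P2 received:", msg)

if __name__ == "__main__":
    q = Queue()                    # establish the communication link
    p2 = Process(target=receiver, args=(q,))
    p2.start()
    q.put("hello from P1")         # send(message): stored until retrieved
    p2.join()
```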
What is a Thread?
 A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, its own registers that hold its current working variables, and its own stack that contains its execution history.
 A thread shares with its peer threads information such as the code segment, the data segment, and open files. When one thread alters a shared memory item, all other threads see the change.
 A thread is also called a lightweight process.
 Threads provide a way to improve application performance through parallelism.
 Threads represent a software approach to improving operating system performance by reducing switching overhead; in its heaviest form, a thread is equivalent to a classical process.
 Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control.
 Threads have been used successfully in implementing network servers and web servers.
 They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
 The following figure shows the working of a single-threaded and a multithreaded process.
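A small sketch using Python's threading module shows two flows of control inside one process operating on the same shared data:

```python
import threading

counter = 0                        # shared data: visible to all peer threads
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:                 # shared state, so updates must be guarded
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 200000: both threads updated the same variable
```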

Difference between Process and Thread


S.N. Process / Thread

1. A process is heavyweight, or resource-intensive. A thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system. Thread switching does not need to interact with the operating system.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources. All threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked. While one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources. Multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others. One thread can read, write, or change another thread's data.
Advantages of Thread
 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in the following two ways −
 User Level Threads − threads managed in user space, without kernel support.
 Kernel Level Threads − threads managed by the operating system, acting on the kernel, the operating system core.
User Level Threads
 In this case, the kernel is not aware of the existence of threads; thread management is done in user space by a thread library.
 The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
 The application starts with a single thread.

Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
 In a typical operating system, most system calls are blocking, so when a user-level thread makes a blocking system call, the entire process is blocked.
 A multithreaded application cannot take advantage of multiprocessing, since the kernel schedules the process as a single unit.

Kernel Level Threads


In this case, thread management is done by the kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.
The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages
 The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of
the same process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires
a mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
 Many-to-many relationship
 Many-to-one relationship
 One-to-one relationship
Many to Many Model
 The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
 The following diagram shows the many-to-many threading model, where 6 user-level threads are multiplexed onto 6 kernel-level threads.
 In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
 This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If user-level thread libraries are implemented on an operating system whose kernel does not support threads, the many-to-one model is the relationship used.

One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.

Difference between User-Level and Kernel-Level Threads


S.N. User-Level Threads / Kernel-Level Threads

1. User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage.
2. Implementation is by a thread library at the user level. The operating system supports creation of kernel threads.
3. A user-level thread is generic and can run on any operating system. A kernel-level thread is specific to the operating system.
4. Multithreaded applications cannot take advantage of multiprocessing. Kernel routines themselves can be multithreaded.

CPU Scheduling in Operating System

CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term scheduler
(or CPU scheduler). The scheduler selects from among the processes in memory that are ready
to execute and allocates the CPU to one of them.

CPU Scheduling: Dispatcher

Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher
is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off.

The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another process is known as the Dispatch Latency.

Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state(for I/O request or
invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs).
3. When a process switches from the waiting state to the ready state(for example,
completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.

When Scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme
is non-preemptive; otherwise, the scheduling scheme is preemptive.

1. Non-Preemptive Scheduling

Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.

This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh operating systems.

It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

In non-preemptive scheduling, a running process is not interrupted in the middle of its execution. Instead, the scheduler waits until the process completes its CPU burst, and only then can the CPU be allocated to another process.

Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its non-preemptive form) Scheduling and Priority Scheduling (non-preemptive version), etc.

2. Preemptive Scheduling

In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task, even if that task is currently running. In that case, the running task is interrupted for some time and resumed later, once the higher-priority task has finished its execution.

Thus, this type of scheduling is used mainly when a process switches either from the running state to the ready state or from the waiting state to the ready state. The resources (that is, CPU cycles) are allocated to a process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining. It stays in the ready queue until it gets its next chance to execute.

Some Algorithms that are based on preemptive scheduling are Round Robin Scheduling (RR),
Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.

CPU Scheduling: Scheduling Criteria

There are many different criteria to consider when choosing the "best" scheduling algorithm; they are:

1. CPU Utilization

To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most of the time (ideally, 100% of the time). In a real system, CPU utilization should range from 40% (lightly loaded) to 90% (heavily loaded).

2. Throughput

It is the total number of processes completed per unit of time, in other words, the total amount of work done in a unit of time. This may range from 10 processes per second to 1 process per hour, depending on the specific processes.

3. Turnaround Time

It is the amount of time taken to execute a particular process, i.e., the interval from the time of submission of the process to the time of its completion (wall clock time).

4. Waiting Time

The waiting time is the sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

5. Load Average

It is the average number of processes residing in the ready queue, waiting for their turn to get the CPU.

6. Response Time

Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution(final
response).

In general, CPU utilization and throughput are maximized, while the other factors are minimized, for proper optimization.

Operating System Scheduling algorithms

A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms, which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

FCFS Scheduling

In CPU-scheduling problems some terms are used while solving the problems, so for conceptual purposes the terms are discussed as follows −

 Arrival time (AT) − Arrival time is the time at which the process arrives in the ready queue.
 Burst time (BT) or CPU time − Burst time is the amount of CPU time the process requires to complete its execution.
 Completion time (CT) − Completion time is the time at which the process terminates.
 Turn-around time (TAT) − The total time from arrival time to completion time is known as turn-around time. TAT can be written as,

Turn-around time (TAT) = Completion time (CT) − Arrival time (AT), or TAT = Burst time (BT) + Waiting time (WT)

 Waiting time (WT) − Waiting time is the time a process spends waiting for the CPU while other processes execute. WT can be written as,

Waiting time (WT) = Turn-around time (TAT) − Burst time (BT)

 Response time (RT) − Response time is the time at which the CPU is allocated to a particular process for the first time. In the case of non-preemptive scheduling, waiting time and response time are generally the same.
 Gantt chart − A Gantt chart is a visualization that helps in scheduling and managing particular tasks in a project. It is used while solving scheduling problems, to show how processes are allocated to the CPU under different algorithms.

Problem 1

Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.

Process ID Arrival time Burst time


P1 2 2
P2 5 6
P3 0 4
P4 0 7
P5 7 4

Solution

Gantt chart

| P3 | P4 | P1 | P2 | P5 |
0    4    11   13   19   23

For this problem, CT, TAT, WT, and RT are shown in the given table −

Process ID Arrival time Burst time CT TAT=CT-AT WT=TAT-BT RT


P1 2 2 13 13-2= 11 11-2= 9 9
P2 5 6 19 19-5= 14 14-6= 8 8
P3 0 4 4 4-0= 4 4-4= 0 0
P4 0 7 11 11-0= 11 11-7= 4 4
P5 7 4 23 23-7= 16 16-4= 12 12

Average Waiting time = (9+8+0+4+12)/5 = 33/5 = 6.6 time unit (time unit can be considered
as milliseconds)

Average Turn-around time = (11+14+4+11+16)/5 = 56/5 = 11.2 time unit (time unit can be
considered as milliseconds)
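The table can be cross-checked with a short simulation. The following Python sketch (an illustrative helper, not part of the original problem) implements FCFS as described above:

```python
# FCFS: serve in arrival order (Python's stable sort keeps FIFO order on ties).
processes = {"P1": (2, 2), "P2": (5, 6), "P3": (0, 4), "P4": (0, 7), "P5": (7, 4)}

time, rows = 0, {}
for pid, (at, bt) in sorted(processes.items(), key=lambda kv: kv[1][0]):
    start = max(time, at)          # the CPU may sit idle until the process arrives
    ct = start + bt                # completion time
    tat = ct - at                  # turnaround time
    wt = tat - bt                  # waiting time
    rt = start - at                # response time (equals WT under FCFS)
    rows[pid] = (ct, tat, wt, rt)
    time = ct

for pid in processes:
    print(pid, *rows[pid])
print("Avg WT :", sum(r[2] for r in rows.values()) / len(rows))   # 6.6
print("Avg TAT:", sum(r[1] for r in rows.values()) / len(rows))   # 11.2
```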

Problem 2

Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.

Process ID Arrival time Burst time


P1 2 2
P2 0 1
P3 2 3
P4 3 5
P5 4 5

Solution

Gantt chart

| P2 | idle | P1 | P3 | P4 | P5 |
0    1      2    4    7    12   17

For this problem, CT, TAT, WT, and RT are shown in the given table −

Process ID Arrival time Burst time CT TAT=CT-AT WT=TAT-BT RT


P1 2 2 4 4-2= 2 2-2= 0 0
P2 0 1 1 1-0= 1 1-1= 0 0
P3 2 3 7 7-2= 5 5-3= 2 2
P4 3 5 12 12-3= 9 9-5= 4 4
P5 4 5 17 17-4= 13 13-5= 8 8

Average Waiting time = (0+0+2+4+8)/5 = 14/5 = 2.8 time unit (time unit can be considered as
milliseconds)

Average Turn-around time = (2+1+5+9+13)/5 = 30/5 = 6 time unit (time unit can be
considered as milliseconds)

*In the idle (not active) CPU period from 1 ms to 2 ms, no process has yet arrived to be scheduled, so the CPU remains unused for that short time.

Advantages Of FCFS Scheduling

 It is an easy algorithm to implement, since it does not involve any complex logic.
 Tasks are executed one after another in arrival order, as FCFS follows a FIFO queue.
 FCFS does not give priority to any arbitrarily chosen important task first, so it is fair scheduling.

Disadvantages Of FCFS Scheduling

 FCFS results in the convoy effect: if a process with a higher burst time comes first in the ready queue, the processes with lower burst times get stuck behind it and may be unable to get the CPU for as long as the long process runs.
 If a process with a long burst time is first in line, the shorter processes behind it have to wait for a long time, so FCFS is not well suited to time-sharing systems.
 Since it is non-preemptive, the CPU is not released until the running process completes its execution entirely.

*The convoy effect and starvation sound similar, but there is a slight difference; the two terms should not be treated as the same.

Conclusion

Although FCFS is simple to implement and understand, it is not good for interactive systems and is not used in modern operating systems. The convoy effect in FCFS can be prevented by using preemptive CPU-scheduling algorithms such as Round Robin.

Shortest Job First (SJF) Scheduling

A CPU scheduling strategy is a procedure that selects one process from the ready queue and assigns it to the CPU so that it can be executed. There are a number of scheduling algorithms. In this section, we will learn about the Shortest Job First (SJF) scheduling algorithm.

SJF (SHORTEST JOB FIRST) Scheduling

In the Shortest Job First scheduling algorithm, the processes are scheduled in ascending order
of their CPU burst times, i.e. the CPU is allocated to the process with the shortest execution
time.

Variants of SJF Scheduling

SJF non-preemptive scheduling

In the non-preemptive version, once a process is assigned to the CPU, it runs to completion. Here, the short-term scheduler is invoked when a process completes its execution or when new processes arrive in an empty ready queue.

SJF preemptive scheduling

This is the preemptive version of SJF scheduling, also referred to as the Shortest Remaining Time First (SRTF) scheduling algorithm. Here, if a short process enters the ready queue while a longer process is executing, a process switch occurs: the executing process is swapped out to the ready queue while the newly arrived shorter process starts to execute. Thus, the short-term scheduler is invoked either when a new process arrives in the system or when an existing process completes its execution.

Features of SJF Algorithm

 SJF allocates CPU to the process with shortest execution time.

 In cases where two or more processes have the same burst time, arbitration among them is done on a first-come, first-served basis.
 There are both preemptive and non-preemptive versions.
 It minimizes the average waiting time of the processes.
 It may cause starvation of long processes if short processes keep arriving in the system.

Example 1

Suppose that we have a set of four processes that have arrived at the same time in the order P1,
P2, P3 and P4. The burst time in milliseconds of each process is given by the following table −

Process CPU Burst Time in ms


P1 6
P2 10
P3 4
P4 6
Let us draw the GANTT chart and find the average turnaround time and average waiting time
using non-preemptive SJF algorithm.

GANTT Chart for the set of processes using SJF

Process P3 has the shortest burst time, so it executes first. Then we find that P1 and P4 have an equal burst time of 6 ms. Since P1 comes first in the arrival order, the CPU is allocated to P1 and then to P4. Finally, P2 executes. Thus the order of execution is P3, P1, P4, P2, as given by the following GANTT chart −

| P3 | P1 | P4 | P2 |
0    4    10   16   26

Let us compute the average turnaround time and average waiting time from the above chart.

Average Turnaround Time

=Sum of Turnaround Time of each Process / Number of Processes

= (TAT(P1) + TAT(P2) + TAT(P3) + TAT(P4)) / 4

= (10 + 26 + 4 + 16) / 4 = 14 ms

Average Waiting Time

= Sum of Waiting Time of Each Process / Number of processes

= (WT(P1) + WT(P2) + WT(P3) + WT(P4)) / 4

= (4 + 16 + 0 + 10) / 4 = 7.5 ms

Example 2

In the previous example, we assumed that all the processes arrived at the same time, a situation that is practically impossible. Here, we consider the circumstance where the processes arrive at different times. Suppose we have a set of four processes whose arrival times and CPU burst times are as follows −

Process Arrival Time CPU Burst Time


P1 0 6
P2 4 10
P3 4 4
P4 8 3

Let us draw the GANTT chart and find the average turnaround time and average waiting time
using non-preemptive SJF algorithm.

GANTT Chart

While drawing the GANTT chart, we will consider which processes have arrived in the system
when the scheduler is invoked. At time 0ms, only P1 is there and so it is assigned to CPU. P1
completes execution at 6ms and at that time P2 and P3 have arrived, but not P4. P3 is assigned
to CPU since it has the shortest burst time among current processes. P3 completes execution at
time 10ms. By that time P4 has arrived and so SJF algorithm is run on the processes P2 and P4.
Hence, we find that the order of execution is P1, P3, P4, P2 as shown in the following GANTT
chart −

| P1 | P3 | P4 | P2 |
0    6    10   13   23

Let us calculate the turnaround time of each process and hence the average.

Turnaround Time of a process = Completion Time − Arrival Time

TAT(P1) = CT(P1) − AT(P1) = 6 − 0 = 6 ms

TAT(P2) = CT(P2) − AT(P2) = 23 − 4 = 19 ms

TAT(P3) = CT(P3) − AT(P3) = 10 − 4 = 6 ms

TAT(P4) = CT(P4) − AT(P4) = 13 − 8 = 5 ms

Average Turnaround Time

=Sum of Turnaround Time of each Process/ Number of Processes

= (6 + 19 + 6 + 5) / 4 = 9 ms

The waiting time is given by the time that each process waits in the ready queue. For a non-
preemptive scheduling algorithm, waiting time of each process can be simply calculated as −

Waiting Time of any process = Time of admission to CPU − Arrival Time

WT(P1) = 0 − 0 = 0 ms

WT(P2) = 13 − 4 = 9 ms

WT(P3) = 6 − 4 = 2 ms

WT(P4) = 10 − 8 = 2 ms

Average Waiting Time

= Sum of Waiting Time of Each Process/ Number of processes

= (WT(P1) + WT(P2) + WT(P3) + WT(P4)) / 4

= (0 + 9 + 2 + 2) / 4 = 3.25 ms
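A sketch of non-preemptive SJF that reproduces this result, with ties broken in first-come, first-served order as described above:

```python
procs = {"P1": (0, 6), "P2": (4, 10), "P3": (4, 4), "P4": (8, 3)}  # pid: (AT, BT)

time, pending, done = 0, dict(procs), {}
while pending:
    ready = [p for p, (at, _) in pending.items() if at <= time]
    if not ready:                                  # CPU idles until the next arrival
        time = min(at for at, _ in pending.values())
        continue
    pid = min(ready, key=lambda p: pending[p][1])  # shortest burst among arrived
    at, bt = pending.pop(pid)
    time += bt
    done[pid] = (time, time - at, time - at - bt)  # CT, TAT, WT

print(done)                                        # runs P1, P3, P4, P2 in that order
print("Avg TAT:", sum(v[1] for v in done.values()) / len(done))   # 9.0
print("Avg WT :", sum(v[2] for v in done.values()) / len(done))   # 3.25
```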

Example of Preemptive SJF (SRTF) algorithm

Let us now perform preemptive SJF (SRTF) scheduling on the following processes, draw the GANTT chart, and find the average turnaround time and average waiting time.

Process Arrival Time CPU Burst Time


P1 0 8
P2 4 10
P3 4 3
P4 10 4

GANTT Chart

| P1 | P3 | P1 | P4 | P2 |
0    4    7    11   15   25

Since this is a preemptive scheduling algorithm, the scheduler is invoked whenever a process arrives and whenever a process completes execution. The scheduler computes the remaining time to completion for each process in the system and selects the process with the shortest remaining time for execution.

Initially, only P1 has arrived, so it is assigned to the CPU. At time 4 ms, P2 and P3 arrive. The scheduler computes the remaining times of P1, P2, and P3 as 4 ms, 10 ms, and 3 ms. Since P3 has the shortest remaining time, P1 is preempted by P3. P3 completes execution at 7 ms and the scheduler is invoked; among the processes in the system, P1 now has the shortest remaining time, so it executes. At time 10 ms, P4 arrives and the scheduler again computes the remaining time of each process. Since the remaining time of P1 is the least, no process switch occurs and P1 continues to execute. In a similar fashion, the rest of the processes complete execution.

From the GANTT chart, we compute the average turnaround time and the average waiting
time.

Average Turnaround Time

=Sum of Turnaround Time of each Process/ Number of Processes

= (TAT(P1) + TAT(P2) + TAT(P3) + TAT(P4)) / 4

= ((11 - 0) + (25 - 4) + (7 - 4) + (15 - 10)) / 4 = 10 ms

Average Waiting Time

= Sum of Waiting Time of Each Process/ Number of processes

= (WT(P1) + WT(P2) + WT(P3) + WT(P4)) / 4

= (3 + 11 + 0 + 1) / 4 = 3.75 ms
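A sketch of SRTF that steps one millisecond at a time, always picking the arrived process with the least remaining time:

```python
procs = {"P1": (0, 8), "P2": (4, 10), "P3": (4, 3), "P4": (10, 4)}  # pid: (AT, BT)

remaining = {p: bt for p, (at, bt) in procs.items()}
time, ct = 0, {}
while remaining:
    ready = [p for p in remaining if procs[p][0] <= time]
    if not ready:                                 # no process has arrived yet
        time += 1
        continue
    pid = min(ready, key=lambda p: remaining[p])  # shortest remaining time
    remaining[pid] -= 1                           # run it for 1 ms
    time += 1
    if remaining[pid] == 0:
        ct[pid] = time
        del remaining[pid]

tat = {p: ct[p] - procs[p][0] for p in procs}
wt = {p: tat[p] - procs[p][1] for p in procs}
print(ct)                                 # {'P3': 7, 'P1': 11, 'P4': 15, 'P2': 25}
print("Avg TAT:", sum(tat.values()) / 4)  # 10.0
print("Avg WT :", sum(wt.values()) / 4)   # 3.75
```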

Advantages of SJF Algorithm

 In both preemptive and non-preemptive methods, the average waiting time is reduced
substantially in SJF when compared to FCFS scheduling.
 SJF optimizes turnaround time to a considerable degree.
 If the execution time of each process is estimated precisely, it promises maximum throughput.

Disadvantages of SJF Algorithm

 In situations where there is an incoming stream of processes with short burst times,
longer processes in the system may be waiting in the ready queue indefinitely leading to
starvation.
 In preemptive SJF, i.e. SRTF, if all processes arrive at different times and at frequent
intervals, the scheduler may be always working and consequently the processor may be
more engaged in process switching than actual execution of the processes.
 Correct estimation of the burst time of a process is itself a complicated task. Since the effectiveness of the algorithm depends entirely on the burst times, an erroneous estimate may cause inefficient scheduling.

Conclusion

Shortest Job First scheduling can be termed the optimal scheduling algorithm due to its theoretically best results. However, its implementation is much more complex, and its execution is more unpredictable than First Come First Serve or Round Robin scheduling.

Round Robin (RR) Scheduling Algorithm


Among the CPU scheduling strategies, Round Robin Scheduling is one of the most efficient and most widely used scheduling algorithms, finding use not only in process scheduling in operating systems but also in network scheduling.

Round Robin (RR) Scheduling

This scheduling strategy derives its name from the age-old round-robin principle, which advocated that all participants are entitled to an equal share of assets or opportunities in a turn-wise manner. In RR scheduling, each process gets an equal time slice (or time quantum) for which it executes on the CPU in turn. When a process gets its turn, it executes for the assigned time slice and then relinquishes the CPU to the next process in the queue. If the process has burst time left, it is sent to the end of the queue. Processes enter the queue on a first-come, first-served basis.

Round Robin scheduling is preemptive, which means that a running process can be interrupted and sent back to the ready queue even when it has not completed its entire execution on the CPU. It is a preemptive version of the First Come First Serve (FCFS) scheduling algorithm.

Features of Round Robin Scheduling

 RR is a fair scheduling strategy where all processes get an equal share of the CPU in a turn-wise manner.
 It is a preemptive scheduling method where an executing process must give up its
control of the CPU once its time quantum expires.
 Under the RR scheduling strategy, no process goes into starvation.
 It is widely used for its simple working principle.
 The performance of RR scheduling is vastly dependent on the chosen time quantum.

Working Principle of Round Robin Scheduling

 Any new process that arrives in the system is inserted at the end of the ready queue in FCFS manner.
 The first process in the queue is removed and assigned to the CPU.
 If the required burst time is less than or equal to the time quantum, the process runs to completion. When it finishes, the scheduler is invoked to admit the next process in the ready queue to the CPU.
 If the required burst time is more than the time quantum, the process executes up to the
allotted time quantum. Then its PCB (process control block) status is updated and it is
added to the end of the queue. Context switch occurs and the next process in the ready
queue is assigned to the CPU.
 The above steps are repeated until there are no more processes in the ready queue.

We can understand the working of the RR scheduling algorithm with the aid of the following example.

Example of Round Robin Scheduling

Let us consider a system that has four processes which have arrived at the same time in the
order P1, P2, P3 and P4. The burst time in milliseconds of each process is given by the
following table −
Process CPU Burst Times in ms
P1 8
P2 10
P3 6
P4 4

Let us consider a time quantum of 2 ms and perform RR scheduling on these processes. We will draw the GANTT chart and find the average turnaround time and average waiting time.

GANTT Chart with time quantum of 2 ms

| P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P1 | P2 |
0    2    4    6    8    10   12   14   16   18   20   22   24   28

Average Turnaround Time

Average TAT = Sum of Turnaround Time of each Process / Number of Processes

= (TAT(P1) + TAT(P2) + TAT(P3) + TAT(P4)) / 4

= (24 + 28 + 22 + 16) / 4 = 22.5 ms

To calculate the waiting time of each process, we can multiply the time quantum by the number of time slices the process spent waiting in the ready queue (equivalently, WT = TAT − BT).

Average Waiting Time

Average WT = Sum of Waiting Time of Each Process / Number of processes

= (WT(P1) + WT(P2) + WT(P3) + WT(P4)) / 4
= (8×2 + 9×2 + 8×2 + 6×2) / 4 = 15.5 ms
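This run can be reproduced with a short simulation (quantum of 2 ms, all arrivals at time 0):

```python
from collections import deque

burst = {"P1": 8, "P2": 10, "P3": 6, "P4": 4}   # all processes arrive at time 0
quantum = 2

queue = deque(burst)                            # ready queue in FCFS admission order
remaining = dict(burst)
time, ct = 0, {}
while queue:
    pid = queue.popleft()
    run = min(quantum, remaining[pid])          # execute one time slice
    time += run
    remaining[pid] -= run
    if remaining[pid] > 0:
        queue.append(pid)                       # unfinished: back to the tail
    else:
        ct[pid] = time                          # finished: record completion time

tat = ct                                        # AT = 0 for all, so TAT = CT
wt = {p: tat[p] - burst[p] for p in burst}
print("Avg TAT:", sum(tat.values()) / 4)        # 22.5
print("Avg WT :", sum(wt.values()) / 4)         # 15.5
```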

Advantages of Round Robin Scheduling

 Round Robin is a fair scheduling algorithm, whereby all processes are given an equal time quantum for execution.
 Starvation of a process caused by indefinite waiting in the ready queue is totally eliminated in RR scheduling.
 It does not require any complicated method to calculate the CPU burst time of each process prior to scheduling.
 It is simple to implement and so finds application in a wide range of situations.
 The convoy effect seen in First Come First Serve (FCFS) scheduling does not occur in RR scheduling.

Disadvantages of Round Robin Scheduling


 The performance of Round Robin scheduling is highly dependent upon the chosen time quantum. This requires prudent analysis before implementation, without which the required results are not obtained.
 If the chosen time quantum is very large, most of the processes will complete within a single time quantum. In effect, RR scheduling will act as FCFS scheduling, and all the limitations of FCFS come into the system.
 If the chosen time quantum is too small, the CPU will be very busy with context switching, i.e., swapping processes in and out of the CPU and memory. This reduces the throughput of the system, since more time is spent on context switching than on actual execution of the processes.
 RR scheduling gives no scope to assign priorities to processes, so system processes that need high priority get the same preference as background processes. This may hamper the overall performance of a system.

Conclusion

Round Robin scheduling, if properly implemented, provides one of the simplest and best solutions to scheduling problems. A number of variations of RR scheduling have been researched and implemented in order to avoid the disadvantages of this algorithm. One variant that helps to provide a near-perfect time quantum is Dynamic Time Quantum scheduling, where the time quantum varies dynamically according to the behaviour of the processes in the system. Another variant, Selfish Round Robin scheduling, assigns priorities to processes and provides more CPU slices to higher-priority processes.

Multiple Processors Scheduling in Operating System

To increase a system's overall performance, multiple processors or cores are frequently used in modern computer systems. To make the best use of these resources, however, the operating system must be able to schedule processes effectively across the various processors.

Multiple processor scheduling involves deciding which processes should be assigned to which
processor or core and how long they should be permitted to run. While ensuring that all
processes are fairly and appropriately prioritized, the objective is to achieve efficient utilization
of the available processors.

Multiple Processor Scheduling

The goal of multiple processor scheduling, also known as multiprocessor scheduling, is to design a scheduling function for a system with several processors. In multiprocessor scheduling, multiple CPUs split the workload (load sharing) to enable concurrent execution of multiple processes. In comparison to single-processor scheduling, multiprocessor scheduling is generally more complicated. A multiprocessor scheduling system contains many identical processors, allowing any process to run on any available processor at any moment.

The system's numerous CPUs communicate often and share a common bus, memory, and other peripherals; as a result, the system is said to be tightly coupled. Such systems are employed whenever large amounts of data need to be processed, and they are mostly used in satellite systems, weather forecasting, and similar applications.

In some instances of multiple-processor scheduling, the functioning of the processors is homogeneous, or identical: any process in the queue can run on any available processor.

Multiprocessor systems can be homogeneous (the same CPU) or heterogeneous (various types
of CPUs). Special scheduling restrictions, such as devices coupled to a single CPU through a
private bus, may apply.

Just as no single rule or policy determines the ideal scheduling method for a system with a single processor, there is no ideal scheduling strategy for a system with several CPUs.

Types of Multiprocessor Scheduling Algorithms

Operating systems utilize a range of multiprocessor scheduling algorithms. Among the most typical types are:

Round-Robin Scheduling − The round-robin scheduling algorithm allocates a time quantum to each CPU and configures processes to run in a round-robin fashion on each processor. Since it ensures that each process gets an equivalent amount of CPU time, this strategy can be useful in systems where all programs have the same priority.

Priority Scheduling − Processes are given levels of priority in this method, and those with
greater priorities are scheduled to run first. This technique might be helpful in systems where
some jobs, like real-time tasks, call for a higher priority.

Shortest Job First (SJF) Scheduling − This algorithm schedules tasks according to how long they are expected to take to complete. The shortest job is run first, then the next shortest, and so on. This technique can be helpful in systems with many quick processes, since it can shorten the average response time.

Fair-share scheduling − In this technique, the number of processors and the priority of each
process determine how much time is allotted to each. As it ensures that each process receives a
fair share of processing time, this technique might be helpful in systems with a mix of long and
short processes.

Earliest deadline first (EDF) scheduling − Each process in this algorithm is given a deadline,
and the process with the earliest deadline is the one that will execute first. In systems with real-
time activities that have stringent deadlines, this approach can be helpful.

Scheduling using a multilevel feedback queue (MLFQ) − In a multilevel feedback queue, processes are given a range of priority levels and can move up or down the priority levels based on their behavior. This strategy can be useful in systems with a mix of short and long processes.
