
UNIT – II Operating Systems

Process Management: Process, Process States, Process Control Block, Process Scheduling-
Scheduling Queues, Schedulers; Operations- Process Creation, Process termination; Inter
process communication- Shared-Memory Systems, Message-Passing Systems:- (Naming,
Synchronization, Buffering) Multi Thread programming models- Many to one, One to one,
Many to Many model, Process Scheduling Criteria- CPU scheduler, Preemptive scheduling,
Dispatcher, Scheduling Criteria, CPU Scheduling Algorithms- First Come First Serve,
Shortest job first, Priority Scheduling, Round robin scheduling.

Process: A program in execution is called a process. A program (a passive entity stored as an executable file) becomes a process (an active entity) when the executable is loaded into memory. Process execution must progress in a sequential fashion. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated during process run time. The structure of a process in memory is shown in the following figure.

Process state:

Fig: Two-state process diagram



Fig: Five-state process diagram

As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process. Each process may be in one of the following states:
 New: The process is being created
 Running : Instructions are being executed
 Waiting(blocked): The process is waiting for some event to occur ( waiting for I/O,
Communication reply)
 Ready: The process is waiting to be assigned to a processor
 Terminated: The process has finished execution
Only one process can be executing on a processor at a time; the remaining processes will be in either the ready state or the blocked state. Processes in all five states reside in main memory. The state transitions are:
New -> Ready: a newly created process is admitted to the ready state.
Ready -> Run: the scheduler picks up this process.
Run -> Ready: the time slice expires, or the scheduler picks up another process.
Run -> Terminated: the process completes its execution.
Run -> Wait: the process waits for input or some other event.
Wait -> Ready: the input or event becomes available.
Differences between Program and Process:
1) A program is a sequence of instructions stored as an executable file; a process is that program in execution.
2) A program is a static entity; a process is an active entity.
3) Only a single copy of a program exists; each running instance of a program is a separate process (Notepad is one program, but it can be opened twice as two processes).
4) A program is stored on secondary storage; a process resides in main memory.
5) A program never changes; a process goes through several states during its execution.

Process Control Block:


Each process is represented in the operating system by a process control block (PCB)— also
called a task control block. A PCB is shown in following Figure.

Fig. Process Control Block (PCB) or task control block

It contains many pieces of information associated with a specific process, including these:
Process state: The state may be new, ready, running, waiting, halting, and so on.
Program counter: The counter indicates the address of the next instruction to be executed
for this process.
CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information. Along with the program counter, this state
information must be saved when an interrupt occurs, to allow the process to be continued
correctly afterward as shown in following figure.
CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information: This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
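A minimal sketch of what a PCB might look like as a C structure, mirroring the fields listed above. Every field name and size here is an illustrative assumption; a real kernel (Linux's task_struct, for example) defines a far richer layout.

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* unique process identifier       */
    enum proc_state state;           /* process state                   */
    unsigned long   program_counter; /* address of the next instruction */
    unsigned long   registers[16];   /* saved CPU registers             */
    int             priority;        /* CPU-scheduling information      */
    unsigned long   base, limit;     /* memory-management information   */
    unsigned long   cpu_time_used;   /* accounting information          */
    int             open_files[16];  /* I/O status information          */
    struct pcb     *next;            /* link used by scheduling queues  */
};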


Fig. Diagram showing CPU switch from process to process.


Process Scheduling:
The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization. The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running. To meet these
objectives, the process scheduler selects an available process (possibly from a set of several
available processes) for program execution on the CPU. For a single-processor system, there
will never be more than one running process. If there are more processes, the rest will have to
wait until the CPU is free and can be rescheduled.
Scheduling Queues:
As processes enter the system, they are put into a job queue, which consists of all processes in
the system. The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored as a linked
list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue. The list of processes
waiting for a particular I/O device is called a device queue. Each device has its own device
queue as shown in following figure.
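A minimal sketch of the ready queue as a linked list, assuming a reduced PCB that keeps only the fields this example needs; the header holds pointers to the first and final PCBs, as described above.

#include <stddef.h>

struct pcb {                /* reduced PCB: just enough for the queue */
    int pid;
    struct pcb *next;       /* pointer field to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head;       /* first PCB in the list */
    struct pcb *tail;       /* final PCB in the list */
};

/* Append a PCB at the tail (the process becomes ready). */
void rq_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (the process is dispatched). */
struct pcb *rq_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}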


Fig: The ready queue and various I/O device queues

Run Queue:
The scheduler selects one of the processes from the ready queue and allocates the CPU to it; the process stays in the running state for its time slice and is moved back to the ready queue when the time slice expires.
A new process is initially put in the ready queue. It waits there until it is selected for
execution, or dispatched. Once the process is allocated the CPU and is executing, one of
several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the
ready state and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.


Fig: Representation of Process Scheduling

Schedulers:
A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion.
The selection is carried out by the appropriate scheduler. There are three types of
schedulers:
1) Long term scheduler or job scheduler
2) Short term scheduler or CPU scheduler
3) Medium term scheduler

Long Term Schedulers:


The long-term scheduler controls the degree of multiprogramming, that is, the number of programs that can be loaded into memory. The long-term scheduler is invoked whenever a new program is submitted to the system or a program exits.
Whenever a new program is submitted, it decides whether the program is to be kept in the ready state (memory) or the ready-suspended state (disk). If the degree of multiprogramming has reached its maximum, the new process is kept on the disk. If one of the running processes exits, the long-term scheduler decides which program should be brought from ready-suspended to the ready state. The long-term scheduler selects a good mix of I/O-bound and CPU-bound processes. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations; a CPU-bound process takes less I/O time and more CPU time.
Short-term scheduler or CPU scheduler:
It selects one of the processes from the ready queue and allocates the CPU to it (the actual hand-over of the CPU is performed by the dispatcher, described later). The short-term scheduler must select a new process frequently; it is invoked far more often than the long-term and medium-term schedulers.
Medium Term Scheduler:
Operating systems such as time-sharing systems use a medium-term scheduler. Sometimes it can be advantageous to remove a process from memory and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The medium-term scheduler is in charge of handling the swapped-out processes.

The ready and wait states of processes reside in memory; ready-suspended and blocked-suspended processes reside on disk. Removing a process from memory and keeping it on the disk is called swap-out, and bringing the process back from the disk to memory is called swap-in.

Process Switching or Context Switch:

When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. This is known as a context switch. A context switch occurs whether the interrupted process was running in kernel mode or in user mode, but the switching itself takes place only in the kernel.
The context is represented in the PCB (process control block) of the process; it includes the values of the CPU registers, the process state, and memory-management information. Context-switch time is pure overhead: the system does no useful work while switching, which is why a batch system incurs less switching overhead than a time-sharing system. Context-switch time depends on hardware support, and also on the memory used by the process, memory speed, and the operating system. A context switch takes place when a clock interrupt, an I/O interrupt, or a memory interrupt occurs, or when a system call is made.

Fig: CPU Switch From Process to Process
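A minimal sketch of the save/load step the figure depicts, assuming an illustrative register layout; a real context switch is performed in architecture-specific assembly inside the kernel, so this is only a model of the idea.

#include <string.h>

struct cpu_context {
    unsigned long pc;        /* program counter */
    unsigned long sp;        /* stack pointer   */
    unsigned long regs[16];  /* general-purpose registers */
};

struct pcb {
    int pid;
    struct cpu_context ctx;  /* the saved context lives in the PCB */
};

/* Save the running state into the old PCB, then load the new one.
   'hw' stands for the hardware register state of the CPU. */
void context_switch(struct cpu_context *hw, struct pcb *old, struct pcb *next)
{
    memcpy(&old->ctx, hw, sizeof *hw);   /* save state of the old process */
    memcpy(hw, &next->ctx, sizeof *hw);  /* load saved state of the new   */
}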

Operations on Process:
Processes may be created and terminated dynamically, so the system must provide mechanisms to create and terminate processes.
Process Creation:
A process may create several new processes, via a create-process system call, during the course of execution. The creating process is called the parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes. Whenever a new process is created, it is given a unique process identifier (PID), typically an integer. A new process is created whenever a new program is executed, a new user logs in, the OS wants to provide a new service, or an existing process spawns one.

In general, a process needs certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess:
 the subprocess may be able to obtain resources directly from the operating system, or
 it may get a subset of the resources of its parent.


The following Figure illustrates a typical process tree for the Linux operating system,
showing the name of each process and its pid. (We use the term process rather loosely, as
Linux prefers the term task instead.) The init process (which always has a pid of 1) serves as
the root parent process for all user processes. Once the system has booted, the init process can
also create various user processes, such as a web or print server, an ssh server, and the like. In
the figure, we see two children of init: kthreadd and sshd. The kthreadd process is
responsible for creating additional processes that perform tasks on behalf of the kernel (in this
situation, khelper and pdflush). The sshd process is responsible for managing clients that
connect to the system by using ssh (which is short for secure shell). The login process is
responsible for managing clients that directly log onto the system

Fig: A tree of process on a typical Linux system

When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as
the parent).
2. The child process has a new program loaded into it

In UNIX, a new process is created by the fork() system call. The new process consists of a copy of the address space of the original process. An exec() system call is used after a fork() to replace the process's memory space with a new program. The parent waits for the child process to complete with the wait() system call. When the child process completes (by either implicitly or explicitly invoking exit()), the parent process resumes from the call to wait(), where it completes using the exit() system call, as shown in the following figure. The value of pid


(the process identifier) for the child process is zero, while that for the parent is an
integer value greater than zero.

Fig : Process Creation

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {                /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(1);
    }
    if (pid == 0) {               /* child process */
        execlp("/bin/ls", "ls", NULL);
    } else {                      /* parent process */
        wait(NULL);               /* wait for the child to complete */
        printf("Child Complete\n");
        exit(0);
    }
}
Fig: C program forking a separate process

Process Termination:
A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit () system call. A parent may terminate the execution of
one of its children for a variety of reasons, such as these
1. The child has exceeded its usage of some of the resources that it has been allocated.
(To determine whether this has occurred, the parent must have a mechanism to inspect
the state of its children.)

2. The task assigned to the child is no longer required.


3. The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
Some systems do not allow a child to exist if its parent has terminated. In such systems, if a
process terminates (either normally or abnormally), then all its children must also be
terminated. This phenomenon, referred to as cascading termination, is normally initiated by
the operating system.
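To make the parent-terminates-child case concrete, here is a hedged sketch using standard POSIX calls (fork, kill, waitpid); the one-second sleep is only a crude way, for illustration, to let the child get started.

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {            /* child: block until a signal arrives */
        while (1)
            pause();
    }
    sleep(1);                  /* parent: give the child time to start */
    kill(pid, SIGTERM);        /* forcibly terminate the child         */
    waitpid(pid, NULL, 0);     /* reap it so no zombie remains         */
    printf("child %d terminated by parent\n", (int)pid);
    return 0;
}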

Inter Process Communication:


Processes executing concurrently in the operating system may be either independent processes or
cooperating processes. A process is cooperating if it can affect or be affected by the other processes
executing in the system. Any process that does not share data with any other process is independent.
Reasons for Process cooperation are Information Sharing, computational speed up, modularity,
convenience.
Cooperating processes require an interprocess communication (IPC) mechanism that allows them to exchange data and information. There are two fundamental models of interprocess communication: (1) shared memory and (2) message passing.

Fig: Communication Models (a) Message Passing (b) Shared Memory

In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

Shared Memory Model vs Message Passing:
1) Shared memory is useful for exchanging larger amounts of data; message passing is useful for smaller amounts.
2) With shared memory, read/write conflicts must be avoided; with message passing there are no such conflicts to avoid.
3) Shared memory is more difficult to implement; message passing is easier to implement.
4) Shared memory is faster; message passing is slower.
5) With shared memory, system calls are needed only to establish the region, after which it is accessed like ordinary memory; message passing requires kernel intervention for every exchange.

Shared Memory Systems:


Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. A shared-memory region typically resides in the address space of the process that creates the shared-memory segment. Other processes that wish to communicate using the shared-memory segment must attach it to their address space. Normally, memory management prevents one process from accessing another process's memory; shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data and its location are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
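A hedged sketch of establishing and attaching such a region with the POSIX shared-memory API (shm_open plus mmap). The segment name "/demo_shm" and its size are illustrative assumptions; on older Linux systems, link with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t SIZE = 4096;

    /* creating process: make the segment and set its size */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, SIZE);

    /* attach the segment to this process's address space */
    char *region = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    close(fd);

    strcpy(region, "hello through shared memory");
    /* another process would call shm_open("/demo_shm", O_RDWR, 0)
       and mmap the same name to read what was written here */
    printf("%s\n", region);

    munmap(region, SIZE);
    shm_unlink("/demo_shm");   /* remove the segment when done */
    return 0;
}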

A classic example of cooperating processes is the producer-consumer problem.

The producer process (server) produces items that are consumed by a consumer process (client). One solution to the producer-consumer problem uses shared memory to allow the producer and consumer processes to run concurrently: a buffer resides in a region of memory that is shared by both processes. The producer produces one item at a time, and the consumer consumes one item at a time. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
Two types of buffers can be used. The unbounded buffer assumes an infinite buffer size: the consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size: the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

#define BUFFER_SIZE 10

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The shared buffer is implemented as a circular array with two logical pointers, in and out. The variable in points to the next free position in the buffer; out points to the first full position. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.

item nextproduced;

while (true) {
    /* produce an item in nextproduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
}


item nextconsumed;

while (true) {
    while (in == out)
        ;   /* do nothing: buffer is empty */
    nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextconsumed */
}
Fig : Producer and Consumer Process

Message-Passing Systems:
Message passing provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network; a chat program on the World Wide Web is one example.
Message-passing facility provides two operations:
 send(message) – message size can be fixed or variable
 receive(message)
If P and Q wish to communicate, they need to:
 Establish a communication link between them
 Exchange messages via send/receive
Logical implementation of communication link
 Direct or indirect communication (Naming)
 Synchronous or asynchronous communication
 Automatic or explicit buffering
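As one concrete realization of the send()/receive() pair, here is a hedged sketch using POSIX message queues; the queue name "/demo_mq" and the message sizes are illustrative assumptions (link with -lrt on older Linux systems).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);    /* send(message)    */

    char buf[128];                           /* >= mq_msgsize    */
    mq_receive(mq, buf, sizeof buf, NULL);   /* receive(message) */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}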
Naming:
Processes that want to communicate must have a way to refer to each other. They can use either direct
or indirect communication.
Direct communication
Each process that wants to communicate must explicitly name the recipient or sender of the
communication.
 Send(P, message) – Send a message to process P.
 Receive(Q, message) – Receive a message from process Q.
A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want to communicate.
The processes need to know only each other’s identity to communicate.
 A link is associated with exactly two processes
 Between each pair of processes, there exists exactly one link.
 The link may be unidirectional, but is usually bi-directional.
This scheme exhibits symmetry in addressing; i.e., both the sender process and the receiver process must name the other to communicate.

With asymmetry in addressing, only the sender names the recipient; the recipient is not required to name the sender. The send() and receive() primitives are then defined as follows:
 Send(P, message) – Send a message to process P.


 Receive(id, message) – Receive a message from any process; the variable id is set to the name
of the process with which communication has taken place.
The disadvantage of both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions: changing the identifier of a process may necessitate examining all other process definitions.

Indirect Communication: The messages are sent to and received from mailboxes or ports. A
mailbox can be viewed abstractly as an object into which messages can be placed by processes and
from which messages can be removed. Each mail box has a unique identification. Two processes can
communicate only if the processes have a shared mailbox.

Send() and receive() primitives are defined as follows:


 Send(A, message) - Send a message to mailbox A.
 Receive(A, Message) – Receive a message from mailbox A.
A communication link has following Properties
 A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
 A link may be associated with more than two processes
 Between each pair of processes, there may be many different links, with each link
corresponding to one mailbox
For a shared mailbox, messages are received based on the following methods:
 Allow a link to be associated with at most two processes
 Allow at most one process at a time to execute a receive() operation
 Allow the system to select arbitrarily which process will receive the message (e.g., a
round robin approach)

A mailbox may be owned either by a process or by the operating system. If the mailbox is owned by a process, then we distinguish between the user (who can send messages) and the owner (who receives messages through the mailbox). When a process that owns a mailbox terminates, the mailbox disappears. Any process that subsequently sends a message to this mailbox must be notified that the mailbox no longer exists.

A mailbox that is owned by the operating system has an existence of its own: it is independent and is not attached to any particular process. The OS then must provide a mechanism that allows a process to
do the following:
 Create a new mailbox
 Send and receive messages through mailbox
 Destroy a mailbox

Synchronization:
Communication between processes takes place through calls to the send() and receive() primitives. Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous.

 Blocking is considered synchronous


Blocking send has the sender block until the message is received

Blocking receive has the receiver block until a message is available


 Non-blocking is considered asynchronous
Non-blocking send has the sender send the message and continue
Non-blocking receive has the receiver receive a valid message or null
When both send() and receive() are blocking, we have a rendezvous between the sender and the receiver.

Buffering:
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:
 Zero capacity –0 messages (No buffering)
Sender must wait until the recipient receives the message (rendezvous)
 Bounded capacity –finite length of n messages ( n messages can reside on it)
Sender must wait if the queue is full
If the queue is not full, when a new message is sent, the message is placed in the queue and
sender can continue the execution without waiting.
 Unbounded capacity
The queue length is infinite; thus, any number of messages can wait in it. The sender never
blocks.

Multi Threaded Programming


A thread is a lightweight process. It comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources such as open files and signals. For example, the Windows XP and Solaris kernels are multithreaded.
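As a small illustration, a hedged sketch of creating and joining one thread with the POSIX pthreads API (compile with -pthread); the worker function and its argument are made up for the example.

#include <pthread.h>
#include <stdio.h>

/* start routine: runs concurrently with main, sharing its globals and heap */
void *worker(void *arg)
{
    printf("thread running, arg = %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int value = 42;

    pthread_create(&tid, NULL, worker, &value); /* no new address space */
    pthread_join(tid, NULL);                    /* wait for the thread  */
    return 0;
}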


Benefits:
Resource sharing: Threads share the memory and resources of the process to which they belong.
Economy: Allocating memory and resources for process creation is costly. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
Utilization of multiprocessor architectures: Multiple threads can run on multiple processors in parallel, whereas a single-threaded process can run on only one CPU.
Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked, thereby increasing responsiveness (if one thread is blocked, another thread can be scheduled).
Process vs Thread:
1) A program in execution is called a process (heavyweight); a thread is a subset of a process and is considered lightweight because it uses far fewer resources than a process.
2) Processes are more complex to create; threads are easier to create since they do not require a separate address space.
3) Processes are independent of each other; threads within a process share some resources.
4) Creation of a process takes more time than creation of a thread.
5) Termination of a process takes more time than termination of a thread.
6) Independent processes cannot share resources directly; all threads running within a process share the same address space, file descriptors, and other process-related attributes (each thread keeps its own stack).
7) Process switching takes more time than thread switching.
8) If a process performs I/O within its time slice, it goes to the blocked state; if a thread performs I/O, another thread of the same process can be scheduled.

Multi Threading Models:


Threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads. User threads are supported above the kernel and are managed without kernel support. Kernel threads are supported and managed directly by the operating system. All contemporary operating systems, such as Solaris, Windows XP, Linux, and Mac OS X, support kernel threads.

Many-to-One model:
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Since the kernel is unaware of the user threads, multiple threads cannot be scheduled on multiple processors. GNU Portable Threads on UNIX and Green Threads on Solaris are user-thread libraries that use this model.


Fig: Many-to-One Model

One-to-One Model:

Fig: One-to-One Model

The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only drawback is that creating a user thread requires creating the corresponding kernel thread, and creating a kernel thread is an overhead for the application. Solaris 9, Linux, and Windows 95/98/NT/2000/XP implement the one-to-one model.

Many-to-Many Model:
The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads. The many-to-one model allows the developer to create as many user threads as she wishes, but true concurrency is not gained because the kernel can schedule only one thread at a time. The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application.


The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution. Solaris prior to version 9, and Windows NT/2000 with the ThreadFiber package, support the many-to-many model.

Fig: Many-to-Many Model

Two-level-model:
It is a variation of the many-to-many model. The two-level model still multiplexes many user-level threads to a smaller or equal number of kernel threads, but it also allows a user-level thread to be bound to a kernel thread. The two-level model is supported by Solaris 8 and earlier.

Fig: Two-level-Model

Process Scheduling
CPU Scheduler:
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (CPU scheduler), which picks a process from the ready queue.

Preemptive Scheduling:
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
For situations 1 and 4, a new process must be selected for execution.
Under nonpreemptive or cooperative scheduling (situations 1 and 4 only), once a process is in the running state it continues until it terminates or blocks itself for I/O. This scheduling method was used by Windows 3.x.
Preemptive scheduling:
– Currently running process may be interrupted and moved to ready state by the OS
– Preemption may occur when new process arrives, on an interrupt, or periodically.
– This scheduling method is used by Windows 95, 98, XP and Mac OS X.
Dispatcher:
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
– Switching context
– Switching to user mode
– Jumping to the proper location in the user program to restart that program.
Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria:
Scheduling criteria are used to compare different scheduling algorithms.
CPU utilization: We want to keep the CPU as busy as possible. In real systems, CPU utilization ranges from 40 to 90 percent. Every scheduling algorithm tries to maximize CPU utilization.
Throughput: The number of processes that complete their execution per unit time is called throughput. Throughput depends on the scheduling policy and the average length of processes. Every scheduling algorithm tries to maximize throughput.
Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. A scheduling algorithm tries to minimize turnaround time.
Turnaround time = waiting time to get into memory + waiting time in the ready queue + I/O time + CPU execution time.
Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue. A scheduling algorithm tries to minimize waiting time.
Response time: It is the time from the submission of a request until the first response is produced. It is the important criterion for interactive jobs. A scheduling algorithm tries to minimize response time.


Scheduling Algorithms:
CPU scheduling algorithm decides which of the processes in ready queue is to be allocated the
CPU. There are many CPU scheduling algorithms.

First-Come, First-Served Scheduling (FCFS):


In FCFS, the process that requests the CPU first is allocated the CPU first. Each process joins the ready queue; when the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. A short process may have to wait a very long time before it can execute, so the average waiting time under FCFS is long. The FCFS scheduling algorithm is nonpreemptive. It is not suitable for interactive jobs and time-sharing systems.
Assume the following set of processes arrive at time 0, with the length of the CPU burst given in milliseconds:

Process Burst Time


P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17
Average turnaround time: (24 + 27 + 30)/3 = 27
Average response time: (0 + 24 + 27)/3 = 17

Suppose instead that the processes arrive in the order P2, P3, P1:

Process Burst Time


P2 3
P3 3
P1 24


The Gantt chart for this order is:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

Average waiting time: (0 + 3 + 6)/3 = 3
Average turnaround time: (3 + 6 + 30)/3 = 13
Average response time: (0 + 3 + 6)/3 = 3
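The FCFS arithmetic above can be reproduced with a short sketch: each process waits for the total burst time of the processes ahead of it. The burst array and the assumption that all processes arrive at time 0 mirror the first example (order P1, P2, P3).

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};           /* P1, P2, P3 in arrival order */
    int n = 3, start = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += start;            /* waiting time = start time   */
        total_tat  += start + burst[i]; /* turnaround  = finish time   */
        start      += burst[i];
    }
    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_tat / n);
    return 0;   /* prints 17.00 and 27.00 for this data */
}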

Convoy effect:
Assume we have one CPU-bound process and many I/O-bound processes. The CPU-bound process will get and hold the CPU. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues; at this point, the CPU is idle. The CPU-bound process then moves back to the ready queue and is allocated the CPU. Again, all the I/O-bound processes end up waiting in the ready queue until the CPU-bound process is done. This is the convoy effect: all the other processes wait for one big process to get off the CPU.

Shortest-job-First Scheduling(SJF):
When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. A more appropriate name for this algorithm is the shortest-next-CPU-burst algorithm. It is not suitable for time-sharing systems and interactive jobs, and longer processes may suffer starvation.

Process Burst Time


P1 6
P2 8
P3 7
P4 3

Process Burst Time Waiting Time Turn Around Time Response Time

P1 6 3 9 3
P2 8 16 24 16
P3 7 9 16 9
P4 3 0 3 0


Average waiting time: (0 + 3 + 9 + 16)/4 = 7

Average turnaround time: (9 + 24 + 16 + 3)/4 = 13
Average response time: (3 + 16 + 9 + 0)/4 = 7

The average waiting time is very low. The real difficulty with the SJF algorithm is knowing the length of the next CPU burst. SJF is frequently used in long-term scheduling, but it cannot be implemented exactly at the level of short-term scheduling, because the length of the next CPU burst can only be predicted, not known.

The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new
process arrives at the ready queue while previous process is still executing. The next CPU
burst of the newly arrived process may be shorter than what is left of the currently executing
process. A preemptive SJF algorithm will preempt the currently executing process, whereas a
nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.

Process Burst Time(Milli Seconds) Arrival Time


P1 8 0
P2 4 1
P3 9 2
P4 5 3

Process  Burst Time  Arrival Time  Waiting Time  Turnaround Time  Response Time
P1 8 0 9 17 0
P2 4 1 0 4 0
P3 9 2 15 24 15
P4 5 3 2 7 2

Average waiting time: (9 + 0 + 15 + 2)/4 = 6.5

Average turnaround time: (17 + 4 + 24 + 7)/4 = 13
Average response time: (0 + 0 + 15 + 2)/4 = 4.25
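The preemptive figures above can be checked with a rough simulation of shortest-remaining-time-first: at every time unit, the arrived process with the least remaining burst runs. The tick-by-tick loop and array layout are illustrative choices, not a standard API; the data matches the table above.

#include <stdio.h>

int main(void)
{
    int burst[]   = {8, 4, 9, 5};
    int arrival[] = {0, 1, 2, 3};
    int rem[]     = {8, 4, 9, 5};     /* remaining burst per process */
    int finish[4] = {0}, n = 4, done = 0;

    for (int t = 0; done < n; t++) {
        int pick = -1;                /* arrived process with least rem */
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) continue;       /* CPU idle this tick */
        if (--rem[pick] == 0) { finish[pick] = t + 1; done++; }
    }
    for (int i = 0; i < n; i++)       /* waiting = turnaround - burst */
        printf("P%d: turnaround=%2d waiting=%2d\n", i + 1,
               finish[i] - arrival[i], finish[i] - arrival[i] - burst[i]);
    return 0;
}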
Priority Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. (SJF is a special case of priority scheduling: the larger the CPU burst, the lower the priority, and vice versa.) In this topic, we assume that low numbers represent high priority.
Process Burst Time(Milli Seconds) Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2


Process  Burst Time  Priority  Waiting Time  Turnaround Time  Response Time
P1 10 3 6 16 6
P2 1 1 0 1 0
P3 2 4 16 18 16
P4 1 5 18 19 18
P5 5 2 1 6 1
Priorities can be defined internally or externally. Internal priorities are defined from measurable quantities such as time limits, memory requirements, and the number of open files. Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process; a nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling is indefinite blocking, or starvation. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely: a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU, and the low-priority process starves.
A solution to the starvation of low-priority processes is aging: gradually increasing the priority of processes that wait in the system for a long time. For example, the priority of a waiting low-priority job could be increased by 1 every 15 minutes.

Round-Robin Scheduling Algorithm(RR):


The Round-Robin (RR) scheduling algorithm is designed especially for time-sharing systems. Each process gets a small unit of CPU time (a time quantum), usually 10 to 100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will
release the CPU voluntarily. The scheduler will then proceed to the next process in the ready
queue.
If the CPU burst is longer than one time quantum, the timer will go off and cause an interrupt to the OS. A context switch is performed, and the running process is put at the tail of the ready queue. The Round-Robin scheduler then selects the next process in the ready queue.
The average waiting time under the RR policy is often long. As the time quantum increases, turnaround time tends to decrease and response time tends to increase. In general, the time quantum should always be much greater than the context-switch time. If the time quantum is too long, RR degenerates into the FCFS scheduling algorithm.
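A minimal sketch of this policy, using the data of Problem 1 below (bursts 24, 3, 3; a 4 ms quantum; all arriving at time 0). Cycling over the array in a fixed order stands in for re-queuing at the tail, which is equivalent when no new processes arrive.

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3}, rem[] = {24, 3, 3};
    int n = 3, q = 4, t = 0, done = 0, finish[3];

    while (done < n)
        for (int i = 0; i < n; i++) {     /* one round over the queue */
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;
            t += slice;                   /* process runs its slice   */
            rem[i] -= slice;
            if (rem[i] == 0) { finish[i] = t; done++; }
        }
    for (int i = 0; i < n; i++)           /* waiting = finish - burst */
        printf("P%d: turnaround=%2d waiting=%2d\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}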

Problem 1:
Process   Burst Time
P1        24
P2        3
P3        3
Assume that all jobs are submitted at the same time and the time quantum is 4 ms.

Process  Burst Time  Waiting Time  Turnaround Time  Response Time
P1       24          6             30               0
P2       3           4             7                4
P3       3           7             10               7

The average waiting time is 17/3 = 5.66 milliseconds.


Problem 2:
Process Burst Time
P1 24
P2 3
P3 3
Assume that all jobs are submitted at the same time and the time quantum is 2 ms.

Process  Burst Time  Waiting Time  Turnaround Time  Response Time
P1       24          6             30               0
P2       3           6             9                2
P3       3           7             10               4

If the time slice decreases, the average turnaround time increases and the average response time decreases.

Problem 3:
Process   Burst Time
P1        24
P2        7
P3        10
Assume that all jobs are submitted at the same time and the time quantum is 4 ms.

Process  Burst Time  Waiting Time  Turnaround Time  Response Time
P1       24          17            41               0
P2       7           12            19               4
P3       10          19            29               8
