
GEETHANJALI INSTITUTE OF SCIENCE & TECHNOLOGY::NELLORE

III B.TECH II SEMESTER – ECE


Name of the subject: PRINCIPLES OF OPERATING SYSTEMS
Faculty Name: K.Venkateswarlu

UNIT-II: Process, Threads and Scheduling


Process concept, scheduling, operations on processes, cooperating processes, inter-process communication; Threads: multithreading models, thread libraries, threading issues; scheduling criteria, scheduling algorithms, algorithm evaluation.

Process concept:

An OS executes a variety of programs. A batch system executes jobs (a batch system is one in which jobs are bundled together with instructions that allow them to be processed without intervention).

A time-shared system executes user programs or tasks. Even on a single-user system, such as the MS-Windows or Macintosh operating systems, a user may be able to run several programs at one time: a word processor, a web browser, and an email package.

The process:

A process is a sequential program in execution; each process has its own virtual CPU. The components of a process are the following:

 The object program to be executed (called the program text in UNIX)
 The data on which the program will execute (obtained from a file)
 Resources required by the program (e.g., a file containing requisite information)
 The state of the process's execution

A process includes:

1. Code (text section)
2. Program counter
3. (Process) stack/heap
4. Data section
 A process is more than the program code, which is sometimes known as
the text section.

 It also includes the current activity, as represented by the value of the


program counter and the contents of the processor's registers.

 A process generally also includes the process stack, which contains


temporary data (such as function parameters, return addresses, and local
variables), and a data section, which contains global variables.

 A process may also include a heap, which is memory that is dynamically


allocated during process run time.

 We emphasize that a program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

 A program becomes a process when an executable file is loaded into


memory.

 Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line (as in prog.exe or a.out).

Process State:

As a process executes, it changes state.

 New – The process is being created; memory is allocated, but the CPU is not.
 Running – Instructions are being executed; the process currently holds the CPU, and it is the only process being executed by the CPU at that instant.
 Waiting (Blocked) – The process is waiting for some event to occur; until the event completes, the process cannot proceed.
 Ready – The process is waiting to be assigned to a processor; it waits for the CPU in the ready queue.
 Terminated – The process has finished execution, and its result is displayed.

Only one process can be running on any processor at any instant, although many processes may be ready and waiting.

Process Control Block(PCB) :

Each process is represented in the OS by its own PCB also called a “Task
Control Block”.

PCB gives information about the status of the job.

Each process has a PCB, which is created when the process is created and removed from the system when the process is killed.

Process State: The process state represents the current state of the process. The state may be new, ready, running, waiting, terminated, and so on. The process may be in any one of these states, and the PCB is updated as the state changes.

Program Counter: The Counter indicates the address of the next instruction to
be executed for this process.

CPU Registers: The registers vary in number and type, depending on the computer architecture. They include:

1. Accumulators

2. Index registers

3. Stack pointers

4. General-purpose registers

All register contents must be saved in the PCB when an interrupt occurs (for example, before a process goes to an I/O burst), so that the process can later be continued correctly.

CPU Scheduling information: It includes the process priority and pointers to scheduling queues. It also stores information about the CPU-scheduling algorithm in use.

Memory management information: This may include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the OS.

Accounting information: It stores the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O Status information: It includes the list of I/O devices allocated to this process, a list of open files, and so on.
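
To make these fields concrete, here is a minimal sketch of a PCB as a C structure. The field names and sizes are illustrative assumptions only, not the layout used by any real kernel (Linux, for instance, keeps this information in a much larger struct task_struct):

struct pcb {
    int            pid;               /* unique process identifier             */
    int            state;             /* NEW, READY, RUNNING, WAITING,
                                         or TERMINATED                         */
    unsigned long  program_counter;   /* address of the next instruction       */
    unsigned long  registers[16];     /* saved CPU register contents           */
    int            priority;          /* CPU-scheduling information            */
    unsigned long  base, limit;       /* memory-management information         */
    long           cpu_time_used;     /* accounting information                */
    int            open_files[16];    /* I/O status: open file descriptors     */
    struct pcb    *next;              /* link to the next PCB in a queue       */
};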

Process Scheduling:

 The process scheduler selects an available process (possibly from a set


of several available processes) for program execution on the CPU.

 As processes enter the system, they are put into a job queue, which
consists of all processes in the system.

 The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue.

 This queue is generally stored as a linked list. A ready-queue header


contains pointers to the first and final PCBs in the list.

 Each PCB includes a pointer field that points to the next PCB in the
ready queue.

 In a queueing diagram of process scheduling, each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
 A new process is initially put in the ready queue. It waits there till it is
selected for execution, or is dispatched.

 Once the process is allocated the CPU and is executing, one of several
events could occur:

 The process could issue an I/O request and then be placed in an I/O
queue.

 The process could create a new subprocess and wait for the subprocess's termination.

 The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

Schedulers:

 A process migrates among the various scheduling queues throughout its


lifetime.

 The operating system must select, for scheduling purposes, processes


from these queues in some fashion.

 The selection process is carried out by the appropriate scheduler.

 The long-term scheduler, or job scheduler, selects processes from the pool of processes on disk (the job pool) and loads them into memory for execution.

 The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

Operations on Processes:

Process Creation:

 A process may create several new processes, via a create-process system


call, during the course of execution.

 The creating process is called a parent process, and the new processes
are called the children of that process.

 Each of these new processes may in turn create other processes, forming
a tree of processes.

 Most operating systems identify processes according to a unique process


identifier (or pid), which is typically an integer number.

 The processes at the top of the process tree are system processes responsible for managing memory and file systems; in Solaris, for example, the sched process creates the init process, which serves as the root parent process for all user processes.

 When a process creates a new process, two possibilities exist in terms of


execution:

 The parent continues to execute concurrently with its children.

 The parent waits until some or all of its children have terminated.

Process Termination:

 A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call.

 At that point, the process may return a status value (typically an integer)
to its parent process (via the wait() system call).

 All the resources of the process—including physical and virtual memory, open files, and I/O buffers—are deallocated by the operating system.

 Termination can occur in other circumstances as well. A process can cause the termination of another process via an appropriate system call (for example, TerminateProcess() in Win32).
 Usually, such a system call can be invoked only by the parent of the
process that is to be terminated.

 A parent may terminate the execution of one of its children for a variety
of reasons, such as these:

 The child has exceeded its usage of some of the resources that it has been
allocated.

 The task assigned to the child is no longer required.

 The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates.

 Consider that, in UNIX, we can terminate a process by using the exit()


system call; its parent process may wait for the termination of a child
process by using the wait() system call.

 The wait() system call returns the process identifier of a terminated child, so that the parent can tell which of its possibly many children has terminated.
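
As a brief illustration of these UNIX calls, the following C sketch creates a child with fork(), replaces the child's image with a new program via exec (the program run, ls, is just an example), and has the parent wait() for the child to terminate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                     /* create a child process          */
    if (pid < 0) {                          /* fork failed                     */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                  /* child: load a new program       */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                   /* reached only if exec fails      */
        exit(1);
    } else {                                /* parent: wait for the child      */
        int status;
        pid_t done = wait(&status);         /* returns pid of terminated child */
        printf("child %d terminated\n", (int)done);
    }
    return 0;
}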

Cooperating Process

The processes in a computer system can be either independent or cooperating. A process is considered independent when it cannot impact, and cannot be impacted by, the other processes operating on the system.

Independent processes don't share any data with other processes. On the other hand, a cooperating process may be affected by, or may affect, any other process executing on the system.

A cooperating process shares data with another.

Advantages of Cooperating Process in Operating System

There are various advantages of cooperating processes in the operating system, as follows:

1. Information Sharing

Cooperating processes can be used to share information between various processes, such as access to the same files. A mechanism is necessary so that the processes can access the files concurrently.

2. Modularity

Modularity refers to the division of complex tasks into smaller subtasks.


Different cooperating processes can complete these smaller subtasks. As a
result, the required tasks are completed more quickly and efficiently.

3. Computation Speedup

Cooperating processes can be used to accomplish the subtasks of a single task simultaneously. This improves computation speed by allowing the task to be accomplished faster, although it is only possible if the system contains several processing elements.

4. Convenience

There are multiple tasks that a user needs to perform, such as printing, compiling, editing, etc. It is more convenient if these activities can be managed through cooperating processes.

Concurrent execution of cooperating processes needs systems that enable


processes to communicate and synchronize their actions.

Methods of Cooperating Process

Cooperating processes may coordinate with each other by sharing data or


messages. The methods are given below:

1. Cooperation by sharing

The processes may cooperate by sharing data, including variables, memory,


databases, etc. The critical section provides data integrity, and writing is
mutually exclusive to avoid inconsistent data.

For example, processes P1 and P2 may cooperate by using shared data such as files, databases, variables, and memory.
2. Cooperation by Communication

The cooperating processes may cooperate by using messages. If every process


waits for a message from another process to execute a task, it may cause a
deadlock. If a process does not receive any messages, it may cause starvation.

For example, processes P1 and P2 may cooperate by exchanging messages with each other.

Interprocess Communication:

 Processes executing concurrently in the operating system may be either


independent processes or cooperating processes.

 A process is independent if it cannot affect or be affected by the other


processes executing in the system.

 Any process that does not share data with any other process is
independent. A process is cooperating if it can affect or be affected by
the other processes executing in the system.

 There are several reasons for providing an environment that allows


process cooperation:

Information sharing:

 Several users may be interested in the same piece of information (for instance, a shared file); we must provide an environment that allows concurrent access to such information.

Computation speedup:

If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others.

Modularity:

 We may want to construct the system in a modular fashion, dividing the


system functions into separate processes or threads.

Convenience:

 Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, printing, and compiling in parallel.

 Cooperating processes require an interprocess communication (IPC)


mechanism that will allow them to exchange data and information.

 There are two fundamental models of interprocess communication:

shared memory

message passing

shared memory :

 In the shared-memory model, a region of memory that is shared by


cooperating processes is established.

 Processes can then exchange information by reading and writing data to


the shared region.

message passing:

 In the message passing model, communication takes place by means of


messages exchanged between the cooperating processes.

 Message passing is useful for exchanging smaller amounts of data,


because no conflicts need be avoided.

 Message passing is also easier to implement than is shared memory for
intercomputer communication.

 Shared memory allows maximum speed and convenience of


communication, as it can be done at memory speeds when within a
computer.

 Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
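
As a small, concrete sketch of the message-passing model, the following C program uses a UNIX pipe (one simple kernel-mediated message channel) between a parent and a child; here read() behaves as a blocking receive:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                              /* fd[0]: read end, fd[1]: write end */
    char buf[32];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }
    if (fork() == 0) {                      /* child acts as the sender          */
        close(fd[0]);                       /* close unused read end             */
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1); /* send(message)                     */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                           /* parent acts as the receiver       */
    read(fd[0], buf, sizeof buf);           /* blocking receive(message)         */
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                             /* reap the child                    */
    return 0;
}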

Naming:

 Processes that want to communicate must have a way to refer to each


other. They can use either direct or indirect communication.

 Under direct communication, each process that wants to communicate


must explicitly name the recipient or sender of the communication.

 In this scheme, the send() and receive() primitives are defined as:

 send(P, message)—Send a message to process P.

 receive (Q, message)—Receive a message from process Q.

 A link is established automatically between every pair of processes that


want to communicate.

 The processes need to know only each other's identity to communicate.

 A link is associated with exactly two processes.

 Between each pair of processes, there exists exactly one link.

Synchronization:

 Communication between processes takes place through calls to send() and receive() primitives. There are different design options for implementing each primitive.

 Message passing may be either blocking or nonblocking— also known


as synchronous and asynchronous.

 Blocking send- The sending process is blocked until the message is


received by the receiving process or by the mailbox.

 Nonblocking send- The sending process sends the message and resumes
operation.

 Blocking receive- The receiver blocks until a message is available.

 Nonblocking receive- The receiver retrieves either a valid message or a


null.

Buffering:

 Whether communication is direct or indirect, messages exchanged by


communicating processes reside in a temporary queue.

 Basically, such queues can be implemented in three ways:

Zero capacity- The queue has a maximum length of zero; thus, the link cannot
have any messages waiting in it. In this case, the sender must block until the
recipient receives the message.

Bounded capacity- The queue has finite length n; thus, at most n messages can
reside in it.

 If the queue is not full when a new message is sent, the message is placed
in the queue (either the message is copied or a pointer to the message is
kept), and the sender can continue execution without waiting.

 The link's capacity is finite, however. If the link is full, the sender must block until space is available in the queue.

Unbounded capacity- The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.

Threads:

A thread, sometimes called a "lightweight process", is a single sequential flow of execution within a process and the basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set, and a stack.

Threads are software entities that can share memory and execute concurrently.

A thread shares with other threads belonging to the same process its code section, data section, and other OS resources such as open files and signals. If a process has multiple threads of control, it can do more than one task at a time.

A traditional process has a single thread of execution. For example, if a process is running a word-processor program, a single thread of instructions is being executed. It performs only one task at a time: the user could not simultaneously type in characters and run the spell checker within the same process.

Many modern OSes have extended the process concept to allow a process to have multiple threads of execution. For example, a web browser might have one thread display images or text while another thread retrieves data from the network.

Motivation

Many software packages that run on modern desktop PCs are multithreaded: an application is implemented as a separate process with several threads of control. A web browser might have one thread display images or text while another thread retrieves data from the network.

E.g.: A word processor is an application that displays graphics, reads keystrokes from the user, and performs spelling and grammar checking.

A web browser is also an application, handling web pages, images, and sounds.

In certain situations, a single application may be required to perform several similar tasks. E.g.: a web server accepts client requests for web pages, images, and sound. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time.

A thread itself is not a program; it cannot run on its own. Rather, it runs within a program.

Benefits:

The benefits of multithreaded programming can be broken down into four major categories:

1. Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
2. Resource sharing: Threads share the memory and the resources of the process to which they belong. The benefit of sharing code is that it allows an application to have several different threads of activity all within the same address space.
3. Economy: Allocating memory and resources for process creation is costly; because threads share the resources of their process, it is more economical to create and context-switch threads.
4. Utilization of MP (multiprocessor) architectures: on a multiprocessor, threads may run in parallel on different processors.

User and Kernel Threads:

User Threads: User threads are supported above the kernel and are implemented without kernel support.

1. It was managed by the application itself.

2. User level threads are generally fast to create and manage.

3. The library provides support for thread creation, scheduling and management
in user address space.

4. They are flexible and are of low cost.

5. Every user thread belongs to a process.

6. They are scheduled by the thread library in the user address space.

Three primary thread libraries

 POSIX Pthreads.
 Win32 threads
 Java threads.
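
As a minimal sketch of how the Pthreads library is used from user space (compile with -lpthread), the following program creates one thread and waits for it to finish:

#include <pthread.h>
#include <stdio.h>

/* thread function: runs concurrently with main */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create the thread     */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}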
Kernel Threads:

1. Kernel threads are managed by the kernel.

2. Kernel threads are generally slower to create and manage than user threads.

3. The kernel performs thread creation, scheduling, and management in kernel space.

4. They are less flexible and of higher cost.

5. Kernel threads need not be associated with a process.

6. They are scheduled by the kernel.

Examples: Windows XP, Windows 2000, Linux, Solaris, Tru64 UNIX, and Mac OS X.

Multithreading Models:

Most systems provide support for both user and kernel threads, resulting in different multithreading models. There are three common multithreading models:

 Many-to-One
 One-to-One
 Many-to-Many
a) Many-to-One model:

This model maps many user-level threads to one kernel thread.

If a thread makes a blocking system call, the whole process will be blocked. In this model, multiple threads cannot run in parallel on multiprocessors, because only one thread can access the kernel at a time.

E.g.: Green threads, a thread library supported by the Solaris operating system.

b) One-to-One model:

This model maps each user thread to a kernel thread. If a thread performs a blocking system call, another thread is allowed to run without blocking the entire process. This model also allows multiple threads to execute in parallel on multiprocessors.

E.g.: Windows NT/XP/2000, Linux, and Solaris.

c) Many-to-Many model:

This model multiplexes many user-level threads to a smaller or equal number of kernel-level threads; an application may be allotted only a limited number of kernel threads.

The many-to-one model allows many user threads to be created, but no concurrency results between them, because only one thread can be accessed by the kernel at a time.

The one-to-one model provides greater concurrency, but the developer is restricted in how many threads can be created within an application.

The many-to-many model suffers from neither of these shortcomings: multiple threads can be created as the developer needs, and the corresponding kernel threads can execute simultaneously on multiprocessors.

CPU Scheduling:

CPU scheduling is the basis of multiprogrammed operating systems. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may run at a time; any other process must wait until the CPU is free and can be rescheduled.

A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then sit idle, and all this waiting time is wasted. With multiprogramming, several processes are kept in memory at one time. When one process has to wait, the OS takes the CPU away from that process and gives the CPU to another process. This pattern continues.

Scheduling is a fundamental OS function. Almost all computer resources are
scheduled before use.

CPU-I/O Burst cycle:

Process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states.

Process execution begins with a CPU burst. That is followed by an I/O burst, then another CPU burst, and so on. Finally, the last CPU burst ends with a system request to terminate execution.

Alternating sequence of CPU and I/O bursts

CPU Scheduler:

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the CPU scheduler, which selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.

CPU-scheduling decisions may take place under four circumstances:

1. When a process switches from the running state to the waiting state (e.g., an I/O request)
2. When a process switches from the running state to the ready state (e.g., when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (e.g., at completion of I/O)
4. When a process terminates

When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is non-preemptive; otherwise, it is preemptive.

Scheduling Criteria:

Different CPU-Scheduling algorithms have different properties.

In choosing which algorithm to use in a particular situation, we must


consider the properties of various algorithms.

The criteria include the following

 CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range from 0 to 100 percent.

 Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. Throughput might be 1 process per hour or 10 processes per second.

 Turnaround time: the amount of time to execute a particular process. The interval from the time of submission of a process to the time of completion is the turnaround time.

 Waiting time: the amount of time a process has been waiting in the ready queue.

 Response time: the amount of time from when a request was submitted until the first response is produced, not the time until output is complete.

Optimization of criteria

 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

Scheduling Algorithms:

• There are many different CPU-scheduling algorithms, which deal with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

• First-Come, First-Served (FCFS) Scheduling

• Shortest-Job-First (SJF) Scheduling

• Priority Scheduling

• Round Robin (RR)

• Multilevel Queue Scheduling

• Multilevel Feedback Queue Scheduling

First-Come, First-Served (FCFS) Scheduling:

• The simplest CPU-scheduling algorithm is FCFS: the process that requests the CPU first is allocated the CPU first.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrives in the order p1,p2,p3 and are
served in FCFS order .
• The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
• Turn around time for P1 = 24; P2 =27; P3 =30
• Average TT: ( 24+27+ 30)/3 =27
• Suppose instead that the processes arrive in the order P2, P3, P1.
• The Gantt chart for the schedule is

P2 P3 P1
0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Turnaround time for P1 = 30; P2 = 3; P3 = 6
• Average TT: (30+3+6)/3 = 13

• Much better than the previous case; this reduction in average waiting time is substantial.
• Convoy effect: short processes wait behind a long process.

• The FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a process, the process keeps the CPU until it releases it.
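
The FCFS figures above can be reproduced with a short C sketch; it assumes, as in the example, that all processes arrive at time 0 in the order P1, P2, P3:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* CPU bursts of P1, P2, P3            */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;                 /* time spent in the ready queue       */
        total_tat  += wait + burst[i];      /* completion time = turnaround        */
        wait += burst[i];                   /* next process starts after this one  */
    }
    printf("avg waiting = %.2f, avg turnaround = %.2f\n",
           (double)total_wait / n, (double)total_tat / n);  /* 17.00 and 27.00 */
    return 0;
}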
Shortest-Job-First (SJF) Scheduling:

• This algorithm associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest time. When the CPU is available, it is assigned to the process that has the smallest next CPU burst; if two processes have the same next CPU burst, FCFS scheduling is used to break the tie. SJF can be either preemptive or non-preemptive.

• An example consider the following set of processes with the length of the
CPU bursts given in milliseconds.

Process Burst time

P1 6

P2 8

P3 7

P4 3

• Gantt chart
P4 P1 P3 P2

0 3 9 16 24

• Waiting time for p1=3, p2=16, p3=9, p4=0


• Average waiting time is (3+16+9+0)/4=7 ms
• Turnaround time: P1=9, P2=24, P3=16, P4=3
• Avg turnaround time = (9+24+16+3)/4 = 52/4 = 13 ms

• By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 ms.
• SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Priority Scheduling:

A priority number(integer) is associated with each process, and the CPU is


allocated to the process with the highest priority. Equal priority processes are
scheduled in FCFS order.

[smallest integer= highest priority]

Priorities are generally indicated by some fixed range of numbers such as 0-7
or 0-40.

However, there is no general agreement on whether 0 is the highest or the lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. This difference can lead to confusion. Here we assume that low numbers represent high priority.

Eg:

Process Burst Time Priority

P1 10 3

P2 1 1

P3 2 4

P4 1 5

P5 5 2

Using priority scheduling ,we would schedule these process according to the
following Gantt chart.

P2 P5 P1 P3 P4

0 1 6 16 18 19

Waiting time for p1=6, p2=0, p3=16, p4=18, p5=1

 The average waiting time is (6+0+16+18+1)/5 = 41/5 = 8.2 ms

Turnaround time for p1=16, p2=1, p3=18, p4=19, p5=6

 Avg turnaround time = (16+1+18+19+6)/5 = 60/5 = 12 ms

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.

A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

Round Robin Scheduling :

Round robin is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined; a time quantum is generally from 10 to 100 milliseconds.

The ready queue is treated as a circular queue. After the time quantum has elapsed, the running process is preempted and added to the tail of the ready queue.

Consider the following set of processes that arrives at time 0 with length of the
CPU burst given in milliseconds.

Process Burst time

P1 24

P2 3

P3 3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the ready queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, P3.
The resulting RR schedule is

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

The average waiting time is ((10-4)+4+7)/3 = 17/3 = 5.66 milliseconds.

Turnaround time for p1=30

Turnaround time for p2=7

Turnaround time for p3=10

Avg turnaround time = (30+7+10)/3 = 47/3 ≈ 15.66 ms
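
The RR schedule above can be checked with a small C simulation. It is only a sketch: it assumes all processes arrive at time 0 and sweeps the array in rounds, which matches a FIFO ready queue for this example:

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};           /* bursts of P1, P2, P3          */
    int finish[3], n = 3, quantum = 4, time = 0, left = n;
    while (left > 0)
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                  /* run the process for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; left--; }
        }
    for (int i = 0; i < n; i++)             /* prints 30, 7, 10              */
        printf("P%d turnaround = %d\n", i + 1, finish[i]);
    return 0;
}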

Multilevel Queue Scheduling :

• In this scheduling ,process are easily classified in to different groups.

• For example, a common division is made between foreground and


background processes.

• These two types of processes have different response-time requirements.

• A multilevel queue-scheduling algorithm partitions the ready queue into several separate queues.

• The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.

• Each queue has its own scheduling algorithm.


• The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.

• Scheduling among the queues is implemented as fixed-priority preemptive scheduling.

• For example, the foreground queue may have absolute priority over the
background queue.

• An example is a multilevel queue-scheduling algorithm with five queues, listed in order of priority: system processes, interactive processes, interactive editing processes, batch processes, and student processes.

Multilevel Feedback Queue Scheduling:

• This scheduling allows a process to move between queues.

• The idea is to separate processes with different CPU-burst characteristics.

• If a process uses too much CPU time, it will be moved to a lower-priority queue.

• This scheme leaves I/O-bound and interactive processes in the higher-priority queues.

• Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.

For example ,consider a multilevel feedback queue scheduler with three queues
, numbered from 0 to 2.

• The scheduler first executes all processes in queue 0.

• Only when queue 0 is empty will it execute processes in queue 1.

• Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.

• A process entering the ready queue is put in queue 0, where it is given a time quantum of 8 milliseconds.

• If it does not finish within this time, it is moved to the tail of queue 1.

• Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.

• This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.

• Long processes automatically sink to queue 2 and are served in FCFS order.

Algorithm Evaluation:

Evaluation is necessary for choosing a scheduling algorithm and its parameters for a particular system. Criteria such as CPU utilization, throughput, response time, waiting time, and turnaround time are used as a basis for evaluation.

How do we select a CPU-scheduling algorithm for a particular system? Each algorithm comes with its own parameters.

The first problem is defining the criteria to be used in selecting an algorithm. Criteria are often defined in terms of CPU utilization, response time, or throughput.

Different evaluation methods:

1. Deterministic modeling:

One type of analytic evaluation is deterministic modeling. It takes a particular predetermined workload and defines the performance of each algorithm for that workload.

For example, suppose all 5 processes arrive at time 0, in the order given, with the following lengths of CPU burst:

Process Burst time

P1 10

P2 29

P3 3

P4 7

P5 12

Consider the FCFS, SJF, and RR (quantum = 10) scheduling algorithms for this set of processes. Which algorithm would give the minimum average waiting time?

FCFS AWT =(0+10+39+42+49)/5=28 ms

Non-preemptive SJF AWT=(10+32+0+3+20)/5=13 ms

RR(qt=10) AWT=(0+32+20+23+40)/5=23 ms

We see that, in this case, the AWT of SJF is less than that of FCFS and RR.
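
Deterministic modeling lends itself directly to a program. The following C sketch recomputes the three average waiting times for this workload; FCFS and non-preemptive SJF reduce to serving the processes in a fixed order:

#include <stdio.h>

/* average waiting time when bursts are served in the given order */
static double avg_wait(const int *burst, const int *order, int n) {
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += t;                         /* waiting time of this process  */
        t += burst[order[i]];
    }
    return (double)total / n;
}

int main(void) {
    int burst[] = {10, 29, 3, 7, 12};       /* P1..P5, all arriving at t = 0 */
    int n = 5;
    int fcfs[] = {0, 1, 2, 3, 4};           /* arrival order                 */
    int sjf[]  = {2, 3, 0, 4, 1};           /* sorted by burst length        */
    printf("FCFS avg wait = %.0f ms\n", avg_wait(burst, fcfs, n)); /* 28 */
    printf("SJF  avg wait = %.0f ms\n", avg_wait(burst, sjf, n));  /* 13 */

    /* RR with quantum 10: waiting time = completion - burst */
    int rem[] = {10, 29, 3, 7, 12}, fin[5], t = 0, left = n, total = 0;
    while (left > 0)
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int s = rem[i] < 10 ? rem[i] : 10;
            t += s; rem[i] -= s;
            if (rem[i] == 0) { fin[i] = t; left--; }
        }
    for (int i = 0; i < n; i++) total += fin[i] - burst[i];
    printf("RR   avg wait = %.0f ms\n", (double)total / n);        /* 23 */
    return 0;
}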
Deterministic modeling is simple and fast. It gives us exact numbers allowing
us to compare the algorithms. The main use of this is to describe scheduling
algorithms and provide examples.

2.Queuing Models :

Queueing models use the distribution of CPU and I/O bursts and the distribution of process arrival times. From these it is possible to compute the average throughput, utilization, and waiting time.

The result is a mathematical formula and statistical analysis.

The computer system is described as a network of servers. Each server has a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues.

Knowing arrival rates and service rates we can compute utilization, average
queue length, average wait time and so on. This area of study is also called
“Queuing network analysis”.

For a system in a steady state:

n = λ × W (known as Little's formula)

where:

n = average number of processes in the system

λ = average arrival rate for new processes in the queue

W = average waiting time for a process in the queue

Little's formula is particularly useful because it is valid for any scheduling algorithm and arrival distribution.

Queueing analysis is useful for comparing scheduling algorithms, but it also has limitations. For example, if we know that 7 processes arrive every second, and that there are normally 14 processes in the queue, then we can compute the average waiting time per process as W = n/λ = 14/7 = 2 seconds.

3. Simulation:

To get a more accurate evaluation of scheduling algorithms, we can use


simulation.

Simulation involves programming a model of the computer system.


Software data structures represent the major components of the system.

The simulator has a variable representing a clock; as this variable's value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler.

As the simulation executes, statistics that indicate algorithm performance are gathered and printed.

The data to drive the simulation can be generated artificially or taken from trace tapes; trace tapes are useful but expensive.

Evaluation of CPU scheduling by simulation

We can use trace tapes, which record events in the order of their occurrence. We create a trace tape by monitoring the real system and recording the sequence of actual events.

We then use this sequence to drive the simulation. Trace tapes provide an excellent way to compare two algorithms on exactly the same set of real inputs. This method can produce accurate results for its inputs.

Simulation can be expensive, often requiring hours of computer time. A more


detailed simulation provides more accurate results, but it also requires more
computer time.

In addition, trace tapes can require large amounts of storage space. Finally, the design, coding, and debugging of the simulator can be a major task.

4. Implementation :

Even a simulator is of limited accuracy. The only completely accurate way to


evaluate a scheduling algorithm is to code it, put it into the operating system
and see how it works.

This approach puts the actual algorithm in the real system for evaluation under
real operating conditions. The major difficulty is the cost of this approach.

The expense lies not only in coding the algorithm and modifying the operating system to support it, but also in the users' reaction: most users are not interested in building a better operating system, and a constantly changing operating system does not help them get their work done.

………………………………….***********………………………………..

