Course Title: Operating Systems
II B.TECH –I SEM DATA SCIENCE
Unit : II
Topic : Process and CPU Scheduling
Course Syllabus:
UNIT II
Process and CPU Scheduling - Process concepts and
scheduling, Operations on processes, Cooperating
Processes, Threads, Scheduling Criteria, Scheduling
Algorithms, Multiple Processor Scheduling.
Process Concept:
Process: A process is an instance of a program that is running in a computer.
A process is more than the program code, which is sometimes known as the text section. It also
includes the current activity, as represented by the value of the program counter and the contents of
the processor's registers.
A process generally also includes the process stack, which contains temporary data (such as
function parameters, return addresses, and local variables), and a data section, which contains
global variables.
A process may also include a heap, which is memory that is dynamically allocated during process
run time. In memory, a process is thus laid out in four sections: text, data, heap, and stack.
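To make these sections concrete, here is a minimal C sketch (the variable and function names are ours, purely for illustration) of where each part of a process lives in memory:

#include <stdlib.h>

int counter = 0;                         /* data section: global variable */

int add(int a, int b) {                  /* this compiled code is the text section;
                                            parameters and locals live on the stack */
    int sum = a + b;
    return sum;
}

int main(void) {
    int local = 5;                       /* stack: local variable */
    int *buf = malloc(64 * sizeof *buf); /* heap: allocated dynamically at run time */
    buf[0] = add(local, counter);
    free(buf);
    return 0;
}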
Process States:
During execution a process changes its state. The state of a process is defined by
the current activity of that process. Each process may be in one of the following
states:
New: This state represents a newly created process that hasn’t started running yet.
It has not been loaded into the main memory, but its process control block (PCB)
has been created, which holds important information about the process.
Ready: A process in this state is ready to run as soon as the CPU becomes available. It is
waiting for the operating system to give it a chance to execute.
Running: This state means the process is currently being executed by the CPU. Since
we’re assuming there is only one CPU, at any time, only one process can be in this state.
Blocked/Waiting: This state means the process cannot continue executing right now. It is
waiting for some event to happen, like the completion of an input/output operation (for
example, reading data from a disk).
Exit/Terminate: A process in this state has finished its execution or has been stopped by
the user for some reason. At this point, it is released by the operating system and removed
from memory.
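The legal transitions between these states can be summarized in a small C sketch (the enum and function names are ours, not from any OS API):

#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns 1 if moving from 'from' to 'to' is a legal arc in the state diagram. */
int legal(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;      /* admitted: PCB exists, loaded into memory */
    case READY:   return to == RUNNING;    /* dispatched by the scheduler              */
    case RUNNING: return to == READY       /* preempted on a timeout                   */
                      || to == WAITING     /* blocks waiting for I/O or an event       */
                      || to == TERMINATED; /* finishes or is stopped                   */
    case WAITING: return to == READY;      /* the awaited event completes              */
    default:      return 0;                /* no transition out of the terminated state */
    }
}

int main(void) {
    printf("running -> ready:   %d\n", legal(RUNNING, READY));   /* 1: preemption        */
    printf("waiting -> running: %d\n", legal(WAITING, RUNNING)); /* 0: must go via ready */
    return 0;
}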
Operations on Processes:
The processes in most systems can execute concurrently, and they may be created
and deleted dynamically. The following operations can be performed on processes:
1. Process Creation
2. Scheduling/Dispatching
3. Blocking
4. Preemption
5. Process Termination
1. Process Creation: This is the initial step of the process execution activity. Process
creation means the construction of a new process for execution. This may be
performed by the system, by a user, or by an existing process. Several events can
lead to process creation, such as the following:
• When we start the computer, the system creates several background processes.
• A user may request the creation of a new process.
• A running process may itself create a new process.
• A batch system initiates a batch job.
• A process may create other processes.
• A parent process creates child processes, which in turn create other
processes, forming a tree of processes.
• Generally, a process is identified and managed via a process identifier
(pid).
• (Figure: a tree of processes in UNIX.)
• Resource-sharing options between parent and children:
• Parent and children share all resources.
• Children share a subset of the parent's resources.
• Parent and child share no resources.
• Execution options:
• Parent and children execute concurrently.
• Parent waits until the children terminate.
• Address-space options:
• The child is a duplicate of the parent's address space.
• The child loads a new program into its address space.
• UNIX examples:
• The fork() system call creates a new process.
• The exec() system call, used after a fork(), replaces the process's memory
space with a new program.
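The fork()/exec() pattern just described can be sketched in C as follows. This is a minimal illustration (the choice of ls as the new program is only an example), not a complete treatment of error handling:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();       /* child becomes a duplicate of the parent's address space */
    if (pid < 0) {
        perror("fork");       /* process creation failed */
        exit(1);
    } else if (pid == 0) {
        /* child: replace its memory space with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec");       /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);           /* parent waits until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}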
2. Scheduling/Dispatching: The event or activity in which the state of a process is
changed from ready to running. The operating system moves the process from the
ready state into the running state. Dispatching is done by the operating system when
the CPU is free or when the process has a higher priority than the one currently running.
3. Blocking: When a process invokes an input/output system call, the operating
system puts the process into blocked mode, in which it waits for the input/output to
complete. Hence, at the request of the process itself, the operating system blocks the
process and dispatches another process to the processor; in a blocking operation,
the operating system puts the process into the 'waiting' state.
4. Preemption: When a timeout occurs, meaning the process has not finished within
its allotted time interval and the next process is ready to execute, the operating
system preempts the running process. This operation is valid only where CPU
scheduling supports preemption.
5. Process Termination: Process termination is the activity of ending a process; in
other words, it is the release of the computer resources the process held for its
execution. As with creation, several events may lead to termination. Some of them are:
• The process completes its execution and indicates to the OS that it has finished.
• The operating system itself terminates the process due to service errors.
• A hardware problem terminates the process.
Processes Scheduling:
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process according to a particular strategy. Throughout its lifetime, a process
moves between various scheduling queues, such as the ready queue and the
device queues.
Categories of Scheduling
Non-Preemptive: A process's resources cannot be taken away before the process
has finished running. Resources are switched only when the running process
finishes and transitions to a waiting state.
Preemptive: The OS can switch a process from the running state to the ready state.
This switching happens because the CPU may give priority to other processes,
replacing the currently active process with a higher-priority one.
The operating system maintains the following important process scheduling
queues:
Job queue: keeps all the processes in the system.
Ready queue: keeps all processes residing in main memory that are ready and
waiting to execute. A new process is always put in this queue.
Device queues: the processes blocked due to the unavailability of an I/O device
constitute these queues.
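A ready queue is commonly implemented as a linked list of process control blocks. The following minimal C sketch (the pcb fields and function names are ours, greatly simplified) shows the FIFO enqueue and dequeue operations such a queue needs:

#include <stdio.h>
#include <stdlib.h>

typedef struct pcb {
    int pid;                 /* process identifier                */
    struct pcb *next;        /* link to the next PCB in the queue */
} pcb;

typedef struct { pcb *head, *tail; } queue;

/* New or newly unblocked processes join at the tail. */
void enqueue(queue *q, pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* The dispatcher removes the next process from the head. */
pcb *dequeue(queue *q) {
    pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void) {
    queue ready = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);
    }
    pcb *p;
    while ((p = dequeue(&ready)) != NULL) { /* dispatched in FIFO order: P1, P2, P3 */
        printf("dispatch P%d\n", p->pid);
        free(p);
    }
    return 0;
}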
Schedulers
Schedulers are special system software that handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run.
Schedulers are of three types:
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler:
It is also called the job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from
the job queue and loads them into memory for execution, making them
available for CPU scheduling.
The long-term scheduler is used when a process changes state from new to ready.
Short Term Scheduler:
It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It handles the change
of a process from the ready state to the running state: the CPU scheduler selects
one process from among the processes that are ready to execute and allocates
the CPU to it. Short-term schedulers, also known as dispatchers, decide which
process to execute next. Short-term schedulers are faster than long-term
schedulers.
Medium Term Scheduler:
Medium-term scheduling is a part of swapping. It removes processes from
memory, which reduces the degree of multiprogramming; the medium-term
scheduler is in charge of handling the swapped-out processes.
This removal is called swapping, and the process is said to be swapped out or
rolled out. Swapping may be necessary to improve the process mix.
Threads
A thread is a flow of execution through the process code, with its own program counter,
system registers, and stack.
• Program counter: a special-purpose register within the CPU that stores the memory
address of the next instruction to be executed.
• Registers: special memory locations within the CPU used for storing and
manipulating data during program execution.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. (Figure: single-threaded and
multithreaded processes.)
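As a concrete complement, here is a minimal POSIX threads sketch (the worker function and variable names are ours, purely illustrative) of one process running two threads that share the process's global data while each keeps its own stack:

#include <stdio.h>
#include <pthread.h>

int shared = 0;                     /* global: shared by all threads of the process */

void *worker(void *arg) {
    int id = *(int *)arg;           /* 'id' lives on this thread's own stack */
    shared++;                       /* both threads see the same global
                                       (unsynchronized; fine for this tiny demo) */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1); /* spawn two threads */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);                  /* wait for both to finish */
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);         /* prints 2 */
    return 0;
}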
Types of Threads:
User-Level Threads: User-level threads are managed by a user-level thread library.
The thread library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for saving
and restoring thread contexts.
User-level threads are typically fast: creating threads, switching between threads,
and synchronizing threads needs only a procedure call.
• Kernel-Level Threads: In this case, thread management is done by the kernel; there
is no thread management code in the application area. Kernel threads are supported
directly by the operating system. They are slower than user-level threads due to the
management overhead.
Benefits of multithreaded programming:
• Responsiveness: Multithreading an interactive application may allow a program
to continue running even if part of it is blocked or performing a lengthy operation,
thereby increasing responsiveness to the user.
• Resource sharing: By default, threads share the memory and the resources of the
process to which they belong.
• Economy: Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more economical
to create and context-switch threads.
• Utilization of multiprocessor architectures: The benefits of multithreading are
greatly increased in a multiprocessor architecture, where threads may run in
parallel on different processors. A single-threaded process can run on only one CPU;
multithreading on a multi-CPU machine increases concurrency.
Multi Thread programming Models:
Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a
good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the
entire process. There are three multithreading models:
Many to Many Model
Many to One Model
One to One Model
Many to Many Model: In this model, many user-level threads are multiplexed onto a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a particular application
or a particular machine. (Figure: the many-to-many model.) In this model, developers can create as many
user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
• Many to One Model: The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space. When a thread makes a blocking system call, the entire process
is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel
on multiprocessors. User-level thread libraries implemented on operating systems that do not support
kernel threads use the many-to-one model.
• One to One Model: There is a one-to-one relationship between each user-level thread and a kernel-level
thread. This model provides more concurrency than the many-to-one model: it allows another thread to run
when a thread makes a blocking system call, and it supports multiple threads executing in parallel on
multiprocessors. The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one model.
CPU Scheduling Criteria
CPU Scheduling: Many criteria have been suggested for comparing CPU
scheduling algorithms. The criteria include the following:
1. CPU Utilization
2. Throughput
3. Turnaround Time
4. Waiting Time
5. Response Time
CPU Utilization: In general, CPU utilization can range from 0 to 100 percent. In a real system, it should range
from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput: One measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10
processes per second.
Turnaround time: The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
Response time: Response time is the time it takes to start responding, not the time it takes to output the
response. The turnaround time is generally limited by the speed of the output device.
Pre-emptive and Non Pre-emptive scheduling algorithms
S.No. | Preemptive Scheduling | Non-Preemptive Scheduling
1. | Scheduling in which the resources (CPU cycles) are allocated to a process for a limited amount of time. | Scheduling in which, once the resources (CPU cycles) have been allocated to a process, the process holds them until it completes its burst time or switches to the 'wait' state.
2. | A process can be interrupted while it is being executed. | A process cannot be interrupted until it terminates itself or its time is over.
3. | If high-priority processes arrive frequently in the ready queue, low-priority processes may starve. | If a process with a long burst time is running on the CPU, processes with shorter burst times may starve.
4. | It has overheads. | It does not have overheads.
5. | It is flexible in nature. | It is not flexible in nature.
6. | It is expensive. | It is not expensive.
7. | Examples: Round Robin scheduling, Shortest Remaining Time First scheduling. | Examples: First Come First Serve and Shortest Job First.
SJF:
Shortest Job First (SJF), or Shortest Job Next (SJN), is a scheduling algorithm
that selects the waiting process with the smallest execution time to execute next.
This scheduling method may or may not be preemptive. It significantly reduces the
average waiting time of the processes waiting to be executed.
Implementation of SJF Scheduling (a sketch in C follows these steps):
Sort all the processes according to arrival time.
Then select the process that has the minimum arrival time and minimum burst time.
After completion of that process, build a pool of processes (a ready queue) that
arrived during its execution, and select from that queue the process with the
minimum burst time.
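Those steps can be sketched in C for the non-preemptive case. As assumed input it uses the three-process example given further below (P1: 6 ms burst arriving at 0 ms, P2: 8 ms at 2 ms, P3: 3 ms at 4 ms); the array and variable names are ours:

#include <stdio.h>

#define N 3

int main(void) {
    int at[N] = {0, 2, 4};      /* arrival times of P1, P2, P3 (ms) */
    int bt[N] = {6, 8, 3};      /* burst times (ms)                 */
    int done[N] = {0};
    int ct[N], tat[N], wt[N];
    int time = 0, completed = 0;

    while (completed < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)   /* among arrived, unfinished processes... */
            if (!done[i] && at[i] <= time && (pick < 0 || bt[i] < bt[pick]))
                pick = i;             /* ...pick the one with the smallest burst */
        if (pick < 0) { time++; continue; }   /* CPU idles until the next arrival */
        time += bt[pick];             /* run the chosen process to completion     */
        ct[pick]  = time;
        tat[pick] = ct[pick] - at[pick];  /* turnaround = completion - arrival */
        wt[pick]  = tat[pick] - bt[pick]; /* waiting = turnaround - burst      */
        done[pick] = 1;
        completed++;
    }
    for (int i = 0; i < N; i++)   /* matches the worked table below: CT 6/17/9, TAT 6/15/5, WT 0/7/2 */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat[i], wt[i]);
    return 0;
}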
• Example:
Suppose that we have a set of four processes that have arrived at the same time
in the order P1, P2, P3 and P4. The burst time in milliseconds of each process is
given by the following table:
Process | CPU Burst Time in ms
P1 | 6
P2 | 10
P3 | 4
P4 | 6
With SJF, the processes execute in the order P3, P1, P4, P2 (the P1/P4 tie broken by
arrival order), giving waiting times of 0, 4, 10, and 16 ms and an average waiting
time of (0 + 4 + 10 + 16)/4 = 7.5 ms.
Example of Non-Pre-emptive Shortest Job First CPU Scheduling Algorithm
Consider the following table of burst time and arrival time for three processes P1, P2 and P3.
Process | Burst Time | Arrival Time
P1 | 6 ms | 0 ms
P2 | 8 ms | 2 ms
P3 | 3 ms | 4 ms
• Step-by-Step Execution:
• Time 0-6 (P1): P1 runs for 6 ms (P1's remaining time: 0 ms)
• Time 6-9 (P3): P3 runs for 3 ms (P3's remaining time: 0 ms)
• Time 9-17 (P2): P2 runs for 8 ms (P2's remaining time: 0 ms)
• Turnaround Time = Completion Time - Arrival Time
• Waiting Time = Turnaround Time - Burst Time
Process | Arrival Time (AT) | Burst Time (BT) | Completion Time (CT) | Turnaround Time (TAT) | Waiting Time (WT)
P1 | 0 | 6 | 6 | 6-0 = 6 | 6-6 = 0
P2 | 2 | 8 | 17 | 17-2 = 15 | 15-8 = 7
P3 | 4 | 3 | 9 | 9-4 = 5 | 5-3 = 2
• Average Turnaround Time = (6 + 15 + 5)/3 = 26/3 ≈ 8.67 ms
• Average Waiting Time = (0 + 7 + 2)/3 = 9/3 = 3 ms
Multiple-Processor Scheduling:
Multiple-processor scheduling is used for load balancing, to reduce the load on any single CPU.
In multiple-processor scheduling, multiple CPUs are available, and hence load sharing becomes possible.
It is more complex than scheduling on a single-processor system.
If the processors in a multiprocessor system are homogeneous, any processor can be used to run any process in the queue.
There are two approaches:
1. Asymmetric Multiprocessing
2. Symmetric Multiprocessing
Asymmetric Multiprocessing (AMP):
Asymmetric multiprocessing (AMP) is a multiprocessing structure wherein one processor, called the master processor,
controls the other subordinate processors, also known as slave processors. In this configuration, all OS activities and
task scheduling are handled by the master processor of the system.
Advantages:
Simplified Control: A single master processor handles the task of scheduling, ensuring coordination of tasks.
Reduced Resource Conflicts: Since only the master processor communicates with the operating system, conflicts
between the processors are low.
Specialized Processors: Some processors can be set aside to handle certain operations, resulting in better performance
for those select tasks.
Disadvantages:
Single Point of Failure: The operation of the system is largely contingent on the master processor; if the master fails,
the entire system can become non-operational.
Limited Scalability: Adding more processors may not bring the desired leaps in performance, because the master
processor becomes the bottleneck of processing.
Less Efficient Resource Utilization: Subordinate processors may remain idle if the master processor cannot assign
tasks to them quickly enough.
Symmetric Multiprocessing (SMP):
Symmetric multiprocessing (SMP) is a multiprocessor system in which multiple processors have equal access to the
system and memory resources. Every processor schedules itself, doing its own work and communicating with the
operating system.
Advantages:
Better Resource Utilization: Each processor can work on a particular task and share the workload, enhancing
performance.
Scalability: Additional processors can be added to SMP systems, so they have good potential for scaling.
Fault Tolerance: Because no single processor controls the entire system, the failure of one processor does not result
in failure of the entire system.
Disadvantages:
Complex Scheduling: Distributing tasks among multiple processors is more challenging than on a single processor
and requires more elaborate scheduling logic.
Resource Contention: Several processors may compete for the same memory or I/O devices and thus experience
contention.
Increased Hardware Costs: SMP systems require more elaborate hardware to support shared access by the
processors to memory and other system resources.