MIT School of Computing
Department of Computer Science & Engineering
Third Year Engineering
21BTCS504-Operating System
Class - T.Y. (SEM-I)
Unit - II
PROCESS MANAGEMENT
AY 2024-2025 SEM-I
Unit-II Syllabus/Contents
• Process Concept,
• Operations on Processes,
• Process Scheduling,
• Inter-process Communication, Examples of IPC Systems
• Multithreading Models, Threading Issues,
• CPU Scheduling, Basic Concepts, Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling, Thread Scheduling
Process
• Program under execution
• An instance of a program
• Schedulable/Dispatch unit (CPU)
• Unit of execution (CPU)
Process representation
(Figure: a process in memory, showing the text, data, heap, and stack sections.)
• A process is also known as a job or task. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers.
• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables).
• A data section, which contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during process run time.
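As a hedged illustration (not from the slides), this minimal C program shows where each kind of data lives:

#include <stdio.h>
#include <stdlib.h>

int counter = 0;                         /* data section: global variable */

void demo(int param) {                   /* param and local live on the stack */
    int local = param + 1;
    int *dynamic = malloc(sizeof(int));  /* heap: allocated during run time */
    *dynamic = local;
    printf("data: %d, stack: %d, heap: %d\n", counter, local, *dynamic);
    free(dynamic);
}

int main(void) {                         /* the code itself is the text section */
    demo(41);
    return 0;
}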
Process operations
• Create (space allocation)
• Schedule/run
• Wait/Block
• Suspend/Resume
• Terminate (space deallocation)
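A minimal POSIX sketch (illustrative, assuming a Unix-like system) exercising the create, wait, and terminate operations:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create: the OS allocates space for a child */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                /* child: scheduled and run by the OS */
        printf("child %d running\n", (int)getpid());
        exit(0);                   /* terminate: the child's space is deallocated */
    }
    waitpid(pid, NULL, 0);         /* parent waits/blocks until the child exits */
    printf("parent %d resumed\n", (int)getpid());
    return 0;
}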
Process Attributes in Process Control Block (PCB)
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to
be executed for this process.
• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code information.
Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such items
as the value of the base and limit registers and the page tables, or the segment
tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so
on.
• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may
vary from process to process.
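A minimal C sketch of a PCB with illustrative field names (real kernels keep far richer structures, e.g. Linux's task_struct):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;             /* process identifier */
    proc_state_t  state;           /* process state */
    unsigned long program_counter; /* address of the next instruction */
    unsigned long registers[16];   /* saved CPU registers (count is arch-specific) */
    int           priority;        /* CPU-scheduling information */
    struct pcb   *next;            /* link for a scheduling queue */
    unsigned long base, limit;     /* memory-management information */
    unsigned long cpu_time_used;   /* accounting information */
    int           open_files[16];  /* I/O status: open file descriptors */
};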
More about PCBs
• The OS maintains one PCB for each process
• The contents of a PCB are collectively called the context of a process
OS: PCB1, PCB2, PCB3, …, PCBn
Process 1 (stack, heap, code, data sections)
Process 2 (stack, heap, code, data sections)
Process 3 (stack, heap, code, data sections)
…
Process n (stack, heap, code, data sections)
Context Switching in OS
• The CPU uses its registers to hold the execution state of the current process
• When the process is stopped partway, these values have to be saved in its PCB so that it can resume from that point onwards
• Context switching means saving the running process’s context (CPU register values) from the CPU into its PCB, and loading another process’s context into the CPU registers
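A toy C sketch of the save/restore idea (all names here are hypothetical; real context switches are performed in architecture-specific assembly inside the kernel):

#include <stdio.h>
#include <string.h>

struct context { unsigned long pc; unsigned long regs[4]; };

struct pcb { int pid; struct context ctx; };

static struct context cpu;   /* stands in for the real CPU registers */

void context_switch(struct pcb *current, struct pcb *next) {
    memcpy(&current->ctx, &cpu, sizeof cpu); /* save running process's state to its PCB */
    memcpy(&cpu, &next->ctx, sizeof cpu);    /* load the next process's saved state */
}

int main(void) {
    struct pcb a = { 1, { 100, {0} } }, b = { 2, { 200, {0} } };
    memcpy(&cpu, &a.ctx, sizeof cpu);        /* process A is running */
    context_switch(&a, &b);                  /* switch A -> B */
    printf("CPU now at pc=%lu (process %d)\n", cpu.pc, b.pid);
    return 0;
}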
Thread in Operating System
• A thread is a single sequential stream of execution within a process. Threads are also called lightweight processes, as they possess some of the properties of processes.
• Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads.
• However, threads can run truly in parallel only if there is more than one CPU; otherwise, two threads have to context switch to share that single CPU.
What is a Thread in Operating Systems?
• In a process, a thread refers to a single sequential activity being executed. Such an activity is also known as a thread of execution or thread of control. Any operating system process can execute threads; in other words, a process can have multiple threads.
Why Do We Need Threads?
• Threads run in parallel, improving application performance. Each such thread has its own CPU state and stack, but the threads share the address space of the process and its environment.
• Threads can share common data, so they do not need to use inter-process communication. Like processes, threads also have states such as ready, executing, and blocked.
• Priority can be assigned to threads just as to processes, and the highest-priority thread is scheduled first.
• Each thread has its own Thread Control Block (TCB). Like a process, a thread undergoes context switches, and the register contents are saved in its TCB. As threads share the same address space and resources, synchronization is also required for the various activities of the threads.
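A small POSIX threads example (illustrative; compile with -pthread) showing two threads sharing one variable in the common address space, with a mutex for the required synchronization:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                      /* all threads see the same data section */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* synchronization for shared data */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* 200000: both threads updated one variable */
    return 0;
}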
Types of Thread in Operating System
Threads are of two types, described below:
• User Level Thread
• Kernel Level Thread
User Level Threads
• A user-level thread is a thread that is created without system calls; the kernel plays no part in managing user-level threads, and they can be implemented easily in user space. To the kernel, a process consisting only of user-level threads appears as a single-threaded process and is managed as one kernel-level entity.
• Advantages of User-Level Threads
• Implementation of a user-level thread is easier than of a kernel-level thread.
• Context-switch time is less for user-level threads.
• A user-level thread is more efficient than a kernel-level thread.
• Because it holds only a program counter, register set, and stack space, it has a simple representation.
• Disadvantages of User-Level Threads
• There is a lack of coordination between the threads and the kernel.
• In case of a page fault, the whole process can be blocked.
Kernel Level Threads
• A kernel-level thread is a thread that the operating system recognizes and manages directly. The kernel keeps its own thread table to track all threads in the system, and the operating system kernel handles thread management. Kernel threads have somewhat longer context-switch times.
• Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications whose threads block frequently are handled well by kernel-level threads.
• Whenever a process requires more processing time, kernel-level threading allows the kernel to give it more time.
• Disadvantages of Kernel-Level Threads
• A kernel-level thread is slower than a user-level thread.
• Implementation of this type of thread is a little more complex than that of a user-level thread.
Preemptive Process States
(Figure: preemptive process state transition diagram.)
Non-preemptive process states
(Figure: non-preemptive process state transition diagram.)
Explanation
• New: newly created processes, still held on secondary storage (SSD, HDD)
• Ready: processes loaded into main memory (MM), waiting to run on the CPU
• Running: currently executing on the CPU (in MM)
• Blocked: waiting for I/O or some event (in MM)
• Terminated: completed processes (removed from the MM)
Note
• When a new process is created, the OS builds its PCB
• In non-preemptive scheduling, no factor can force a running process off the CPU; it leaves only voluntarily, by terminating or by moving to the waiting state
• The PCB exists in the running, ready, and waiting states; in these three states the process resides in MM
CPU scheduling
• Make a selection of ready processes to run on CPU.
• Needed for better performance
• Metric for performance may vary
• Multiple policies according to metric
Scheduling Queue
• Job queue: to keep all processes in new state
• Ready queue: to keep all processes in ready state
• Device queue: to keep the processes in blocked state
Scheduling Objectives
• Maximize throughput.
• Maximize number of users receiving acceptable
response times.
• Be predictable.
• Balance resource use.
• Enforce Priorities.
• Give preference to processes holding key resources
Types of scheduler
• Long term scheduler: new to ready
• Short term scheduler: ready to running
• Medium term scheduler: performs swapping; swapping leads to suspension
CPU Scheduling
• Types: Preemptive and non preemptive
• Initial assumption: no process has any I/O requirement
• Some terminologies:
• Arrival Time (AT): process arrival time (admitted by the LTS)
• Burst/Service time (BT): the CPU time required to execute the job, in milliseconds
• Completion time (CT): time when the process completes execution
• Turn around time (TAT): time from arrival until completion (TAT = CT - AT)
• Waiting time (WT): time spent waiting for the CPU in the ready queue (WT = TAT - BT)
• Response time: time from arrival to the first response on the CPU
• Scheduling length: max(CT) - min(AT)
• Throughput: number of processes completed per unit time
Scheduling Criteria
• CPU Utilization
• Throughput
• Turnaround time
• Waiting Time
• Response Time
CPU SCHEDULING ALGORITHMS:
• FCFS (First come first serve) – Non
preemptive
• SJF (Shortest Job first) – Non preemptive
• Priority Scheduling
• Round Robin - Preemptive
• Others
FCFS (First Come First Serve)
• The process that requests the CPU first holds the CPU first.
• When a process requests the CPU, it is loaded into the ready queue.
• Mode: Non preemptive
• Tie breaker: smallest process ID first among processes that tie (between P1 and P2, P1 will execute first)
FCFS Example no. 1
Process Id   Arrival time   Burst time
P1           3              4
P2           5              3
P3           0              2
P4           5              1
P5           4              3
Calculate the average waiting time and the average turnaround time.
Solution:

Gantt Chart:

| P3 | idle | P1 | P5 | P2 | P4 |
0    2      3    7     10    13    14

Here, the idle slot (time 2 to 3) represents the idle time of the CPU, when no process had yet arrived.
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1           7           7 - 3 = 4          4 - 4 = 0
P2           13          13 - 5 = 8         8 - 3 = 5
P3           2           2 - 0 = 2          2 - 2 = 0
P4           14          14 - 5 = 9         9 - 1 = 8
P5           10          10 - 4 = 6         6 - 3 = 3
• Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 units
• Average Waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 units
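A short C sketch (illustrative) that reproduces these numbers by replaying the FCFS order from the Gantt chart:

#include <stdio.h>

int main(void) {
    int at[] = {3, 5, 0, 5, 4};           /* arrival times of P1..P5 */
    int bt[] = {4, 3, 2, 1, 3};           /* burst times of P1..P5 */
    int order[] = {2, 0, 4, 1, 3};        /* FCFS order: P3, P1, P5, P2, P4 */
    int n = 5, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        int p = order[i];
        if (time < at[p]) time = at[p];   /* CPU idles until the process arrives */
        time += bt[p];                    /* run to completion (non-preemptive) */
        int tat = time - at[p];           /* TAT = CT - AT */
        int wt  = tat - bt[p];            /* WT = TAT - BT */
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", p + 1, time, tat, wt);
    }
    printf("Avg TAT=%.1f  Avg WT=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}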
Pros and Cons
• Advantages
• Very easy to implement
• No complex logic
• No starvation
• Disadvantages
• No option of preemption
• Convoy effect
Convoy effect
• Only FCFS suffers from this effect
• If a slow process (one with a large service time) is scheduled first, it makes the entire system slow
SJF
• Criteria: min BT first
• Mode: Non preemptive
• Tie breaker: FCFS
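A minimal C helper (a sketch; in practice burst times must be estimated, as noted below) that selects the arrived process with the smallest burst time, breaking ties by arrival order. Swapping the comparison key from burst time to priority gives non-preemptive priority scheduling.

#include <stdio.h>

/* Pick the next process under non-preemptive SJF.
 * at/bt: arrival and burst times; done: completion flags; n: process count.
 * Returns the index to run at 'time', or -1 if none has arrived yet. */
int pick_sjf(const int at[], const int bt[], const int done[], int n, int time) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (done[i] || at[i] > time) continue;        /* skip finished/not-arrived */
        if (best == -1 || bt[i] < bt[best] ||
            (bt[i] == bt[best] && at[i] < at[best]))  /* tie-break: FCFS */
            best = i;
    }
    return best;
}

int main(void) {
    int at[] = {0, 1, 2}, bt[] = {7, 4, 1}, done[] = {0, 0, 0};
    printf("next at t=2: P%d\n", pick_sjf(at, bt, done, 3, 2) + 1); /* P3 */
    return 0;
}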
More about SJF
• Advantages:
• Minimum waiting time among non-preemptive scheduling algorithms
• Better throughput in continuous execution
• No convoy effect
• Disadvantages
• Not practical to implement exactly, because burst time cannot be known in advance
• No option of preemption
• Longer processes may suffer from starvation
• No fairness
Priority Scheduling
• Criteria: highest priority first
• Mode: Non preemptive

(Worked example figure omitted; the formulas and average below refer to it.)
1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turn Around Time - Burst Time
Avg Waiting Time = (0 + 11 + 2 + 7 + 12 + 2 + 18) / 7 = 52/7 units
Preemptive Priority Scheduling
(In the preemptive variant, a newly arrived process with a higher priority preempts the currently running process; example figure omitted.)
MIT School of Computing
Department of Computer Science & Engineering
Round Robin Scheduling
• Criteria: arrival time plus a quantum (Q)/time slice
• Mode: Preemptive
• Quantum time/time slice: the maximum amount of time for which a process runs on the CPU before preemption
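A compact C sketch (illustrative) of Round Robin with a FIFO ready queue; for simplicity it assumes all processes arrive at time 0:

#include <stdio.h>

#define N 3
#define Q 2   /* time quantum */

int main(void) {
    int bt[N] = {5, 3, 1};    /* remaining burst times of P1..P3 */
    int queue[64], head = 0, tail = 0, time = 0;

    for (int i = 0; i < N; i++) queue[tail++] = i;  /* all arrive at time 0 */

    while (head < tail) {
        int p = queue[head++];                 /* dequeue the next ready process */
        int run = bt[p] < Q ? bt[p] : Q;       /* run a quantum, or to completion */
        printf("t=%2d: P%d runs %d unit(s)\n", time, p + 1, run);
        time += run;
        bt[p] -= run;
        if (bt[p] > 0) queue[tail++] = p;      /* preempt: re-enqueue at the tail */
        else printf("t=%2d: P%d completes\n", time, p + 1);
    }
    return 0;
}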
MIT School of Computing
Department of Computer Science & Engineering
Multilevel Queue
• Ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Each process stays permanently in a given queue
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must be done between the queues:
• Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Feedback Queue
• A process can move between the various queues;
aging can be implemented this way
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will
enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new job enters queue Q0 which is
served FCFS
• When it gains CPU, job receives 8
milliseconds
• If it does not finish in 8
milliseconds, job is moved to
queue Q1
• At Q1 job is again served FCFS and
receives 16 additional milliseconds
• If it still does not complete, it is
preempted and moved to queue Q2
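A toy C model (hypothetical structure) of the demotion rule in this three-queue example:

#include <stdio.h>

/* Queue levels for the example: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS). */
enum { Q0, Q1, Q2 };
static const int quantum[] = {8, 16, 0};   /* 0 means FCFS: run to completion */

/* Run a job with 'remaining' ms of work at 'level'; return its new level,
 * or -1 if it finished. A toy model of the demotion policy only. */
int run_once(int *remaining, int level) {
    int slice = quantum[level] ? quantum[level] : *remaining;
    int used = *remaining < slice ? *remaining : slice;
    *remaining -= used;
    if (*remaining == 0) return -1;        /* job completed within its slice */
    return level < Q2 ? level + 1 : Q2;    /* unfinished: demote one level */
}

int main(void) {
    int work = 30, level = Q0;             /* a 30 ms job entering Q0 */
    while (level != -1) {
        printf("job at Q%d, %d ms left\n", level, work);
        level = run_once(&work, level);
    }
    return 0;
}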
Thread Scheduling
• Distinction between user-level and kernel-level threads
• When threads are supported, threads are scheduled, not processes
• In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
• Known as process-contention scope (PCS), since the scheduling competition is within the process
• Typically done via a priority set by the programmer
• A kernel thread scheduled onto an available CPU uses system-contention scope (SCS) – competition among all threads in the system
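Where the POSIX thread API supports it, the contention scope can be requested through a thread attribute, as in this sketch (PTHREAD_SCOPE_SYSTEM selects SCS; PTHREAD_SCOPE_PROCESS, where available, selects PCS):

#include <pthread.h>
#include <stdio.h>

void *task(void *arg) {
    puts("thread running");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Request system contention scope: compete with all threads system-wide. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "requested scope not supported on this system\n");
    pthread_create(&tid, &attr, task, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}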
Multiple-Processor Scheduling
• CPU scheduling more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Asymmetric multiprocessing – only one processor accesses the
system data structures, alleviating the need for data sharing
• Symmetric multiprocessing (SMP) – each processor is self-
scheduling, all processes in common ready queue, or each has
its own private queue of ready processes
• Currently, most common
• Processor affinity – process has affinity for processor on which
it is currently running
• soft affinity
• hard affinity
• Variations including processor sets
Multiple-Processor Scheduling – Load Balancing
• If SMP, need to keep all CPUs loaded for efficiency
• Load balancing attempts to keep workload evenly distributed
• Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs
• Pull migration – an idle processor pulls a waiting task from a busy processor
Multicore Processors
• Recent trend to place multiple processor cores on
same physical chip
• Faster and consumes less power
• Multiple threads per core also growing
• Takes advantage of memory stalls: the core makes progress on another thread while the memory retrieval happens
Examples of inter-process communication systems include the following:
• POSIX, which uses the shared-memory technique
• Windows XP, which uses the message-passing technique
• Mach, which uses the message-passing technique
• Pipes
• Client-server communication
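A minimal C example (illustrative) of one such mechanism, an anonymous pipe between parent and child:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd);                                  /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                         /* child: writer */
        close(fd[0]);
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                              /* parent: reader */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}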
Advantages
• It permits one application to control another application
• It allows different processes to use semaphores, shared-memory segments, and other techniques to share data and memory
• It helps in transferring messages efficiently between processes
• By sending messages, it avoids the synchronization and blocking issues of named pipes
Disadvantages
• As with pipes, every data block has an upper limit on its length.
• The total data length of all blocks in a queue is likewise subject to a system-wide upper limit.
• All processes or programs that use the shared-memory model must check that they are not writing to the same memory location at the same time.
• The shared-memory model may create issues, such as synchronization and protection of memory, that need to be addressed.
• Compared with direct function calls, IPC is slow.