Operating System: Noor Agha "Wafa"

MINISTRY OF HIGHER EDUCATION

PAKTIA UNIVERSITY
FACULTY OF COMPUTER SCIENCE
DEPARTMENT OF INFORMATION SYSTEM & NETWORK ENGINEERING

Operating System
By: Noor Agha
Outlines

I (Software)
II (Operating System)
III (Services & functions of OS)
IV (Process in Operating System)
V (Threads in Operating System)
VI (Process synchronization)
VII (Deadlock in operating system)
VIII (Memory management in OS)
IX (Virtual memory)
X (I/O management)
XI (Secondary Storage management)
XII (File system)
XIII (Security)
Software

Chapter-I   By: Noor Agha
What is a software?

Any idea?
Topics in the chapter

Introduction to software
Classification of software
 On the basis of purpose/application
 On the basis of platform
 On the basis of deployment
 On the basis of license
 On the basis of development model
 On the basis of size
 On the basis of user interface
 On the basis of copyright
Conclusion
What is software?
"Software is a set of instructions that allows users to perform a well-defined function or some specified task."

Software is responsible for directing all computer-related devices and instructing them regarding what task is to be performed and how.

Software is basically a set of instructions or commands that tells a computer what to do.

Software is a computer program that provides a set of instructions to execute a user's commands and tell the computer what to do.

A computer without software is nothing and can't do anything; it is software that instructs a computer what to do.

Therefore, software programmers write programs using human-readable languages like Java, Python, and C# instead of directly using binary code. These programs are then converted into a form that computers can understand and execute.
Classification of software
Software can be classified based on various criteria, as follows:

Purpose: Software can be classified as system software (e.g. operating systems, device drivers), application software (e.g. word processors, games), or utility software that helps the computer run smoothly (e.g. antivirus, data compressors, etc.).

Platform: Software can be classified as native software (designed for a specific operating system) or cross-platform software (designed to run on multiple operating systems).

Deployment: Software can be classified as installed software (installed on the user's device) or cloud-based software (hosted on remote servers and accessed via the internet).

License: Software can be classified as proprietary software (owned by a single entity) or open-source software (available for free with the source code accessible to the public).
Classification of software
Size: Software can be classified as small-scale software (designed for a single user or small group) or enterprise software (designed for large organizations).

User Interface: Software can be classified as Graphical User Interface (GUI) software or Command-Line Interface (CLI) software.

Copyright: Based on copyright, software is classified as follows:
1. Commercial software: paid and copyrighted, e.g. Microsoft Office
2. Shareware software: free trial first, then paid
3. Freeware software: free but copyrighted, e.g. Google Chrome
4. Public domain software: free to use
Classification of software on the basis of purpose/application

System software: System software is software that directly operates the computer hardware and provides basic functionality to the users as well as to other software so they can operate smoothly.

System software basically controls a computer's internal functioning and also controls hardware devices such as monitors, printers, and storage devices. It acts as an interface between hardware and user applications.

It helps them communicate with each other: hardware understands machine language (i.e. 1s and 0s), whereas user applications work in human-readable languages like English, Hindi, or German, so system software converts the human-readable language into machine language and vice versa.

Various types of system software are: operating systems, device drivers, and language processors (for conversion purposes).
Operating system

Any idea?
Operating system

The operating system is the most important software that runs on a computer.

It is the main program of a computer system. When the computer system is turned ON, it is the first software that loads into the computer's memory.

It is like a bridge between the computer, the user, and other software.

It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of drivers, and provides an interface to the user, which helps the user interact with the computer system.

It also allows you to communicate with the computer without knowing how to speak the computer's language.

It enables communication between user and computer.

It also provides various services to other computer software.


What did you understand about:

Software?
Classification of software?
Classification on the basis of application/purpose?
System software?
Application software?
Utility software?
Operating system?
Conclusion
Software is nothing but a set of instructions that tells a computer what to do.

There are many criteria based on which we can classify software, but based on application/purpose, software is of three types: system software, application software, and utility software.

System software: System software is software that directly operates the computer hardware and provides basic functionality to the users as well as to other software so they can operate smoothly.

Operating system: The operating system is the most important software that runs on a computer. It is the main program of a computer system. When the computer system is turned ON, it is the first software that loads into the computer's memory. It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of drivers, and provides an interface to the user, which helps the user interact with the computer system.
END OF THE SECTION
Operating System

Chapter-II   By: Noor Agha
Topics in the chapter

Introduction to OS
History of operating system
Need for operating system
Components of operating system
Types of operating systems
Structure of operating system
Conclusion
What is an operating system?
The operating system is the most important software that runs on a computer.

It is the main program of a computer system. When the computer system is turned ON, it is the first software that loads into the computer's memory.

It is like a bridge between the computer, the user, and other software.

It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of drivers, and provides an interface to the user, which helps the user interact with the computer system.

It also allows you to communicate with the computer without knowing how to speak the computer's language.

It enables communication between user and computer.

The interface between computer hardware and the user is known as the operating system.
History of operating system

The history of the operating system has five generations. From the days
when computers were running manually without any operating system, to
today's smart, cloud-connected, and AI-integrated platforms. Each
generation brought new innovations that changed the way we interact with
technology. Let’s take a quick journey through this fascinating evolution!
History of operating system
Let us understand each of them in detail.

First Generation (1945-1955)


 No Operating System
 Programs executed directly by hardware
 Users interacted manually (via switches, plugboards)
 No scheduling
 no resource management
 Example System: ENIAC
 Programming: Machine/Assembly language
History of operating system
How did it work?

 Computers were very big and used vacuum tubes.
 People had to set up wires and switches for each task.
 Everything was slow and difficult to change.
 Example: ENIAC, one of the first electronic computers.
History of operating system
Plugboards
History of operating system
How does a plugboards work?
History of operating system
Let us understand each of them in detail.

Second Generation (1955-1965)


🔹 Batch Processing Systems
 Jobs grouped and run one after another
 OS began to emerge for automation
 No interaction during execution
 OS Examples: FMS (Fortran Monitor System) and IBSYS (IBM Batch
OS)
History of operating system
Let us understand each of them in detail.

Third Generation (1965-1980)

Multiprogramming and Time-Sharing OS


 OS allowed multiple programs to reside in memory
 Time-sharing: multiple users interact with the computer simultaneously
 Better CPU utilization
 OS Examples:
 MULTICS (Multiplexed Information and Computing Service)
 UNIX (developed at AT&T Bell Labs)
History of operating system
Let us understand each of them in detail.

Fourth Generation (1980-2000)

🔹 Personal Computer Operating Systems


 OS became user-friendly with Graphical User Interfaces (GUI)
 Used in homes, schools, and offices
 Introduced plug-and-play, file systems, multitasking
 OS Examples:
 MS-DOS, Windows 3.1, Windows 95/98
 Mac OS
 Linux (open-source alternative)
History of operating system
Let us understand each of them in detail.

Fifth Generation OS (2000–Present)


🔹 Mobile, Cloud, and Intelligent OS
 OS support for smartphones, tablets, IoT
 Integration with cloud services
 Real-time OS for mission-critical applications
 AI/Voice interface (Siri, Google Assistant)
 OS Examples:
 Android, iOS
 Windows 10/11
 ChromeOS
History of operating system
Let us understand each of them in detail.
History of operating system
Conclusion
 Operating Systems have evolved from having no interface at all to
becoming smart and user-friendly platforms that we use every day — in
phones, laptops, and even smart TVs.
 The core goal has always remained the same:
To make computers efficient, secure, and easy to use for people and
programs.
 The future of OS is even more exciting:
 Quantum OS will work with quantum computers to solve complex problems
faster.
 AI-enhanced OS will be able to learn from user behavior and optimize
system performance.
 Cloud-native OS will run smoothly on internet-based systems, supporting
remote work, apps, and services.
 From simple switches to smart systems, the OS journey continues!
Need for Operating system

OS as a platform for application programs
Managing input-output units
Multitasking
Controlling memory
Enabling interaction between user and hardware
Providing security

In short, we can say that without an OS, a computer is not of use.
Components of an operating system
An operating system has 3 components:

Kernel

Shell

File system
Components of an operating system
Kernel
Components of an operating system
Kernel
A kernel is basically the core and the heart of an OS (operating system).

It manages the operations of the hardware and the computer.

A kernel basically acts as a bridge between the user and the various resources offered by a system.

It is the core of a computer's operating system and generally has complete control over everything in the system.

The kernel is the central component of an operating system that manages the operations of the computer and its hardware.

The kernel loads first into memory when an operating system is loaded and remains in memory until the operating system is shut down.
Components of an operating system
Kernel
Functions of the kernel:

To establish communication between user-level applications and hardware.
To decide the state of incoming processes.
To control disk management.
To control memory management.
To control task management.

Components of an operating system
Shell
The shell is the outermost layer of the operating system.
A shell is a program that provides an interface for the user to use operating system services.
In computing, a shell is a computer program that exposes an operating system's services to a human user or other programs.
The shell accepts human-readable commands from users and converts them into something the kernel can understand.
It is a command language interpreter that executes commands read from input devices such as keyboards or from files. The shell starts when the user logs in or opens a terminal.
Components of an operating system
Shell

Shell is broadly classified into two categories:

 Command Line Shell


 Graphical shell
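As a concrete illustration of the shell as a command interpreter, a program can hand a human-readable command line to the system shell, which interprets it and asks the kernel to run it. This is a minimal sketch using Python's standard `subprocess` module (not part of the slides; assumes a POSIX-style shell such as /bin/sh):

```python
import subprocess

# Pass a human-readable command string to the system shell.
# With shell=True the string is interpreted by the command-line shell
# (e.g. /bin/sh on Linux), which then asks the kernel to execute it.
result = subprocess.run("echo hello from the shell",
                        shell=True, capture_output=True, text=True)

print(result.stdout.strip())   # the shell ran the command for us
print(result.returncode)       # 0 means the shell reported success
```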
Components of an operating system
File System
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device.
Some common types of file systems include:
 FAT (File Allocation Table): An older file system used by older versions of Windows and other
operating systems.

 NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.

 ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.

 HFS (Hierarchical File System): A file system used by macOS.

 APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.
Types of Operating system
Various types of operating system are as following:
Types of Operating system
Batch Operating System
Batch operating systems were one of the earliest types of operating systems
designed to manage tasks (or "jobs") in a sequential, automated manner.
In a batch processing environment, users submit a series of tasks, which are grouped into
batches and processed without interaction.

Job Collection:
Users would prepare their tasks or programs (jobs) offline, often on punch cards, paper
tapes, or later on files.

Jobs were submitted to the system in a batch, where multiple jobs could be grouped together
for processing.
Types of Operating system
Paper tape
Types of Operating system
Batch Operating System

Job Scheduling:
The batch OS would collect all the jobs and schedule them for execution in the order they
were received .
The system would manage a queue of jobs to be executed sequentially, ensuring that they
didn’t interfere with each other.

No User Interaction:
Once a job was submitted, the user would not interact with the system while it was running.
The OS would process the job to completion.
The jobs ran in a "batch," with no need for real-time input from the user
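The collect-then-run behaviour described above can be sketched with a simple FIFO queue (illustrative Python with made-up job names, not from the slides):

```python
from collections import deque

# Jobs are prepared offline and submitted as a batch (FIFO order).
batch = deque(["payroll", "inventory", "report"])

completed = []
while batch:                  # the batch monitor picks the next job...
    job = batch.popleft()
    completed.append(job)     # ...and runs it to completion, no user interaction

print(completed)   # jobs finish strictly in submission order
```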
Types of Operating system
Batch Operating System
Types of Operating system
Multiprogramming Operating System

Multiprogramming is an extension of batch processing in which the CPU is always kept busy.

Each process needs two types of system time: CPU time and I/O time.

Unlike a batch OS, in a multiprogramming environment, when a process does its I/O, the CPU can start executing other processes. Therefore, multiprogramming improves the efficiency of the system.

A multiprogramming operating system can be simply illustrated as: more than one program is present in main memory, and any one of them can be kept in execution.
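The efficiency gain from overlapping I/O with computation can be sketched with simple, made-up burst times (illustrative Python, not from the slides):

```python
# Two processes, times in arbitrary units (made-up numbers for illustration).
a_cpu, a_io = 4, 6      # process A: CPU burst, then an I/O burst
b_cpu = 5               # process B: CPU burst only

# Plain batch processing: B cannot start until A (CPU + I/O) finishes.
batch_total = a_cpu + a_io + b_cpu

# Multiprogramming: while A waits for its I/O, the CPU runs B.
multi_total = max(a_cpu + a_io, a_cpu + b_cpu)

print(batch_total)   # 15
print(multi_total)   # 10 -- the CPU was kept busy during A's I/O
```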
Types of Operating system
Multiprogramming Operating System
Types of Operating system
Multiprocessing Operating System

In Multiprocessing, Parallel computing is achieved. There are more than one


processors present in the system which can execute more than one process at the
same time. This will increase the throughput of the system.
Types of Operating system
Multiprocessing Operating System
Types of Operating system
Multiprocessing Operating System Advantages

Increased reliability: Processing tasks can be distributed among several processors. This increases reliability: if one processor fails, its task can be given to another processor for completion.

Increased throughput: With several processors, more work can be done in less time.
Types of Operating system
Multiprocessing Operating System Disadvantages

Multiprocessing operating system is more complex and sophisticated as it takes care


of multiple CPUs simultaneously.
Types of Operating system
Multitasking Operating System

The multitasking operating system is a logical extension of a multiprogramming system that enables multiple programs to run simultaneously. It allows a user to perform more than one computer task at the same time.

A multitasking operating system is simply a multiprogramming operating system with the facility of a round-robin scheduling algorithm. It can run multiple programs simultaneously.
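A minimal round-robin sketch in Python (illustrative only; the quantum and burst lengths are made up):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Run (name, remaining_time) jobs with a fixed time slice."""
    ready = deque(bursts)
    order = []                        # which job held the CPU in each slice
    while ready:
        name, remaining = ready.popleft()
        order.append(name)
        remaining -= quantum          # job uses (up to) one quantum of CPU time
        if remaining > 0:
            ready.append((name, remaining))   # not done: back of the ready queue
    return order

# Three jobs needing 3, 5 and 2 time units, with a quantum of 2.
print(round_robin([("A", 3), ("B", 5), ("C", 2)], quantum=2))
```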
Types of Operating system
Multitasking Operating System
Types of Operating system
Multitasking Operating System

It is of two types:
Types of Operating system
Network Operating System

An Operating system, which includes software and associated protocols to


communicate with other computers via a network conveniently and cost-
effectively, is called Network Operating System.

These systems run on a server and provide the capability to manage data,
users, groups, security, applications, and other networking functions.

These types of operating systems allow shared access to files, printers,


security, applications, and other networking functions over a small private
network.
Types of Operating system
Network Operating System
Types of Operating system
Real-Time Operating System

In real-time systems, each job carries a certain deadline within which it is supposed to be completed; otherwise there will be a huge loss, or even if the result is produced, it will be completely useless.

A traffic light controller is a good example of a real-time operating system.


Types of Operating system
Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent fashion to several programs simultaneously. This helps provide a large number of users direct access to the main computer.

In time-sharing, the CPU is switched among multiple programs given by different users on a scheduled basis.
Types of Operating system
Time-Sharing Operating System
Types of Operating system
Distributed Operating System

A Distributed Operating System refers to a model in which applications run


on multiple interconnected computers, offering enhanced communication
and integration capabilities compared to a network operating system.

The Distributed Operating system is not installed on a single machine, it is


divided into parts, and these parts are loaded on different machines.

Distributed Operating systems are much more complex, large, and


sophisticated than Network operating systems

In a Distributed OS, multiple CPUs are utilized, but for end-users, it appears
as a typical centralized operating system.
Types of Operating system
Distributed Operating System
Structure of operating system
The approach of interconnecting and integrating multiple operating system components into the kernel can be described as the operating system structure.

Since the operating system is such a complex structure, it should be created with utmost care so it can be used and modified easily.

Just as we break down a big problem into smaller, easier-to-solve subproblems, an easy way to do this is to create the operating system in parts. Each of these parts should be well defined, with clear inputs, outputs, and functions.

Various sorts of structures are used to implement operating systems:

Simple Structure

Monolithic structure

Layered structure

Micro-kernel structure

Modular structure
Simple structure

📌 Description: The Simple Structure is the most basic form of OS design. There is no clear
separation between different OS components. Everything is written as one large program
that directly interacts with hardware and applications. It’s not layered, not modular, and there
is no abstraction—just one block of code with everything inside.
🧱 Components:
System calls
File management
I/O handling
Process control
Memory management
📈 Advantages:
Easy to develop for small systems
Fast execution due to direct calls
📉 Disadvantages:
No modularity (hard to update/debug)
High risk of system crashes
Difficult to maintain or scale
💻 Examples: MS-DOS
Simple structure
Monolithic structure

📌 Description: In a monolithic system, everything runs in a single large kernel: memory management, process management, I/O operations, device drivers, and system calls. All functions can call each other directly.

📈 Advantages:
Fast execution, since components call each other directly without message passing
Simple overall design for a single, tightly integrated kernel
📉 Disadvantages:
A bug in any component can crash the whole system
Hard to maintain and extend as the kernel grows
💻 Examples:
Traditional UNIX
Linux (classically monolithic; modern versions add loadable modules)
Monolithic structure
Micro-kernel structure

📌 Description:
Only essential services run in kernel mode (e.g., CPU scheduling, memory management,
IPC). All other OS services (file systems, device drivers) are moved to user space and
communicate via message passing.
🧱 Components:
Kernel: Minimal (IPC, CPU scheduling)
User space services: File system, device drivers, etc.
📈 Advantages:
Very secure and stable
Crashes in services don’t crash the kernel
📉 Disadvantages:
Slower due to message passing
Complex communication between components
💻 Examples:
Minix
QNX
MacOS X (based on Mach kernel)
Micro-kernel structure
Layered approach

📌 Description:
The OS is split into layers. Each layer only interacts with the layer just below it. The lowest
layer is hardware, and the top layer is user interface.
🧱 Components:
Hardware (Layer 0)
CPU scheduling and memory (Layer 1)
Device drivers (Layer 2)
File systems (Layer 3)
Shell/User interface (Layer 4)
📈 Advantages:
Better debugging (isolate problems in a layer)
Easier to maintain or update a specific layer
📉 Disadvantages:
Slower due to multiple layers
Layer design must be careful; upper layers depend on lower ones
💻 Examples:
THE Operating System
Multics
Layered approach
Modular approach

📌 Description:
This is a modern version of monolithic kernel, but it supports loadable modules (e.g.,
device drivers, file systems) that can be added or removed during runtime.
🧱 Components:
Core kernel
Loadable modules (device drivers, etc.)
📈 Advantages:
Easier to maintain and update
Good performance and flexibility
📉 Disadvantages:
Still not as secure as microkernel
Module bugs can still affect the kernel
💻 Examples:
Modern Linux
Solaris
Modular approach
Client server model

Description:
The OS is organized into a set of services, each running as a separate process. These
services act as servers, and applications act as clients that request services like file access
or printing.
🧱 Components:
Kernel (minimal)
Multiple servers: File server, device server, etc.
📈 Advantages:
Highly modular
Suitable for distributed systems
📉 Disadvantages:
More overhead due to multiple context switches and message passing
💻 Examples:
Windows NT
Mac OS X (partially)
Conclusion
END OF THE SECTION
Services & Functions
of Operating System
Chapter-III   By: Noor Agha
Topics in the chapter

Operating system services
Operating system functions
Conclusion
Operating system services
Operating system is a software that acts as an intermediary between the user and
computer hardware.

It is a program with the help of which we are able to run various applications.

It is the one program that is running all the time. Every computer must have an
operating system to smoothly execute other programs.

The OS coordinates the use of the hardware and application programs for various
users. It provides a platform for other application programs to work.

It controls input-output devices, execution of programs, managing files, etc.


Various Services of Operating System

 Program Execution
 Input Output Operations
 File Management
 Error Handling
 Resource Management
 Communication between Processes
 Networking
 System Utilities
 User Interface
Program Execution

It is the operating system that manages how a program is going to be executed.

It loads the program into memory, after which it is executed.

The order in which programs are executed depends on the CPU scheduling algorithm, e.g. FCFS, SJF, etc.

While programs are in execution, the operating system also handles deadlocks, i.e. situations where processes block each other indefinitely while waiting for resources.

The operating system is responsible for the smooth execution of both user and system programs.
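As a sketch of one such scheduling algorithm, FCFS waiting times can be computed directly (illustrative Python; the burst lengths are made-up example values):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under First-Come-First-Served.

    `bursts` are CPU burst lengths in arrival order; all arrive at t=0.
    Each process waits for the sum of the bursts submitted before it.
    """
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time this process spends in the ready queue
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

Note how a long first burst makes every later process wait, which is why FCFS alone gives poor average waiting times.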
Input Output Operations
The operating system manages input-output operations and establishes communication between the user and device drivers.

An I/O subsystem comprises I/O devices and their corresponding driver software.

Device drivers are software associated with hardware and managed by the OS, so that the devices stay properly in sync.

An operating system manages the communication between users and device drivers.

An I/O operation means a read or write operation on a file or a specific I/O device.

The operating system provides access to the required I/O device when needed.
File manipulation & management

A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose.

Examples of storage media include magnetic tape, magnetic disk and optical disk
drives like CD, DVD. Each of these media has its own properties like speed, capacity,
data transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
File manipulation & management
Following are the major activities of an operating system with respect to file
management:

 Program needs to read a file or write a file.

 The operating system gives the permission to the program for operation on file.

 Permission varies from read-only, read-write, denied and so on.

 Operating System provides an interface to the user to create/delete files.

 Operating System provides an interface to the user to create/delete directories.

 Operating System provides an interface to create the backup of file system.


Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages
communications between all the processes.

Multiple processes communicate with one another through communication lines in the
network.

Following are the major activities of an operating system with respect to communication:

 Two processes often require data to be transferred between them

 Both the processes can be on one computer or on different computers, but are
connected through a computer network.

 Communication may be implemented by two methods, either by Shared Memory or


by Message Passing.
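The message-passing method can be sketched with Python threads and a queue. This illustrates only the pattern; real inter-process communication would use kernel primitives such as pipes, sockets, or message queues.

```python
import queue
import threading

channel = queue.Queue()    # stands in for an OS message-passing channel

def producer():
    for msg in ["hello", "world", None]:   # None marks end-of-stream
        channel.put(msg)                   # send a message

received = []

def consumer():
    while True:
        msg = channel.get()                # receive (blocks until data arrives)
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)   # ['hello', 'world']
```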
Error handling
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices, or in the memory hardware.

An error in a device may also cause malfunctioning of the entire device.

These include hardware and software errors such as device failure, memory errors, division by zero, attempts to access forbidden memory locations, etc.

To handle errors, the operating system monitors the system, detects errors, and takes suitable action with the least impact on running applications.
Error handling
The main activities of an operating system with regard to error handling are as follows:

 The OS continuously scans for potential errors.

 The OS takes the proper action to guarantee accurate and reliable computing.
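By analogy, the detect-and-act behaviour can be sketched at the program level in Python (illustrative only; the OS itself traps such errors in hardware and delivers them to the program):

```python
def safe_divide(a, b):
    """Trap a division-by-zero error and keep running, analogous to how
    the OS detects an error and takes suitable action instead of letting
    it bring the whole system down."""
    try:
        return a / b
    except ZeroDivisionError:
        return None   # suitable action: report failure, continue running

print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # None -- error detected and handled
```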
Resource Management
System resources are shared between various processes. It is the Operating system
that manages resource sharing

It also manages the CPU time among processes using CPU Scheduling Algorithms.

It also helps in the memory management of the system. It also controls input-output
devices.

The OS also ensures the proper use of all the resources available by deciding which
resource to be used by whom.
Resource Management
In a multi-user or multi-tasking environment, resources such as main memory,
CPU cycles, and file storage are to be allocated to each user or job.

Following are the major activities of an operating system with respect to resource management:

 The OS manages all kinds of resources using schedulers.

 CPU scheduling algorithms are used for better utilization of CPU.


Security and protection
In an operating system, protection is a mechanism that controls the access of
processes, programs, or users to the resources of the computer system.

The operating system ensures that all access to system resources must be monitored
and controlled.

It also ensures that the external resources or peripherals must be protected from
invalid access.

It provides authentication by using usernames and passwords

Considering a computer system having multiple users and concurrent execution of


multiple processes, the various processes must be protected from each other's
activities.
Security and protection
Protection refers to a mechanism or a way to control the access of programs,
processes, or users to the resources defined by a computer system.

Following are the major activities of an operating system with respect to protection:

 The OS ensures that access to system resources is controlled.

 The OS ensures that external I/O devices are protected from invalid access
attempts.

 The OS provides authentication features for each user by means of passwords.


Networking

This service enables communication between devices on a network, such as


connecting to the internet, sending and receiving data packets, and managing network
connections.

Examples of OS networking services include Windows Server, Linux servers, and IOS used by Cisco devices such as switches and routers.
User Interface

User interface is essential and all operating systems provide it.

Users interact with the operating system through either a command-line interface (CLI) or a graphical user interface (GUI).
Functions of Operating System
Conclusion
END OF THE SECTION
Process in
Operating System
Chapter-IV   By: Noor Agha
Topics in the chapter

Introduction to process
Process vs program
Process control block (PCB)
Process attributes
Process life cycle
Process scheduling
What is a process in OS?

Process vs program

Program vs software

Different stages a process can go through?

PCB (Process control block)

What is process scheduling?


What is a process in OS?
A process is a program in execution, which then forms the basis of all computation.

For example, when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and binary code are both programs. When we actually run the binary code, it becomes a process.

A process is not the same as program code.

A process is an 'active' entity, while a program is a 'passive' entity.

Difference between Process and Program

Process: the running instance of a program.
Program: the executable code.

Process: an active entity.
Program: a passive entity.

Process: requires resources such as memory, CPU, and input-output devices.
Program: stored on the hard disk and does not require any resources while not running.

Process: belongs to one program.
Program: can have many processes.
Example of Process and program

#Python program to add, subtract, divide and multiply two numbers.


a = 7
b = 2

# addition
print ('Sum: ', a + b)

# subtraction
print ('Subtraction: ', a - b)

# multiplication
print ('Multiplication: ', a * b)

# division
print ('Division: ', a / b)
Example of Process and program

Notepad is a program stored in secondary storage, but when we click on its icon, it becomes a process and requires resources such as CPU, memory, and I/O devices.
Attributes or Characteristics of a Process
A process has the following attributes:

Process ID: When a process is created, a unique id is assigned to the process which is
used for unique identification of the process in the system.

Process state: The Process, from its creation to the completion, goes through various states
which are new, ready, running and waiting.

Program counter: A program counter stores the address of the next instruction to be executed. When a process is suspended, the saved program counter lets the CPU resume the process's execution from the same point.

Priority: Every process has its own priority. The process with the highest priority among the
processes gets the CPU first.
Attributes or Characteristics of a Process
A process has the following attributes:
General Purpose Registers: Every process has its own set of registers
which are used to hold the data which is generated during the execution of
the process.

List of open files: During the Execution, Every process uses some files
which need to be present in the main memory.

List of open devices: OS also maintain the list of all open devices which are
used during the execution of the process.

Accounting information: the amount of CPU time used by the process, time limits, execution ID, etc.

CPU scheduling information: For example, Priority (Different processes


may have different priorities, for example, a shorter process assigned high
priority in the shortest job first scheduling)
Process Control Block
Process Control Block
There is a Process Control Block for each process, enclosing all the information about
the process.

It is also known as the task control block. It is a data structure, which contains the
following information about a process:

A Process Control Block consists of :

Process ID: a unique identifier assigned to the process, used to locate and identify it within the system.

Process State: The Process, from its creation to the completion, goes through various
states which are new, ready, running and waiting.

Program Counter: The address of the following instruction to be executed from memory is
stored in a CPU register called a program counter (PC) in the computer processor.
Process Control Block

CPU Registers: Every process has its own set of registers which are used to hold the data
which is generated during the execution of the process.

CPU Scheduling Information: It is necessary to arrange a procedure for execution. This


schedule determines when it transitions from ready to running. Process priority, scheduling
queue pointers (to indicate the order of execution), and several other scheduling parameters
are all included in CPU scheduling information.

Accounting information: the amount of CPU time used by the process, time limits, execution ID, etc.

I/O Status information: it contains information about the I/O devices used during process
execution.

Priority: Every process has its own priority. The process with the highest priority among the
processes gets the CPU first.
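As a rough illustration, the PCB fields above can be modeled as a simple data structure. The sketch below is a minimal Python version with illustrative field names (a real kernel stores this information in C structures, not Python objects):

```python
# Minimal sketch of a Process Control Block as a data structure.
# Field names mirror the slide above and are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # unique process ID
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # used by the CPU scheduler
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0     # accounting information

# The OS creates one PCB per process and updates it as the process runs.
pcb = PCB(pid=101)
pcb.state = "ready"
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and later restores them from there, which is what makes resuming a process possible.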
Process life cycle states of a process
When a process is executed, it goes through some states from beginning till the end
called process life cycle or states of a process
Process life cycle or states of a process
S.N. State & Description
1 Start
This is the initial state when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes wait for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or while running, when the scheduler interrupts it to assign the CPU to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved
to the terminated state where it waits to be removed from main memory.
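The legal moves in this life cycle can be sketched as a small state machine. The transition table below is an illustration derived from the states above, not an OS API:

```python
# Legal state transitions in the process life cycle described above.
TRANSITIONS = {
    "start":   {"ready"},                            # admitted by the OS
    "ready":   {"running"},                          # dispatched by scheduler
    "running": {"ready", "waiting", "terminated"},   # preempted, blocked, or done
    "waiting": {"ready"},                            # I/O or event completed
}

def move(state, new_state):
    # Reject any transition that the life cycle does not allow.
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one possible life of a process: blocked once for I/O, then finished.
s = "start"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    s = move(s, nxt)
```

Note that "terminated" has no outgoing transitions, matching the table: once a process exits, it only waits to be removed from main memory.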
Process scheduling

Process scheduling is an important part of multiprogramming operating systems.

Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.

It is the process of removing the running task from the processor and selecting
another task for processing. It schedules a process into different states like ready,
waiting, and running.
Categories of Scheduling
There are two categories of scheduling:

Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.

Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from running state to ready
state or from waiting state to ready state. This switching occurs as the CPU may give
priority to other processes and replace the process with higher priority with the
running process.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.

The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue.

When the state of a process is changed, its PCB is unlinked from its current queue
and moved to its new state queue.
Process Scheduling Queues
The Operating System maintains the following important process scheduling queues:

Job queue: This queue keeps all the processes in the system.

Ready queue: This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.

Device queues: The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Process Scheduling Queues
Process scheduling

Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.

Process scheduling is an essential part of a Multiprogramming operating system.

Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
Types of schedulers

Long Term or Job Scheduler: Long term scheduler is also known as job scheduler.

It chooses the processes from secondary memory and keeps them in the ready queue
in the primary memory.

Short term scheduler: Short term scheduler is also known as CPU scheduler.

It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job is going to be dispatched for the
execution.
Types of schedulers

Medium term scheduler: Medium term scheduler takes care of the swapped out
processes.

If a running process needs some I/O time to complete, its state must be changed from running to waiting.

It removes the process from the running state to make room for the other processes.

Such processes are the swapped out processes and this procedure is called
swapping.

The medium term scheduler is responsible for suspending and resuming the
processes.
Types of schedulers
What is scheduling algorithm

A CPU scheduling algorithm is used to determine which process will use CPU for
execution and which processes to hold or remove from execution.

The main goal or objective of CPU scheduling algorithms in OS is to make sure that
the CPU is never in an idle state, meaning that the OS has at least one of the
processes ready for execution among the available processes in the ready queue.

A Scheduling Algorithm is the algorithm which tells us how much CPU time we can
allocate to the processes.
Purpose of scheduling algorithm

A process to complete its execution needs both CPU time and I/O time.

In a multiprogramming system, there can be one process using the CPU while another
is waiting for I/O whereas, in a uni-programming system, time spent waiting for I/O is
completely wasted as the CPU is idle at this time.

Multiprogramming can be achieved by the use of process scheduling.


Purpose of scheduling algorithm

 Maximize the CPU utilization, meaning that keep the CPU as busy as possible.

 Fair allocation of CPU time to every process

 Maximize the Throughput

 Minimize the turnaround time

 Minimize the waiting time

 Minimize the response time


Purpose of scheduling algorithm

Parameter / Description

 Throughput: number of completed processes per time unit

 Waiting time: time a process waits in the ready state

 Turnaround time: time a process takes from submission to completion

 Response time: time between a command request and the beginning of a response
Scheduling Strategies
Scheduling falls into one of two categories:

Non-preemptive: In this case, a process’s resource cannot be taken before the


process has finished running. When a running process finishes and transitions to a
waiting state, resources are switched.

Preemptive: In this case, the OS assigns resources to a process for a predetermined


period of time. The process switches from running state to ready state or from waiting
for state to ready state during resource allocation. This switching happens because
the CPU may give other processes priority and substitute the currently active process
for the higher priority process.
Types of scheduling algorithms
Various scheduling algorithms are as following:

FCFS (First Come First Serve) scheduling algorithm

SJF or SJN (Shortest Job First or Shortest Job Next ) scheduling algorithm

LJF (Longest Job First) scheduling algorithm

Priority scheduling algorithm

Shortest remaining time first scheduling algorithm

Longest remaining time first scheduling algorithm

Round Robin scheduling algorithm


FCFS (First Come First Serve) Scheduling Algorithm
FCFS is considered the simplest of all operating system scheduling algorithms.

First come first serve scheduling algorithm states that the process that requests the
CPU first is allocated the CPU first and is implemented by using FIFO queue.

Characteristics of FCFS:

 FCFS is a non-preemptive scheduling algorithm.


 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not much efficient in performance, and the wait time is quite high.
FCFS (First Come First Serve) Scheduling Algorithm
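A minimal FCFS sketch in Python (the burst times are made-up example values, and all processes are assumed to arrive at time 0):

```python
# Sketch: FCFS scheduling — processes run to completion in arrival order.
def fcfs(bursts):
    """bursts: CPU burst times in arrival order; all processes arrive at t=0.
    Returns (waiting times, turnaround times) per process."""
    t = 0
    waiting, turnaround = [], []
    for burst in bursts:
        waiting.append(t)       # time spent in the ready queue before running
        t += burst              # process runs to completion, no preemption
        turnaround.append(t)    # with arrival at 0, completion = turnaround
    return waiting, turnaround

w, ta = fcfs([5, 3, 8])
# w == [0, 5, 8], ta == [5, 8, 16]
```

The long wait for later processes when an early process has a large burst is the "convoy" behavior behind FCFS's high average waiting time.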
SJF (Shortest Job First) Scheduling Algorithm
Shortest job first (SJF) is a scheduling process that selects the waiting process with
the smallest execution time to execute next.

It is non-preemptive scheduling algorithm

Based on this algorithm, the process whose burst time is the smallest will be executed
first.

Characteristics of SJF:

 Shortest Job first has the advantage of having a minimum average waiting time
among all operating system scheduling algorithms.
 It is associated with each task as a unit of time to complete.
SJF (Shortest Job First) Scheduling Algorithm
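The same style of sketch for non-preemptive SJF (burst times below are made-up example values; all processes arrive at time 0):

```python
# Sketch: non-preemptive SJF — always run the shortest pending job next.
def sjf(bursts):
    """bursts: CPU burst times indexed by process; all arrive at t=0.
    Returns (execution order as process indices, average waiting time)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, waiting = 0, [0] * len(bursts)
    for i in order:
        waiting[i] = t          # everything run before process i is its wait
        t += bursts[i]
    return order, sum(waiting) / len(waiting)

order, avg_wait = sjf([6, 8, 7, 3])
# order == [3, 0, 2, 1]
```

Sorting by burst time is what gives SJF its provably minimal average waiting time among these non-preemptive algorithms.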
LJF (Longest Job First) Scheduling Algorithm

Longest Job First(LJF) scheduling process is just opposite of shortest job first (SJF)

as the name suggests this algorithm is based upon the fact that the process with the
largest burst time is processed first.

Longest Job First is non-preemptive in nature.


LJF (Longest Job First) Scheduling Algorithm

Example Gantt chart (burst times P3 = 6, P4 = 3, P1 = 2, P2 = 1): P3 runs 0-6, P4 runs 6-9, P1 runs 9-11, P2 runs 11-12.
Priority Scheduling Algorithm

Each process is assigned a priority.

Process with highest priority is to be executed first

Processes with same priority are executed on first come first served basis.

Priorities are generally indicated by some fixed range of numbers

Priority Scheduling Algorithm
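A sketch of non-preemptive priority scheduling, assuming (as an illustration) that a lower number means higher priority and that ties are broken first-come first-served; the process names, bursts, and priorities are made-up values:

```python
# Sketch: non-preemptive priority scheduling.
# Lower priority number = higher priority; ties broken by arrival order.
def priority_schedule(jobs):
    """jobs: list of (pid, burst, priority); all arrive at t=0.
    Returns [(pid, completion time), ...] in execution order."""
    order = sorted(range(len(jobs)), key=lambda i: (jobs[i][2], i))
    t, result = 0, []
    for i in order:
        pid, burst, _ = jobs[i]
        t += burst
        result.append((pid, t))
    return result

run = priority_schedule([("A", 4, 2), ("B", 3, 1), ("C", 2, 2)])
```

Here B (priority 1) runs first; A and C share priority 2, so A goes before C on a first-come first-served basis.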
Shortest Remaining Time Scheduling Algorithm

Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.

The processor is allocated to the job closest to completion but it can be preempted
by a newer ready job with shorter time to completion.
Round Robin Scheduling Algorithm
Round Robin is a preemptive process scheduling algorithm.

Each process is provided a fix time to execute; it is called a quantum.

Once a process is executed for a given time period, it is preempted and another
process executes for a given time period.

The round-robin (RR) scheduling algorithm is similar to FCFS scheduling, but


preemption is added to enable the system to switch between processes.

 A small unit of time, called a time quantum or time slice, is defined. A time quantum
is generally from 10 to 100 milliseconds in length.

The ready queue is treated as a circular queue. The CPU scheduler goes around the
ready queue, allocating the CPU to each process for a time interval of up to 1 time
quantum.
Round Robin Scheduling Algorithm
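Treating the ready queue as a circular FIFO can be sketched directly with a deque. This is an illustration only (burst times and the quantum are made-up values, and all processes arrive at time 0):

```python
# Sketch: round-robin scheduling with a fixed time quantum.
# The ready queue is a FIFO; a preempted process rejoins at the back,
# which makes the queue behave circularly.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: CPU burst times indexed by process; all arrive at t=0.
    Returns the completion time of each process."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    t, completion = 0, [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one quantum
        t += run
        if remaining > run:
            queue.append((i, remaining - run))  # preempted, back of the queue
        else:
            completion[i] = t                   # finished within the quantum
    return completion

done = round_robin([5, 3, 2], quantum=2)
```

With bursts [5, 3, 2] and a quantum of 2, the CPU cycles P0, P1, P2, P0, P1, P0, so every process gets the CPU within two quanta of joining the queue.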
Multilevel Queue Scheduling Algorithm

Multiple-level queues are not an independent scheduling algorithm. They make


use of other existing algorithms to group and schedule jobs with common
characteristics.

Multiple queues are maintained for processes with common characteristics.

Each queue can have its own scheduling algorithms.

Priorities are assigned to each queue.

With both priority and round-robin scheduling, all processes may be placed
in a single queue, and the scheduler then selects the process with the highest
priority to run.
Multilevel Queue Scheduling Algorithm
If there are multiple processes in the highest-priority queue, they are executed in round-robin order.
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time.

Using this technique, a context switcher enables multiple processes to share


a single CPU.

Context switching is an essential part of a multitasking operating system features.

When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block.
Context Switch

After this, the state for the process to run next is loaded from its own PCB and used to
set the PC, registers, etc. At that point, the second process can start executing.
Context Switch
IPC (Inter-process Communication)

Inter-process communication (IPC) is the mechanism by which cooperating processes exchange data and synchronize their actions. The two fundamental IPC models are shared memory and message passing.
Conclusion
END OF THE SECTION
Threads in
Operating System
Chapter V
By: Noor Agha
Topics in the chapter
Introduction to threads in OS

Examples of threads in OS

Why threads in OS
Threads VS process

Components of threads
Types of threads in OS

Multi-threading in OS
Conclusion
What is thread in operating system?

In an operating system, a thread is the smallest unit of execution within a


process.

Threads share the same memory space and resources of a process, allowing
for concurrent execution and efficient multitasking.

They enable programs to perform multiple tasks simultaneously, making


them useful for tasks like handling multiple users in a web server or
executing multiple operations in a single application.
What is thread in operating system?

Threads in OS can be of the same or different types. Threads are used to


increase the performance of the applications.

It is a lightweight process that the operating system can schedule and run
concurrently with other threads.

The operating system creates and manages threads, and they share the
same memory and resources as the program that created them.

This enables multiple threads to collaborate and work efficiently within a


single program.
What is thread in operating system?

Each thread belongs to exactly one process.

In an operating system that supports multithreading, a process can consist of many threads. However, threads can run truly in parallel only when there is more than one CPU; on a single CPU, threads must share the processor through context switching.
Examples of threads in OS
Real life example: Let us take an example of a human body. A human body
has different parts having different functionalities which are working
parallelly ( Eg: Eyes, ears, hands, etc.)

Web Browser: Imagine you're using a web browser that needs to download
multiple files simultaneously while also rendering a webpage. Each file
download and rendering task could be handled by a separate thread within
the browser's process

Video Game: In a multiplayer video game, there are often various tasks
running concurrently, such as updating player positions, handling user
input, and managing network communication. Each of these tasks could be
assigned to a separate thread within the game's process. This ensures that
the game remains responsive and can handle multiple players interacting in
real-time without delays or interruptions.
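The web browser example above can be sketched with Python's standard `threading` module. The file names and "download" durations are made up, and `time.sleep` stands in for real network I/O:

```python
# Sketch: several "downloads" handled by separate threads in one process,
# like the browser example above. All threads share the results dict.
import threading
import time

results = {}

def download(name, seconds):
    time.sleep(seconds)        # stands in for waiting on the network
    results[name] = "done"

threads = [threading.Thread(target=download, args=(f"file{i}", 0.1))
           for i in range(3)]
for t in threads:
    t.start()                  # all three "downloads" proceed concurrently
for t in threads:
    t.join()                   # wait for every thread to finish
```

Because the threads live in one process, they share `results` without any copying, which is exactly the resource-sharing benefit described in the next slide.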
Why do we need threads in OS?
Threads in the operating system provide multiple benefits and improve the
overall performance of the system. Some of the reasons threads are needed
in the operating system are:

 Concurrency: Threads allow multiple tasks to be executed concurrently


within a single process. This enables programs to perform multiple
operations simultaneously, improving overall efficiency and
responsiveness.

 Responsiveness: By utilizing threads, programs can remain responsive


even when performing tasks that may take some time to complete. For
example, a user interface can continue to accept input and update its
display while simultaneously performing background tasks like file
downloads or data processing.
Why do we need threads in OS?

 Resource Sharing: Threads within a process share the same memory


space and resources, allowing them to communicate and share data more
efficiently than separate processes. This makes threads ideal for tasks
that require cooperation and coordination between different parts of a
program and improve performance and speed.
Threads VS Process
Process vs Thread

 Processes use more resources and hence are termed heavyweight; threads share resources and hence are termed lightweight.

 Creation and termination of processes are slower; creation and termination of threads are faster.

 Processes have their own code and data/files; threads share the code and data/files of the process they belong to.

 Communication between processes is slower; communication between threads is faster.

 Context switching between processes is slower; context switching between threads is faster.

 Processes are independent of each other; threads are interdependent (they can read, write, or change another thread's data).
Threads VS Process
Components of Thread
The main components of a thread include:

Thread ID: A unique identifier assigned to each thread within a process.

Thread State: Indicates the current condition of the thread, such as running,
waiting, or terminated.
Program Counter: Keeps track of the address of the next instruction to be
executed by the thread.
Stack: Each thread has its own stack, which stores local variables, function
parameters, and return addresses.
Thread Priority: Determines the order in which threads are scheduled for
execution by the operating system.

Thread Synchronization: Mechanisms used to coordinate the execution of


multiple threads, such as locks, semaphores, and barriers.
Types of Threads

A thread has the following two types:

 User Level thread (ULT)

 Kernel Level Thread (KLT)


User Level thread (ULT)

 User-level threads are implemented and managed by the user and the
kernel is not aware of it.

 User-level threads are implemented using user-level libraries and the OS


does not recognize these threads.

 User-level thread is faster to create and manage compared to kernel-level


thread.

 Context switching in user-level threads is faster.

 If one user-level thread performs a blocking operation then the entire


process gets blocked. Eg: POSIX threads, Java threads, etc.
User Level thread (ULT) Example

Imagine you're baking a cake, and you have two friends helping you: Alice
and Bob.

You are the process, managing the overall baking task.

Alice and Bob are user-level threads, representing smaller tasks within the
baking process.

Here's how it works:

User-Level Threads (Alice and Bob):


Alice is responsible for mixing the ingredients, and Bob is in charge of preheating the oven. You
coordinate with Alice and Bob directly, telling them what to do and when. If Alice encounters a
problem (like running out of sugar), she stops and waits for your instructions before continuing.
Similarly, if Bob's oven takes longer to preheat, he waits for your signal to proceed.
User Level thread (ULT) Example

Process (You):
You oversee the entire baking process, coordinating Alice and Bob's tasks. While Alice is waiting
for sugar or Bob is waiting for the oven to heat, you can't proceed with other tasks like preparing
the frosting. If there's a delay or problem with one task, it affects the entire baking process since
you're managing everything.

In this scenario, Alice and Bob represent user-level threads because they're managed directly by
you (the process), without involving any external authorities (like a professional baker or a recipe
book).
They're lightweight and responsive to your instructions, but they rely on your coordination for the
overall success of the baking project.
Kernel Level Thread (KLT)

Kernel level threads are implemented and managed by the kernel of the OS.

 Kernel level threads are implemented using system calls and Kernel level
threads are recognized by the kernel of OS.

 Kernel-level threads are slower to create and manage compared to user-


level threads.

 Context switching in a kernel-level thread is slower.

 Even if one kernel-level thread performs a blocking operation, it does not


affect other threads.
Kernel Level Thread (KLT) example

Imagine you're running a big event, like a concert, with multiple security
guards managing different entrances. In this analogy:

• You represent the operating system kernel, overseeing the entire event.
• Each security guard represents a kernel-level thread, responsible for
managing a specific entrance.
Kernel Level Thread (KLT) example

Here's how it works:

Kernel-Level Threads (Security Guards):

• Each security guard is stationed at a different entrance gate, managing


the flow of people entering the event.
• They work independently of each other and can handle their entrance
gate's tasks without waiting for instructions from you (the kernel).
• If there's a delay at one entrance (like a bag check taking longer than
usual), it doesn't affect the other entrances because each guard manages
their gate separately.
• If one guard needs assistance (like dealing with a disruptive attendee),
they can call for backup or notify you (the kernel) for additional support.
Kernel Level Thread (KLT) example

Operating System Kernel (You):


• You oversee the entire event, ensuring that each entrance gate operates
smoothly and efficiently.
• While you keep an eye on everything, you don't need to micromanage
each guard's actions; they're capable of managing their tasks
independently.
• If there's an issue at one entrance, you can allocate resources or
intervene if necessary to maintain overall event security and coordination.
• In this scenario, the security guards represent kernel-level threads
because they're managed by you (the kernel), operate independently, and
are responsible for specific tasks (managing entrance gates) within the
larger event (the operating system).
ULT VS KLT
Multi-threading in OS

You are already aware of the term multitasking that allows processes to run
concurrently.

Similarly, multithreading allows sub-processes (threads) to run concurrently


or parallelly

Also, we can say that when multiple threads run concurrently it is known as
multithreading
Multi-threading in OS

Multithreading is the ability of a program or an operating system to enable


more than one user at a time without requiring multiple copies of the
program running on the computer.

Multithreading can also handle multiple requests from the same user.

Multithreading allows the application to divide its task into individual


threads. In multi-threads, the same process or task can be done by the
number of threads, or we can say that there is more than one thread to
perform the task in multithreading.
Multi-threading in OS

The term multithreading refers to an operating system's capacity to support


execution among fellow threads within a single process.

In short we can say that multithreading is ability of a program, application or


operating system to allow multiple threads be executed concurrently other
threads that are part of the same root process while sharing the same
process resources.

Process is a program in execution, and it is further divided into sub-


processes called light-weight-processes known as threads, and when these
threads are executed concurrently, it is called multithreading.
Advantages of multithreading

Responsiveness: multithreading enhances responsiveness. For example, in a non-multithreaded environment, a server listens on a port for a request; when a request arrives, it processes that request and only then resumes listening for the next one. The time taken to process a request makes other users wait unnecessarily. A better approach is to pass the request to a worker thread and continue listening on the port.
Advantages of multithreading

Resource Sharing: Processes may share resources only through techniques


such as:
 Message Passing
 Shared Memory

threads share the memory and the resources of the process to which they
belong by default.

The benefit of sharing code and data is that it allows an application to have
several threads of activity within same address space.
Advantages of multithreading

 Economy: allocating memory and resources for process creation is a costly job in terms of time and space. Since threads share the memory and resources of the process to which they belong, it is more economical to create and context-switch threads. In general, much more time is consumed in creating and managing processes than threads.

 In Solaris, for example, creating a process is about 30 times slower than creating a thread, and context switching is about 5 times slower.
Advantages of multithreading

Better Communication System: communication between processes is done


through techniques such as shared memory and message passing which is
not as efficient, quick, cost effective as in case of threads where all the
threads are in same address space

Microprocessor Architecture Utilization: Multithreading enhances


parallelism on a multi CPU machine and also the CPU switches among
threads very quickly in a single processor architecture
Parallelism VS concurrency
Concurrency: concurrency is the ability of a system to execute multiple tasks using a single processing unit. Concurrency is achieved by context switching.

Parallelism: parallelism, on the other hand, uses multiple processing units to execute multiple tasks simultaneously. The technique in which multiple processors are used to accomplish multiple computations simultaneously is known as parallelism. In parallelism, multiple tasks are executed at the same instant.
Multiprocessing VS multithreading VS multitasking
Meaning: Multiprocessing is the availability of more than one processor per system, which can execute several sets of instructions in parallel. Multithreading divides a process into several sub-processes called threads, each with its own path of execution. Multitasking means more than one task can be executed concurrently.

Number of CPUs: multiprocessing requires more than one; multithreading can use one or more than one; multitasking uses one.

Number of processes being executed: multiprocessing can execute more than one process at a time; multithreading executes various components of the same process at a time; multitasking executes processes one by one.

Cost efficiency: multiprocessing is less cost-effective; multithreading and multitasking are cost-effective.

Number of users: multiprocessing serves one or more than one; multithreading usually one; multitasking more than one.

Throughput: multiprocessing gives maximum throughput; multithreading and multitasking give moderate throughput.

Efficiency: multiprocessing is the most efficient; multithreading and multitasking are moderately efficient.
Conclusion
END OF THE SECTION
Process
Synchronization
Chapter VI
By: Noor Agha
Topics in the chapter
Introduction to process
synchronization
Race condition

Critical section

Critical section problem


Solutions to critical section
problems
Peterson`s solutions
semaphores

Mutex
Conclusion
Process synchronization

Processes Synchronization or Synchronization is the way by which


processes that share the same memory space are managed in an operating
system.

Process Synchronization is the coordination of execution of multiple


processes in a multi-process system to ensure that they access shared
resources in a controlled and predictable manner.

It aims to resolve the problem of race conditions and other synchronization


issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple


processes access shared resources without interfering with each other, and
to prevent the possibility of inconsistent data due to concurrent access.
Process synchronization

Process synchronization is an important aspect of modern operating


systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.

On the basis of synchronization, processes are categorized as one of the


following two types:

 Independent Process: The execution of one process does not affect the execution
of other processes.

 Cooperative Process: A process that can affect or be affected by other processes


executing in the system.

Note: Process synchronization problem arises in the case of Cooperative process also
because resources are shared in Cooperative processes.
Process synchronization

An operating system is a software that manages all applications on a device


and basically helps in the smooth functioning of our computer.

Because of this reason, the operating system has to perform many tasks,
and sometimes simultaneously.

This isn't usually a problem unless these simultaneously occurring processes use a
common resource.

Process synchronization

It is especially needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

This can lead to the inconsistency of shared data.

To avoid this type of inconsistency of data, the processes need to be synchronized


with each other.

Process synchronization

For example, consider a bank that stores the account balance of each
customer in the same database. Now suppose you initially have x rupees in
your account. Now, you take out some amount of money from your bank
account, and at the same time, someone tries to look at the amount of
money stored in your account. As you are taking out some money from your
account, after the transaction, the total balance left will be lower than x. But,
the transaction takes time, and hence the person reads x as your account
balance which leads to inconsistent data.

If in some way, we could make sure that only one process occurs at a time, we could
ensure consistent data, and here is why we need process synchronization.
Race condition

A race condition arises when more than one process executes the same code or accesses the same memory or shared variable concurrently: the output or the value of the shared variable may then be wrong, and each process in effect "races" to have its result accepted.

When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the access takes place.

A race condition is a situation that may occur inside a critical section.


Example

In the figure above, if Process 1 and Process 2 happen at the same time, user 2 will read the wrong account balance Y, because Process 1 is still transacting while the balance is X.

Inconsistency of data can occur when various processes share a common resource in a system, which is why there is a need for process synchronization in the operating system.
Example

 Let us consider a scenario where money is withdrawn from the bank


by both the cashier(through cheque) and the ATM at the same time.

 Consider an account having a balance of $10,000. Let us consider


that, when a cashier withdraws the money, it takes 2 seconds for the
balance to be updated in the account.

 It is possible to withdraw $7000 from the cashier and within the


balance update time of 2 seconds, also withdraw an amount
of $6000 from the ATM.
 Thus, the total money withdrawn becomes greater than the balance
of the bank account.

 This happened because the two withdrawals occurred at the same time. If the withdrawal code is placed in a critical section, only one withdrawal can proceed at a time, which solves this problem.
Example

 Consider an example of two processes, p1 and p2. Let value=3 be a


variable present in the shared resource.

 Let us consider the following actions done by the two processes:

value = value + 3   // process p1 → value = 6
value = value - 3   // process p2 → value = 3

After p1 runs, value should be 6, but because p2 interrupts and overwrites the shared variable, value is changed back to 3. This is the problem of synchronization.
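The lost update above can be reproduced by explicitly simulating the read and write steps of the two processes. This sketch (not from the slides) runs one serial schedule and one racy interleaving:

```python
# Simulate the race between p1 (value + 3) and p2 (value - 3).
# Each "process" first reads the shared variable into a private copy,
# then writes its result back, so an unlucky interleaving loses an update.

def run_schedule(initial, schedule):
    """Run the read/write steps of p1 and p2 in the given order."""
    shared = initial
    local = {}                           # each process's private copy
    for proc, op in schedule:
        if op == "read":
            local[proc] = shared
        elif proc == "p1":               # p1 writes its copy + 3
            shared = local[proc] + 3
        else:                            # p2 writes its copy - 3
            shared = local[proc] - 3
    return shared

# Serial schedule: p1 finishes before p2 starts -> 3 + 3 - 3 = 3
serial = run_schedule(3, [("p1", "read"), ("p1", "write"),
                          ("p2", "read"), ("p2", "write")])

# Racy schedule: p2 reads *before* p1 writes, so p1's update is lost
racy = run_schedule(3, [("p1", "read"), ("p2", "read"),
                        ("p1", "write"), ("p2", "write")])

print(serial)  # 3
print(racy)    # 0
```

The racy schedule yields 0, a result no serial ordering of p1 and p2 could produce — exactly the inconsistency that synchronization prevents.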
Solutions to critical section problem
Conditions to be fulfill

Any solution to critical section problem must fulfill the following condition:

Mutual exclusion: When one process is executing in its critical section, no other process may enter the critical section.

Progress: When no process is executing in its critical section, and there exists a
process that wishes to enter its critical section, it should not have to wait indefinitely
to enter it.

Bounded waiting: Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical section.
Software based solution
Peterson's solution

Video link

https://www.youtube.com/watch?v=gYCiTtgGR5Q&t=946s
Peterson's solution
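As a rough sketch of what the video covers: the two-process version of Peterson's algorithm uses a `flag` array and a `turn` variable (conventional names, not taken from the slides). In CPython the GIL already serializes bytecode, so this illustrates the structure of the algorithm rather than proving it on real hardware:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0             # the shared resource

def worker(i, iterations):
    global turn, counter
    j = 1 - i                             # index of the other process
    for _ in range(iterations):
        flag[i] = True                    # entry section: announce intent
        turn = j                          # ...and give priority to the other
        while flag[j] and turn == j:
            pass                          # busy-wait until safe to enter
        counter += 1                      # critical section
        flag[i] = False                   # exit section

t0 = threading.Thread(target=worker, args=(0, 1000))
t1 = threading.Thread(target=worker, args=(1, 1000))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 2000: no increment was lost
```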
Software based solution
Semaphore

Video link

https://www.youtube.com/watch?v=XDIOC2EY5JE&t=10s
Semaphore
Semaphore
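As a minimal illustration of the video's topic, Python's `threading.Semaphore` can stand in for the OS-level wait()/signal() operations; the bank-balance scenario is made up for the example:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore used as a mutex
balance = 100                  # shared data

def withdraw(amount):
    global balance
    sem.acquire()              # wait(): decrement, block if already 0
    try:
        balance -= amount      # critical section: read-modify-write
    finally:
        sem.release()          # signal(): increment, wake a waiter

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 50: all five withdrawals applied exactly once
```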
Hardware based solutions
Test & Set Lock

Video link

https://www.youtube.com/watch?v=5oZYS5dTrmk&t=112s
Test & Set Lock
Conclusion
END OF THE SECTION
Deadlock in Operating System
Chapter-VII
By: Noor Agha
Topics in the chapter

Introduction to deadlock
Conditions of deadlock
Handling deadlock
Conclusion
Introduction to deadlock
Real world example of deadlock
Example of deadlock
Deadlock conditions
Handling deadlock
Deadlock handling strategies
Conclusion
END OF THE SECTION
Memory Management
Chapter-VIII
By: Noor Agha
Topics in the chapter

Introduction to memory management
What is the need for memory management?
Memory Management Techniques
Contiguous memory management schemes
Non-Contiguous memory management schemes
What is paging?
What is Segmentation?
Swapping
Fragmentation
Conclusion
Introduction to memory management

Memory is an important part of the computer that is used to store data.

Its management is critical to the computer system because the amount of main memory available in a computer system is very limited.

At any time, many processes are competing for it. Moreover, to increase performance, several processes are executed simultaneously.

For this, we must keep several processes in the main memory.


Introduction to memory management

Memory management is an important function of operating system

Memory management in OS is a technique of controlling and managing the


functionality of Random access memory (primary memory).

It is used for achieving better concurrency, system performance, and


memory utilization.

It handles or manages primary memory and moves processes back and forth
between main memory and disk during execution.
Introduction to memory management

Memory management keeps track of each and every memory location

It checks how much memory is to be allocated to processes. It decides


which process will get memory at what time.
Memory Management Techniques
The memory management techniques can be classified into following main categories:

 Contiguous memory management schemes

 Non-Contiguous memory management schemes


Memory Management Techniques
The memory management techniques can be classified into following main categories:
Contiguous memory management schemes

In a Contiguous memory management scheme, each program occupies a single


contiguous block of storage locations, i.e., a set of memory locations with consecutive
addresses.

Contiguous memory allocation is one of these memory allocation strategies. As the name suggests, this strategy allocates contiguous blocks of memory to each process. Therefore, whenever a process requests access to the main memory, we allot it a continuous segment from the free area, based on its size.
Contiguous memory management schemes
Single contiguous memory management schemes

The Single contiguous memory management scheme is the simplest memory


management scheme used in the earliest generation of computer systems.

In this scheme, the main memory is divided into two contiguous areas or partitions.

The operating system resides permanently in one partition, generally at the lower memory, and the user process is loaded into the other partition.
Multiple Partitioning

The single Contiguous memory management scheme is inefficient as it limits


computers to execute only one program at a time resulting in wastage in memory
space and CPU time.

The problem of inefficient CPU use can be overcome using multiprogramming that
allows more than one program to run concurrently.

To switch between two processes, the operating systems need to load both processes
into the main memory.

The operating system needs to divide the available main memory into multiple parts to
load multiple processes into the main memory. Thus multiple processes can reside in
the main memory simultaneously.
Multiple Partitioning
The multiple partitioning schemes can be of two types:

 Fixed Partitioning

 Dynamic Partitioning
Fixed Partitioning

The main memory is divided into several fixed-sized partitions in a fixed partition
memory management scheme or static partitioning.

These partitions can be of the same size or different sizes. Each partition can hold a
single process.

The number of partitions determines the degree of multiprogramming, i.e., the


maximum number of processes in memory.

These partitions are made at the time of system generation and remain fixed after that.
Dynamic Partitioning

The dynamic partitioning was designed to overcome the problems of a fixed


partitioning scheme.

In a dynamic partitioning scheme, each process occupies only as much memory as


they require when loaded for processing.

Requested processes are allocated memory until the entire physical memory is
exhausted or the remaining space is insufficient to hold the requesting process.

In this scheme the partitions used are of variable size, and the number of partitions is
not defined at the system generation time.
Non-contiguous memory allocation/management technique

It is a technique where the operating system allocates memory to a process in non-contiguous blocks.

The blocks of memory allocated to the process need not be contiguous, and the operating system keeps track of the various blocks allocated to the process.

Non-contiguous memory allocation is suitable for larger memory sizes and where
efficient use of memory is important.
Non-contiguous memory allocation/management technique
Non-contiguous memory allocation can be done in two ways

Paging

Segmentation
Paging
In paging, the memory is divided into fixed-size pages, and each page is assigned to a
process.

Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory.

In paging, the physical memory is divided into fixed-size blocks called page frames
which are the same size as the pages used by the process.

The process’s logical address space is also divided into fixed-size blocks called
pages, which are the same size as the page frames.

When a process requests memory, the operating system allocates one or more page
frames to the process and maps the process’s logical pages to the physical page
frames.
Paging

The mapping between logical pages and physical page frames is maintained by the
page table, which is used by the memory management unit to translate logical
addresses into physical addresses.

The page table maps each logical page number to a physical page frame number.

The mapping from virtual to physical address is done by the Memory Management
Unit (MMU) which is a hardware device and this mapping is known as the paging
technique.
Example

 Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main
memory will be divided into the collection of 16 frames of 1 KB each.

 There are 4 processes in the system that is P1, P2, P3 and P4 of 4 KB each. Each
process is divided into pages of 1 KB each so that one page can be stored in one frame.

 Initially, all the frames are empty, therefore the pages of the processes will get stored in a contiguous way.

 Frames, pages and the mapping between the two is shown in the image below.
Paging
Paging
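The frame numbers below are made up for illustration; with 1 KB pages as in the example, a logical address splits into a page number (`address // 1024`) and an offset (`address % 1024`), so the MMU's lookup can be sketched as:

```python
PAGE_SIZE = 1024  # 1 KB pages, matching the example

# Hypothetical page table for one process: logical page -> physical frame
page_table = {0: 4, 1: 7, 2: 5, 3: 9}

def translate(logical_addr):
    """Translate a logical address to a physical address via the page table."""
    page = logical_addr // PAGE_SIZE      # which logical page
    offset = logical_addr % PAGE_SIZE     # position inside that page
    frame = page_table[page]              # the MMU consults the page table
    return frame * PAGE_SIZE + offset    # frame base + offset

# Logical address 2060 = page 2, offset 12 -> frame 5, physical 5*1024 + 12
print(translate(2060))  # 5132
print(translate(0))     # 4096 (page 0 maps to frame 4)
```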
Segmentation
In segmentation, the memory is divided into variable-sized segments, and each
segment is assigned to a process. This technique is more flexible than paging but
requires more overhead to keep track of the allocated segments.

The segmentation technique is used in operating systems; the process is divided into many pieces called segments, and these segments are of variable size. Segmentation thus uses a variable partitioning method.

One segment is equal to one complete memory block.
Segment Table
The details about each segment are stored in a table called a segment table. The segment table is stored in one (or more) of the segments.

The segment table contains mainly two pieces of information about each segment:

1. Base: the base address of the segment.

2. Limit: the length of the segment.


Example
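A base/limit translation can be sketched as follows; the segment table values are illustrative, not from the slides:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                   # offset falls outside the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset                  # physical address = base + offset

print(translate(2, 53))   # 4353
print(translate(1, 399))  # 6699
```

A reference to offset 400 in segment 1 would trap, since that segment's limit is 400.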
Fragmentation

Fragmentation is an unwanted problem in the operating system in which the


processes are loaded and unloaded from memory, and free memory space is
fragmented.

Processes can't be assigned to memory blocks due to their small size, and the
memory blocks stay unused.
Types of Fragmentation

There are mainly two types of fragmentation in the operating system. These are as follows:

 Internal Fragmentation

 External Fragmentation
Internal Fragmentation

When a process is allocated to a memory block, and if the process is smaller than the

amount of memory requested, a free space is created in the given memory block. Due

to this, the free space of the memory block is unused, which causes internal

fragmentation.
Example of Internal Fragmentation

Assume that memory allocation in RAM is done using fixed partitioning (i.e., memory
blocks of fixed sizes). 2MB, 4MB, 4MB, and 8MB are the available sizes. The Operating
System uses a part of this RAM.

Let's suppose a process P1 with a size of 3MB arrives and is given a memory block of
4MB. As a result, the 1MB of free space in this block is unused and cannot be used to
allocate memory to another process. It is known as internal fragmentation.
Example of Internal Fragmentation
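The 1 MB of waste in the example can be computed directly; this helper is a sketch, not part of the slides:

```python
def internal_fragmentation(block_size_mb, process_size_mb):
    """Unused space left inside a fixed-size block after allocation."""
    if process_size_mb > block_size_mb:
        raise ValueError("process does not fit in the block")
    return block_size_mb - process_size_mb

# A 3 MB process placed in a 4 MB fixed partition wastes 1 MB.
print(internal_fragmentation(4, 3))  # 1
# A process that exactly fills its block wastes nothing.
print(internal_fragmentation(8, 8))  # 0
```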
How to avoid internal fragmentation?

The problem of internal fragmentation may arise due to the fixed sizes of the memory
blocks. It may be solved by assigning space to the process via dynamic partitioning.

Dynamic partitioning allocates only the amount of space requested by the process. As
a result, there is no internal fragmentation.
External fragmentation

External fragmentation happens when a dynamic memory allocation method allocates


some memory but leaves a small amount of memory unusable.

The quantity of available memory is substantially reduced if there is too much external
fragmentation.

There is enough memory space to complete a request, but it is not contiguous. It's
known as external fragmentation.
External fragmentation
How to remove external fragmentation?

With the help of paging and segmentation, we can remove the external fragmentation problem.
Conclusion
END OF THE SECTION
Virtual Memory
Chapter-IX
By: Noor Agha
Topics in the chapter

Introduction to virtual memory
Advantages of Virtual memory
Disadvantages of Virtual memory
Demand paging
Swapping
Page replacement algorithms


What is virtual memory

Virtual memory is a storage scheme that provides the user an illusion of having a very big main memory. This is done by treating a part of secondary memory as the main memory.

In this scheme, User can load the bigger size processes than the available
main memory by having the illusion that the memory is available to load the
process.

Instead of loading one big process in the main memory, the Operating
System loads the different parts of more than one process in the main
memory.

By doing this, the degree of multiprogramming will be increased and


therefore, the CPU utilization will also be increased.
How virtual memory works?

In this scheme, whenever some pages need to be loaded into the main memory for execution and the memory is not available for that many pages, the OS does not stop the pages from entering the main memory. Instead, it searches for the RAM areas that are least used in recent times or that are not referenced, and copies them into secondary memory to free space for the new pages in the main memory.

Since all of this happens automatically, it makes the computer feel like it has unlimited RAM.
What limits the size of virtual memory

The size of virtual storage is limited by:

 CPU Architecture
(32bit can address up to 2^32, 64bit can address up to 2^64)

 Operating system

 Disk space
Advantages of virtual memory

1. The degree of Multiprogramming will be increased.

2. User can run large application with less real RAM.

3. There is no need to buy more memory RAMs.


Disadvantages of virtual memory

1. The system becomes slower since swapping takes time.

2. It takes more time in switching between applications.

3. The user will have the lesser hard disk space for its use.
Demand Paging

Demand Paging is a popular method of virtual memory management.

In demand paging, the pages of a process which are least used get stored in the secondary memory.

A page is copied to the main memory only when its demand is made. Page replacement algorithms are used to determine which pages will be replaced.
Page replacement algorithms
In an operating system that uses paging for memory management, a page
replacement algorithm is needed to decide which page needs to be replaced
when a new page comes in.

Most used and famous page replacement algorithms are:

 First In First Out (FIFO): In this algorithm, the page that entered memory first (the oldest page) is replaced first.
 Optimal Page replacement (OPR): In this algorithm, pages are replaced which
would not be used for the longest duration of time in the future.
 Least Recently Used (LRU): In this algorithm, page will be replaced which is
least recently used.
 Most Recently Used (MRU): In this algorithm, page will be replaced which has
been used recently.
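Counting page faults under FIFO and LRU can be sketched as below; the reference string is illustrative:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under First In First Out replacement."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the oldest page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under Least Recently Used replacement."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))  # 10
print(lru_faults(refs, 3))   # 9
```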
Page fault
Thrashing

At any given time, only a few pages of any process are in the main memory, and therefore more processes can be maintained in memory.

When the OS brings one page in, it must throw another out. If it throws out a page just before it is used, it will have to fetch that page again almost immediately. Too much of this leads to a condition called thrashing.

The system then spends most of its time swapping pages rather than executing instructions, so a good page replacement algorithm is required.
Conclusion
END OF THE SECTION
I/O Device Management
Chapter-X
By: Noor Agha
Topics in the chapter

Introduction to I/O device management
Device drivers
Device controllers
Synchronous vs asynchronous I/O
Communication to I/O Devices
Polling
Interrupts
Conclusion
Input/output device management

We have many input devices for computers like keyboard, mouse, scanner,
microphone, etc and there are output devices like printer, speakers, etc.
Other than these, there are other hardware devices such as Disk, USB, etc.

I/O device management is a crucial function of an operating system, as it


involves managing how data is transferred between the computer and its
input/output devices, such as keyboards, printers, hard drives, and network
interfaces.
1. Device drivers

The operating system and hardware or the devices of the computer are not
connected directly to each other they are connected to each other through
special programs known as drivers.

Device drivers are software modules that can be plugged into an OS to


handle a particular device. Operating System takes help from device drivers
to handle all I/O devices.

A device driver is a special kind of software program that controls a specific


hardware device attached to a computer. Device drivers are essential for a
computer to work properly.

Device drivers help operating system to communicate with the attached


devices.
1. Device drivers

It translates OS commands into device-specific instructions.

For example, a printer driver converts OS commands into signals that the printer can understand.
Device controller

The Device Controller works like an interface between a device and a device
driver.

I/O units (Keyboard, mouse, printer, etc.) typically consist of a mechanical


component and an electronic component where electronic component is
called the device controller.

There is always a device controller and a device driver for each device to
communicate with the Operating Systems. A device controller may be able
to handle multiple devices.

As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction.
Device controller
Any device connected to the computer is connected by a plug and socket,
and the socket is connected to a device controller.

Following is a model for connecting the CPU, memory, controllers, and I/O devices, where the CPU and device controllers all use a common bus for communication.

Synchronous I/O − In this scheme CPU execution waits while I/O proceeds

Asynchronous I/O − I/O proceeds concurrently with CPU execution


2. Communication to I/O Devices

The CPU must have a way to pass information to and from an I/O device.

There are three approaches available to communicate with the CPU and Device.

1. Special Instruction I/O

2. Memory-mapped I/O

3. Direct memory access (DMA)


Special Instruction I/O

This uses CPU instructions that are specifically made for controlling I/O devices.

These instructions typically allow data to be sent to an I/O device or read from an I/O
device.
Memory-mapped I/O

A space in memory is specified that both CPU and I/O have access to.

Both CPU and I/O read from and write to this shared space.
Memory-mapped I/O
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte
is transferred. If a fast device such as a disk generated an interrupt for each byte, the
operating system would spend most of its time handling these interrupts.

The computer uses direct memory access (DMA) to reduce this overhead.

Direct Memory Access (DMA) means CPU grants I/O module authority to read from or
write to memory without involvement.

DMA module itself controls exchange of data between main memory and the I/O
device.

CPU is only involved at the beginning and end of the transfer and interrupted only
after entire block has been transferred.
Direct Memory Access (DMA)

Direct Memory Access needs a special hardware called DMA controller (DMAC) that
manages the data transfers
Polling vs Interrupts I/O

A computer must have a way of detecting the arrival of any type of input.

There are two ways that this can happen, known as polling and interrupts.

Both of these techniques allow the processor to deal with events that can happen at
any time and that are not related to the process it is currently running.
Polling I/O

Polling is the simplest way for an I/O device to communicate with the processor.

The process of periodically checking status of the device is called polling.

The I/O device simply puts the information in a Status register, and the processor
must come and get the information.
Interrupts I/O

An interrupt is a signal to the microprocessor from a device that requires attention.

A device controller puts an interrupt signal on the bus when it needs CPU’s attention
when CPU receives an interrupt, It saves its current state

When the interrupting device has been dealt with, the CPU continues with its original
task as if it had never been interrupted.
Which one is better?

 Polling: Compare this method to a teacher continually asking every student in a class, one after another, if they need help.

 Interrupt: A student informs the teacher whenever they require assistance.
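The teacher analogy for polling can be sketched in code: the CPU repeatedly reads a (made-up) device status register until it reports READY:

```python
class FakeDevice:
    """Toy device whose status register becomes READY after a few checks."""
    def __init__(self, ready_after):
        self.checks = 0
        self.ready_after = ready_after
        self.data = "hello"

    def status(self):
        self.checks += 1
        return "READY" if self.checks >= self.ready_after else "BUSY"

def poll_read(device, max_checks=100):
    """Busy-wait on the status register, then fetch the data."""
    while device.status() != "READY":     # keep asking, like the teacher
        if device.checks >= max_checks:
            raise TimeoutError("device never became ready")
    return device.data

dev = FakeDevice(ready_after=5)
print(poll_read(dev))  # hello
print(dev.checks)      # 5: the status register was read five times
```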


Conclusion
END OF THE SECTION
Storage Management
Chapter-XI
By: Noor Agha
Topics in the chapter

Introduction to storage management
Goals of Storage Management
Disk management
Goals of Disk Management
Disk Management tasks
Disk Scheduling
Disk Scheduling Algorithms
Conclusion
Introduction to storage management

Organizations rely on local and on-premises storage; thus they store all their sensitive and important data using various storage devices.

Storage Management refers to the management of the data storage


equipment’s that are used to store the user/computer generated data.

Storage management is a process for users to optimize the use of storage


devices and to protect the integrity of data for any media on which it resides.

The purpose of storage management is to help organizations find a balance


between costs, performance and storage capacity.
Storage management key attributes

These are given below:

1. Performance

2. Reliability

3. Recoverability

4. Capacity
Disk Management

As a computer user, you might have noticed that your computer's hard drive can become cluttered and slow over time. This is where disk management comes into play.

Disk management is a process used by your computer's operating system to


manage the storage of your data on your hard drive.

Disk management is the process of organizing and maintaining the storage


on a computer's hard disk.
Goals of Disk Management

Following are the goals of disk management:

To provide convenient and organized storage for users to store data.

To make access to stored data quick and efficient.

To ensure that the computer runs smoothly and efficiently.


Disk Management tasks

Following are some disk management tasks:

Partitioning

Formatting partitions to different file systems

Defragmentation

Backup
Disk Scheduling

Disk scheduling is done by operating systems to schedule I/O requests


arriving for the disk.

Multiple I/O requests may arrive by different processes and only one I/O
request can be served at a time by the disk controller.

Thus other I/O requests need to wait in the waiting queue and need to be
scheduled.

Two or more requests may be far from each other so this can result in
greater disk arm movement.
Disk Scheduling Algorithms
FCFS (First Come First Serve)

Suppose the order of request is (82,170,43,140,24,16,190)


Disk Scheduling Algorithms
SSTF (Shortest Seek Time First)

Suppose the order of request is- (82,170,43,140,24,16,190)


Disk Scheduling Algorithms
SCAN

Suppose the requests to be addressed are 82,170,43,140,24,16,190
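The total head movement for this request order can be computed directly. The slides do not give the initial head position; 50 is assumed here, as in common textbook versions of this example:

```python
def fcfs_seek_distance(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)            # move from current position to r
        head = r
    return total

def sstf_seek_distance(requests, head):
    """Total head movement when the nearest pending request is served first."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

requests = [82, 170, 43, 140, 24, 16, 190]
print(fcfs_seek_distance(requests, 50))  # 642
print(sstf_seek_distance(requests, 50))  # 208
```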


Conclusion
END OF THE SECTION
File System
Chapter-XII
By: Noor Agha
Topics in the chapter

What is file?
File attributes
File types
Introduction to file system
Advantages of file system
Disadvantages of file system
Various file systems
Operations performed by file system
File access mechanism
Conclusion
What is a file?

A file is a collection of correlated information recorded on secondary or non-volatile storage like magnetic disks, optical disks, and tapes.

It is a collection of interrelated data.

A file is a sequence of bits, bytes, or records whose meaning is defined by the file creator and user.
File Attributes

Here, are some important File attributes used in OS:

Name: It is the only information stored in a human-readable form.

Identifier: Every file is identified by a unique tag number within a file system known as an
identifier.

Location: Points to file location on device.

Type: This attribute is required for systems that support various types of files.

Size: Attribute used to display the current file size.

Protection: This attribute assigns and controls the access rights of reading, writing, and
executing the file.

Time, date and security: These are used for protection, security, and monitoring.
File Types
Introduction to File system

File system is the part of the operating system which is responsible for file
management.

A file system is a method an operating system uses to store, organize, and


manage files and directories on a storage device.

It provides a mechanism to store the data and access to the file contents
including data and programs.
The advantages of using a file system

Organization: A file system allows files to be organized into directories and


subdirectories, making it easier to manage and locate files.

Data protection: File systems often include features such as file and folder
permissions, backup and restore, and error detection and correction, to protect
data from loss or corruption.

Improved performance: A well-designed file system can improve the


performance of reading and writing data by organizing it efficiently on disk.
Disadvantages of using a file system

Compatibility issues: Different file systems may not be compatible with each
other, making it difficult to transfer data between different operating systems.

Disk space overhead: File systems may use some disk space to store metadata
and other overhead information, reducing the amount of space available for user
data.

Vulnerability: File systems can be vulnerable to data corruption, malware, and


other security threats, which can compromise the stability and security of the
system.
Various file systems
Some common types of file systems include:

FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.

NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and
encryption.

ext (Extended File System): A file system commonly used on Linux and Unix-
based operating systems.

HFS (Hierarchical File System): A file system used by macOS.

APFS (Apple File System): A new file system introduced by Apple for their Macs
and iOS devices.
Various operations performed by file system

Create:

Read:

Write:

Open:

Append: it means simply add the data to the file without erasing existing data.

Truncate: it means to reduce its size by removing some of its content.

Delete:

Close:
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file
may be accessed. There are several ways to access files:

1. Sequential access

2. Direct/Random access

3. Indexed sequential access


Sequential access

A sequential access is that in which the records are accessed in some


sequence.

the information in the file is processed in order, one record after the other.

Compilers usually access files in this fashion.


Sequential access
Direct/Random access

Random access file organization provides access to the records directly.

Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.

The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium.
Direct/Random access
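Direct access can be sketched with fixed-size records: the address of record n is simply n × record size, so the program can seek straight to it without reading earlier records (the record size and contents are made up):

```python
import io

RECORD_SIZE = 8  # fixed-size records, in bytes

# Build a "file" of four fixed-size records in memory.
f = io.BytesIO()
for name in [b"alice", b"bob", b"carol", b"dave"]:
    f.write(name.ljust(RECORD_SIZE))      # pad each record to 8 bytes

def read_record(f, n):
    """Jump directly to record n and read it."""
    f.seek(n * RECORD_SIZE)               # record address = n * record size
    return f.read(RECORD_SIZE).rstrip()

print(read_record(f, 2))  # b'carol'
print(read_record(f, 0))  # b'alice'
```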
Indexed sequential access

This mechanism is built on top of sequential access.

An index is created for each file which contains pointers to various blocks.

Index is searched sequentially and its pointer is used to access the file
directly.
Indexed sequential access
Conclusion
END OF THE SECTION
Security
Chapter-XIII
By: Noor Agha
Topics in the chapter

What is security & protection?
What is the need for security and protection?
What are various threats and challenges?
How to secure our system and data?
Conclusion
What is security & protection?

Protection and security refer to the mechanisms by which access to system resources (RAM, disk, and CPU) is controlled and the security of stored data is ensured.

They help in preventing unauthorized access to the system, system resources, and data.

Based on various procedures and policies, vulnerabilities are identified and eliminated.

It is the process of ensuring the CIA triad.


What is CIA triad?
What is CIA triad?
Confidentiality means that only authorized individuals/systems can view
sensitive or classified information.

 Integrity means that data has not been modified and should not be
modified.
 For this we use HASH function.
 It is performed on the data at both the sides, sender as well as receiver.
 Produced values at both the sides must match with each other.

Availability: system and network must always be ready and available.
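The integrity check described above can be sketched with a hash function (SHA-256, chosen for illustration): sender and receiver each compute the hash over the data, and a mismatch reveals modification:

```python
import hashlib

def digest(data):
    """Hash value computed over the data."""
    return hashlib.sha256(data).hexdigest()

# Sender computes a hash and transmits it along with the message.
message = b"transfer 100 to account 42"
sent_hash = digest(message)

# Receiver recomputes the hash; a match means the data was not modified.
print(digest(message) == sent_hash)     # True

# A tampered message produces a different hash, so the change is detected.
tampered = b"transfer 900 to account 13"
print(digest(tampered) == sent_hash)    # False
```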


What are various threats and challenges?

 A security threat is a threat that has the potential to harm computer


systems, data and organizations.

 The threat could be physical as well as non-physical.


What are various threats and challenges?

 Physical threats: a potential cause of an occurrence/event that could
result in data loss or physical damage.

 They can be classified as:

I. Internal: caused by short circuits, fire, unstable power supply, hardware failure due to
excess humidity, etc.

II. External: caused by disasters such as floods, earthquakes, landslides, etc.
What are various threats and challenges?

 Non-physical threats: a non-physical threat is a potential source of an
incident that could result in:

I. Hampering or restricting business operations that depend on computer systems.

II. Loss of sensitive data or information.

III. Illegally tracking the activities on others' computer systems.

IV. Theft of users' IDs and passwords, etc.


What are various threats and challenges?
The non-physical threats are commonly caused by:

I. Malware: malicious software whose presence the user is unaware of; it damages the computer and degrades its performance.

II. Virus: a program that gets into the computer and replicates itself; it infects files and degrades performance.

III. Spyware: a program that tracks, records, and reports a user's activity (offline and online) and shares it with an intruder.

IV. Worms: a standalone program that gets into the computer and replicates itself, degrading performance; unlike a virus, it needs no host file.

V. Trojan: also known as a Trojan horse; it looks like a useful program but performs a harmful/unwanted action
(named after the wooden-horse trick the Greeks used to enter Troy and win the war).

VI. Denial-of-Service attack: the attacker poses as a legitimate user and tries to make a system or
network resource unavailable; frequent targets include banking, commerce, and trading organizations.

VII. Phishing: deceiving users in order to steal sensitive information such as credit card numbers, usernames, and passwords.
How to secure our system and data?

 Policies and procedures must be made and enforced.

 Training: users and staff must be trained on these policies and on safe practices.
How to secure our system and data?

 Physically:

I. Keep computers and network devices clean.

II. Keep computers and network devices away from anything that could damage them.

III. Prevent physical access to the system by unauthorized persons.


How to secure our system and data?

 Regular update, upgrade, and maintenance:

I. Software and hardware components must be regularly upgraded.

II. Software, especially tools that help control access and detect or prevent malware,
must be kept updated.

III. Perform regular maintenance.


How to secure our system and data?

 Regular Backup
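A regular backup can be as simple as copying the data directory into a timestamped folder. A minimal sketch follows (the directory names are hypothetical, and real backups should also be kept off-site or on separate media):

```python
import pathlib
import shutil
import time

def backup(src_dir: str, backup_root: str) -> str:
    """Copy src_dir into a new timestamped folder under backup_root."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(src_dir, dest)   # fails if dest exists, so old backups are never overwritten
    return str(dest)
```

Such a script would typically be run on a schedule (e.g. cron or Task Scheduler), and old backup folders pruned periodically to reclaim space.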
END OF THE SECTION
