

B.Sc II - Year (III Semester) : Operating Systems

UNIT-I
What is Operating System? History and Evolution of OS, Basic OS Functions,
Resource Abstraction, Types of Operating Systems – Multiprogramming Systems,
Batch Systems, Time Sharing Systems; Operating Systems for Personal Computers,
Workstations and Hand-held Devices, Process Control & Real-time Systems.
UNIT-II
Processor and User Modes, Kernels, System Calls and System Programs,
System View of the Process and Resources, Process Abstraction, Process
Hierarchy, Threads, Threading Issues, Thread Libraries; Process Scheduling,
Non-Preemptive and Preemptive Scheduling Algorithms.

UNIT-III
Process Management: Deadlock, Deadlock Characterization, Necessary and
Sufficient Conditions for Deadlock, Deadlock Handling Approaches: Deadlock
Prevention, Deadlock Avoidance and Deadlock Detection and Recovery.
Concurrent and Dependent Processes, Critical Section, Semaphores, Methods for
Inter-process Communication; Process Synchronization, Classical Process
Synchronization Problems: Producer-Consumer, Reader-Writer.

UNIT-IV
Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies – Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.

UNIT-V
File and I/O Management, OS Security: Directory Structure, File Operations,
File Allocation Methods, Device Management, Pipes, Buffer, Shared Memory,
Security Policy Mechanism, Protection, Authentication and Internal Access
Authorization.

REFERENCE BOOKS:
1. Operating System Principles by Abraham Silberschatz, Peter Baer Galvin and
Greg Gagne (7th Edition), Wiley India Edition.
2. Operating Systems: Internals and Design Principles by William Stallings (Pearson).
3. Operating Systems by J. Archer Harris and Jyoti Singh (TMH).


UNIT-1
Operating System Introduction

What is Operating System:


Definition: An operating system is system software that acts as an
interface between the user and the computer. It is used to control
resources such as the CPU, memory, input/output devices, and the
overall operations of the computer system.
An operating system provides an environment in which
a user can execute programs efficiently and conveniently. It is
the first program loaded during booting, and it remains in
memory all the time.

Objectives and Functions (or) Services of Operating System:


An operating system is system software that acts as an interface between the user and the
computer. The various services or functions provided by an operating system are as
follows:
1. Program Execution
2. I/O Operations
3. File System Manipulation
4. Error Handling
5. Resource Manager
6. User Interface
7. Multitasking
8. Security
9. Networking
1. Program Execution:
 A number of steps are needed to perform program execution.
 The program must be loaded into main memory, I/O devices and files must be
initialized, and other resources must be prepared.
 The OS handles these tasks so that the user can execute programs.
2. I/O Operations:
 Each input/output device requires its own set of instructions or signals for
operation.
 An I/O operation means a read or write operation with a specific I/O device.
 The operating system provides access to I/O devices whenever required.
3. File System manipulation:


 A file is a collection of related information. The files are stored in secondary


storage device.
 For easy access, files are grouped together into directories.
 The various file operations are creating/deleting files, backing up files, mapping
files onto secondary memory, etc.
4. Error Handling:
 Various types of errors can occur while a computer system is running.
 These include internal and external hardware errors, such as memory errors,
device failure errors, etc.
 In each case, the OS is responsible for clearing the errors without affecting running
applications.
5. Resource Manager:
 A computer has a set of resources for storing and processing data, and for
controlling these functions.
 The OS is responsible for managing these resources.
6. User Interface:
 The OS provides an environment like a CUI (Character User Interface) or GUI
(Graphical User Interface) to the user, to make the computer easy to use.
 It also translates the various instructions given by the user.
 Hence it acts as an interface between the user and the computer.
7. Multitasking:
 The OS automatically switches from one task to another when multiple tasks are
executed.
 For example, typing text, listening to music, printing information, and so on.
 The operating system is responsible for executing all the tasks at the same
time.
8. Security:
 The OS provides security to protect the various resources against unauthorized
users.
 It also uses timers, so that unauthorized processes cannot occupy the CPU indefinitely.
9. Networking:
 Networking is used for exchanging the information between different
computers.
 These computers are connected by using various communication links, such as
telephone lines or buses.

Resource Abstraction & Types of Operating Systems (Evolution of OS):


Resource abstraction is the process of hiding the details of how the
hardware operates, thereby making the computer hardware relatively easy for an
application programmer to use.
Operating systems are classified into different categories. Following are
some of the most widely used types of operating system.
1. Simple Batch System
2. Multiprogramming System
3. Distributed Systems
4. Real Time Systems
5. Time sharing Operating Systems
1. Simple Batch Systems:
In a Batch Processing System, computer programs are executed as 'batches'. In
this system, programs are collected, grouped, and executed at a time.
 In a Batch Processing System, the user has to submit a job (written on cards or
tape) to a computer operator.
 The computer operator groups all the jobs together as batches serially.
 Then the computer operator places a batch of several jobs into the input device.
 Then a special program called the Monitor executes each program in the
batch.
 The batches are executed one after another at a defined time interval.
 Finally, the operator receives the output of all jobs and returns it to the
concerned users.
2. Multiprogramming (Multitasking) Systems:
In a multiprogramming system, two or more programs are loaded into main
memory. Only one program is executed at a time by the CPU; all the other
programs are waiting for execution.

[Diagram: Processes 1, 2 and 3 loaded in main memory, sharing a single CPU]

 This operating system picks and begins to execute one job from memory.
 Once this job needs an I/O operation, the operating system switches to another
job (so the CPU and OS are always busy).
 The number of jobs in memory is always less than the number of jobs on disk (the Job Pool).
 If several jobs are ready to run at the same time, the OS chooses which one to
run based on CPU scheduling methods.

 In Multiprogramming system, CPU will never be idle and keeps on processing.

3. Distributed Operating System:


In a Distributed Operating System, the workload is shared between two or more
computers linked together by a network.
 A network is a communication path between two or more computer systems.
 In a distributed operating system, the computers are called nodes.
 It provides an illusion to its users that they are using a single
computer.
 Different computers are linked together by a communication network, i.e., a LAN
(Local Area Network) or WAN (Wide Area Network).
 The Distributed Operating system has the following two models:
1. Client-Server Model
2. Peer-to-Peer Model
1. Client-Server Model: In this model, the Client sends a resource request to
the Server,and the server provides the requested resource to the Client. The
following diagram shows the Client-Server Model.
[Diagram: Client-Server Model – a Server connected over a Network to several Clients]


2. Peer-to-Peer Model: In the P2P Model, the peers are computers which are
connected to each other via a network. Files can be shared directly between
systems on the network, without the need of a central server. The following
diagram shows the Peer-to-Peer Model.
[Diagram: Peer-to-Peer Model – several peers connected directly over a Network]

4. Real-Time Operating System


 A Real Time Operating System (RTOS) is a special-purpose operating system.
 RTOS is a very fast and small operating system. It is also called Embedded
system.
 It is used to control scientific experiments, industrial control systems, rockets,
home appliances, weapon systems etc.
 RTOS is divided into the following two categories:
1. Hard Real-Time System
2. Soft Real-Time System


1. Hard Real-Time System: It guarantees that critical tasks are completed
within the specified time. If a task is not completed within the time, then the system
is considered to have failed.
Ex: Nuclear systems, some medical equipment, flight control systems, etc.
2. Soft Real-Time System: It is a less restrictive system. If a task is not
completed within the time, the system is not considered to have failed.
Ex: Multimedia (games), home appliances, etc.
5. Time-Sharing OS:
 It allows multiple users to simultaneously share the CPU's
time.
 This OS allots a time slot to each user for execution.
 When the time slot expires, the OS allocates the CPU
to the next user on the system.
 The time slot period is between 10 and 100 ms; this period is
called a time slice or quantum.

Operating Systems for Personal Computers


1. Microsoft Windows
 Microsoft created the Windows operating system in the mid-1980s.
 There have been many different versions of Windows, but the most recent ones are
Windows 10 (released in 2015), Windows 8 (2012), Windows 7 (2009), and
Windows Vista (2007).
 Windows comes pre-loaded on most new PCs, which helps to make it the most
popular operating system in the world.

2. macOS
 macOS (previously called OS X) is a line of operating systems created by Apple.
 It comes preloaded on all Macintosh computers, or Macs.
 Some of the specific versions include Mojave (released in 2018), High Sierra
(2017), and Sierra (2016).
3. Solaris
 Best for Large workload processing, managing multiple databases, etc.


 Solaris is a UNIX-based operating system which was originally developed by Sun
Microsystems in the mid-'90s.
 In 2010 it was renamed Oracle Solaris after Oracle acquired Sun Microsystems.
It is known for its scalability and several other features, such as DTrace, ZFS
and Time Slider.
4. Linux
 Linux was introduced by Linus Torvalds and the Free Software Foundation
(FSF).
 Linux (pronounced LINN-ux) is a family of open-source operating systems,
which means they can be modified and distributed by anyone around the world.
 This is different from proprietary software like Windows, which can only be
modified by the company that owns it.
 The advantages of Linux are that it is free, and there are many different
distributions—or versions—you can choose from.
5. Chrome OS
Best for web applications.
Chrome OS is another Linux-kernel-based operating system, designed by
Google. As it is derived from the free Chromium OS, it uses the Google Chrome web
browser as its principal user interface. This OS primarily supports web applications.
WORKSTATIONS

 Workstation is a computer used for engineering applications


(CAD/CAM), desktop publishing, software development,
and other such types of applications which require a
moderate amount of computing power and relatively high
quality graphics capabilities.
 Workstations generally come with a large, high-resolution graphics screen, large
amount of RAM, inbuilt network support, and a graphical user interface. Most
workstations also have mass storage device such as a disk drive, but a special type
of workstation, called diskless workstation, comes without a disk drive.
 Common operating systems for workstations are UNIX and Windows NT. Workstations
are also single-user computers like PCs, but they are typically linked together
to form a local-area network, although they can also be used as stand-alone systems.

HANDHELD DEVICES

 Handheld devices are portable devices that provide
communications, digital photography, audio and video recording, navigation
systems, access to the internet, data storage, and personal information management.
 The most commonly used handheld operating systems are Apple iOS, Android,
BlackBerry OS, Windows Mobile, and Symbian.
 Some types of handheld devices include Global Positioning Systems (GPS),
smartwatches or fitness bands, digital cameras and video recorders, tablets, and PDAs.

Process Control
A Process Control Block is a data structure that contains information about the
process related to it. The process control block is also known as a task control
block, an entry of the process table, etc.

Process Control Block (PCB):


 The Process Control Block (PCB) is a data structure, which is created and
managed by Operating System. It is also called Task Control Block.
 Each process is represented in the operating system by a Process Control Block.
 Each and every process has its own PCB. The information in the PCB is updated
during process execution.
 The PCB contains sufficient information so that it is possible to interrupt a
running process and later resume its execution.
 The Process Control Block contains the following information:
[PCB layout: Identifier, Process state, Priority, Program Counter, Memory
Pointers, Context Data, I/O Status Information, Accounting Information]
1. Identifier: It contains a unique value, which is assigned
by the OS at the time of process creation.
2. State: It contains the process's current state. The
process state may be new, ready, running, waiting, or
terminated.
3. Priority: It contains the priority level value, relative
to other processes.
4. Program Counter: It contains the address of the
next instruction to be executed in the program.
5. Memory Pointers: It contains addresses of the
instructions and data related to the process.


6. Context Data: It contains the data which is stored in the CPU registers while the
process is executing.
7. I/O Status Information: It contains a list of I/O devices allocated to the
process, a list of open files, and so on.
8. Accounting Information: It contains the amount of processor time used, time
limits, account numbers, and so on.
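As an illustration of how such a record might look in code, here is a minimal,
hypothetical C sketch of a PCB; the field names follow the list above and are not
taken from any particular operating system.

#include <stdint.h>

/* Hypothetical process states, matching the five-state model described later. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A minimal Process Control Block sketch (illustrative only). */
struct pcb {
    int              pid;             /* Identifier assigned by the OS        */
    enum proc_state  state;           /* Current process state                */
    int              priority;        /* Priority relative to other processes */
    uintptr_t        program_counter; /* Address of the next instruction      */
    void            *memory_base;     /* Memory pointers: code/data location  */
    uint64_t         registers[16];   /* Context data saved on a switch       */
    int              open_files[16];  /* I/O status: descriptors in use       */
    uint64_t         cpu_time_used;   /* Accounting information               */
};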
REAL TIME SYSTEMS

 Real-time operating systems (RTOS) are used in environments where a large


number of events, mostly external to the computer system, must be accepted
and processed in a short time
 This system is time-bound and has a fixed deadline. The processing in this type
of system must occur within the specified constraints
 Examples of real-time operating systems are airline traffic control systems,
Command Control Systems, airline reservation systems, Heart pacemakers,
Network Multimedia Systems, robots, etc.
 The real-time operating systems can be of 3 types
 Hard Real-Time Operating System: These operating systems guarantee that
critical tasks are completed within a range of time. EX: scientific experiments,
medical imaging systems
 Soft real-time operating system: This operating system provides some
relaxation in the time limit. EX: Multimedia systems, digital audio systems,
etc.
 Firm Real-time Operating System: An RTOS of this type also has to follow
deadlines; missing a deadline has a small impact, but it can still have
unintended consequences, such as a reduction in the quality of the product.
Example: Multimedia applications.


UNIT-2

Processor
A processor is a hardware component which controls all the operations of the
computer system. It is commonly referred to as the Central Processing Unit (CPU).

 A processor is an integrated electronic circuit that performs the calculations that


run a computer.
 A processor performs arithmetical, logical, input/output (I/O) and other basic
instructions that are passed from an operating system (OS).
 Most other processes are dependent on the operations of a processor.
 The CPU is just one of the processors inside a personal computer (PC).
 The Graphics Processing Unit (GPU) is another processor, and even some hard
drives are technically capable of performing some processing.
Processor Registers: A register is a small memory that resides in the processor. It
provides data quickly to the currently executing program (process). A register can be
8, 16, 32, or 64 bits wide.
a) PC:PC stands for Program Counter .It contains the address of next instruction to
be executed.
b) IR: IR stands for Instruction Register. It stores the instruction currently being
executed.


c) MAR:MAR stands for Memory Address Register. It stores the address of the data
or instruction, fetched from the main memory.
d) MBR: MBR stands for Memory Buffer Register. It stores the data or instruction
fetched from main memory. An instruction is then copied into the Instruction Register (IR) for
execution.
e) I/OAR: I/O AR stands for Input/Output Address Register. It specifies a particular
I/O device.
f) I/OBR:I/O BR stands for Input/Output Buffer Register. It is used for exchanging
the data between an I/O module and the processor.

User Mode and Kernel Mode.


There are two modes of operation in the operating system to make sure it works
correctly. These are
1. User mode
2. Kernel mode.
1. User Mode
The system is in user mode when the operating system is running a user
application such as a text editor.
While in User Mode, the CPU executes the processes that are given
by the user in the user space.
The mode bit is set to 1 in user mode. It is changed from 1 to 0 when
switching from user mode to kernel mode.
2. Kernel Mode
 A Kernel is a computer program that is the heart of an Operating System.
 The system starts in kernel mode when it boots; after the operating system is
loaded, it executes applications in user mode.
 There are certain instructions that need to be executed by Kernel only. So, the CPU
executes these instructions in the Kernel Mode only.
Ex:- memory management should be done in Kernel-Mode only
 The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when
switching from kernel mode to user mode.
 The Operating System has control over the system.
 The Kernel also has control over everything in the system.
 The Kernel remains in the memory until the Operating System
is shut-down.
 It provides an interface between the user and the hardware
components of the system. When a process makes a request to the Kernel, then it is
called System Call.
Functions of a Kernel
 Access to computer resources
 Resource management
 Memory management
 Device management
When a user process executes in user mode and makes a system call, a system trap is
generated and the mode bit is set to zero. The system call is executed in kernel
mode. After the execution is completed, another system trap is generated, the mode
bit is set to 1, and control returns to user mode, where the process execution
continues.
System Call

 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it makes a request to the operating
system’s kernel.


 A system call provides the services of the operating system to user programs via an
Application Program Interface (API).
 It provides an interface between a process and operating system. All programs
needing resources must use system calls.
Services Provided by System Calls :
1. Process creation and management
2. Main memory management
3. File Access, Directory & File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication
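As a concrete illustration, the short C sketch below uses a few POSIX system calls
from the process-control and file-management categories (fork, open, write, close,
wait); the file name is just an example value.

#include <fcntl.h>      /* open                                   */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>   /* wait                                   */
#include <unistd.h>     /* fork, write, close                     */

int main(void) {
    pid_t pid = fork();                 /* process-control system call      */
    if (pid == 0) {                     /* child process                    */
        int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write(fd, "written by the child\n", 21);  /* file management    */
            close(fd);
        }
        _exit(0);
    }
    wait(NULL);                         /* parent waits for the child       */
    printf("child %d finished\n", (int)pid);
    return 0;
}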
System Programs

System Programming can be defined as the act of building Systems Software using
System Programming Languages.
In the computer hierarchy, the hardware comes at the lowest level. Above it sit
the operating system, then the system programs, and finally the application
programs.

Process Concepts
Process:
A Process is a program in execution. A system consists of a collection of
processes, and the processes are executed in a sequential fashion. Operating system
processes execute the system code, and user processes execute the user code.
Process Hierarchy

Process States:
 The process state is defined as the current activity of the process.
 A process goes through various states during its execution.
 The operating system places all the processes in a FIFO (First In First Out) queue
for execution.
 A dispatcher is a program; it switches the processor from one process to another for
execution.
 The different process states are as follows.

[State diagram: New --admitted--> Ready --dispatch--> Running --complete--> Terminated;
Running --timeout--> Ready; Running --event wait--> Waiting --event occurs--> Ready]

1. New State: The New state means that a process is being admitted (created) by the
operating system.
2. Ready State: The Ready state means that the process is ready to execute, i.e.,
it is waiting for a chance to execute.
3. Running State: The Running state means that the instructions of the process are
being executed.
4. Waiting State: The Waiting state means that the process is waiting for some
event to occur, such as the completion of an I/O operation. It is also known as the
Blocked state.

5. Terminated State: The Terminated state means that the process has finished its
execution. The process may have either completed its execution or been aborted for
some reason.
State Transitions of a Process
The process states change through the following transitions:
1. Null → New
2. New → Ready
3. Ready → Running
4. Running → Terminated
5. Running → Ready
6. Running → Waiting
7. Waiting → Ready

Process Creation and Termination (or) Operations on Process


The Operating System must provide a facility for process creation and
termination. The processes are created and deleted dynamically.
1. Process Creation:
When a new process is added, the Operating System creates a Process Control
Block and allocates space in main memory. These steps are called as Process
Creation.
Example: Opening MS-Word software
When the OS creates a new process at the request of another process, this is
referred to as "Process Spawning". When one process spawns (produces) another,
the former process is called the Parent process, and the spawned (produced)
process is called the Child process.
Example: Printing from MS-Word software
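On UNIX-like systems, process spawning is done with the fork() system call; the
minimal C sketch below (an illustration, not from the syllabus text) creates a
child process and lets the parent wait for it to terminate.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* parent spawns a child process            */
    if (pid < 0) {
        perror("fork");            /* process creation failed                  */
        return 1;
    }
    if (pid == 0) {
        printf("child:  pid %d\n", (int)getpid());
        _exit(0);                  /* child terminates normally                */
    }
    waitpid(pid, NULL, 0);         /* parent waits; the child is then cleaned up */
    printf("parent: child %d has terminated\n", (int)pid);
    return 0;
}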
2. Process Termination:
An operating system terminates a process in different situations. During
termination, all the process-related information is released from main memory.
Example: Closing MS-Word software
Reasons for process termination:
A process can be terminated due to the following reasons:
 Normal completion of the process
 Time limit exceeded
 I/O Failure
 Invalid instruction executed
 Parent process terminated


Process Scheduling (or) CPU Scheduling


Process scheduling is the activity of assigning processes to the processor for execution.
It is the method of executing multiple processes at a time in a multiprogramming
system.
CPU scheduling helps to achieve system objectives such as response
time, CPU utilization, waiting time, etc. In many systems, the scheduling task is
divided into three separate functions. They are:
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
[Diagram: New --long-term scheduler--> Ready; Ready/Suspend --medium-term
scheduler--> Ready --short-term scheduler--> Running --> Exit]

1. Long-Term Scheduler:
 A Long-Term Scheduler determines which programs are admitted to the system
for processing.
 Once a program is admitted, it becomes a process and is added to the queue.
 It controls the degree of multiprogramming, i.e., the number of processes present in
the ready state at any time.
 The Long-Term Scheduler is also called the Job Scheduler.

2. Short-Term Scheduler:
 The Short-Term Scheduler is also known as the CPU Scheduler or Dispatcher.
 It decides which process will execute next on the CPU, i.e., move from the Ready to
the Running state.
 It also preempts the currently running process to execute another process.
 The main aim of this scheduler is to enhance CPU performance and increase the
process execution rate.
3. Medium-Term Scheduler:
 The Medium-Term Scheduler is responsible for suspending and resuming the
processes.
 It mainly does Swapping. i.e., moving processes from Main memory to
secondary memory and vice versa.
 The Medium-Term Scheduler reduces the degree of Multi-programming.


Process Scheduling Algorithms


Scheduling algorithms are used to decide which of the processes in the queue should
be allocated to the CPU. An operating system uses a Dispatcher, which assigns a
process to the CPU.
Types of Scheduling Algorithms:
The scheduling algorithms are classified into two types. They are as follows:
Scheduling Algorithms: (I) Non-Preemptive Algorithms, (II) Preemptive Algorithms

I. Non-Preemptive Algorithms:
A non-preemptive algorithm will not preempt the currently running process. In
this case, once a process enters CPU execution, it cannot be pre-empted
until it completes its execution.
Ex: (1). First Come First Serve (FCFS)
(2). Shortest Job First (SJF)
II. Preemptive Algorithms:
A preemptive algorithm may preempt the currently running process. In this
case, the currently running process may be interrupted and moved to the Ready
state. The preemptive decision is performed when a new process arrives, when
an interrupt occurs, or when a time-out occurs.
Ex: Round Robin (RR)
1) First Come First Serve [FCFS] Algorithm:
 The FCFS algorithm is the simplest and most straightforward scheduling algorithm.
 It follows the Non-Preemptive scheduling method.
 In this algorithm, processes are executed on a first-come, first-served basis.
 This algorithm is easy to understand and implement.
 The problem with this algorithm is that the average waiting time is too long.
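For instance, with the burst times in the example below (P1 = 24 ms, P2 = 3 ms,
P3 = 3 ms, arriving in the order P1, P2, P3), the waiting times are 0, 24 and 27 ms,
so the average waiting time is (0 + 24 + 27) / 3 = 17 ms.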
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

If the processes arrive in the order P1, P2, P3, then Gantt chart of this
scheduling is as follows.

| P1                     | P2  | P3  |
0                        24    27    30

2) Shortest Job First [SJF] Algorithm:


 It is also called Shortest Process Next (SPN).
 It follows the Non-Preemptive scheduling method.
 The SJF algorithm is faster than FCFS.
 The process with the least burst time is selected from the ready queue for execution.
 This is the best approach to minimize waiting time.
 The problem with SJF is that it requires prior knowledge of the burst time of
each process.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         6
P2         8
P3         7
P4         3

The Gantt chart of SJF scheduling is as follows.

| P4  | P1    | P3    | P2    |
0     3       9       16      24
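Here P4 runs first (wait 0 ms), then P1 (wait 3 ms), P3 (wait 9 ms) and P2
(wait 16 ms), so the average waiting time is (0 + 3 + 9 + 16) / 4 = 7 ms.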
3) Round Robin [RR] Algorithm:
 The Round Robin scheduling algorithm is used in time-sharing systems.
 It is one of the most widely used algorithms.
 A fixed time (quantum) is allotted to each process for execution.
 If the running process doesn't complete within the quantum, then the process is
preempted.
 The next process in the ready queue is allocated the CPU for execution.
 The problem with this algorithm is that the average waiting time is often long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

If the time quantum is 4 milliseconds, then the Gantt chart of this scheduling is as
follows.

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
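For this example the waiting times work out to 6 ms (P1), 4 ms (P2) and 7 ms (P3),
an average of 17/3 ≈ 5.66 ms. Below is a minimal C sketch of the Round Robin idea
for processes that all arrive at time 0; the process data and quantum mirror the
example above, and the code is illustrative rather than an OS implementation.

#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};          /* burst times of P1, P2, P3      */
    int remaining[] = {24, 3, 3};          /* time still needed per process  */
    int n = 3, quantum = 4;
    int waiting[3]  = {0}, time = 0, done = 0;

    while (done < n) {                     /* cycle through the ready queue  */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time         += slice;         /* run the process for one slice  */
            remaining[i] -= slice;
            if (remaining[i] == 0) {       /* finished: wait = turnaround - burst */
                waiting[i] = time - burst[i];
                done++;
            }
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waiting time: %d ms\n", i + 1, waiting[i]);
        total += waiting[i];
    }
    printf("Average waiting time: %.2f ms\n", total / n);
    return 0;
}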
Threads
 A Thread is also called a "Light Weight Process"; it is a single unit of execution
within a process.
 A thread has its own Program Counter (PC), a register set, and a stack.
 It shares some information with the other threads of the process, like process code,
data, and open files.
 A traditional process has a single thread of control. It is also called a "Heavy
Weight Process".
 If a process contains multiple threads of control, then it can do more than one
task at a time.
 Many software packages that run on modern computers are multithreaded.
 For example, MS-Word software uses multiple threads, performing spelling
and grammar checking in the background, auto save, etc.


Threading Issues
The following are common threading issues:
a) The fork() and exec() system call
b) Signal handling


c) Thread cancellation
d) Thread pools
e) Thread local storage
a. The fork() and exec() system calls
 fork() is used to create a duplicate process. The meaning of the fork() and
exec() system calls changes in a multithreaded program.
 If a thread calls fork(), does the new process duplicate all threads, or is the new
process single-threaded?
 If a thread calls the exec() system call, the program specified in the parameter to
exec() will replace the entire process, including all threads.
b. Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a
particular event has occurred.
A signal is received either synchronously or asynchronously, depending on the
source of and the reason for the event being signalled.
All signals, whether synchronous or asynchronous, follow the same
pattern, as given below:
 A signal is generated by the occurrence of a particular event.
 The signal is delivered to a process.
 Once delivered, the signal must be handled.
c. Cancellation
Termination of a thread in the middle of its execution is called thread
cancellation.
Threads that are no longer required can be cancelled by another thread using
one of two techniques:
1. Asynchronous cancellation
2. Deferred cancellation
1. Asynchronous Cancellation
The thread is cancelled immediately.


2. Deferred Cancellation
In this method a flag is set indicating that the thread should cancel itself
when it is feasible.
For example, if multiple database threads are concurrently searching
through a database and one thread returns the result, the remaining threads might
be cancelled.
d. Thread Pools
 In a multithreaded web server, whenever the server receives a request it creates a
separate thread to service the request.
 A thread pool creates a number of threads at process start-up and places them
into a pool, where they sit and wait for work.
e. Thread Local Storage
The benefit of using threads in the first place is that most data is shared
among the threads; but sometimes threads also need thread-specific data.
The major thread libraries (Pthreads, Win32 and Java) provide support for
thread-specific data, which is called Thread Local Storage (TLS).

Thread Libraries
 Thread libraries provide programmers with an Application Program Interface (API) for
creating and managing threads.
 Thread libraries may be implemented either in user space or in kernel space.
There are two primary ways of implementing a thread library:
 The first way is to provide a library entirely in user space with no kernel support.
 The second way is to implement a kernel-level library supported directly by the
operating system.
 There are three main thread libraries in use today:
1. POSIX Pthreads - may be provided as either a user or kernel library, as an
extension to the POSIX standard (a short example follows this list).
 Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public
domain shareware for Windows.

 Global variables are shared amongst all threads.


 One thread can wait for the others to rejoin before continuing.
2. Win32 threads - provided as a kernel-level library on Windows systems.
 It is similar to Pthreads.
3. Java threads -
 Java threads are managed by the Java Virtual Machine (JVM), since Java generally
runs on a JVM.
 The implementation of threads is based upon whatever OS and hardware
the JVM is running on, i.e., either Pthreads or Win32 threads depending on
the system.
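The following is a minimal POSIX Pthreads sketch (an illustration, not part of the
original notes): it creates a thread, passes it an argument, and waits for it to
rejoin, which also shows that global variables are shared among threads.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;              /* global data is shared by all threads */

/* Thread start routine: increments the shared counter a few times. */
void *worker(void *arg) {
    int rounds = *(int *)arg;
    for (int i = 0; i < rounds; i++)
        shared_counter++;
    return NULL;
}

int main(void) {
    pthread_t tid;
    int rounds = 5;

    pthread_create(&tid, NULL, worker, &rounds);  /* spawn the thread        */
    pthread_join(tid, NULL);                      /* wait for it to rejoin   */
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}

On most systems this compiles with: cc example.c -lpthread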

 End Unit- 2

UNIT-3
Process Management

Deadlock
Deadlock: "Deadlock is a situation when a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired by some
other process." (or) A deadlock is a situation in which several processes compete
for a finite number of resources.
In a multiprogramming system, a process requests a resource, and if the
resource is not available then the process enters a waiting state. The waiting
process may never change state, because the resources it needs are held by other
waiting processes. This situation is called a Deadlock.
Consider the following Resource Allocation Graph.
[Resource Allocation Graph: R1 is assigned to P1 and P1 is waiting for R2; R2 is
assigned to P2 and P2 is waiting for R1]

In the above resource allocation graph, process P1 is holding the
resource R1 and waiting for the resource R2, which is held by process P2,
and process P2 is waiting for resource R1. This situation is called a Deadlock.

Deadlock characterization (or) Conditions for Deadlock


There are the following 4 conditions that cause the occurrence of a deadlock.
1) Mutual exclusion: At least one resource must be held in a non-sharable mode. It
means only one process at a time can use the resource. If another process requests
the same resource, the requesting process must wait until the resource has been
released.
2) Hold and wait: A process must be holding at least one resource and be waiting
for another resource that is held by another process.
3) No preemption: Resources cannot be preempted. It means no resource can be
forcibly removed from a process holding it.
4) Circular wait: Processes are waiting for resources in a circle. For example, P1 is
holding resource R1 and is waiting for resource R2; similarly, P2 is holding
resource R2 and is waiting for resource R1.

Resource-Allocation Graph:


A Resource-Allocation Graph is an analytical tool that is used to verify whether a
system is in a deadlock state or not.
[Diagrams: (1) a resource-allocation graph in which R1 is assigned to P1, P1 waits
for R2, R2 is assigned to P2 and P2 waits for R1; (2) the same cycle viewed as a
circular wait of held and requested resources; (3) a request edge P1 → R1 (resource
is required) and an assignment edge R1 → P1 (resource is held)]

In the above diagrams, P1 and P2 represent processes, R1 and R2 represent
resources, and a dot (•) represents an instance of a resource.

Methods for Handling Deadlocks


The deadlock problem can be solved in three ways. They are
1. Deadlock prevention
2. Deadlock Avoidance
3. Deadlock detection and recovery

(1). Deadlock Prevention


When the four conditions (mutual exclusion, hold and wait, no preemption,
circular wait) hold in the system, then deadlock occurs. If one of these conditions
cannot hold, we can prevent the occurrence of a deadlock. The strategy of deadlock
prevention is simply to design the system in such a way that the possibility of
deadlock is excluded.
a) Mutual exclusion:
 The Mutual exclusion condition can be prevented, whenever the resources are
sharable.
 Sharable resources do not require mutually exclusive access, and thus cannot be
involved in a deadlock.
 Read-only files are a good example of a sharable resource.


 Some resources such as files may allow multiple accesses for reading, but only
exclusive access for writing.
 If more than one process requires write access, a deadlock can occur.
b) Hold & wait:
 The hold-and-wait condition can be prevented by requiring that whenever a process
requests a resource, it does not hold any other resources.
 There are two approaches for this:
 One approach requires that all processes request all their resources at one time.
 Another approach requires that processes holding resources must release
them before requesting a new resource, and then re-acquire the released
resources along with the new request.
c) No preemption:
 If a process requests a resource which is held by another waiting process, then
the requested resource may be preempted from that waiting process.
 In the second approach, if a process requests resources which are not presently
available, then all other resources that it holds are preempted.
d) Circular wait:
 The circular-wait condition can be prevented when each resource is assigned a
numerical ordering.
 A process can request resources only in increasing order of numbering.
 For example, if process P1 is allocated resource R5, then next time, if P1 asks for
R4 or R3, which are lower than R5, such a request will not be granted. Only requests
for resources numbered higher than R5 will be granted.
(2). Deadlock Avoidance
 In deadlock avoidance, we restrict resource requests to prevent at least one of
the four conditions of deadlock.
 This leads to inefficient use of resources and inefficient execution of processes.
 With deadlock avoidance, a decision is made dynamically whether the current
resource allocation request will be granted.


 If it is granted, it potentially leads to a deadlock. Deadlock avoidance requires
knowledge of future process resource requests.
 We can describe two approaches to deadlock avoidance:
 Don't start a process if its demands may lead to deadlock.
 Don't grant an incremental resource request by a process, if this allocation might
lead to deadlock.
 The deadlock avoidance algorithm ensures that a process will never enter an
unsafe or deadlocked state.
 Each process declares the maximum number of resources of each type that it may
need; the system also tracks the number of available resources, the allocated
resources, and the maximum demand of the processes.
 Processes inform the operating system in advance how many resources they will
need.
 If the resources can be allocated to every process in some order, according to its
requirements, without a deadlock occurring, then the state is called a Safe state.
 A safe state is not a deadlocked state, and not all unsafe states are deadlocked; but
from an unsafe state, a deadlock may occur.
 We can check whether a state is safe by using the Banker's algorithm.

[Diagram: deadlocked states are a subset of unsafe states; safe states lie outside
the unsafe region]

Resource allocation:
Consider a system with a finite number of processes and a finite number of
resources. At any time a process may have zero or more resources allocated to it. The
state of the system is reflected by the current allocation of resources to processes. The
state may be a safe state or an unsafe state.


Safe State: A state is safe if the system can allocate resources to each process, up
to its maximum demand, in some order and still avoid a deadlock.

Unsafe State: A state from which the system cannot guarantee that every process can
finish; from an unsafe state a deadlock may (but need not) occur.
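The text above mentions the Banker's algorithm; the sketch below is a minimal,
hypothetical C version of its safety check for a small example (3 processes, 2
resource types; the matrices are made-up illustration data, not from the syllabus).

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes       */
#define R 2   /* number of resource types  */

/* Returns true if the state described by alloc/need/avail is safe. */
bool is_safe(int alloc[P][R], int need[P][R], int avail[R]) {
    int  work[R];
    bool finished[P] = {false};
    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int count = 0; count < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;                 /* need[i] <= work ?         */
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* pretend Pi runs, then frees its resources */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                progress = true;
                count++;
            }
        }
        if (!progress) return false;             /* no process can proceed    */
    }
    return true;
}

int main(void) {
    /* Example (illustrative) data: Allocation, Need = Max - Allocation, Available. */
    int alloc[P][R] = {{1, 0}, {1, 1}, {0, 1}};
    int need[P][R]  = {{2, 1}, {1, 0}, {1, 1}};
    int avail[R]    = {1, 1};

    printf("State is %s\n", is_safe(alloc, need, avail) ? "SAFE" : "UNSAFE");
    return 0;
}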

(3). Deadlock Detection and Recovery


 If a system does not use either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur.
 The Deadlock Detection and Recovery technique is used after the system has entered
a deadlock situation.
 A Resource Allocation Graph (RAG) is used in the deadlock detection algorithm.


 The detection algorithm examines the state of the system to detect whether a
deadlock has occurred.
 The recovery algorithm is used to recover from the deadlock.
1. Deadlock Detection: Deadlock detection is the process of determining whether a
deadlock exists or not, and identifying the processes and resources involved in the
deadlock. The basic idea is to check the allocation and availability of resources,
and to determine if the system is in a deadlocked state.
Detection strategies do not restrict process actions. With deadlock detection,
requested resources are granted to processes whenever possible. Periodically, the
OS runs an algorithm to detect the circular wait condition.
1. A deadlock exists if, and only if, there are unmarked processes at the end of
the algorithm.
2. Each unmarked process is deadlocked.
3. The strategy in this algorithm is to find a process whose request can be
satisfied with the available resources.
2. Deadlock Recovery: When a detection algorithm finds that a deadlock exists, then
several recovery methods can be used.
a) Process Termination: To eliminate deadlocks by aborting a process, we use
one of two methods. In both methods, the system reclaims all resources
allocated to the terminated processes.
1. Abort all deadlocked processes: This method clearly will break the
deadlock cycle. However, these processes may have computed for a long time, and
the results of these partial computations must be discarded and recomputed
later.
2. Abort one process at a time, until the deadlock cycle is eliminated: This
method is complicated to implement, because after each process is
aborted, a deadlock-detection algorithm must determine whether any processes
are still deadlocked.


b) Resource Preemption: Resources are preempted from the processes that are
involved in the deadlock. The preempted resources are then allocated to other
processes, so that there is a possibility of recovering the system from the deadlock.

Concurrency

Concurrency means that an application is making progress on more than one task
at the same time (concurrently). If the computer has only one CPU, the application
may not make progress on more than one task at exactly the same time, but more
than one task is being processed at a time inside the application. It does not
completely finish one task before it begins the next.
There are several kinds of concurrency. In a single-processor operating
system, there is really little point to concurrency except to support multiple users,
or to support threads that are likely to become blocked waiting on I/O, where you
don't want to waste CPU cycles. In a multi-processor or multi-core system,
concurrency can greatly speed up throughput.
Process Synchronization
Process synchronization means sharing system resources between processes in such a
way that concurrent access to shared data is handled properly, thereby minimizing the
chance of inconsistent data. Maintaining data consistency demands mechanisms to
ensure synchronized execution of cooperating processes.
 Process synchronization was introduced to handle problems that arose while
multiple processes execute.
 Process synchronization can be provided by using several different tools like
semaphores, mutexes and monitors.
 Synchronization is important for both user applications and the implementation of
the operating system.

Critical Section Problem

Consider a system consisting of n processes (P0, P1, ..., Pn-1). Each process has a
segment of code which is known as its critical section, in which the process
may be changing common variables, updating a table, writing a file and so on. The
important feature of the system is that when one process is executing in its
critical section, no other process is to be allowed to execute in its critical
section. The execution of critical sections by the processes is mutually
exclusive. The critical-section problem is to design a protocol that the processes
can use to cooperate: each process must request permission to enter its critical
section. The section of code implementing this request is the entry section.
The critical section is followed by an exit section. The remaining code is the
remainder section.
Example:
While(1)
{
    Entry Section;
    Critical Section;
    Exit Section;
    Remainder Section;
}
A solution to the critical-section problem must satisfy the following three conditions.
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish
to enter their critical sections, then only those processes that are not executing in
their remainder sections can participate in deciding which will enter its critical
section next.
3. Bounded waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request.

Semaphores
A semaphore is a synchronization tool defined by Dijkstra in 1965 for managing
concurrent processes by using the value of a simple variable.


A semaphore is simply a variable. This variable is used to solve the critical-
section problem and to achieve process synchronization in a multiprocessing
environment.
For the solution to the critical-section problem, one synchronization tool is used,
which is known as a semaphore. A semaphore 'S' is an integer variable which is
accessed through two standard operations, wait and signal. These operations were
originally termed 'P' (for wait, meaning to test) and 'V' (for signal, meaning to
increment). The classical definition of wait is:
Wait(S)
{
    While (S <= 0)
    {
        Test;
    }
    S--;
}

The classical definition of signal is:

Signal(S)
{
    S++;
}
In the case of wait, the test condition is executed with interruption and the
decrement is executed without interruption.
Wait: The wait operation decrements the value of its argument S, if it is positive.
If S is negative or zero, then no operation is performed.
Signal: The signal operation increments the value of its argument S.
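In practice, POSIX provides counting semaphores through sem_init, sem_wait and
sem_post; the short C sketch below (an illustration, not part of the original notes)
uses one as a mutex around a critical section.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore protecting the counter */
int counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* entry section: wait (P)   */
        counter++;           /* critical section          */
        sem_post(&mutex);    /* exit section: signal (V)  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* expected: 200000 */
    sem_destroy(&mutex);
    return 0;
}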

Types of Semaphores:
Binary Semaphore:
A binary semaphore is a semaphore with an integer value which can range between 0
and 1. Let 'S' be a counting semaphore. To implement the binary semaphore we need
the following data structures:
Binary semaphores S1, S2; int C;


Initially S1 = 1, S2 = 0 and the value of C is set to the initial value of the counting
semaphore 'S'. Then the wait operation of the binary semaphore can be implemented as
follows:

Wait(S):
    Wait(S1);
    C--;
    if (C < 0)
    {
        Signal(S1);
        Wait(S2);
    }
    Signal(S1);

The signal operation of the binary semaphore can be implemented as follows:

Signal(S):
    Wait(S1);
    C++;
    if (C <= 0)
        Signal(S2);
    else
        Signal(S1);

Classical Problems of Synchronization
There are various types of problems which are proposed for synchronization schemes,
such as:
Bounded Buffer Problem (Producer-Consumer): This problem is commonly used to
illustrate the power of synchronization primitives. In this scheme we assume that
the pool consists of 'N' buffers, each capable of holding one item. The 'mutex'
semaphore provides mutual exclusion for access to the buffer pool and is initialized
to the value one. The empty and full semaphores count the number of empty and full
buffers respectively. The semaphore empty is initialized to 'N' and the semaphore
full is initialized to zero. This problem is known as the producer and consumer
problem. The code of the producer is producing full buffers and the code of the
consumer is producing empty buffers. The structure of the producer process is as
follows:
do {
    produce an item in nextp
    ...
    Wait(empty);
    Wait(mutex);
    ...
    add nextp to buffer
    ...
    Signal(mutex);
    Signal(full);
} While(1);
The structure of the consumer process is as follows:
do {
    Wait(full);
    Wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    Signal(mutex);
    Signal(empty);
    ...
    consume the item in nextc
    ...
} While(1);
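For reference, here is a compact, runnable C sketch of the same bounded-buffer
scheme using POSIX semaphores and a mutex; the buffer size and item counts are
arbitrary illustration values.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                       /* number of buffer slots                 */
int buffer[N], in = 0, out = 0;

sem_t empty, full;                /* count empty and full slots             */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);                 /* wait for an empty slot          */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;                /* add item to buffer              */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                  /* one more full slot              */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                  /* wait for a full slot            */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];           /* remove item from buffer         */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                 /* one more empty slot             */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);               /* initially all N slots are empty */
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}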

Reader-Writer Problem: In this type of problem there are two types of processes,
a Reader process and a Writer process. The reader process is responsible only for
reading and the writer process is responsible for writing. This is an important
problem of synchronization which has several variations:
o The simplest one is referred to as the first reader-writer problem, which
requires that no reader will be kept waiting unless a writer has already obtained
permission to use the shared object. In other words, no reader should wait for
other readers to finish simply because a writer is waiting.
o The second reader-writer problem requires that once a writer is ready, the
writer performs its write operation as soon as possible.
The structure of a reader process is as follows:
Wait(mutex);
readcount++;
if (readcount == 1)
    Wait(wrt);
Signal(mutex);
...
reading is performed
...
Wait(mutex);
readcount--;
if (readcount == 0)
    Signal(wrt);
Signal(mutex);
The structure of the writer process is as follows:
Wait(wrt);
writing is performed;
Signal(wrt);

1. What is Deadlock? Explain about Deadlock?
2. What is Deadlock? Explain Deadlock Prevention?
3. What is Deadlock? Explain about Deadlock Avoidance?
4. What is Deadlock? Explain about Deadlock Detection and Recovery?
5. What is a Semaphore? Discuss about Semaphores?
6. What are the solutions of the Critical Section problem?
7. Define Concurrency?
8. Explain Classical Inter-process Communication problems? (OR) Explain
Classic problems of Synchronization?

UNIT-4
Memory Management & Virtual Memory

Operating System - Memory Management

 Memory management is the functionality of an operating system which handles or


manages primary memory and moves processes back and forth between main
memory and disk during execution.
 Memory management keeps track of each and every memory location.
 It checks how much memory is to be allocated to processes.
 It decides which process will get memory at what time.
 It tracks whenever some memory gets freed or unallocated and correspondingly
updates the status.
The operating system takes care of mapping the logical addresses to physical
addresses at the time of memory allocation to the program.
Memory Addresses & Description
There are three types of addresses used in a program before and after memory is
allocated
1. Symbolic addresses: The addresses used in source code. The variable names,
constants, and instruction labels are the basic elements of the symbolic address space.
2. Relative addresses: At the time of compilation, a compiler converts symbolic
addresses into relative addresses.
3. Physical addresses: The loader generates these addresses at the time when a program
is loaded into main memory.

The set of all logical addresses generated by a program is referred to as a


logical address space. The set of all physical addresses corresponding to these
logical addresses is referred to as a physical address space.

VIRTUAL (LOGICAL) AND PHYSICAL ADDRESS SPACE:

 An address generated by the CPU is commonly referred to as a logical address.
 The address loaded into the MEMORY ADDRESS REGISTER of memory is called the
physical address.
 The compile-time and load-time address-binding methods generate identical logical
and physical addresses.
 The set of all logical addresses generated by a program is a logical address space.
 The set of all physical addresses corresponding to these logical addresses is a
physical address space.
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the MEMORY MANAGEMENT UNIT (MMU).

 For example, with a relocation (base) register scheme:
If the base is at 14000, then an attempt by the user to address location 0 is
dynamically relocated to location 14000; an access to location 346 is mapped
to location 14346.

Memory Allocation
 One of the simplest methods for allocating memory is to divide memory into
several fixed-sized partitions.
 Each partition may contain exactly one process. Thus, the degree of
multiprogramming is bound by the number of partitions.
 In this multiple partition method, when a partition is free, a process is selected
from the input queue and is loaded into the free partition.

 When the process terminates, the partition becomes available for another process.
 In the Variable partition method, the operating system keeps a table, indicating
which parts of memory are available and which are occupied.
 Initially, all memory is available for user processes and is considered one large
block of available memory, a hole.
 The first fit, best fit and worst fit strategies are the most commonly used schemes to
select a free hole from the set of available holes.

First fit: Allocate the first hole that is big enough. Searching can start either at
the beginning of the set of holes or at the location where the previous first-fit
search ended. We can stop searching as soon as we find a free hole that is large
enough.
Best fit: Allocate the smallest hole that is big enough. We must search the entire
list, unless the list is ordered by size. This strategy produces the smallest leftover
hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list, unless
it is sorted by size. This strategy produces the largest leftover hole, which may be
more useful than the smaller leftover hole from a best-fit approach.
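As an illustration of these placement strategies, the C sketch below implements a
first-fit search over a list of free holes; the hole sizes are arbitrary example
values, and best fit / worst fit would differ only in how the candidate hole is
chosen.

#include <stdio.h>

/* Return the index of the first hole large enough for `request`, or -1. */
int first_fit(const int hole_size[], int n_holes, int request) {
    for (int i = 0; i < n_holes; i++)
        if (hole_size[i] >= request)
            return i;            /* stop at the first hole that is big enough */
    return -1;                   /* no hole can satisfy the request           */
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   /* free hole sizes in KB (example) */
    int n = 5, request = 212;

    int idx = first_fit(holes, n, request);
    if (idx >= 0)
        printf("Request of %d KB placed in hole %d (%d KB)\n",
               request, idx, holes[idx]);
    else
        printf("Request of %d KB cannot be satisfied\n", request);
    return 0;
}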

MEMORY ALLOCATION STRATEGIES


1. Fixed Partitioning :
 Fixed partitioning is a contiguous memory management technique.
 The main memory is divided into fixed-sized partitions, which can be of equal or
unequal size.
 Whenever we have to allocate a process memory then a free partition that is big
enough to hold the process is found. Then the memory is allocated to the process.
 If there is no free space available then the process waits in the queue to be allocated
memory.


 It is one of the oldest memory management techniques and it is easy to
implement.

2. Variable Partitioning :
 Variable partitioning is a contiguous memory management technique.
 Here the main memory is not divided into fixed partitions; a process is allocated
exactly as much space as it needs.
 The space which is left over is considered free space, which can be further used by
other processes.
 It also provides the concept of compaction.
 In compaction, the free spaces that are scattered in memory are combined, and a
single large block of free memory is made.

Paging
A computer can address more memory than the amount physically
installed on the system. This extra memory is actually called virtual memory,
and it is a section of a hard disk that is set up to emulate the computer's RAM. The
paging technique plays an important role in implementing virtual memory.


Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames. The size of a frame is kept the same as that of a page to obtain optimum utilization of main memory and to avoid external fragmentation.

Address Translation

A page address is called a logical address and is represented by a page number and an offset within the page:

Logical Address = (Page number, page offset)

A frame address is called a physical address and is represented by a frame number and an offset within the frame:

Physical Address = (Frame number, page offset)

The physical address is computed as frame number × frame size + offset.
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table, which is used throughout the execution of the program.
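A minimal C sketch of this translation; the page size and page-table contents below are assumed example values:

    #include <stdio.h>

    #define PAGE_SIZE 4096u                          /* must be a power of 2 */

    /* page_table[p] holds the frame number that page p is mapped to. */
    static const unsigned page_table[] = { 5, 2, 7, 0 };

    unsigned translate(unsigned logical) {
        unsigned page   = logical / PAGE_SIZE;       /* page number */
        unsigned offset = logical % PAGE_SIZE;       /* page offset */
        unsigned frame  = page_table[page];          /* page-table lookup */
        return frame * PAGE_SIZE + offset;           /* physical address */
    }

    int main(void) {
        /* logical address 4100 = page 1, offset 4; page 1 maps to frame 2 */
        printf("%u\n", translate(4100));             /* prints 8196 */
        return 0;
    }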
Advantages and Disadvantages of Paging
 Paging reduces external fragmentation, but it still suffers from internal fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a system having a small RAM.
Segmentation
Segmentation is another way of dividing the addressable memory. It is
another scheme of memory management and it generally supports the user view
of memory. The Logical address space is basically the collection of segments.
Each segment has a name and a length.
Basically, a process is divided into segments. Like paging, segmentation divides the memory. The difference is that paging divides memory into blocks of a fixed size, whereas segmentation divides it into variable-sized segments, which are then loaded into the logical memory space.
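Address translation then goes through a segment table whose entries hold each segment's base address and limit (length). A minimal C sketch with assumed table contents:

    #include <stdio.h>
    #include <stdlib.h>

    struct segment { unsigned base, limit; };        /* one segment-table entry */

    static const struct segment seg_table[] = {
        { 1400, 1000 },   /* segment 0 */
        { 6300,  400 },   /* segment 1 */
        { 4300, 1100 },   /* segment 2 */
    };

    unsigned translate(unsigned s, unsigned offset) {
        if (offset >= seg_table[s].limit) {          /* protection check */
            fprintf(stderr, "trap: segment overflow\n");
            exit(EXIT_FAILURE);
        }
        return seg_table[s].base + offset;           /* base + offset */
    }

    int main(void) {
        printf("%u\n", translate(2, 53));            /* prints 4353 */
        return 0;
    }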
A Program is basically a collection of segments. And a segment is a logical
unit such as:
 Main program
 Procedure
 Function
 Method
 Object
 Local variables and global variables
 Symbol table
 Common block
 Stack
 Arrays
Types of Segmentation
Given below are the types of Segmentation:
 Virtual Memory Segmentation: with this type of segmentation, each process is divided into n segments, but they are not all loaded into memory at once; segments are brought in as they are needed at run time.
 Simple Segmentation: with this type, each process is divided into n segments that are all loaded into memory together at run time, although they may be non-contiguous (that is, they may be scattered in the memory).
Characteristics of Segmentation
Some characteristics of the segmentation technique are as follows:
 The Segmentation partitioning scheme is variable-size.
 Partitions of the secondary memory are commonly known as segments.
 Partition size mainly depends upon the length of modules.
 Thus with the help of this technique, secondary memory and main memory are
divided into unequal-sized partitions.

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.
Operating System - Virtual Memory
A computer can address more memory than the amount physically
installed on the system. This extra memory is actually called virtual memory
and it is a section of a hard disk that's set up to emulate the computer's RAM.
The main advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to
extend the use of physical memory by using disk. Second, it allows us to have
memory protection, because each virtual address is translated to a physical
address.
The following are situations in which the entire program is not required to be fully loaded in main memory:
 User-written error handling routines are used only when an error occurs in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.

 The ability to execute a program that is only partially in memory would confer many benefits:
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below.
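A minimal C sketch of demand-paged translation (assumed structures, not a full operating system): a page marked invalid is not yet in memory, so the first access to it causes a page fault that loads it before the translation completes.

    #include <stdio.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096u
    #define NPAGES    8

    struct pte { unsigned frame; bool valid; };      /* one page-table entry */
    static struct pte page_table[NPAGES];            /* all invalid at start */
    static unsigned next_free_frame = 0;

    /* Pretend to read the page from the backing store into a free frame. */
    static void page_fault(unsigned page) {
        printf("page fault on page %u -> loaded into frame %u\n",
               page, next_free_frame);
        page_table[page].frame = next_free_frame++;
        page_table[page].valid = true;
    }

    unsigned translate(unsigned logical) {
        unsigned page   = logical / PAGE_SIZE;
        unsigned offset = logical % PAGE_SIZE;
        if (!page_table[page].valid)
            page_fault(page);                        /* bring the page in on demand */
        return page_table[page].frame * PAGE_SIZE + offset;
    }

    int main(void) {
        printf("%u\n", translate(8200));             /* first touch of page 2: fault */
        printf("%u\n", translate(8300));             /* page 2 already resident */
        return 0;
    }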

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system; demand segmentation can likewise be used to provide virtual memory.
End of Unit -4 

UNIT - 5: File and I/O Management, OS Security

Directory Structure

 A Directory (folder) is a special file, which contains information about stored files.
 The directory structure organizes all the files in the system.
 A disk in the system can store huge information in the form of files.
 To manage all the files, a directory structure must exist on each disk.
 Each disk in a system is divided into separate partitions. These partitions are
called virtual disks or drives.
 Each partition is treated as a separate storage device.
 Each partition contains a default directory called root directory.
 The root directory may contain subdirectories. Again each subdirectory may
contain subdirectories.
Various operations on the directory:
1. Searching a file: Finding the entry for a particular file in the directory structure.
2. Creating a file: A new file is created and added to the directory.
3. Deleting a file: When a file is not needed, it can be removed from the directory.
4. Listing directory: To view all or some of the files of the directory.
5. Renaming a file: Changing the name of a file in a directory.
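For example, the listing operation corresponds to the POSIX directory API (opendir / readdir / closedir); a minimal sketch:

    #include <stdio.h>
    #include <dirent.h>

    int main(void) {
        DIR *dir = opendir(".");                     /* open the current directory */
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)       /* one directory entry at a time */
            printf("%s\n", entry->d_name);
        closedir(dir);
        return 0;
    }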
Types of Directory Structure:
The directory structure can be classified into two types. They are
1. Single-Level directory
2. Two-Level directory.
1. Single-Level Directory: The simplest directory structure is the single-level
directory. All files are contained in the same directory, which is easy to support and
understand.
However, when the number of files increases or when the system has more than one user, a single-level directory has limitations: since all files are in the same directory, they must all have different names.
Example: the root directory (\) directly contains File1, File2, File3 and File4.
2. Two-Level Directory: A two-level directory contains a separate directory for each user. Each user directory can contain subdirectories. So, different users may create files with the same name, because each user works within his or her own directory.
Example: the root directory (\) contains the user directories Dir A and Dir B; each user directory holds that user's own files and subdirectories (such as Dir C, File1, File2, File3, File4).
FILE OPERATIONS IN OS
A file is a collection of related information. The files are stored in
secondary storage devices. In general, a file is a sequence of bits, bytes, lines,
or records.
The information in a file is defined by its creator. Different types of
information may be stored in a file. The information may be source programs,
object programs, executable programs, numeric data, text, images, sound
recordings, video information, and so on.

Some common file operations are:


 File Create operation
 File Delete operation
 File Open operation
 File Close operation
 File Read operation
 File Write operation
 File Append operation
 File Seek operation
 File Get attribute operation
 File Set attribute operation
 File Rename operation
File Create Operation
 The file is created with no data.
 The file create operation is the first step in the life of a file.
 Until a file has been created, no other operation can be performed on it.
File Delete Operation
 A file has to be deleted when it is no longer needed, just to free up disk space.
 The file delete operation is the last step in the life of a file.
 After deletion, the file no longer exists.
File Open Operation
 The process must open the file before using it.
File Close Operation
 The file must be closed to free up the internal table space, when all the accesses
are finished and the attributes and the disk addresses are no longer needed.
File Read Operation
 The file read operation is performed just to read the data that are stored in the
required file.
File Write Operation
 The file write operation is used to write the data to the file, again, generally at
the current position.
File Append Operation
 The file append operation is the same as the file write operation, except that it only adds data at the end of the file.
File Seek Operation
 For random access files, a method is needed just to specify from where to take
the data. Therefore, the file seek operation performs this task.
File Rename Operation
 The file rename operation is used to change the name of the existing file.
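Several of these operations map directly onto the POSIX file API; a minimal sketch (the file name demo.txt is only an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[6] = {0};

        /* create operation: create (or truncate) a file and open it */
        int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "hello", 5);               /* write operation                  */
        lseek(fd, 0, SEEK_SET);              /* seek operation: back to offset 0 */
        read(fd, buf, 5);                    /* read operation                   */
        printf("%s\n", buf);                 /* prints "hello"                   */
        close(fd);                           /* close operation                  */

        rename("demo.txt", "demo2.txt");     /* rename operation                 */
        unlink("demo2.txt");                 /* delete operation                 */
        return 0;
    }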

File Allocation Methods


The file allocation methods define how the files are stored in the disk blocks.
There are three main disk space or file allocation methods.
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as discussed
below:

1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given block b as the starting location, then the blocks assigned to the file will be b, b+1, b+2, ..., b+n-1. This means that, given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
For example, a file 'mail' that starts from block 19 with length = 6 blocks occupies blocks 19, 20, 21, 22, 23 and 24.
2. Linked List Allocation
In this scheme, each file is a linked list of
disk blocks which need not be contiguous. The
disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the
starting and the ending file block. Each block contains a pointer to the next
block occupied by the file.
For example, the blocks of a file 'jeep' may be randomly distributed over the disk; its last block (say block 25) contains -1, a null pointer, indicating that it does not point to any other block.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all
the blocks occupied by a file. Each file has its own index block. The ith entry in the
index block contains the disk address of the ith file block. The directory entry contains
the address of the index block.
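A short C sketch (invented structures, not an actual file system) of how each method would locate the disk block holding logical block i of a file:

    #include <stdio.h>

    /* Contiguous allocation: directory entry stores (start, length). */
    int contiguous_block(int start, int length, int i) {
        return (i < length) ? start + i : -1;
    }

    /* Linked allocation: each disk block stores the number of the next
       block (-1 marks the end); follow the chain from the first block. */
    int linked_block(const int next[], int first, int i) {
        int b = first;
        while (i-- > 0 && b != -1)
            b = next[b];
        return b;
    }

    /* Indexed allocation: the index block is an array of block numbers. */
    int indexed_block(const int index_block[], int nentries, int i) {
        return (i < nentries) ? index_block[i] : -1;
    }

    int main(void) {
        int next[32] = {0}; next[9] = 16; next[16] = 25; next[25] = -1;
        int index_block[] = { 9, 16, 25 };
        printf("%d %d %d\n",
               contiguous_block(19, 6, 2),           /* prints 21 */
               linked_block(next, 9, 2),             /* prints 25 */
               indexed_block(index_block, 3, 1));    /* prints 16 */
        return 0;
    }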
Device Management in Operating System

 Device management means controlling the Input/Output devices like disk, microphone, keyboard, printer, magnetic tape, USB ports, etc.
 A process may require various resources, including main memory, file access, access to disk drives, and others.
 If the resources are available, they can be allocated and control returned to the CPU; otherwise, the request has to be postponed until adequate resources become available.
 The system has multiple devices, and in order to handle these physical or virtual devices, the operating system uses a separate software module, the device driver, to communicate with each device controller. It also determines whether the requested device is available.
I/O devices may be divided into three categories:
1. Block Device
2. Character Device
3. Network Device
1. Block Device
It stores data in fixed-size blocks, each with its own address. For example, disks.
2. Character Device
It transmits or accepts a stream of characters, none of which can be addressed
individually. For instance, keyboards, printers, etc.
3. Network Device
It is used for transmitting the data packets.
File Protection
File protection is keeping information safe in a computer system from
physical damage and improper access. Physical damage of files in the disks can
occur due to hardware problems. Improper access is due to misuse of files by
the unauthorized users.
1. Protection from Physical damage:
 Protection from physical damage is, generally provided by maintaining
duplicate copies of files.

 Many computers have system programs that automatically copy disk files to tape at regular intervals (once per day, week, or month) to maintain a copy.
 The administrator or the user must maintain this procedure to protect important
information.
Reasons for physical damage: File systems can be damaged by various reasons.
Some of them are:
 Continuous use of hardware: Disks can be damaged due to continuous reading and writing of files.
 Power failures: Frequent Power (electrical) problems can damage the system
physically.
 Sabotage: Sabotage means intentional damage.
 Accidental damage: A user can damage a disk unintentionally.
 Software errors: Various bugs in the software also sometimes damage the
hardware.
2. Protection from improper access:
Protection from improper access can be provided in many ways, by permitting or denying access to files for authorised or unauthorised users.
a) Controlled Access:
 Granting or removing only necessary access rights to the users. So, a user
has controlled access.
 Access is permitted or denied depending on the type of access requested.
 Different types of operations may be controlled such as read file, write file,
execute file and so on.
b) Password protection:
 Another approach to the protection problem is, to set a password to each
file.
 Therefore each file can be controlled by a password.
 Only those users who know the password can access file.
c) Authentication:
 Authentication is the proper verification of the user's identity, i.e., whether the user is authorised or not.
 Various biometric systems can be used to prevent unauthorised user entry.
Inter Process Communication - Pipes
 A pipe is a communication mechanism between two or more related or interrelated processes.
 It can be either within one process or a communication between the child and the
parent processes.
 Communication can also be multi-level such as between the parent, child and the
grand-child, etc. Communication is achieved by one process writing into the pipe
and other reading from the pipe.
 The pipe system call creates two file descriptors: one used to write into the pipe and another used to read from it.
Ex:- The pipe mechanism can be viewed with a real-world analogy, such as filling water through a pipe into some container, say a bucket, and someone retrieving it, say with a mug. The filling process is nothing but writing into the pipe, and the reading process is nothing but retrieving from the pipe. This implies that the output of one (the water poured in) is the input for the other (the bucket).
Two-way Communication Using Pipes
 When the parent and the child need to write to and read from the pipes simultaneously, the solution is two-way communication using pipes.
 Two pipes are required to establish two-way communication, as in the sketch below.
Step 1 − Create two pipes. First one is for the parent
to write and child to read, say as pipe1. Second one
is for the child to write and parent to read, say as pipe2.
Step 2 − Create a child process.
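A minimal C sketch of these two steps (the messages are only examples): pipe1 carries data from parent to child, and pipe2 carries data from child to parent.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int pipe1[2], pipe2[2];                  /* [0] = read end, [1] = write end */
        char buf[64];

        if (pipe(pipe1) == -1 || pipe(pipe2) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();                      /* Step 2: create a child process */
        if (pid == 0) {                          /* child: read pipe1, write pipe2 */
            close(pipe1[1]); close(pipe2[0]);
            int n = read(pipe1[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            write(pipe2[1], "hello parent", 12);
        } else {                                 /* parent: write pipe1, read pipe2 */
            close(pipe1[0]); close(pipe2[1]);
            write(pipe1[1], "hello child", 11);
            int n = read(pipe2[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("parent received: %s\n", buf);
            wait(NULL);
        }
        return 0;
    }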
Buffer
The buffer is an area in the main memory used to store or hold the data
temporarily.
In other words, buffer temporarily stores data transmitted from one place
to another, either between two devices or an application.
The act of storing data temporarily in the buffer is called buffering.
Types of Buffering:-
There are three main types of buffering in the operating system, such as:

1. Single Buffer
In Single Buffering, only one buffer is used to transfer the data between
two devices.
The producer places one block of data into the buffer; after that, the consumer consumes it. Only when the buffer is empty does the producer
produce data into it again.

2. Double Buffer
In double buffering, two buffers are used in place of one.
In this scheme, the producer fills one buffer while the consumer consumes the other buffer simultaneously. So, the producer does not need to wait for the consumer before filling a buffer. Double buffering is also known as buffer swapping.

3. Circular Buffer
When more than two buffers are used, the buffers' collection is called a
circular buffer.
Each buffer is one unit in the circular buffer. The data transfer rate increases when a circular buffer is used rather than double buffering.
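A minimal C sketch of a circular (ring) buffer with a fixed number of slots; in a real operating system the producer and consumer run concurrently and need synchronization, which is omitted here:

    #include <stdio.h>

    #define NBUF 4

    static int ring[NBUF];
    static int head = 0, tail = 0, count = 0;    /* count = slots in use */

    int put(int item) {                          /* producer side */
        if (count == NBUF) return -1;            /* ring is full   */
        ring[tail] = item;
        tail = (tail + 1) % NBUF;                /* wrap around    */
        count++;
        return 0;
    }

    int get(int *item) {                         /* consumer side */
        if (count == 0) return -1;               /* ring is empty */
        *item = ring[head];
        head = (head + 1) % NBUF;
        count--;
        return 0;
    }

    int main(void) {
        for (int i = 1; i <= 3; i++) put(i);
        int x;
        while (get(&x) == 0) printf("%d\n", x);  /* prints 1 2 3 */
        return 0;
    }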

How Buffering Works
In an operating system, the buffer sits in main memory between the producer of the data (such as a device or an application) and its consumer: the producer fills the buffer while the consumer drains it, so the two sides do not have to run at exactly the same speed.

Shared Memory
 Shared memory is a memory shared between two or more processes.
 Each process has its own address space; if a process wants to communicate some information from its own address space to other processes, this is only possible with IPC (inter-process communication) techniques.
 Shared memory is the fastest inter-process communication
mechanism.
 The operating system maps the same memory segment into the address space of several processes, so that they can read and write in that memory segment without calling operating system functions.
To use shared memory, we have to perform two basic steps:
1. Request a memory segment that can be shared between processes to the
operating system.
2. Associate a part of that memory or the whole memory with the address space of
the calling process.
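A minimal C sketch of these two steps using POSIX shared memory (the segment name /demo_shm and size 4096 are only example values; on some systems the program must be linked with -lrt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Step 1: request a named shared memory segment from the OS. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);                         /* set the segment size */

        /* Step 2: associate the segment with this process's address space. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello via shared memory");        /* any process mapping   */
        printf("%s\n", p);                           /* "/demo_shm" sees this */

        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_shm");                     /* remove the segment */
        return 0;
    }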
Operating System Security Policy Mechanism

The process of ensuring the availability, confidentiality and integrity of the operating system is known as operating system security.
OS security refers to the processes or measures taken to protect the


operating system from dangers, including viruses, worms, malware, and remote
hacker intrusions.
Security refers to providing safety for computer system resources like software, CPU,
memory, disks, etc.
It protects against threats such as viruses and unauthorized access. It is enforced by assuring the operating system's integrity, confidentiality, and availability.

System security may be threatened through two violations, and these are
as follows:
1. Threat
A program that has the potential to harm the system seriously.
2. Attack
A breach of security that allows unauthorized access to a resource.

The goal of Security System

There are several goals of system security. Some of them are as follows:
1. Integrity
Unauthorized users must not be allowed to modify the system's objects or data.
2. Secrecy
The system's objects must only be available to a small number of
authorized users. The system files should not be accessible to everyone.
3. Availability
All system resources must be accessible to all authorized users, i.e., no
single user/process should be able to consume all system resources

Authentication and Authorization

Authentication in Operating System


 In the authentication process, the identity of a user is checked before access to the system is provided.
 Authentication is done before the authorization process.
 The authentication mechanism determines the user's identity before revealing sensitive information.
 It is very crucial for systems or interfaces where the priority is to protect confidential information.
 In this process, the user makes a provable claim about his or her individual identity or about an entity's identity.
 The credentials or claim could be a username, a password, a fingerprint, etc.

Access Authorization
 In the authorization process, a person's or user's authorities (privileges) are checked for accessing the resources.
 The authorization process is done after the authentication process.
 Authorization is the process of giving permission to do or have something.
 The system administrator defines, for the system, which users are allowed to access which file directories, the hours of access, the amount of allocated storage space, and so on.
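A minimal C sketch of an authorization-style check: after a user has been authenticated, the system decides whether the calling process may read or write a given file, here queried with the POSIX access() call (the path /etc/passwd is only an example resource):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/etc/passwd";            /* example resource */

        if (access(path, R_OK) == 0)                 /* read permission?  */
            printf("read access to %s granted\n", path);
        else
            printf("read access to %s denied\n", path);

        if (access(path, W_OK) == 0)                 /* write permission? */
            printf("write access to %s granted\n", path);
        else
            printf("write access to %s denied\n", path);

        return 0;
    }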
 End of Unit – 5 
