

UNIT-4
Memory & Storage Management
Syllabus: Memory-Management Strategies: Introduction, Contiguous memory allocation, Paging, Structure of the Page Table, Swapping. Virtual Memory Management: Introduction, Demand paging, Copy-on-write, Page replacement, Allocation of frames, Thrashing. Storage Management: Overview of Mass Storage Structure, HDD Scheduling.
Introduction
 The main purpose of a computer system is to execute programs. These
programs, together with the data they access, must be at least partially in
main memory during execution.
 To improve both the utilization of the CPU and the speed of the computer’s
response to its users, general-purpose computers must keep several
programs in memory, creating a need for memory management.
 The operating system is responsible for the following activities in
connection with memory management:
 Keeping track of which parts of memory are currently being used and
who is using them
 Deciding which processes (or parts of processes) and data to move
into and out of memory
 Allocating and de-allocating memory space as needed.
Memory Hardware:
 Memory consists of a large array of bytes, each with its own address. The
CPU fetches instructions from memory according to the value of the
program counter.
 Main memory and the registers built into the processor itself are the only
general-purpose storage that the CPU can access directly.
 Registers that are built into the CPU are generally accessible within one
cycle of the CPU clock.
 A memory access may take many cycles of the CPU clock. In such cases,
the processor normally needs to stall, since it does not have the data
required to complete the instruction.
 The remedy is to add fast memory between the CPU and main memory,
typically on the CPU chip for fast access, called a cache.
Fig: Memory Hierarchy



Memory Protection:
 For proper system operation we must protect the operating system from
access by user processes.
 Each process has a separate memory space. Separate per-process memory
space protects the processes from each other.
 The hardware protection of memory is provided by two registers
 Base Register
 Limit Register
 The base register holds the smallest legal physical memory address, called
the starting address of the process.
 The Limit register specifies the size of range of the process.
 If the base register holds 300040 and the limit register is 120900, then the
program can legally access all addresses from 300040 through 420939

 Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the registers.
 Any attempt by a program executing in user mode to access OS memory or
other users' memory results in a trap to the operating system, which treats
the attempt as a fatal addressing error.
 The base and limit registers can be loaded only by the operating system into
the CPU hardware.


 This scheme prevents a user program from modifying the code or data
structures of either the operating system or other users.
 The address generated by the CPU for a process must lie between the base
address of the process and base + limit; otherwise, the hardware sends a
trap (addressing-error interrupt) to the OS.
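The check can be sketched in C as follows. This is illustrative only: in a real system the comparison is performed by hardware on every user-mode memory reference, and the trap is handled by the OS; the function name here is hypothetical. The values are taken from the example above.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of the hardware base/limit check: an address
 * generated in user mode is legal only if base <= address < base + limit. */
static void check_address(uint32_t address, uint32_t base, uint32_t limit) {
    if (address >= base && address < base + limit) {
        printf("address %u: legal access\n", address);
    } else {
        /* In hardware this raises a trap (addressing error) to the OS. */
        fprintf(stderr, "address %u: trap to operating system\n", address);
        exit(EXIT_FAILURE);
    }
}

int main(void) {
    /* From the example above: base = 300040, limit = 120900. */
    check_address(300040, 300040, 120900); /* legal: first address      */
    check_address(420939, 300040, 120900); /* legal: last legal address */
    check_address(420940, 300040, 120900); /* illegal: traps            */
    return 0;
}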
Address Binding:
(Multistep processing of a User Program (or) Steps a user program goes through before being executed)
 Address binding is the process of mapping the program's logical or virtual
addresses to corresponding physical or main memory addresses.
 Addresses in the source program are generally symbolic.
 A compiler typically binds these symbolic addresses to relocatable addresses.
 The linkage editor or loader in turn binds the relocatable addresses to
absolute addresses
 Each binding is a mapping from one address space to another.

 The binding of instructions and data to memory addresses can be done in three ways.
1) Compile time: If we know at compile time where the process will reside
in memory, then absolute code can be generated.
2) Load time: If it is not known at compile time where the process will
reside in memory, then the compiler must generate relocatable code. In
this case, final binding is delayed until load time.
3) Execution time: If the process can be moved during its execution from
one memory segment to another, then binding must be delayed until run
time.
Note: The first address of the user process need not be 00000


Logical vs. Physical Address Space:


 An address generated by the CPU is commonly referred to as a logical
address, whereas an address seen by the memory unit, that is, the one
loaded into the memory-address register of the memory is commonly
referred to as a physical address.
 The compile-time and load-time address-binding methods generate identical
logical and physical addresses. However, the execution-time address
binding scheme results in differing logical and physical addresses.
 The set of all logical addresses generated by a program is a logical address
space. The set of all physical addresses corresponding to these logical
addresses is a physical address space.
Mapping from virtual to physical addresses:
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
 The base register is also called as relocation register.
 The value in the relocation register is added to every address generated by a
user process at the time the address is sent to memory.
 For example, if the base is at 14000, then an attempt by the user to address
location 0 is dynamically relocated to location 14000; an access to location
346 is mapped to location 14346.
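A minimal sketch of this dynamic relocation, using the numbers from the example above (the function name is illustrative; in reality the addition happens inside the MMU):

#include <stdint.h>
#include <stdio.h>

/* Sketch of MMU relocation: every logical address is translated by
 * adding the value held in the relocation (base) register. */
static uint32_t mmu_translate(uint32_t logical, uint32_t relocation) {
    return logical + relocation;
}

int main(void) {
    uint32_t base = 14000;                    /* relocation register */
    printf("%u\n", mmu_translate(0, base));   /* prints 14000        */
    printf("%u\n", mmu_translate(346, base)); /* prints 14346        */
    return 0;
}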

Logical Address vs. Physical Address:
1. Logical addresses are generated by the CPU during a program's execution; physical addresses refer to the actual location in the computer's memory.
2. Logical addresses are visible to the program or process running on the CPU; physical addresses are not directly visible to the program.
3. The logical address is virtual; the physical address is the actual address of the memory.
4. The logical address is generated by the CPU; the physical address is computed by the MMU.
5. The set of all logical addresses generated by the CPU in reference to a program is referred to as the logical address space; the set of all physical addresses mapped to the corresponding logical addresses is referred to as the physical address space.


Note:
 The user program never sees the real physical addresses
 We can use logical address and virtual address interchangeably.
i.e., the logical address is virtual, as it does not exist physically; hence it is also
called a virtual address.
 The user program generates only logical addresses.
Dynamic Loading:
 Dynamic Loading is the process of loading a routine only when it is called
or needed during runtime.
 Initially all routines are kept on disk in a relocatable load format.
 The main program is loaded into memory and is executed. When a routine
needs to call another routine, the calling routine first checks to see whether
the other routine has been loaded. If it has not, the relocatable linking loader
is called to load the desired routine into memory.
 The advantage of dynamic loading is that a routine is loaded only when it is
needed.
 This method is particularly useful when large amounts of code are needed to
handle infrequently occurring cases, such as error routines.
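On POSIX systems this style of dynamic loading is available to programmers through the dlopen()/dlsym() interface. A minimal sketch follows; the library name "libm.so.6" is Linux-specific, and the program is linked with -ldl.

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the math library only when it is actually needed. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* Look up the desired routine in the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}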
Dynamic Linking and Shared Libraries:
 Dynamically linked libraries are system libraries that are linked to user
programs when the programs are in execution.
 In dynamic linking, the linking of system libraries is postponed until
execution time.
 Static Linking combines the system libraries to the user program at the time
of compilation.
 Dynamic linking saves both the disk space and the main memory space.
 A library can be replaced by a new version, and all programs that
reference it will automatically use the new version. Such libraries are
called shared libraries, and this scheme is possible only with dynamic linking.
Note:
 Compiling is process of converting source program into object files.
 Linking is process of combining object files and system libraries into an executable file.
 Loading is process of loading a program from secondary storage to main memory.
Contiguous Memory Allocation
 In contiguous memory allocation, each process is contained in a single
section of memory that is contiguous to the section containing the next
process.
 Contiguous memory management strategies involve allocating memory to
processes in contiguous blocks (adjacent slots in main memory), where each
process's memory is located in a single, contiguous region of physical
memory.


 There are two methods of contiguous memory allocation:
 Fixed-partition scheme (or) Static partition (MFT: Multiprogramming with Fixed Number of Tasks)
 Variable-partition scheme (or) Dynamic partition (MVT: Multiprogramming with Variable Number of Tasks)
MFT (Multiprogramming with Fixed Number of Tasks):
 In MFT, memory is divided into fixed-size partitions, and each partition is
assigned to a specific job or process.
 The number of partitions is fixed and predetermined. Each partition may
hold a single process or job or task.
 If a process is smaller than its partition, the leftover space inside the
partition is wasted (internal fragmentation); a process that needs more
memory than the largest partition cannot be accommodated at all.
 MFT is relatively simpler to implement but can lead to inefficient memory
utilization, especially if processes have varying memory requirements.
 This method was originally used by the IBM OS/360 operating system but is
no longer in use.
 Advantages: Simple to implement and little operating system overhead.
 Disadvantage:
 Inefficient use of memory due to internal fragmentation.
 Maximum number of active processes is fixed.
MVT (Multiprogramming with Variable Number of Tasks):
 In MVT, memory allocation is dynamic, and partitions are allocated based
on the actual memory needs of processes.
 Memory is not divided into fixed partitions. Instead, free memory blocks of
varying sizes are allocated to processes as needed.
 MVT allows for more flexible memory allocation and can accommodate
processes with different memory requirements more efficiently, reducing
internal fragmentation.
 However, MVT is more complex to implement compared to MFT, as it
requires sophisticated algorithms for dynamic memory allocation and
management.
 This method is used primarily in batch environments.
 Advantages: No internal fragmentation and more efficient use of main
memory.
 Disadvantages: Inefficient use of processor due to the need for compaction
to counter external fragmentation.
Note:
 MFT allocates memory into fixed-size partitions, each assigned to a specific process, while MVT
dynamically allocates memory based on the actual memory needs of processes.
 MFT is simpler but less efficient in terms of memory utilization, while MVT offers better efficiency
at the cost of increased complexity.


Dynamic Storage Allocation Problem:


 In general the memory blocks available (holes) comprise a set of holes of
various sizes scattered throughout memory. When a process arrives and
needs memory, the system searches the set for a hole that is large enough for
this process. If the hole is too large, it is split into two parts. One part is
allocated to the arriving process; the other is returned to the set of holes.
 When a process terminates, it releases its block of memory, which is then
placed back in the set of holes. If the new hole is adjacent to other holes,
these adjacent holes are merged to form one larger hole.
 At this point, the system may need to check whether there are processes
waiting for memory and whether this newly freed and recombined memory
could satisfy the demands of any of these waiting processes.
 This procedure is a particular instance of the general dynamic storage
allocation problem, which concerns how to satisfy a request of size n from
a list of free holes.
 There are many solutions to this problem. The first-fit, best-fit, and worst-
fit strategies are the ones most commonly used to select a free hole from the
set of available holes.
 First fit. Allocate the first hole that is big enough. Searching can start
either at the beginning of the set of holes or at the location where the
previous first-fit search ended. We can stop searching as soon as we
find a free hole that is large enough.
 Best fit. Allocate the smallest hole that is big enough. We must search
the entire list, unless the list is ordered by size. This strategy produces
the smallest leftover hole.
 Worst fit. Allocate the largest hole. Again, we must search the entire
list, unless it is sorted by size. This strategy produces the largest
leftover hole, which may be more useful than the smaller leftover hole
from a best-fit approach.
First fit: Allocate the first hole that is big enough.
Best fit: Allocate the smallest hole that is big enough.
Worst fit: Allocate the largest hole that is big enough.
Note:
 Both first fit and best fit are better than worst fit in terms of decreasing time and storage
utilization.
 Neither first fit nor best fit is clearly better than the other in terms of storage utilization,
but first fit is generally faster.
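A minimal sketch of the three selection strategies over an array of hole sizes (a hypothetical helper; splitting the chosen hole and coalescing on release are omitted):

#include <stddef.h>
#include <stdio.h>

/* Return the index of the chosen hole, or -1 if none is big enough.
 * strategy: 'f' = first fit, 'b' = best fit, 'w' = worst fit. */
static int choose_hole(const size_t holes[], int n, size_t request, char strategy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request)
            continue;                                        /* hole too small */
        if (strategy == 'f')
            return i;                       /* first fit: stop at first match  */
        if (chosen == -1 ||
            (strategy == 'b' && holes[i] < holes[chosen]) || /* smallest fit   */
            (strategy == 'w' && holes[i] > holes[chosen]))   /* largest hole   */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    size_t holes[] = {100, 500, 200, 300, 600};   /* free hole sizes (KB) */
    printf("first fit -> hole %d\n", choose_hole(holes, 5, 212, 'f')); /* 1 (500) */
    printf("best fit  -> hole %d\n", choose_hole(holes, 5, 212, 'b')); /* 3 (300) */
    printf("worst fit -> hole %d\n", choose_hole(holes, 5, 212, 'w')); /* 4 (600) */
    return 0;
}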
Fragmentation:
 Fragmentation in memory management refers to the phenomenon where the
total amount of free memory in a system is fragmented into smaller, non-
contiguous blocks that cannot be efficiently utilized for allocation.
 Memory fragmentation can be internal as well as external.


External Fragmentation:
 External fragmentation occurs when the free memory space is fragmented
into small, non-contiguous blocks, making it difficult to satisfy memory
allocation requests, even though the total free memory might be sufficient.
 As processes are loaded and removed from memory, the free memory space
is broken into little pieces. External fragmentation exists when there is
enough total memory space to satisfy a request but the available spaces are
not contiguous, so that the memory cannot be allocated to the process.
 Both the first-fit and best-fit strategies for memory allocation suffer from
external fragmentation.
 Statistical analysis of first fit, for instance, reveals that, even with some
optimization, given N allocated blocks, another 0.5 N blocks will be lost to
fragmentation. That is, one-third of memory may be unusable! This property
is known as the 50-percent rule.
 Compaction: One solution to the problem of external fragmentation is
compaction. The goal is to shuffle the memory contents so as to place all
free memory together in one large block.
 Another possible solution to the external-fragmentation problem is to permit
the logical address space of the processes to be noncontiguous, thus
allowing a process to be allocated physical memory wherever such memory
is available. Two complementary techniques achieve this solution:
segmentation and paging. These techniques can also be combined.
Internal Fragmentation:
 Internal fragmentation occurs when the allocated memory is larger than
required by the process. It leads to wasted space within allocated memory
blocks.
 In fixed size partitions, each process is allocated with a partition,
irrespective of its size. The allocated memory for a process may be slightly
larger than requested memory; this memory that is wasted internal to a
partition, is called as internal fragmentation.
 Consider a multiple-partition allocation scheme with a hole of 18,464 bytes.
Suppose that the next process requests 18,462 bytes. If we allocate exactly
the requested block, we are left with a hole of 2 bytes. With this approach,
the memory allocated to a process may be slightly larger than the requested
memory. The difference between these two numbers is internal
fragmentation—unused memory that is internal to a partition.

Note:
 Internal fragmentation occurs within allocated memory blocks, while external
fragmentation occurs in the overall free memory space.
 Both types of fragmentation can impact system performance and memory utilization,
and various strategies can be employed to mitigate their effects in memory
management systems.

Paging: Non-Contiguous Memory Allocation


 Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory. It allows the physical address space
of a process to be non-contiguous, which solves the problem of external
fragmentation.
 In Paging, memory is divided into fixed-size blocks called frames, and
processes are divided into fixed-size blocks called pages. Pages of a process
can be scattered across non-contiguous physical memory locations.
Basic Method:
 The basic method for implementing paging involves breaking physical
memory into fixed-sized blocks called frames and breaking logical memory
into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available
memory frames from their source (a file system or the backing store). The
backing store is divided into fixed-sized blocks that are the same size as the
memory frames or clusters of multiple frames.
 For example, the logical address space is now totally separate from the
physical address space, so a process can have a logical 64-bit address space
even though the system has less than 2^64 bytes of physical memory.
Paging Hardware:
 The hardware support for paging is illustrated in following figure.

 Every address generated by the CPU is divided into two parts: a page
number (p) and a page offset (d).
 The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit.


Paging Model:
 The paging model of memory is shown in following figure:

 The page size (like the frame size) is defined by the hardware. The size of a
page is a power of 2, varying between 512 bytes and 1 GB per page,
depending on the computer architecture.
 The selection of a power of 2 as a page size makes the translation of a
logical address into a page number and page offset particularly easy.
 If the size of the logical address space is 2^m bytes and the page size is 2^n bytes, then
the high-order m-n bits of a logical address designate the page number, and
the n low-order bits designate the page offset. Thus, the logical address is as
follows:
| page number p (m-n bits) | page offset d (n bits) |
where p is an index into the page table and d is the displacement within
the page.
Paging Example:
 Suppose n = 2 and m = 4. Using a page size of 4 bytes and a physical
memory of 32 bytes (8 frames), we show how the programmer's view of
memory can be mapped into physical memory.
 Physical address = (frame number × page size) + offset
 Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in frame 5. Thus, logical address 0 maps to physical address
20 [= (5 × 4) + 0].
 Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) +
3].
 Logical address 4 is page 1, offset 0; according to the page table, page 1 is
mapped to frame 6. Thus, logical address 4 maps to physical address
24 [= (6 × 4) + 0].
 Logical address 13 (page 3, offset 1; page 3 is in frame 2) maps to physical address 9 [= (2 × 4) + 1].
 Logical address 15 (page 3, offset 3) maps to physical address 11 [= (2 × 4) + 3].
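These translations can be checked with a short program. The page table below (page 0 -> frame 5, page 1 -> frame 6, page 2 -> frame 1, page 3 -> frame 2) is assumed from the figure this example refers to, consistent with the mappings listed above.

#include <stdio.h>

#define PAGE_SIZE 4 /* n = 2: 4-byte pages; m = 4: 16-byte logical space */

int main(void) {
    int page_table[] = {5, 6, 1, 2}; /* page i lives in frame page_table[i] */
    int logical[] = {0, 3, 4, 13, 15};

    for (int i = 0; i < 5; i++) {
        int p = logical[i] / PAGE_SIZE;  /* page number (high-order bits) */
        int d = logical[i] % PAGE_SIZE;  /* page offset (low-order bits)  */
        int physical = page_table[p] * PAGE_SIZE + d;
        printf("logical %2d -> physical %2d\n", logical[i], physical);
    }
    return 0;
}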

 The size of physical memory in a paged memory system is different from the maximum logical size of a process.
 When we use a paging scheme, we have no external fragmentation: any free
frame can be allocated to a process that needs it. However, we may have
some internal fragmentation.
 Paging avoids external fragmentation and the need for compaction.
 It also solves the considerable problem of fitting memory chunks of varying
sizes onto the backing store.
 Paging is implemented through cooperation between the operating system
and the computer hardware.
Free Frame List:
 Each page of the process needs one frame. Thus, if the process requires n
pages, at least n frames must be available in memory.
 If n frames are available, they are allocated to this arriving process.
 The operating system is managing physical memory and knows the
allocation details of physical memory—which frames are allocated, which
frames are available, how many total frames there are, and so on.
 This information is generally kept in a data structure called a frame table.


Hardware Support:
(Paging Hardware with TLB)
 The hardware implementation of the page table can be done in several ways.
If the page table is sufficiently small, it can be implemented as a set of
dedicated registers.
 If the page table is large, it is kept in main memory, and a page-table
base register (PTBR) is used to point to it.
 When the page table is kept in main memory then two memory accesses are
required to access a byte.
 One for accessing the page table entry
 Another one for accessing the byte.
 Thus the overhead of accessing the main memory increases.
 The standard solution to this problem is to use a special, small, fast lookup
hardware cache called a translation look-aside buffer (TLB).

 Each entry in the TLB consists of two parts: a key (or tag) and a value.
 The TLB contains only a few of the page-table entries.
 When a logical address is generated by the CPU, its page number is
presented to the TLB. If the page number is found (TLB hit), its frame
number is immediately available and is used to access memory.
 If the page number is not in the TLB (TLB miss), a memory reference to the
page table must be made. When the frame number is obtained, we can use it
to access memory.
 The percentage of times that the page number of interest is found in the TLB
is called the hit ratio.
 The effective access time is a weighted average over TLB hits and misses;
it stays close to a single memory-access time when the TLB hit ratio is high.
 Thus the effective access time is given by
Effective Access Time = TLB hit ratio × (TLB search time + memory access time) + (1 − TLB hit ratio) × (TLB search time + 2 × memory access time)
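As a small sketch, the formula can be wrapped in a helper function; the values below reproduce Problem 3 further down (all times in nanoseconds):

#include <stdio.h>

/* Effective access time with a TLB, per the formula above. */
static double effective_access_time(double hit_ratio, double tlb, double mem) {
    return hit_ratio * (tlb + mem) + (1.0 - hit_ratio) * (tlb + 2.0 * mem);
}

int main(void) {
    printf("70%% hit ratio: %.0f ns\n", effective_access_time(0.70, 20, 100)); /* 150 */
    printf("90%% hit ratio: %.0f ns\n", effective_access_time(0.90, 20, 100)); /* 130 */
    return 0;
}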


Memory Protection:
 Memory protection in a paged environment is accomplished by protection
bits associated with each frame.
 One bit can define a page to be read–write or read-only. When the physical
address is being computed, the protection bits can be checked to verify that
no writes are being made to a read-only page
 One additional bit is generally attached to each entry in the page table: a
valid–invalid bit. When this bit is set to valid, the associated page is in the
process's logical address space and is thus a legal page.
 When the bit is set to invalid, the page is not in the process’s logical address
space.
 A page-table length register (PTLR) is used to indicate the size of the page
table. This value is checked against every logical address to verify that the
address is in the valid range for the process.

Shared Pages:
 An advantage of paging is the possibility of sharing common code.
 If the code is reentrant code (or pure code), however, it can be shared.
 Reentrant code is non-self-modifying code: it never changes during
execution. Thus, two or more processes can execute the same code at the
same time
 Example: Consider three processes that share a text editor consisting of three
pages. Each process has its own data page.
 Only one copy of the editor need be kept in physical memory. Each user’s
page table maps onto the same physical copy of the editor, but data pages
are mapped onto different frames.


Note:
 Pages and frames are identical in size; a page refers to a fixed-size block of a process's
virtual memory space and a frame refers to a fixed-size block of main memory. A virtual
memory page must be loaded into a frame in main memory before a processor can
access its contents.
 Physical Address Space (PAS) = size of main memory
 Size of main memory = total number of frames × frame size
 Size of logical address space = total number of pages × page size
 Frame size = page size
 If the number of frames in main memory = 2^n, then the number of bits required for the frame number = n
 If page size = 2^n bytes, then the number of bits in the page offset = n
 To represent 2^n bytes, we need n bits; with n bits we can represent 2^n bytes
 If the size of main memory = 2^n bytes, then the number of bits in a main-memory address = n
 Number of pages = logical address space / page size
 Number of frames = physical address space / frame size

Problem-1: Consider a logical address space of 256 pages with a 4-KB page size,
mapped onto a physical memory of 64 frames.
a. How many bits are required in the logical address?
b. How many bits are required in the physical address?
Solution:
Number of pages in Logical Address Space: 256 pages
Page Size: 4 KB
Physical Memory: 64 frames
Total logical address space = 256 × 4 KB = 2^8 × 2^12 = 2^20 bytes
Number of bits required = log2(2^20) = 20 bits
Total physical address space = 64 × 4 KB = 2^6 × 2^12 = 2^18 bytes (frame size = page size)
Number of bits required = log2(2^18) = 18 bits


Problem-2: Consider a logical address space of 64 pages of 1,024 words each,


mapped onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
Solution:
Logical Address Space: 64 pages
Page Size: 1024 words
Physical Memory: 32 frames
Total logical address space = 64 × 1024 = 2^6 × 2^10 = 2^16 words
Number of bits required = log2(2^16) = 16 bits
Total physical address space = 32 × 1024 = 2^5 × 2^10 = 2^15 words
Number of bits required = log2(2^15) = 15 bits
Problem 3: Compute effective access time for 70% hit ratio, 20 ns to search TLB
and 100 ns to access memory. Observe the difference when it is changed to 90%
hit ratio.
Solution:
The formula to compute effective access time is:
EAT = Hit Ratio × (TLB search time + memory access time) + (1 − Hit Ratio) × (TLB search time + 2 × memory access time)
Given:
 Hit Ratio = 70% = 0.70
 TLB search time = 20 ns
 Memory Access Time = 100 ns
Using the formula:
EAT= 0.70×(20 +100) +(1-0.7)×( 20 + 2×100 )
= 0.70×120 +0.30×220
= 84+66 =150 ns
So, the effective access time is 150 ns for a 70% hit ratio.
Now, let's compute the effective access time for a 90% hit ratio:
Given:
Hit Ratio = 90% = 0.90
Using the formula:
EAT = 0.90×(20 +100) +(1-0.90)×( 20 + 2×100 )
= 0.90×120 +0.10×220
= 108+22 =130 ns
So, the effective access time is 130 ns for a 90% hit ratio.
Problem-4: In a paging system, it takes 30 ns to search translation Look-aside
Buffer (TLB) and 90 ns to access the main memory. If the TLB hit ratio is 70%,
find the effective memory access time.
Solution:
The formula to compute effective access time is:
EAT = Hit Ratio × (TLB search time + memory access time) + (1 − Hit Ratio) × (TLB search time + 2 × memory access time)


Given:
 Hit Ratio = 70% = 0.70
 TLB search time = 30 ns
 Memory Access Time = 90 ns
Using the formula:
EAT = 0.70×(30 +90) +(1-0.7)×( 30 + 2×90 )
= 0.70×120 +0.30×210
= 84+63 =147 ns
Problem-5: If physical address is 12 bits and logical address is 13 bits find logical
address space and physical address space.
Solution:
Logical Address Space = 2^(bits in logical address) = 2^13 = 2^3 × 2^10 = 8K
Physical Address Space = 2^(bits in physical address) = 2^12 = 2^2 × 2^10 = 4K
Problem-6: If logical address space is 128 KB and physical address space is
512KB and page size is 16 KB. Find
i) Number of bits for logical address
ii) Number of bits for physical address
iii) Number of pages in logical address space
iv) Number of frames in physical address space
v) Page table size
vi) Address fields of logical address
vii) Address fields of physical address

Solution:
i) Logical Address Space (LAS): 128 KB
Number of bits required for LAS = log2(128 KB) = log2(2^17) = 17 bits
ii) Physical Address Space (PAS): 512 KB
Number of bits required for PAS = log2(512 KB) = log2(2^19) = 19 bits
iii) Number of pages = LAS / page size = 128 KB / 16 KB = 2^7 / 2^4 = 2^3 = 8 pages
iv) Number of frames = PAS / frame size = 512 KB / 16 KB = 2^9 / 2^4 = 2^5 = 32 frames
v) Page table size = number of pages × page-table entry size (number of bits for the frame number) = 8 × 5 = 40 bits (since number of frames = 2^5)
vi) Number of bits required for the page number = 3 bits (since number of pages = 2^3)
Page offset = 14 bits (since page size = 16 KB = 2^14)
Address fields of the logical address: | page number (3 bits) | page offset (14 bits) |

vii) Number of bits required for the frame number = 5 bits (since number of frames = 2^5)
Frame offset = 14 bits (since frame size = 16 KB = 2^14)
Address fields of the physical address: | frame number (5 bits) | frame offset (14 bits) |

Structure of the Page Table


(Different Types of Page Tables)
 Some of the characteristics of the Page Table are:
 It is stored in the main memory.
 Generally, the number of entries in the page table = the number of
pages into which the process is divided.
 PTBR means page table base register and it is basically used to hold
the base address for the page table of the current process.
 Each process has its own independent page table
 Different systems use different page table structures based on needs (speed
vs. memory efficiency)
 The most common techniques for structuring the page table are:
 Hierarchical paging
 Hashed page tables
 Inverted page tables
Hierarchical paging:
 In hierarchical paging, the page table is organized in a tree-like structure.
i.e.: page table is divided into smaller pieces.
 Hierarchical paging is a technique used in memory management where the
page table is divided into multiple levels of hierarchy. This helps in reducing
the memory overhead required for storing page tables, especially in systems
with large address spaces.
 The virtual address is divided into multiple parts, with each part used to
index different levels of the page table. The top-level index is used to select
a page directory, which points to a page table. Then, another index selects a
specific page within that table.


 This hierarchical approach allows for efficient use of memory because it
only requires storing page tables for the portions of the address space that
are actually in use.
 This approach is used in environments where the page table itself would
become excessively large.
 We can accomplish page table division in several ways:
 Two-level paging algorithm, in which the page table itself is also
paged.
 Three-level paging scheme, where we can page the outer page table.
 Four-level paging scheme, where the second-level outer page table
itself is also paged
 Example of two-level paging algorithm: Consider the system with a 32-bit
logical address space and a page size of 4 KB. A logical address is divided
into a page number consisting of 20 bits and a page offset consisting of 12
bits. Because we page the page table, the page number is further divided into
a 10-bit page number and a 10-bit page offset. Thus, a logical address is as
follows:
| p1 (10 bits) | p2 (10 bits) | d (12 bits) |
where p1 is an index into the outer page table and p2 is the displacement
within the page of the inner page table.
The address-translation method for this architecture is shown in following
Figure.

Because address translation works from the outer page table inward, this
scheme is also known as a forward-mapped page table.
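A minimal sketch of how the 32-bit address of this example is split with shifts and masks (the page-table walk itself, which costs one memory reference per level, is left out):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t logical = 0x12345678;          /* an example 32-bit logical address */

    uint32_t p1 = (logical >> 22) & 0x3FF;  /* top 10 bits: outer-table index  */
    uint32_t p2 = (logical >> 12) & 0x3FF;  /* next 10 bits: inner-table index */
    uint32_t d  = logical & 0xFFF;          /* low 12 bits: offset in the page */

    printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
    /* Translation would then proceed: frame = inner[outer[p1]][p2];
     * physical = (frame << 12) | d. */
    return 0;
}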


 Example System: Modern operating systems like Linux and Windows often
employ hierarchical paging. For instance, Linux has used a two-level
paging scheme (page global directory and page table) on simple 32-bit
configurations and uses a four-level paging scheme (PML4, PDPT, PD, and PT)
on x86-64 architectures.
Advantages:
 Memory Efficiency: It allows for efficient use of memory by organizing the
page table in a hierarchical structure.
 Scalability: Hierarchical paging scales well with the size of the address
space. As the address space grows, additional levels of the page table can be
added to accommodate it.
 Protection: It provides a natural way to implement protection mechanisms,
such as different access permissions for different parts of the address space,
by managing permissions at different levels of the hierarchy.
Disadvantages:
 Traversal Overhead: Traversing multiple levels of the page table hierarchy
can introduce overhead in address translation, potentially impacting
performance.
 Fragmentation: It can lead to fragmentation of the page table itself,
especially if the address space is not evenly distributed or if there are many
small allocations.
Hashed Page Tables:
 A common approach for handling address spaces larger than 32 bits is to use
hashed page table, with the hash value being the virtual page number.
 Hashed page tables use a hash function to map virtual page numbers to page
table entries.


 Each entry in the hash table contains a linked list of elements that hash to
the same location (to handle collisions). Each element consists of three
fields:
(1) The virtual page number
(2) The value of the mapped page frame
(3) A pointer to the next element in the linked list.
 The algorithm works as follows: The virtual page number in the virtual
address is hashed into the hash table. The virtual page number is compared
with field 1 in the first element in the linked list. If there is a match, the
corresponding page frame (field 2) is used to form the desired physical
address. If there is no match, subsequent entries in the linked list are
searched for a matching virtual page number.
 This scheme is shown in following figure.
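A minimal sketch of this lookup in C, assuming a simple modulo hash; the structure and function names are illustrative, not a real kernel interface.

#include <stdio.h>

/* One chain element with the three fields described above. */
struct hpt_entry {
    unsigned long vpn;      /* (1) virtual page number            */
    unsigned long frame;    /* (2) value of the mapped page frame */
    struct hpt_entry *next; /* (3) pointer to the next element    */
};

#define TABLE_SIZE 1024
static struct hpt_entry *hash_table[TABLE_SIZE];

/* Hash the VPN into the table, then walk the chain comparing VPNs.
 * Returns 1 and stores the frame on a hit, 0 on a miss (page fault). */
static int hpt_lookup(unsigned long vpn, unsigned long *frame_out) {
    for (struct hpt_entry *e = hash_table[vpn % TABLE_SIZE]; e; e = e->next) {
        if (e->vpn == vpn) {       /* compare with field (1) */
            *frame_out = e->frame; /* use field (2)          */
            return 1;
        }
    }
    return 0;                      /* no match in the chain  */
}

int main(void) {
    struct hpt_entry e = {42, 7, NULL}; /* map virtual page 42 to frame 7 */
    hash_table[42 % TABLE_SIZE] = &e;

    unsigned long frame;
    if (hpt_lookup(42, &frame))
        printf("vpn 42 -> frame %lu\n", frame); /* prints frame 7 */
    return 0;
}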

 A variation of this scheme that is useful for 64-bit address spaces has been
proposed. This variation uses clustered page tables, which are similar to
hashed page tables except that each entry in the hash table refers to several
pages (such as 16) rather than a single page. Therefore, a single page-table
entry can store the mappings for multiple physical-page frames. Clustered
page tables are particularly useful for sparse address spaces, where memory
references are noncontiguous and scattered throughout the address space.
 Example System: One example of a system that could use hashed page
tables is the early versions of the MIPS architecture. For instance, MIPS
R2000 and R3000 processors used hashed page tables in their memory
management units.
Advantages:
 Flexibility: Hashed page tables can efficiently handle large or sparse
address spaces, as they only allocate memory for pages that are actually in
use.


 Reduced Memory Overhead: They can have smaller memory overhead
compared to other techniques, especially when dealing with large address
spaces, as they only allocate memory for used pages.
 Fast Lookup: If collisions are minimized, hashed page tables can provide
fast lookup times for page translations.
Disadvantages:
 Collision Handling: Dealing with collisions in the hash table can introduce
complexity and potentially degrade performance, especially if collisions are
frequent.
 Difficulty in Implementation: Designing an effective hash function that
minimizes collisions and provides efficient lookup can be challenging.
 Limited Sequential Access: Hashed page tables may not perform well in
cases where sequential access patterns are common, as the hashing can lead
to non-contiguous mappings.
Inverted Page Tables:
 In inverted page tables, instead of having one entry in the page table for
each virtual page, there is one entry for each physical page frame. Each
entry contains the virtual address of the page stored in that frame, along
with additional information such as process ID.
 This technique is more memory-efficient than traditional page tables when
there are many processes running with small address spaces. However, it
requires searching through the entire table to find a mapping, which can be
slow.
 To mitigate this, additional data structures like hash tables or binary search
trees are often used to improve lookup performance.
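A minimal sketch of that search, with illustrative names; note that the index of the matching entry is itself the frame number.

#include <stdio.h>

#define NUM_FRAMES 8

/* One entry per physical frame: which process and virtual page occupy it. */
struct ipt_entry {
    int pid;           /* process ID                        */
    unsigned long vpn; /* virtual page stored in this frame */
};

static struct ipt_entry ipt[NUM_FRAMES];

/* Search the whole table; the matching index is the frame number. */
static int ipt_lookup(int pid, unsigned long vpn) {
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].pid == pid && ipt[frame].vpn == vpn)
            return frame;
    return -1; /* not resident: page fault */
}

int main(void) {
    ipt[3].pid = 17;
    ipt[3].vpn = 5;                            /* page 5 of process 17 is in frame 3 */
    printf("frame = %d\n", ipt_lookup(17, 5)); /* prints 3          */
    printf("frame = %d\n", ipt_lookup(17, 6)); /* prints -1 (fault) */
    return 0;
}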

 Example Systems: Inverted page tables are commonly used in virtual
memory management systems, particularly in systems with limited memory
where efficiency is crucial. For example, some versions of the Unix
operating system, such as Solaris, have used inverted page tables in the past
to manage memory efficiently on SPARC-based systems.
Advantages:
 Memory Efficiency: Inverted page tables can be more memory-efficient
compared to other techniques, especially in scenarios with a large number of
small processes, as they only store information for active page frames.


 Reduced Memory Overhead: They have a fixed-size table, which reduces
memory overhead compared to traditional page tables that require an entry
for each virtual page.
 Support for Large Address Spaces: Inverted page tables can support large
address spaces more efficiently than traditional page tables, as they only
require space for active page frames rather than the entire address space.
Disadvantages:
 Search Overhead: Lookups in inverted page tables require searching the
entire table, which can introduce overhead, especially in systems with a
large number of active page frames.
 Complexity: Managing and searching through the inverted page table can
be complex, especially in systems with high memory pressure and frequent
page swaps.
 Concurrency: Ensuring concurrent access and updates to the inverted page
table can be challenging and may require synchronization mechanisms,
which can introduce overhead.
Hierarchical Paging (Multi-Level Page Tables): divides the page table into smaller parts (like a tree) to save memory. Used in: Linux, Windows. Pros: saves memory, scales well. Cons: slower due to multiple lookups.
Hashed Page Tables: use a hash function to quickly find page-table entries. Used in: MIPS processors. Pros: good for large/sparse address spaces. Cons: slower if there are too many collisions.
Inverted Page Tables: instead of tracking all virtual pages, track only the physical frames in use. Used in: Solaris (SPARC), PowerPC. Pros: save memory (good for many small processes). Cons: slower lookups (must search the entire table).
Advantages and disadvantages of paging:
Advantages:
 Paging eliminates external fragmentation, but still suffers from internal
fragmentation.
 Paging is simple to implement and is regarded as an efficient memory-
management technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
Disadvantages:
 Paging does not preserve user view of memory allocation
 Page table requires extra memory space, so may not be good for a system
having small main memory
 Multi-level paging may lead to memory reference overhead.


Contiguous vs. Non-contiguous Memory Allocation:

Swapping
 A process must be in memory to be executed. A process, however, can be
swapped temporarily out of memory (swap-out) to a backing store (Disk)
and then brought back into main memory (swap-in) for continued execution.
This phenomenon is called swapping.
 Swapping is a memory management technique that allows OS to efficiently
utilize main memory (MM) by temporarily moving unused or less
frequently accessed portions of memory to secondary storage (SS) and
bringing them back into MM when needed.
 Swapping allows the OS to move processes between main memory and
secondary storage.
 Swapping makes it possible for the total physical address space of all
processes to exceed the real physical memory of the system, thus increasing
the degree of multiprogramming in a system.


 There are two main types of swapping:


 Standard (Traditional) Swapping (used in desktop/server OS)
 Swapping in Mobile Systems (modified due to hardware constraints)
Standard Swapping:
 Standard swapping involves moving processes between main memory and a
backing store.
 The system maintains a ready queue consisting of all processes whose
memory images are on the backing store or in memory and are ready to run.
Whenever the CPU scheduler decides to execute a process, it calls the
dispatcher. The dispatcher checks to see whether the next process in the
queue is in memory. If it is not, and if there is no free memory region, the
dispatcher swaps out a process currently in memory and swaps in the
desired process. It then reloads registers and transfers control to the selected
process.
 The context-switch time in such a swapping system is fairly high.
 Standard swapping is not used in modern operating systems. It requires too
much swapping time and provides too little execution time to be a
reasonable memory-management solution.
 Modified versions of swapping, however, are found on many systems,
including UNIX, Linux, and Windows.
Swapping on Mobile Systems:
 Mobile systems typically do not support swapping in any form.
 Mobile devices generally use flash memory rather than more spacious hard
disks as their persistent storage.
 The resulting space constraint is one reason why mobile OS designers
avoid swapping. Other reasons include the limited number of writes that
flash memory can tolerate before it becomes unreliable and the poor
throughput between main memory and flash memory in these devices.
 Instead of using swapping, when free memory falls below a certain
threshold, Apple’s iOS asks applications to voluntarily relinquish allocated
memory. Read-only data (such as code) are removed from the system and
later reloaded from flash memory if necessary. Data that have been modified
(such as the stack) are never removed. However, any applications that fail to
free up sufficient memory may be terminated by the operating system.
 Android does not support swapping and adopts a strategy similar to that
used by iOS. It may terminate a process if insufficient free memory is
available.
 Both iOS and Android support paging.
Note:
 Swapping is the technique of temporarily removing a process from the memory of a
computer system.


 The technique of swapping is employed to service a larger number of processes than can
fit into the computer’s memory.
 Swapping has the potential to improve both system performance and response times of
processes.
Paging Vs. Swapping:
Paging: divides memory into fixed-size blocks (pages) to manage physical and logical address spaces efficiently. Swapping: moves entire processes between main memory (RAM) and secondary storage (disk) when needed.
Paging: works at the page level (small fixed-size units, e.g., 4 KB). Swapping: works at the process level (entire processes are swapped).
Paging: reduces external fragmentation; may cause page faults (minor delays). Swapping: can cause high overhead due to large I/O operations (slower than paging).
Paging example: a process's pages are loaded on demand (demand paging). Swapping example: an idle process is moved to disk to free RAM for active processes.
Analogy: paging is like fetching only the required chapters of a book from a library; swapping is like returning the entire book to the shelf and borrowing another one.

Virtual Memory Introduction


 Virtual memory is a technique that allows the execution of processes that
are not completely in memory.
 Virtual memory is a memory management technique that provides an
illusion to users of a very large (mainly contiguous) address space, which is
available for use by programs.
 Virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from
physical memory.
Benefits of Virtual Memory System:
 In this scheme, programs can be larger than physical memory.
 Virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from
physical memory.
 Virtual memory allows us to run extremely large processes and to raise the
degree of multiprogramming, increasing CPU utilization.
 This technique frees programmers from the concerns of memory-storage
limitations. i.e.: it frees application programmers from worrying about
memory availability.
 Virtual memory also allows processes to share files easily and to implement
shared memory.
 It provides an efficient mechanism for process creation.


 The ability to execute a program that is only partially in memory would confer many benefits:
 A program would no longer be constrained by the amount of physical
memory that is available. Users would be able to write programs for
an extremely large virtual address space, simplifying the
programming task.
 Because each user program could take less physical memory, more
programs could be run at the same time, with a corresponding
increase in CPU utilization and throughput but with no increase in
response time or turnaround time.
 Less I/O would be needed to load or swap user programs into
memory, so each user program would run faster.
 In addition to separating logical memory from physical memory, virtual
memory allows files and memory to be shared by two or more processes
through page sharing. This leads to the following benefits:
 System libraries can be shared by several processes through mapping
of the shared object into a virtual address space.
 Processes can share memory
 Pages can be shared during process creation with the fork() system
call, thus speeding up process creation.
Virtual Memory Management:
(Virtual Memory Implementation)
 Virtual memory involves the separation of logical memory as perceived by
users from physical memory. This separation allows an extremely large
virtual memory to be provided for programmers when only a smaller
physical memory is available.

 The virtual address space of a process refers to the logical (or virtual) view
of how a process is stored in memory.


 Typically, this view is that a process begins at a certain logical address, say,
address 0 and exists in contiguous memory
 Physical memory may be organized in page frames and that the physical
page frames assigned to a process may not be contiguous. It is up to the
memory management unit (MMU) to map logical pages to physical page
frames in memory.

 We allow the heap to grow upward in memory as it is used for dynamic
memory allocation. Similarly, we allow the stack to grow downward in
memory through successive function calls.
 The large blank space (or hole) between the heap and the stack is part of the
virtual address space but will require actual physical pages only if the heap
or stack grows.
 Virtual address spaces that include holes are known as sparse address
spaces. Using a sparse address space is beneficial because the holes can be
filled as the stack or heap segments grow or if we wish to dynamically link
libraries (or possibly other shared objects) during program execution.
Shared Library using Virtual Memory:
 Virtual memory allows files and memory to be shared by two or more
processes through page sharing.


Disadvantages of Virtual memory:


 Virtual memory is not easy to implement, however, and may substantially
decrease performance if it is used carelessly.
 It also introduces complexities and performance overheads.

Note:
 Swapping involves moving entire processes between RAM and disk, while virtual
memory involves managing memory at the page level and allows programs to use more
memory than physically available by transparently moving data between RAM and disk.
Demand Paging
 Virtual memory is commonly implemented by demand paging.
 In demand paging, instead of loading the entire program in to memory,
pages are loaded only as they are needed.
 With demand-paged virtual memory, pages are loaded only when they are
demanded during program execution. Pages that are never accessed are thus
never loaded into physical memory.
 A demand-paging system is similar to a paging system with swapping where
processes reside in secondary memory. When we want to execute a process,
we swap it into memory. Rather than swapping the entire process into
memory, however, a lazy swapper (pager) never swaps a page into memory
unless that page will be needed.


Hardware to support demand paging


 The hardware to support demand paging is the same as the hardware for
paging and swapping:
 Page Table: This table has the ability to mark an entry invalid
through a valid–invalid bit or a special value of protection bits.
 Secondary Memory: This memory holds those pages that are not
present in main memory. The secondary memory is usually a high-
speed disk. It is known as the swap device, and the section of disk
used for this purpose is known as swap space.
 The valid–invalid bit is used to distinguish between the pages that are in
memory and the pages that are on the disk.
 This time, however, when this bit is set to “valid,” the associated page is
both legal and in memory. If the bit is set to “invalid,” the page either is not
valid (that is, not in the logical address space of the process) or is valid but
is currently on the disk.

Note:
 A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process.


Page Fault:
 If a page requested by a process is in memory, the process can access it.
If the requested page is not in main memory, a page fault occurs; i.e.,
access to a page marked invalid causes a page fault.
The procedure for handling page fault:
(Steps in handling a page fault)
 When the paging hardware, in translating the address through the page table,
finds that the invalid bit is set, it causes a trap to the operating system.
This trap is the result of the operating system's failure to bring the desired
page into memory.
 When a page fault occurs (indicated by an invalid bit in the page table), the
operating system handles it through the following steps:
1. Check Validity: Verify if the memory reference is valid.
2. Terminate or Load Page: If invalid, terminate the process; if valid,
proceed to load the page.
3. Find Free Frame: Locate a free memory frame.
4. Read from Disk: Schedule a disk read to load the required page into
the frame.
5. Update Tables: Modify internal and page tables to reflect that the
page is now in memory.
6. Restart Instruction: Resume the process from the point it was
interrupted.
 The scheme in which a page is never brought into memory until it is required is
called pure demand paging.
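The steps above can be seen in a toy, runnable simulation. This sketch assumes free frames are always available and ignores page replacement (covered later); all names are illustrative, not an actual kernel API.

#include <stdio.h>
#include <stdlib.h>

#define NUM_PAGES 8

/* Toy page table: page_table[p] holds the frame number, or -1 (invalid). */
static int page_table[NUM_PAGES];
static int next_free_frame = 0;

static int access_page(int page) {
    if (page < 0 || page >= NUM_PAGES) { /* 1. check validity          */
        fprintf(stderr, "invalid reference: terminating process\n");
        exit(EXIT_FAILURE);              /* 2. invalid: terminate      */
    }
    if (page_table[page] == -1) {        /* page fault                 */
        int frame = next_free_frame++;   /* 3. find a free frame       */
        printf("page fault: reading page %d into frame %d\n",
               page, frame);             /* 4. schedule the disk read  */
        page_table[page] = frame;        /* 5. update the page table   */
    }                                    /* 6. restart the instruction */
    return page_table[page];
}

int main(void) {
    for (int i = 0; i < NUM_PAGES; i++)
        page_table[i] = -1;              /* pure demand paging: nothing loaded */
    access_page(2);                      /* faults, loaded into frame 0 */
    access_page(2);                      /* hit: already resident       */
    access_page(5);                      /* faults, loaded into frame 1 */
    return 0;
}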


Performance of Demand Paging:


 Demand paging can significantly affect the performance of a computer
system.
 The effective access time is then
Effective Access Time = (1 − p) × ma + p × page fault time.
Where
p = the probability of a page fault (0 ≤ p ≤ 1)
ma = the memory-access time
 Example: With an average page-fault service time of 8 milliseconds, a
memory-access time of 200 nanoseconds, and a page-fault rate of 0.000002,
the effective access time in nanoseconds is
Effective access time = (1 − p) × ma + p × page-fault time
= (1 − 0.000002) × 200 + 0.000002 × 8,000,000
≈ 216 nanoseconds
Problem: Consider a computer system that uses demand paging for memory
management. The system has the following specifications: Memory access time: 150
nanoseconds, Page fault service time: 10 milliseconds, Page fault rate:
0.000001.Calculate the effective memory access time (EAT) for this system.
Solution: Effective access time = (1 − p) × ma + p × page fault time
= (1 – 0.000001) × 150 + 0.000001 × 10,000,000
≈ 160 nanoseconds
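As a sketch, the same formula as a reusable helper (times in nanoseconds), reproducing both results above:

#include <stdio.h>

/* EAT = (1 - p) * ma + p * page_fault_time, per the formula above. */
static double demand_paging_eat(double p, double ma, double fault_time) {
    return (1.0 - p) * ma + p * fault_time;
}

int main(void) {
    printf("%.4f ns\n", demand_paging_eat(0.000002, 200.0, 8e6)); /* ~216 ns */
    printf("%.4f ns\n", demand_paging_eat(0.000001, 150.0, 1e7)); /* ~160 ns */
    return 0;
}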
Copy-on-write
Traditional approach:
 Traditionally, fork() worked by creating a copy of the parent’s entire address
space for the child, duplicating the pages belonging to the parent. However,
considering that many child processes invoke the exec() system call
immediately after creation, the copying of the parent’s address space may be
unnecessary.
Copy-on-write approach:
 When a parent process creates a child process using fork(), instead of
immediately copying all the parent’s memory, both processes share the same
memory pages at first.
 If either process tries to modify a shared page, the OS makes a private
copy of that page for the modifying process. Unmodified pages stay shared.
 This makes process creation faster and saves memory because only
modified pages are copied.
 Copy-on-write is a common technique used by several operating systems,
including Windows, Linux, and macOS.
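The effect of copy-on-write can be observed from user code on a Unix-like
system (Linux or macOS); the snippet below is a minimal illustration of the
visible behavior, not a demonstration of the kernel's internals:

    import os

    data = bytearray(b"original")   # lives in pages of the parent's address space

    pid = os.fork()                 # child initially shares the parent's pages
    if pid == 0:
        data[:] = b"modified"       # write triggers a private copy of the page
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print(data.decode())        # prints "original": parent's page unchanged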


vfork() (Virtual Memory Fork)


 A faster but riskier version of fork() used in UNIX-like systems (Linux,
macOS, BSD).
 Unlike fork(), vfork() does not use COW—the child shares the parent’s
memory directly without copying.
 The parent is paused until the child finishes or runs exec() (which loads a
new program).
 If the child modifies memory, the parent sees those changes when it
resumes.
 Meant only for cases where the child immediately runs exec(), as it avoids
unnecessary copying.
 Used in some UNIX shells for efficiency.
Key Differences:
fork() with COW:
 Shares pages until a write occurs, then copies the affected page.
 Parent runs normally alongside the child.
 Safe: the child’s changes do not affect the parent.
 Suited to general process creation.
vfork():
 Shares the parent’s memory fully (no COW).
 Parent is suspended until the child exits or calls exec().
 Risky: the child can corrupt the parent’s memory.
 Intended only for a child that calls exec() immediately.


Page Replacement
(Need For Page Replacement)
 Page replacement is basic to demand paging
 When there is a page fault the OS decides to load the pages from the
secondary memory to the main memory. It looks for the free frame. If there
is no free frame then the pages that are not currently in use will be swapped
out of the main memory, and the desired page will be swapped into the main
memory.
 The process of swapping a page out of main memory to the swap space and
swapping in the desired page into the main memory for execution is called
as Page Replacement.

 Page replacement takes the following approach: if no frame is free, we find
one that is not currently being used and free it. We can free a frame by
writing its contents to swap space and changing the page table to indicate
that the page is no longer in memory. We can then use the freed frame to
hold the page for which the process faulted.
Steps in Page Replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a
victim frame.
c. Write the victim frame to the disk; change the page and frame tables
accordingly.
3. Read the desired page into the newly freed frame; change the page and
frame tables.
4. Continue the user process from where the page fault occurred.


 If no frames are free, two page transfers (one out and one in) are required.
This situation effectively doubles the page-fault service time and increases
the effective access time accordingly.
 This overhead can be reduced by using a modify bit (or dirty bit).
 When this scheme is used, each page or frame has a modify bit associated
with it in the hardware.
 Modify Bit: The modify bit for a page is set by the hardware whenever any
byte in the page is written into, indicating that the page has been modified.
 When we select a page for replacement, we examine its modify bit. If the
bit is set, we know that the page has been modified since it was read in from
the disk. In this case, we must write the page to the disk.
 If the modify bit is not set, however, the page has not been modified since it
was read into memory. In this case, we need not write the memory page to
the disk: it is already there.
Note:
 The valid-invalid bit indicates whether the corresponding page in virtual memory is valid
or not.
 Modify (or dirty) bit is used to track whether a page in memory has been modified since
it was loaded from disk.
Page Replacement Algorithms
 If we have multiple processes in memory, we must decide how many frames
to allocate to each process; and when page replacement is required, we must
select the frames that are to be replaced.
 The string of memory references made by a process is called a reference
string.


 There are many different page-replacement algorithms, including:
 FIFO page Replacement
 Optimal Page Replacement
 LRU Page Replacement
 LRU Approximation page Replacement algorithm
 Counting Based Page Replacement Algorithm
 Page Buffering Algorithm
1. FIFO Page Replacement:
 The simplest page-replacement algorithm is a first-in, first-out (FIFO)
algorithm.
 A FIFO replacement algorithm replaces the oldest page that was brought
into main memory.
 Example: Consider the Reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1,
2, 0, 1, 7, 0, 1 for a memory with three frames.
 The three frames are empty initially.
 The first three references (7, 0, 1) cause page faults and are brought
into these empty frames.
 Altogether, the algorithm generates 15 page faults, as the following trace shows.

 The next reference (2) replaces page 7, because page 7 was brought in
first.
 Page 0 is the next reference and 0 is already in memory, we have no
fault for this reference.
 The first reference to 3 results in replacement of page 0, since it is
now first in line.
 Because of this replacement, the next reference, to 0, will fault. Page
1 is then replaced by page 0. The process continues until all the pages
are referenced.
 Advantages:
 The FIFO page-replacement algorithm is easy to understand and
program.
 Requires minimal overhead.


 Disadvantages:
 The Performance is not always good.
 It Suffers from Belady’s Anomaly.
 Belady’s Anomaly: for some reference strings, the page-fault rate increases
as the number of allocated memory frames increases. This unexpected result
is called Belady’s Anomaly.
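A FIFO simulator of a dozen lines reproduces the count above; the fifo_faults
helper below is an illustrative sketch:

    from collections import deque

    def fifo_faults(refs, n_frames):
        frames, order, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == n_frames:        # memory full: evict oldest
                    frames.remove(order.popleft())
                frames.add(page)
                order.append(page)
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(fifo_faults(refs, 3))    # 15, matching the trace above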
2. Optimal Page Replacement
 This algorithm has the lowest page-fault rate of all algorithms and will never
suffer from Belady’s anomaly
 Optimal page replacement algorithm Replace the page that will not be used
for the longest period of time
 Example: Consider the Reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1,
2, 0, 1, 7, 0, 1 for a memory with three frames.

 The optimal replacement algorithm produces nine page faults.


 The first three references cause faults that fill the three empty frames.
 The reference to page 2 replaces page 7, because page 7 will not be
used until reference 18, whereas page 0 will be used at 5, and page 1
at 14.
 The reference to page 3 replaces page 1, as page 1 will be the last of
the three pages in memory to be referenced again.
 Advantages:
 Optimal replacement is much better than a FIFO algorithm
 Guarantees the lowest possible page fault rate (optimal performance)
 Disadvantage:
 The optimal page-replacement algorithm is difficult to implement,
because it requires future knowledge of the reference string.
 Often used as a theoretical benchmark for comparing other algorithms.
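Although OPT cannot be built into a real system, it is easy to simulate offline
when the whole reference string is known in advance; a sketch:

    def optimal_faults(refs, n_frames):
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) < n_frames:
                frames.append(page)
                continue
            # Evict the resident page whose next use lies farthest in the
            # future (a page never used again is the ideal victim).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames[frames.index(max(frames, key=next_use))] = page
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(optimal_faults(refs, 3))   # 9, as in the example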
3. LRU Page Replacement
 The Least Recently used algorithm replaces a page that has not been used
for a longest period of time.
 LRU replacement associates with each page the time of that page’s last use.


 It is similar to optimal page replacement, but looking backward in time
rather than forward.
 Example: Consider the Reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1,
2, 0, 1, 7, 0, 1 for a memory with three frames.

 The LRU algorithm produces twelve faults


 The first three references cause faults that fill the three empty frames.
 The reference to page 2 replaces page 7, because page 7 has not been
used for a longest period of time, when we look backward.
 The Reference to page 3 replaces page 1, because page 1 has not been
used for a longest period of time.
 When the reference to page 4 occurs, however, LRU replacement sees
that, of the three frames in memory, page 2 was used least recently.
 Thus, the LRU algorithm replaces page 2, not knowing that page 2 is
about to be used.
 When it then faults for page 2, the LRU algorithm replaces page 3,
since it is now the least recently used of the three pages in memory.
 Advantages:
 The LRU policy is often used as a page-replacement algorithm and is
considered to be good.
 LRU replacement does not suffer from Belady’s anomaly.
 Disadvantage:
 The problem is to determine an order for the frames defined by the
time of last use.
 An LRU page-replacement algorithm may require substantial
hardware assistance.
Implementation of LRU algorithm:
 Two Implementations are feasible:
 Counters. We associate with each page-table a time-of-use field and
add to the CPU a logical clock or counter. The clock is incremented
for every memory reference. Whenever a reference to a page is made,
the contents of the clock register are copied to the time-of-use field in
the page-table entry for that page. So we can find the “time” of the
last reference to each page.


 Stack. Another approach to implementing LRU replacement is to keep a
stack of page numbers. Whenever a page is referenced, it is removed from
the stack and put on the top. In this way, the most recently used page is
always at the top of the stack and the least recently used page is always at
the bottom (a code sketch follows the Stack Algorithm note below).

 Stack Algorithm:
 A stack algorithm is an algorithm for which it can be shown that
the set of pages in memory for n frames is always a subset of the
set of pages that would be in memory with n + 1 frames.
 Both optimal replacement and LRU replacement do not suffer
from Belady’s anomaly. Both belong to a class of page-
replacement algorithms, called stack algorithms, that can never
exhibit Belady’s anomaly.
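The stack implementation maps naturally onto an ordered dictionary in Python;
the lru_faults helper below is an illustrative sketch that reproduces the
twelve faults of the earlier LRU example:

    from collections import OrderedDict

    def lru_faults(refs, n_frames):
        stack, faults = OrderedDict(), 0   # most recently used kept at the end
        for page in refs:
            if page in stack:
                stack.move_to_end(page)    # reference: move page to the top
            else:
                faults += 1
                if len(stack) == n_frames:
                    stack.popitem(last=False)   # evict least recently used
                stack[page] = True
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(lru_faults(refs, 3))    # 12 faults, matching the example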
4. LRU Approximation Page Replacement Algorithm:
 The system provides support to the LRU algorithm in the form of a bit
called Reference bit.
 Reference Bit: The reference bit for a page is set by the hardware whenever
that page is referenced (either a read or a write to any byte in the page).
 Reference bits are associated with each entry in the page table.
 Initially, all bits are cleared (to 0) by the operating system. As a user process
executes, the bit associated with each page referenced is set (to 1) by the
hardware.
 After some time, we can determine which pages have been used and which
have not been used by examining the reference bits.
 This information is the basis for many page-replacement algorithms that
approximate LRU replacement.
a) Additional-Reference-bits algorithm
 The additional ordering information can be gained by recording the
reference bits at regular intervals. We can keep an 8-bit byte for each page
in a table in memory.
 At regular intervals (say, every 100 milliseconds), a timer interrupt transfers
control to the operating system. The operating system shifts the reference bit
for each page into the high-order bit of its 8-bit byte, shifting the other bits
right by 1 bit and discarding the low-order bit.
 These 8-bit shift registers contain the history of page use for the last eight
time periods.


 If the shift register contains 00000000, for example, then the page has not
been used for eight time periods.
 A page that is used at least once in each period has a shift register value of
11111111.
 A page with a history register value of 11000100 has been used more
recently than one with a value of 01110111.
 Thus the page with the lowest number is the LRU page, and it can be
replaced.
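One aging step, performed at each timer interrupt, is just a shift plus an OR.
The age_step helper below is an invented name for an illustrative sketch using
8-bit integers:

    def age_step(history, referenced):
        """Shift each history register right by one; the current reference
        bit becomes the new high-order bit (0x80)."""
        return {page: (0x80 if page in referenced else 0) | (bits >> 1)
                for page, bits in history.items()}

    history = {"A": 0b11000100, "B": 0b01110111}
    history = age_step(history, referenced={"B"})
    print({p: f"{b:08b}" for p, b in history.items()})
    print("victim:", min(history, key=history.get))   # lowest value ~ LRU page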
b) Second-Chance Algorithm:
 The basic algorithm of second-chance replacement is a FIFO replacement
algorithm.
 When a page has been selected, however, we inspect its reference bit.
 If the value is 0, we proceed to replace this page; but if the reference bit is
set to 1, we give the page a second chance and move on to select the next
FIFO page.
 When a page gets a second chance, its reference bit is cleared, and its arrival
time is reset to the current time. Thus, a page that is given a second chance
will not be replaced until all other pages have been replaced (or themselves
given second chances).
 One way to implement the second-chance algorithm is as a circular queue.
 A pointer indicates which page is to be replaced next. When a frame is
needed, the pointer advances until it finds a page with a 0 reference bit.
 Once a victim page is found, the page is replaced, and the new page is
inserted in the circular queue in that position
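The clock structure can be sketched as follows; in a real kernel the hand
position persists between faults, whereas here it restarts at 0 for brevity:

    def second_chance_victim(frames):
        """frames: list of {'page': ..., 'ref': 0 or 1} treated circularly."""
        hand = 0
        while True:
            if frames[hand]["ref"] == 0:
                return hand                    # replace this page
            frames[hand]["ref"] = 0            # give it a second chance
            hand = (hand + 1) % len(frames)

    frames = [{"page": 7, "ref": 1}, {"page": 0, "ref": 0}, {"page": 1, "ref": 1}]
    print(second_chance_victim(frames))        # 1 -> page 0 is the victim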

c. Enhanced Second-Chance Algorithm


 We can enhance the second-chance algorithm by considering the reference
bit and the modify bit as an ordered pair.
 The order is {Reference bit, Modify bit}
 We have the following four possible classes:
 (0, 0) neither recently used nor modified—best page to replace
 (0, 1) not recently used but modified—not quite as good


 (1, 0) recently used but clean—probably will be used again soon
 (1, 1) recently used and modified—probably will be used again
soon, and the page will need to be written out to disk before it
can be replaced
 Here we give preference to pages that have been modified (preferring to keep
them in memory) in order to reduce the number of I/Os required; thus a
modified page will not be replaced before it has been written to the disk.

5. Counting-based Page Replacement


 We can keep a counter of the number of references that have been made to
each page.
 This method includes two schemes
 Least frequently used (LFU) page-replacement: The LFU page-
replacement algorithm requires that the page with the smallest count be
replaced. The reason for this selection is that an actively used page
should have a large reference count.
 Most frequently used (MFU) page-replacement algorithm: The MFU
page-replacement algorithm is based on the argument that the page with
the smallest count was probably just brought in and has yet to be used.
6. Page Buffering Algorithm:
 Systems commonly keep a pool of free frames
 When a page fault occurs, a victim frame is chosen as before, but the desired
page is read into a free frame from the pool before the victim is written out.
 This procedure allows the process to restart as soon as possible, without
waiting for the victim page to be written out. When the victim is later
written out, its frame is added to the free-frame pool.
 Whenever the paging device is idle, a modified page is selected and is
written to the disk. Its modify bit is then reset.
 This scheme increases the probability that a page will be clean when it is
selected for replacement
FIFO Page Replacement: Selects the oldest page in memory for replacement.
Optimal Page Replacement: Selects the page that will not be used for the longest period
of time in the future..
LRU (Least Recently Used) Page Replacement: Selects the page that has not been
accessed for the longest period of time.
LRU Approximation Page Replacement Algorithm: Approximates LRU behavior without
tracking exact access times.
Counting Based Page Replacement Algorithm: Keeps a reference count for each page and
replaces based on it (LFU replaces the least-counted page, MFU the most-counted).
Page Buffering Algorithm: Maintains a pool of free frames so that a faulting process can
restart without waiting for the victim page to be written out


Allocation of Frames
(Allocation of Frames in Virtual Memory Systems)
Frame Allocation Basics:
 Frame allocation determines how the OS distributes a fixed amount of
memory among processes.
 The system must allocate a limited number of free frames among processes.
 Initially, free frames are used to satisfy page faults. Once exhausted, page
replacement begins.
 Example: In a system with 128 KB memory and 1 KB pages, 128 frames
exist. If 35 KB is used by OS, 93 frames remain for user processes. With
pure demand paging, initially, all 93 frames are free; after they’re used, a
page-replacement algorithm selects which pages to replace.
 The OS can reserve a few frames to always remain free to handle sudden
page faults smoothly.
Minimum Number of Frames:
 The minimum number of frames per process is determined by hardware
and instruction requirements (e.g., at least 1 for instruction, 1 for memory
reference).
 Complex addressing like indirect addressing requires more frames.
 A counter may limit levels of indirection to avoid excessive frame
requirements.
 A process must be given enough frames to complete its instructions without
frequent faults.
Allocation Algorithms:
 Equal Allocation:
 Every process gets an equal number of frames.
 Simple but inefficient—small processes may waste memory.
 Proportional Allocation:
 Frames are assigned in proportion to each process's size, using the formula
a_i = (s_i / S) × m
where s_i is the size of process p_i, S = Σ s_i, m is the total number of
available frames, and a_i is the number of frames allocated to p_i (a code
sketch follows this list).
 More fair and efficient, but does not consider process priority.
 Priority-Based Allocation
 High-priority processes can be allocated more frames, possibly at the
expense of low-priority ones.
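Proportional allocation is the one-line formula above plus rounding. The
sketch below floors each share and hands leftover frames out one at a time;
that leftover policy is an assumption for the sketch, since the formula alone
may not allocate every frame:

    def proportional_allocation(sizes, m):
        S = sum(sizes)
        alloc = [(s * m) // S for s in sizes]   # a_i = floor(s_i / S * m)
        for i in range(m - sum(alloc)):         # distribute rounding leftovers
            alloc[i % len(sizes)] += 1
        return alloc

    # Two processes of 10 and 127 pages sharing 62 frames:
    print(proportional_allocation([10, 127], 62))   # [5, 57]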
Global vs. Local Replacement:
 Global Replacement: A process may replace pages belonging to other
processes; improves throughput but can cause inconsistent performance.
 Local Replacement: A process replaces only its own pages; performance is
more predictable but can lead to inefficiencies.


Memory Reclaiming Strategy:


 A kernel reaper routine reclaims memory when free frames fall below a
threshold.
 OOM Killer may terminate processes when memory is critically low, based
on OOM scores.

NUMA Systems (Non-Uniform Memory Access)


 In NUMA systems, memory access time varies depending on CPU-memory
location.
 Memory allocation and scheduling must be optimized to reduce latency by
allocating frames closer to the CPU executing the process.
 Linux and Solaris use domains or locality groups to manage this efficiently.
 Performance improves if:
 A process runs on the same CPU as before.
 Its memory frames are allocated close to that CPU.

Thrashing
 Thrashing is a condition where the system spends more time handling page
faults than executing processes, leading to severe performance degradation.
 If a process does not have the number of frames it needs to support pages
in active use, it will quickly page-fault. At this point, it must replace some
page. However, since all its pages are in active use, it must replace a page
that will be needed again right away. Consequently, it quickly faults again,
and again, and again, replacing pages that it must bring back in immediately.
This high paging activity is called thrashing.
 A process is thrashing if it is spending more time paging than executing.
 Thrashing is a phenomenon that occurs when a computer's performance
drops drastically as a result of excessive paging due to insufficient physical
memory to support the workload. Instead of executing useful work, the
system spends most of its time in paging, leading to a significant decrease in
performance.

Causes of Thrashing:
 Insufficient Physical Memory: If the system doesn't have enough RAM to
hold all the active processes and data, it starts swapping data in and out of
virtual memory, leading to thrashing.
 Over commitment of Memory: When the system allows more processes to
run concurrently than the available physical memory can accommodate, it
may result in excessive swapping and thrashing.
 Poor Memory Management: Inefficient memory management algorithms or
poorly optimized applications can contribute to thrashing.
 Fragmentation: Fragmentation of virtual memory can also exacerbate
thrashing by making it harder for the system to find contiguous blocks of
memory.
 If we draw a graph in which CPU utilization is plotted against the degree of
multiprogramming, we see that as the degree of multiprogramming increases,
CPU utilization also increases, although more slowly, until a maximum is
reached. If the degree of multiprogramming is increased even further,
thrashing sets in, and CPU utilization drops sharply. At this point, to
increase CPU utilization and stop thrashing, we must decrease the degree of
multiprogramming.

Detection of Thrashing:
 High Disk Activity: One of the most apparent signs of thrashing is high disk
activity. If the system is constantly swapping data in and out of the disk, it
indicates that thrashing might be occurring.


 Low CPU Utilization: Paradoxically, thrashing can lead to low CPU
utilization because the system spends more time swapping data than
executing useful computations.
 Unresponsiveness: As thrashing worsens, the system becomes increasingly
unresponsive as it struggles to manage memory efficiently.
Eliminating Thrashing:
 Increase Physical Memory: Adding more RAM to the system can alleviate
thrashing by providing more space for active processes and reducing the
need for swapping.
 Optimize Memory Usage: Developers can optimize their applications to use
memory more efficiently, reducing the likelihood of thrashing
 Adjust Virtual Memory Settings: Tweaking virtual memory settings such as
swap space size and page file settings can help mitigate thrashing.
 Prioritize Processes: System administrators can prioritize critical processes
and allocate memory resources accordingly to prevent thrashing.
 Use Memory Management Techniques: Employing effective memory
management techniques such as memory paging, segmentation, and demand
paging can help reduce thrashing.
 Identify and Address Resource Hogs: Identify and address applications or
processes that consume excessive memory or CPU resources, as they can
contribute to thrashing.
Note:
 Swapping involves moving entire processes between main memory and disk, while
paging involves moving individual pages of memory between main memory and disk.
 Paging is more granular and typically more efficient than swapping for managing
memory in modern operating systems.
 Thrashing is a performance issue that occurs when excessive swapping or paging
activity significantly degrades system performance.
Overview of Mass-Storage Structure
(Physical structure of secondary and tertiary storage devices)
 Since main memory is usually too small to accommodate all the data and
programs permanently, the computer system must provide secondary storage
to back up main memory.
 Modern computer systems use disks as the primary on-line storage medium
for information (both programs and data).
Hard Disk Drives (HDD)
(Magnetic Disks or MDD: Magnetic Disk Drive)
 Magnetic disks provide the bulk of secondary storage for modern computer
systems.
 Each disk platter has a flat circular shape, like a CD. Common platter
diameters range from 1.8 to 3.5 inches.


 The two surfaces of a platter are covered with a magnetic material. We store
information by recording it magnetically on the platters.
 A read–write head “flies” just above each surface of every platter. The
heads are attached to a disk arm that moves all the heads as a unit.
 The surface of a platter is logically divided into circular tracks, which are
subdivided into sectors.
 Cylinder: The set of tracks that are at one arm position makes up a cylinder.

 The storage capacity of common disk drives is measured in gigabytes. When
the disk is in use, a drive motor spins it at high speed. Most drives rotate 60
to 250 times per second, specified in terms of rotations per minute.
 Disk speed has two parts.
 The transfer rate is the rate at which data flow between the drive and
the computer.
 The positioning time or random-access time consists of two parts: the
time necessary to move the disk arm to the desired cylinder, called the
seek time, and the time necessary for the desired sector to rotate to the
disk head, called the rotational latency.
 Head Crash: there is a danger that the read–write head will make contact
with the disk surface. Although the disk platters are coated with a thin
protective layer, the head will sometimes damage the magnetic surface. This
accident is called a head crash.


 A disk drive is attached to a computer by a set of wires called an I/O bus.
Several kinds of buses are available, including advanced technology
attachment (ATA), serial ATA (SATA), eSATA, universal serial bus (USB),
and fibre channel (FC).
 The data transfers on a bus are carried out by special electronic processors
called controllers. The host controller is the controller at the computer end
of the bus.
 A disk controller is built into each disk drive. To perform a disk I/O
operation, the computer places a command into the host controller, typically
using memory-mapped I/O ports.
Non-volatile Memory (NVM) Devices:
 NVM devices retain data without power and are typically electrical, not
mechanical.
 Common Forms include SSDs, USB drives, embedded smart phone storage,
and NAND flash memory.
 Advantages over HDDs:
 No moving parts (more reliable)
 Faster (no seek or rotational latency)
 Lower power consumption
 Disadvantages:
 Higher cost per MB
 Typically lower capacity
 Falling prices and rising capacities have led to widespread SSD adoption.
Performance Considerations
 Standard interfaces like SATA may bottleneck fast NVM; alternatives like
NVMe connect directly to PCIe for better performance.
 NVM is sometimes used as a cache tier in storage hierarchies.
Flash Memory Characteristics
 NAND flash is read and written in pages, but must be erased in larger blocks
before rewriting.
 Writes wear out cells—typically after ~100,000 program-erase cycles.
 Lifespan is measured as Drive Writes Per Day (DWPD).
Controller Algorithms
 Garbage Collection: Reclaims space by relocating valid data and erasing
blocks with invalid pages.
 Over-Provisioning: Sets aside extra space (often ~20%) to improve
performance and wear-leveling.
 Wear Leveling: Spreads writes evenly to prevent premature failure of
heavily used blocks.
 Error Correction: Uses ECC to detect and fix errors; bad pages may be
retired.
 RAID: Provides redundancy against catastrophic failure.


Volatile Memory (DRAM as Storage):


 RAM Drives: Volatile memory (DRAM) can be configured as fast
temporary storage.
 Use Cases: Useful for temporary files, shared memory, or boot-time systems
(e.g., Linux’s /tmp or initrd).
Storage Connectivity:
 Devices connect via various buses: SATA, USB, SAS, FC, etc.
 NVMe connects to PCIe for high speed and low latency.
 Controllers: Host and device controllers manage communication and
transfers using DMA.
Address Mapping:
 Logical Block Addressing (LBA): Abstracts physical locations for both
HDD and NVM.
 Flash Translation Layer (FTL): Maps logical blocks to physical pages in
NAND.
 Hard Disk Geometry: Logical blocks may not directly map to sectors due to
defects, zone-based recording (CLV, CAV), or internal remapping.
HDD Scheduling
(Disk Scheduling)
 HDD Scheduling (Hard Disk Drive Scheduling) refers to the algorithms or
strategies used by an operating system to determine the order in which
pending I/O requests (read/write operations) for a HDD are executed.
 The primary goal of these algorithms is to minimize the seek time and
rotational latency thereby improving disk performance and throughput.
 The two major components of disk access time are seek time and rotational
latency.
 Seek time: The seek time is the time for the disk arm to move the
heads to the cylinder containing the desired sector.
 Rotational latency: The rotational latency is the additional time for
the disk to rotate the desired sector to the disk head.
 Disk bandwidth: The disk bandwidth is the total number of bytes
transferred, divided by the total time between the first request for service
and the completion of the last transfer.
 Disk scheduling algorithms are used in operating systems to optimize the
order in which disk I/O requests are serviced.
 Different Disk Scheduling Algorithms are :
 First Come First Serve (FCFS)
 Shortest Seek Time First (SSTF)
 Scan Algorithm (SCAN)
 Circular Scan Algorithm (C-SCAN)
 Look Algorithm (LOOK)
 Circular Look Algorithm(C-LOOK)

FCFS Scheduling:
 The simplest form of disk scheduling is, of course, the first-come, first-
served (FCFS) algorithm.
 This algorithm is intrinsically fair, but it generally does not provide the
fastest service.
 Consider, for example, a disk queue with requests for I/O to blocks on
cylinders: 98, 183, 37, 122, 14, 124, 65, 67 in that order.

 If the disk head is initially at cylinder 53, it will first move from 53 to 98,
then to 183, 37, 122, 14, 124, 65, and finally to 67.
 The total head movement is
|53-98| + |98-183| + |183-37| + |37-122| + |122-14| + |14-124| + |124-65| + |65-67|
= 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders
 Disadvantage: The request from 122 to 14 and then back to 124 increases the
total head movements.
 If the requests for cylinders 37 and 14 could be serviced together, before or
after the requests for 122 and 124, the total head movement could be
decreased and performance could be thereby improved.
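Total head movement for any serve-in-this-order schedule is simply a sum of
absolute differences, and FCFS serves the queue exactly as given:

    def total_head_movement(start, schedule):
        total, pos = 0, start
        for cyl in schedule:
            total += abs(pos - cyl)   # distance the arm travels for this request
            pos = cyl
        return total

    print(total_head_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 640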
SSTF Scheduling:
 The SSTF algorithm selects the request with the least seek time from the
current head position.
 In other words, SSTF chooses the pending request closest to the current
head position.


 Example: given a disk with 200 cylinders and a disk queue with requests
98, 183, 37, 122, 14, 124, 65, 67 for I/O to blocks on those cylinders, the
disk head is initially at 53.

 The closest request to the initial head position (53) is at cylinder 65.
 Once we are at cylinder 65, the next closest request is at cylinder 67.
 From there, the request at cylinder 37 is closer than the one at 98, so 37 is
served next.
 Continuing, we service the request at cylinder 14, then 98, 122, 124, and
finally 183.
 This scheduling method results in a total head movement of only 236
cylinders
 SSTF algorithm gives a substantial improvement in performance.
 Disadvantage: SSTF may cause starvation of some requests.
 Starvation: Suppose that we have two requests in the queue, for cylinders
14 and 186, and while the request from 14 is being serviced, a new request
near 14 arrives. This new request will be serviced next, making the request
at 186 wait. While this request is being serviced, another request close to 14
could arrive. In theory, a continual stream of requests near one another
could cause the request for cylinder 186 to wait indefinitely.
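SSTF is a greedy nearest-neighbour choice; the sketch below derives both the
service order and the 236-cylinder total from the example:

    def sstf_schedule(start, requests):
        pending, pos, order, total = list(requests), start, [], 0
        while pending:
            nxt = min(pending, key=lambda c: abs(c - pos))  # closest pending
            total += abs(nxt - pos)
            pos = nxt
            order.append(nxt)
            pending.remove(nxt)
        return order, total

    order, total = sstf_schedule(53, [98, 183, 37, 122, 14, 124, 65, 67])
    print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
    print(total)   # 236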
SCAN Scheduling:
 In the SCAN algorithm, the disk arm starts at one end of the disk and
moves toward the other end, servicing requests as it reaches each cylinder,
until it gets to the other end of the disk.


 At the other end, the direction of head movement is reversed, and servicing
continues.
 The head continuously scans back and forth across the disk.
 The SCAN algorithm is sometimes called the elevator algorithm.
 Example: given a disk with 200 cylinders and a disk queue with requests
98, 183, 37, 122, 14, 124, 65, 67 for I/O to blocks on those cylinders, the
disk head is initially at 53.
 Assuming that the disk arm is moving toward 0 the head will next service 37
and then 14.

 At cylinder 0, the arm will reverse and will move toward the other end of
the disk, servicing the requests at 65, 67, 98, 122, 124, and 183.
 If a request arrives in the queue just in front of the head, it will be serviced
almost immediately; a request arriving just behind the head will have to wait
until the arm moves to the end of the disk, reverses direction, and comes
back.
Circular SCAN (C-SCAN) scheduling:
 It is a variant of SCAN designed to provide a more uniform wait time.
 C-SCAN moves the head from one end of the disk to the other, servicing
requests along the way. When the head reaches the other end, however, it
immediately returns to the beginning of the disk without servicing any
requests on the return trip.


 The C-SCAN scheduling algorithm essentially treats the cylinders as a
circular list that wraps around from the final cylinder to the first one.
 Example: given a disk with 200 cylinders and a disk queue with requests
98, 183, 37, 122, 14, 124, 65, 67 for I/O to blocks on those cylinders, the
disk head is initially at 53. Moving toward the high end, the head services
65, 67, 98, 122, 124, and 183, continues to cylinder 199, jumps back to
cylinder 0, and then services 14 and 37.
LOOK Scheduling:
 The LOOK algorithm is like SCAN in that it services requests in both
directions of head movement, but it “looks” ahead to see whether any
requests are pending in the current direction of travel.
 If no requests are pending in the direction of head movement, the head
reverses to the opposite direction, and the requests in the other direction
are then served.
 In LOOK scheduling, the arm goes only as far as the final request in each
direction and then reverses direction without going all the way to the end.
 Consider an example, given a disk with 200 cylinders (0-199), suppose we
have 8 pending requests: 98, 183, 37, 122, 14, 124, 65, 67 and that the
read/write head is currently at cylinder 53.
 To complete these requests, the arm moves in increasing order first and
then in decreasing order after servicing the last request in that direction.
So the order in which requests are executed is 65, 67, 98, 122, 124, 183, 37,
and 14.


C-LOOK Scheduling:
 This is an enhanced version of C-SCAN.
 The arm goes only as far as the last request in the direction of travel, then
jumps immediately to the earliest pending request at the other end (without
servicing requests on the return trip) and resumes scanning in the same
direction.

FCFS serves the disk I/O requests in the order they arrive. It works on a first-come, first-
served basis.
SSTF selects the request with the shortest seek time, i.e., the request closest to the
current head position.
SCAN moves the disk arm in one direction (up or down) servicing requests along the
way until it reaches the end, then reverses direction.
LOOK is similar to SCAN but reverses direction when there are no more requests in the
current direction.
C-SCAN is a variant of SCAN that always scans the disk in one direction (e.g., from
innermost to outermost track) and jumps to the other end without servicing requests.
C-LOOK is a variant of LOOK that only looks for requests in one direction and jumps to
the other end without servicing requests.
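The sweep-based algorithms differ only in where the arm turns around. The
scan_order helper below is a simplified sketch covering the two cases used in
the examples above; it returns the service order, with total movement noted in
the comments:

    def scan_order(start, requests, direction="down"):
        """Service order for SCAN; LOOK visits requests in the same order
        but reverses at the last request instead of the disk edge."""
        lower = sorted(c for c in requests if c < start)
        upper = sorted(c for c in requests if c >= start)
        if direction == "down":
            return lower[::-1] + upper      # head toward 0 first, then back up
        return upper + lower[::-1]          # head toward the high end first

    reqs = [98, 183, 37, 122, 14, 124, 65, 67]
    print(scan_order(53, reqs, "down"))  # [37, 14, 65, 67, 98, 122, 124, 183]
    # SCAN toward 0: total movement = 53 (down to cylinder 0) + 183 = 236.
    # LOOK reverses at cylinder 14 instead: (53 - 14) + (183 - 14) = 208.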


Pros and Cons of Disk Scheduling algorithms:


FCFS:
 Pros: Simple and easy to implement.
 Cons: May lead to the "convoy effect" where smaller requests get delayed behind
larger requests.
SSTF:
 Pros: Reduces the total seek time compared to FCFS.
 Cons: May cause starvation for requests located farther away from the current head
position.
SCAN (Elevator):
 Pros: Ensures fairness in servicing requests.
 Cons: May result in increased waiting time for requests at the ends of the disk.
LOOK:
 Pros: Reduces waiting time compared to SCAN.
 Cons: May lead to the same "convoy effect" as FCFS.
Circular SCAN (C-SCAN):
 Pros: Provides a more uniform wait time than SCAN and prevents the "convoy effect."
 Cons: The return sweep services no requests, so total arm movement is higher.
Circular LOOK (C-LOOK):
 Pros: Similar advantages to C-SCAN, with less unnecessary arm travel.
 Cons: Slightly more complex to implement than LOOK.
Tutorial Questions:
1. Explain different stages in binding of instructions and data to memory
(or) Explain the Multistep Processing of a User Program
(or) Discuss the steps a user program goes through before being executed
2. Differentiate between Logical and Physical address space. Explain the process of
converting virtual addresses to physical addresses with a neat diagram.
3. What are the disadvantages of single contiguous memory allocation? Explain
MVT and MFT techniques with examples.
(or) Explain the terms in Memory Partitioning with examples:
i) Fixed Partitioning ii) Dynamic partitioning.
4. Discuss various memory-management strategies used in computer systems.
(or) Compare different memory organization schemes with respect to
different aspects.
5. Explain the difference between External fragmentation and internal
fragmentation. How to solve the fragmentation problem using paging.
(or) What are the causes for External and Internal fragmentation? Suggest
solutions to the fragmentation problem.
(or) How does fragmentation occur in contiguous memory allocation? Explain
with an example.
(or) What is fragmentation? Explain the differences between internal and external
fragmentation
(or) Compare and contrast internal fragmentation and external fragmentation.
Provide examples to illustrate both.


6. What is paging? Explain its structure for 32-byte memory with 4-byte pages.

7. What is effective access time? Compute it for 70% hit ratio, 20 ns to search TLB
and 100 ns to access memory. Observe the difference when it is changed to 90%
hit ratio.

8. Discuss in detail about various page table structures.


(or) Discuss the hierarchical paging, hashed page tables, and inverted page
tables.
(or) Explore the most common techniques for structuring the page table.
(or) Discuss different structures of page tables

9. Explain about Swapping with neat sketch.


(or) Discuss the process of Swapping in memory management with neat
schematic View.
(or) Discuss how swapping works and its impact on system performance.

10. Explain in detail about how Virtual memory is implemented with a neat diagram.
Does virtual memory increase computer speed? Give justification to your answer.
(or) Explain about Virtual Memory Management in detail.

11. Describe the benefits of a virtual memory system.


12. Explain the working of Demand Paging technique. Discuss the hardware support
required to support demand paging.
(or) Discuss implementing of virtual memory through demand paging.
(or) Illustrate how pages are loaded into memory using demand paging.

13. How demand paging affects the performance of a computer system? Give
explanation.

14. List the advantages and disadvantages of Demand Paging.

15. How does copy-on-write (COW) optimize process creation?

16. What is a page fault? Explain the steps involved in handling a page fault with a
neat sketch.
(or) Under what circumstances do page faults occur? Describe the actions taken
by the operating system when a page fault occurs with neat picture.
(or) When page fault occur? Discuss the procedure for handling a page fault with
neat diagram.


17. What is the need for page replacement in paging? Describe any two page
replacement algorithms with examples.
(or) Explain the LRU and Optimal page replacement algorithms.
(or) Describe any two page replacement algorithms. Compare them based on
efficiency and performance

18. Consider the page reference string 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2 with
five frames. How many page faults would occur for the FIFO and Optimal page
replacement algorithms?

19. Consider the reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for
a memory with three frames. Trace FIFO, optimal, and LRU page replacement
algorithms.

20. Consider the following page reference string: 1,2,4,7,3,5,6,3,6,1,4,2,3,6,5,2. How
many page faults would occur for the LRU page replacement algorithm,
assuming four frames and all frames are initially empty.

21. Consider the page reference string 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. Determine
how many page faults would occur for the Optimal page replacement algorithm.
Assume three and four frames, initially empty.

22. What is the need of Page replacement? Consider the following reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Find the number of Page Faults
with FIFO, Optimal Page replacement and LRU with four free frames which are
empty initially. Evaluate which algorithm gives the minimum number of page
faults?

23. Consider the following page reference string: 1,2,3,4,2,1,5,6,2,1,2,3, 7,6,3, 2,1,
2,3,6. How many page faults would occur for FIFO, LRU and optimal page
replacement algorithms assuming four page frame and all frames are initially
empty.

24. Discuss how LRU and FIFO page replacement algorithms can be implemented on
the following reference string when the numbers of frames are 3. Also, calculate
the number of page faults. 3, 2, 1, 0, 2, 2, 1, 7, 6, 7, 0, 1, 2, 0, 3, 0, 4, 1, 5, 4, 5, 6, 7,
6, 7, 2, 4, 2, 7, 3.

25. A system uses 4 page frames for storing process pages in main memory. Assume
that all the page frames are initially empty. Find the number of Page
faults, Hit ratio and Miss ratios for Optimal Page replacement algorithm while
processing the page reference string given below.
1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6


26. Discuss various issued related to the allocation of frames to processes.


(or) Explain the different frame allocation algorithms used in virtual memory
systems.

27. What is thrashing? What is the cause of Thrashing? How does the system detect
Thrashing? What can the system do to eliminate this problem?
(or) What is thrashing in a virtual memory system? Explain the causes of thrashing and
discuss how the operating system can detect and prevent it.

28. How does thrashing affect system performance? Discuss the role of the Working
Set Model and Page Fault Frequency (PFF) in mitigating thrashing.

29. Explain about various issues involved in selecting appropriate disk scheduling
algorithm.
(or) What is disk access time? Explain the factors influencing the selection of a
disk scheduling algorithm.

30. Give a brief note on HDD scheduling algorithms.


(or) Describe any two disk scheduling algorithms with suitable illustrations.
(or) Briefly discuss various Disk-scheduling algorithms.
(or) Compare the SCAN and C-SCAN disk scheduling algorithms with an example.
(or) Differentiate SCAN, C-SCAN and LOOK, C-LOOK disk scheduling algorithms
with an example.
(or) Explain and compare the FCFS and SSTF disk scheduling algorithms.
(or) Explain and Compare the following disk scheduling algorithms:
i) FCFS ii) SCAN iii) C-LOOK.
(or) Explain about SSTF, LOOK, SCAN and C-SCAN disk scheduling algorithms.
(or) Explain the pros and cons of all disk scheduling algorithms.
(or) Discuss at least three HDD algorithms like FCFS, SSTF, SCAN with diagrams
and comparisons.

31. Suppose the following disk request sequence (track numbers) for a disk with 100
tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, and 70. Assume that the initial
position of the R/W head is on track 50. Find out the additional distance that will
be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm
is used compared to the SCAN (Elevator) algorithm (assuming that SCAN
algorithm moves towards 100 when it starts execution).

32. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at
cylinder number 53. The cylinders are numbered from 0 to 199. Calculate the
total head movement (in number of cylinders) incurred while servicing these
requests.


33. Suppose the following disk request sequence (track numbers) for a disk with 200
tracks is given: 82,170,43,140,24,16,190. Assume that the initial position of the
R/W head is on track 50. Calculate the Seek Time for SSTF, SCAN, LOOK

34. Consider a disk queue with the following requests for I/O to blocks on cylinders
30, 70, 115, 130, 110, 80, 20, 25 (assume the disk head is at 90). Draw FCFS and SSTF
scheduling and also determine how many times the disk head changes its
direction for each of the above mentioned scheduling techniques.

Assignment Questions:
1. What is the difference between
i) Page and a frame
ii) Internal and external fragmentation
iii) Logical and physical addresses.
iv) Valid-invalid bit and modify bit

2. Draw neat diagram of :


i) Paging hardware with TLB
ii) Hashed page table
iii) Inverted page table
iv) Swapping of two processes
v) Showing steps in handling a page fault
vi) Showing Page replacement
vii) Magnetic Disk

3. Consider a logical address space of 64 pages of 1,024 words each, mapped onto
a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?

4. Consider a logical address space of 256 pages with a 4-KB page size, mapped
onto a physical memory of 64 frames.
a. How many bits are required in the logical address?
b. How many bits are required in the physical address?

5. A system uses 4 page frames for storing process pages in main memory.
Assume that all the page frames are initially empty. Find the number of Page
faults, Hit ratio and Miss ratios for Optimal Page replacement algorithm while
processing the page reference string given below. 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,
2,1,2,3,6. Under what circumstances do page faults occur?


6. Consider the following page reference string:


1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
How many page faults would occur for the following replacement algorithms?
assuming one, two, three, four, five, six, and seven frames? Remember that all
frames are initially empty, so your first unique pages will cost one fault each.
 LRU replacement
 FIFO replacement
 Optimal replacement

7. Consider the following page reference string:


7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1.
Assuming demand paging with three frames, how many page faults would
occur for the following replacement algorithms?
 LRU replacement
 FIFO replacement
 Optimal replacement

8. Consider the following page reference string: 2,3,4,5,3,2,6,7,3,2,3,4,1,7,1,4,3,2,3,4,7.


Calculate the number of page faults with LRU, FIFO and optimal page replacement
algorithms with frame size of 3.

9. Consider the following page reference string: 1,2,4,7,3,5,6,3,6,1,4,2,3,6,5,2


How many page faults would occur for the optimal page replacement
algorithm, assuming four frames and all frames are initially empty.

10. Consider the following page reference string: 1,2,4,7,3,5,6,3,6,1,4,2,3,6,5,2. How
many page faults would occur for the LRU page replacement algorithm,
assuming four frames and all frames are initially empty.

11. Given memory partitions of 500 KB, 100 KB, 200 KB, 300 KB, and 600 KB (in
order), how would each of the first-fit, best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB and 426 KB (in order)? Which algorithm
makes the most efficient use of memory?

12. Consider the reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a
memory with three frames. Trace FIFO, optimal, and LRU page replacement
algorithms.

13. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3,
2, 1, 2, 3, 6. How many page faults would occur for the optimal page replacement
algorithm, assuming three frames and all frames are initially empty.


14. Consider the page reference string 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2 with
five frames. How many page faults would occur for the FIFO and Optimal page
replacement algorithms?

15. A system uses 3 page frames for storing process pages in main memory. It uses
the optimal page replacement policy. Assume that three frames and all the page
frames are initially empty. What is the total number of page faults that will occur
while processing the page reference string given below? Also calculate the hit
ratio and miss ratio. 4, 7, 6, 1, 7, 6, 1, 2, 7, 2.

16. Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4,999. The drive is
currently serving a request at cylinder 2,150, and the previous request was at
cylinder 1,805. The queue of pending requests, in FIFO order, is: 2,069, 1,212,
2,296, 2,800, 544, 1,618, 356, 1,523, 4,965, 3681 Starting from the current head
position, what is the total distance (in cylinders) that the disk arm moves to
satisfy all the pending requests for each of the following disk-scheduling
algorithms? FCFS,SSTF,SCAN,LOOK,C-SCAN and C-LOOK

17. Suppose the following disk request sequence (track numbers) for a disk with 100
tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, and 70. Assume that the initial
position of the R/W head is on track 50. Find out the additional distance that will
be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm
is used compared to the SCAN (Elevator) algorithm (assuming that SCAN
algorithm moves towards 100 when it starts execution).

18. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at
cylinder number 53. The cylinders are numbered from 0 to 199. Calculate the
total head movement (in number of cylinders) incurred while servicing these
requests.

19. Suppose the following disk request sequence (track numbers) for a disk with 200
tracks is given: 82,170,43,140,24,16,190. Assume that the initial position of the
R/W head is on track 50. Calculate the Seek Time for SSTF, SCAN, LOOK

20. Consider a disk queue with the following requests for I/O to blocks on cylinders
30, 70, 115, 130, 110, 80, 20, 25 (assume the disk head is at 90). Draw FCFS, SSTF, SCAN
and LOOK scheduling and also determine how many times the disk head
changes its direction for each of the above mentioned scheduling techniques.
