Unit V OS

Operating system

10/9/2024

Memory Management

Introduction
 Under this functionality, the OS is primarily concerned
with allocation of physical memory, which is of finite
capacity; a program must be brought (from disk) into
memory for it to run.
 Main memory and registers are the only storage
entities that a CPU can directly access.
 The CPU fetches instructions from main memory
according to the addresses pointed to by the Program
Counter.
 Register access can be done in one CPU clock
(or less)
Dr. Padma D. Adane
Deputy Director, School of CSE, RBU 2


Introduction
 Completing a memory access may take many
cycles of the CPU clock. In such cases the
processor needs to stall since it does not have
the data required to complete the instruction it
is executing.
 Cache sits between main memory and CPU
registers to deal with the stall issue.
 Apart from the speed of memory access,
another important concern is to protect the OS
from access by user processes and also user
processes from one another.

Introduction
 This protection is provided by the hardware.
 There are several ways to implement it; one
way is to use Base and Limit registers.
 Each process has a separate memory space
and a set of legal addresses that it may
access.
 When a process starts execution, the base
register is loaded with the smallest legal
physical memory address; the limit register
specifies the size of the range


Introduction
 CPU must check that every memory access
generated in user mode is between the base
and base + limit for that user.
 This scheme prevents a user program from
(accidently or deliberately) modifying the
code or data structures of either the operating
system or other users.
 The base and limit registers can be loaded
only by the OS, which uses a privileged
instruction to do so.
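The base–limit check described above can be sketched in a few lines. The register values below are illustrative, not taken from the slides:

```python
def is_legal_access(addr, base, limit):
    """Hardware check: a user-mode address is legal only if
    base <= addr < base + limit; anything else traps to the OS."""
    return base <= addr < base + limit

# Illustrative values: process loaded at base 300040 with limit 120900.
base, limit = 300040, 120900
print(is_legal_access(300040, base, limit))  # True  (first legal address)
print(is_legal_access(420940, base, limit))  # False (base + limit -> trap)
```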

Address Binding
 When a program is brought into memory to run,
it is not known a priori where it is going to reside
in physical memory. Therefore, it is convenient
to assume that the first address of the program
starts at 0000.
 However, it is not practical for the first physical
address of a user program to always start at
0000.
 Computer systems provide some hardware/
software support to handle this aspect of
memory management.


Address Binding
 In general, addresses are represented in different
ways at different stages of a program’s life
 Addresses in the source program are generally
symbolic
 For example, variable sum or JNZ loop
 A compiler binds these symbolic addresses to
relocatable addresses
 For example, 15 bytes from the start of the module.
 Linker or loader binds relocatable addresses to
absolute addresses
 Each binding maps one address space to another
address space

Address Binding
 The binding of logical address to physical
address can be done at any step along the way:
 Compile time: If memory location known a priori, absolute
code can be generated; must recompile code if starting
location changes. MSDOS .com format programs are bound
at compile time
 Load time: compiler generates relocatable code; binding is
delayed till load time. If the memory location changes, the
program only needs to be reloaded.
 Execution time: Binding delayed until run time if the
process can be moved during its execution from one
memory segment to another. Need hardware support for
address maps (e.g., base and limit registers).


Address Binding
-----
Loop: -----
-----
JNZ Loop


Logical Vs Physical Address Space


 The concept of a logical address space that is
bound to a separate physical address space is
central to proper memory management.
 Logical address – generated by the CPU; also
referred to as virtual address.
 Physical address – address seen by the memory
unit.
 Logical and physical addresses are the same in
compile-time and load-time address-binding
schemes; logical (virtual) and physical addresses
differ in execution-time address-binding scheme


Memory-Management Unit (MMU)

 Hardware device that maps logical (virtual)
address to physical address.
 The user program deals with logical addresses;
it never sees the real physical addresses.


Dynamic Relocation using Relocation Register


 The value in the relocation register is added to
every logical address generated by a program
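A minimal sketch of this mapping; the load address 14000 is only an illustrative value:

```python
RELOCATION_REGISTER = 14000  # physical load address of the process (illustrative)

def to_physical(logical_addr, relocation=RELOCATION_REGISTER):
    """The MMU adds the relocation register to every logical address."""
    return logical_addr + relocation

print(to_physical(0))    # 14000 -> logical address 0 maps to the load address
print(to_physical(346))  # 14346
```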



Dynamic Loading
 All routines are kept on disk in a relocatable
load format; Routine is not loaded until it is
called
 Better memory-space utilization; unused routine
is never loaded.
 Useful when large amounts of code are needed
to handle infrequently occurring cases.
 No special support from the operating system is
required; implemented through program design;
OS may provide library routines to help dynamic
loading

Dynamic Linking
 Linking postponed until execution time.
 Small piece of code, stub, used to locate the
appropriate memory-resident library routine.
 Stub replaces itself with the address of the
routine, and executes the routine.
 Operating system support is needed to check whether the
routine is in the process's memory address space.
 Dynamic linking is particularly useful for
libraries.



Swapping
Removing suspended or preempted processes
from memory and subsequently bringing them
back is called swapping
 Helpful in improving processor utilization in
partitioned memory environments by increasing
the ratio of ready to resident processes
 The responsibilities of a swapper include:
 Selection of a process to swap out
 Selection of a process to swap in
 Allocation and management of swap space


Swapping
 A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution.
 Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images.
 Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out
so higher-priority process can be loaded and executed.
 Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped.
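Because total transfer time scales linearly with the amount of memory swapped, the cost is easy to estimate. A back-of-the-envelope sketch; the 100 MB image size and 50 MB/s transfer rate are assumed figures, not from the slides:

```python
def transfer_ms(image_bytes, rate_bytes_per_s):
    """Transfer time is directly proportional to the memory swapped."""
    return image_bytes / rate_bytes_per_s * 1000

# Assumed figures: a 100 MB process image over a 50 MB/s backing store.
one_way = transfer_ms(100 * 2**20, 50 * 2**20)
print(one_way)  # 2000.0 ms each way; a swap-out plus swap-in doubles it
```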



Swapping Considerations
 Where in memory a swapped-in process is
brought back depends on address binding; for
compile- and load-time binding, the process needs
to be brought back into the same locations from which
it was swapped out. For execution-time binding, it
can be brought into a different location.
 To swap out a process, the backing store must
have considerable space; the swapped out image
of a process might be different from the static
image when it was loaded into the memory.



Swapping Considerations
 A swapped out process must not have any
pending I/O operations; it might be accessing user
memory for I/O buffers. Two approaches to
handle this situation:
 Never swap a process with pending I/O

 Execute I/O operations only into OS buffers

 Modified versions of swapping are found on many
systems, e.g., UNIX, Linux, and Windows;
swapping is normally disabled and is started when
the number of processes grows large and they use
more than a threshold amount of memory

Swapping Considerations
 The choice of the process to swap in is usually
based on the amount of time it spent on
secondary storage, its priority, and satisfaction of a
minimum swapped-out disk residence time
requirement, which is imposed to control thrashing
 Implementation of swapping depends on the file
system, specific services, and relocation
 The dynamic process image is different from the
static image; the dynamic run-time state of a process
to be swapped out must be recorded for its
proper subsequent resumption. Also, the static
image is required for initial activation of the
process



Swapping Considerations
 The placement of swap file can be in a
 System wide swap file , or
 Dedicated per process swap file
 System wide swap file :
 A single large file is created and placed on a fast
secondary storage device to reduce the latency of
swapping
 The size of the swap file is an important trade-off: it is kept
small to reserve more space for system programs and
other time-critical programs, but this limits the no. of active
processes in the system – failure to reserve space at
process creation time leads to costly run-time errors
and system stoppage due to inability to swap a
designated process

Swapping Considerations
 Dedicated swap file :
 A dedicated swap file reserved for each
swappable process
 Eliminates system wide swap file’s dimensioning
problem
 Does not pose restrictions on the no. of active
processes
 More disk space expended on swapping
 Slower access
 Complicated addressing of swapping files
scattered on the secondary storage



Swapping Considerations
 Regardless of the type of swapping file used, the
need to access secondary storage makes
swapping a lengthy operation relative to
processor instruction execution
 OSes that support swapping usually cope with
this problem by allowing a process to be
declared un-swappable at creation time;
however, the authority to designate a process as
un-swappable must be restricted to privileged
users
 An important issue is whether the process to
partition binding is static or dynamic

Relocation
 Program relocatability refers to the ability to
load and execute a given program into an
arbitrary place in memory as opposed to a
fixed set of locations specified at program
translation time
 The two basic types of relocation are :
 Static
 Dynamic



Static Relocation
 Relocation is performed before or during the
loading of the program into memory. When
object modules are combined by the linker or
when the process image is being loaded, all
address dependent constants are adjusted in
accordance with the actual starting physical
address allocated to the program
 Once the program is in memory, values that
need relocation are indistinguishable from
those that do not


Static Relocation
 Since relocation information is usually lost
following the loading, a partially executed
statically relocatable program cannot be simply
copied from one area of memory into another;
either it must be swapped back into the same
partition from where it was evicted, or software
relocation must be repeated whenever the
process is to be loaded into a different partition
 Given the considerable space and time
complexity of software relocation, systems with
static relocation are practically restricted to
supporting only static binding of processes to
partitions


Dynamic Relocation
 Mapping from the virtual address/relative address
to the physical address space is performed at run
time with some hardware assistance
 After allocating a suitable partition and loading a
process image in memory, the OS sets a base
register to the starting physical load address. Each
memory reference generated by the executing
process is mapped into the corresponding physical
address by having the contents of the base register
added to it
 Provides more flexibility to swapped out processes
with the price of extra hardware and slowing down
of the effective memory access time due to the
relocation process

Memory Allocation Schemes



Contiguous Allocation
 Each logical object is placed in consecutive
memory addresses.
 Can be further classified as
 Static/Fixed Partitioning
 Dynamic Partitioning


Static Partitioning
 During the system generation process, the
available physical memory is divided into a fixed
no. of partitions of varying sizes to fit the needs
of frequently run programs
 Fixed partitions put a limit on the degree of
multiprogramming
 System maintains a Partition description table
(PDT) to keep track of current partition status
and attributes
 The PDT does not maintain the identity of the
process; this needs to be recorded in the
corresponding PCBs


Static Partitioning

Example: 1000 K memory divided into fixed partitions, with the
corresponding Partition Description Table (PDT):

Partition no. | Base  | Size  | Status
0             | 0 K   | 100 K | allocated (OS)
1             | 100 K | 300 K | free
2             | 400 K | 100 K | allocated (Pi)
3             | 500 K | 250 K | allocated (Pj)
4             | 750 K | 150 K | allocated (Pk)
5             | 900 K | 100 K | free

Static Partitioning
 There is a close relationship and interaction
between memory management and
scheduling functions of the OS.
 By influencing the membership of the set of
resident processes, a memory manager may
affect the scheduler’s ability to perform; on the
other hand, the effectiveness of the short term
scheduler influences the memory manager by
affecting the average memory residence times



Fragmentation
 Memory fragmentation refers to the
unused/free memory that cannot be
assigned to any process
 Static partitioning results in Internal
fragmentation.
 The unused memory that is part of a partition
allocated to a process whose size is smaller
than the allocated partition.


Protection
 The integrity of a system depends on its ability to
enforce isolation of separate address spaces; the
system must be protected from unauthorized tampering
by user processes, and each process must be prevented
from inadvertently or maliciously accessing the areas of
memory allocated to other processes
 Two common ways:
 Base and limit registers: base and limit values for each process kept in
its corresponding PCB
 Protection bits: the identity (key) of the owning process is recorded in
each occupied block; the system is assigned a unique key that allows
unrestricted access to all blocks; the no. of bits restricts the no. of partitions



Sharing
 Static partitioning of memory is not very
conducive to sharing
 Three basic approaches:
 Entrust shared objects to OS : OS code becomes
bulky and difficult to maintain
 Maintain multiple copies of the shared object, one per
participating partition: system overhead increases, and
swapping complicates the process even more
 Use shared memory partition : update the protection
bits to match the identity of the process currently
accessing the shared partition


Dynamic Partitioning
 Partitions are created and allocated as per the
request till either the physical memory is
exhausted or the allowable degree of
multiprogramming is reached
 For its proper management, the OS maintains a linked
list of free partitions; to start with, this free list is a
single chunk of available free memory
 Usually a small constant ‘C’ ( system defined
parameter) is set to avoid creation of
exceptionally small leftover areas in allocating
space for partition


Dynamic Partitioning

Example: partitions created on demand in a 1000 K memory
(the OS occupies 0-100 K; P1, P2, and P3 are resident):

Partition no. | Base  | Size  | Status
0             | 0 K   | 100 K | allocated (OS)
1             | -     | -     | -
2             | 220 K | 180 K | allocated (P1)
3             | 400 K | 200 K | allocated (P2)
4             | -     | -     | -
5             | 850 K | 150 K | allocated (P3)

Free list: (base 100 K, size 120 K), (base 600 K, size 250 K)

Dynamic Partitioning
 Common algorithms for selection of a free area of
memory for creation of a partition are
 First fit: Allocate the first partition/hole that is big enough
 Best fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size. Produces the smallest leftover
hole
 Worst fit: Allocate the largest hole; must also search entire list.
Produces the largest leftover hole.
 Next fit: A variant of First fit; the search begins from the location of
the last allocated partition
 First-fit and best-fit better than worst-fit in terms of speed
and storage utilization
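The placement policies above can be sketched over a list of hole sizes; the hole sizes below are illustrative (next fit is omitted since it only changes where the search starts):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Smallest hole that is big enough (smallest leftover hole)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Largest hole (largest leftover hole)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]  # free-hole sizes in KB (illustrative)
print(first_fit(holes, 212))  # 1 -> the 500 K hole (first big enough)
print(best_fit(holes, 212))   # 3 -> the 300 K hole (smallest leftover)
print(worst_fit(holes, 212))  # 4 -> the 600 K hole (largest leftover)
```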



Dynamic Partitioning
 Usually when a free area is returned, it is
recombined with adjacent free areas to make a
single bigger chunk of free memory
 The pattern in which free areas are returned is not
the same as the order of allocation. This leads to
holes scattered all over memory in dynamic
partitioning, causing External fragmentation
 According to Knuth, approximately 33% or one-
third of memory is wasted due to fragmentation
 Compaction is a solution but with added
overhead

Dynamic Partitioning – Protection & Sharing


 Almost the same as static partitioning, except
that dynamic partitioning allows adjacent
partitions in physical memory to overlap
 Such sharing is restricted to two processes only
 Shared code must ensure that references to itself
are mapped properly during execution on behalf
of any of the participating processes; code must
either be position independent or occupy identical
virtual offsets in address spaces of all processes
that reference it



Dynamic Partitioning – Protection & Sharing

Example: two overlapping partitions recorded in the PDT:

Base | Size
4000 | 2000
5500 | 2500

The first partition spans 4000-6000 and the second spans 5500-8000;
the region 5500-6000 is shared by both processes.



Dynamic Partitioning
 One way to overcome external fragmentation
is compaction – bring all the free memory to
one place; possible only with dynamic
relocation. Compaction is an overhead. It
cannot be performed if I/O is in progress;
either latch the process into memory till I/O is
over or perform I/O in system buffers
 Another solution to external fragmentation is to
allow the address space of a process to be in
non-contiguous memory locations


Non Contiguous Memory Allocation


 Three schemes are possible
 Segmentation
 Paging
 Paged Segmentation



Segmentation
 The address space of a single process is divided
into logical blocks (segments: main program,
functions, stack, arrays, tables) and are placed
into noncontiguous areas of memory; division is
logical hence sizes are different
 Items belonging to a single segment must be
placed in contiguous areas of physical memory
 For relocation purposes, each segment is compiled
to begin at its own virtual address 0. An individual
item within the segment is identified by its offset
relative to the beginning of the enclosing segment


Segmentation
 Each logical address generated by the system
has two components ;
 Segment name (number)
 Offset within the segment
 This two dimensional virtual address
<segment no, offset> needs to be converted into
an equivalent one dimensional physical address
 To facilitate this translation, the base and size
of each segment are recorded as a tuple called
the segment descriptor
 All segment descriptors of a given process are
collected in a table called the Segment
Descriptor Table (SDT)
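The descriptor lookup can be sketched as follows; the SDT contents below are illustrative:

```python
# Segment descriptor table: segment number -> (base, size); values illustrative.
SDT = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Map a two-dimensional <segment, offset> address to one dimension."""
    base, size = SDT[segment]
    if offset >= size:  # offset beyond the segment size -> addressing trap
        raise MemoryError("segment overflow")
    return base + offset

print(translate(2, 53))   # 4353 (base 4300 + offset 53)
print(translate(1, 399))  # 6699 (last legal byte of segment 1)
```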



Segmentation
 The accessing of SDTs is facilitated by
means of dedicated hardware registers called
the SDTBR and SDTLR ; the base and limit
value of SDT is kept in the PCB of the
corresponding process
 Segmentation cuts memory bandwidth in
half; mapping each virtual address requires
two physical memory references
 To expedite translation, the frequently used
segment descriptors can be kept in dedicated
registers (SDRs)



Segmentation – Protection
 Segmentation uses base – limit form of
protection; in addition to the usual protection
between different processes, it provides
protection within the address space of a single
process
 Access right to each segment can be defined
depending on the information stored in its
constituent elements
 Typically, access right bits are included in
segment descriptors; during address mapping,
the intended type of reference is checked
against the access rights for the segment in
question

Segmentation – Sharing
 Segmentation provides flexibility and ease of
sharing
 Shared objects are placed in separate
dedicated segments; different segments of
different processes can share an object with
different access rights. Also the shared
segment can have different virtual numbers
for different SDTs
 To support swapping it is necessary that the
shared segment remain resident while a
participating process can be swapped out



Segmentation – Sharing

Example: three processes reference one shared object through their
SDTs, each under its own segment number and its own access rights
(e.g. as a read-write (RW) segment in SDT 1, a read-only (RO)
segment in SDT 2, and an execute-only (EO) segment in SDT 3).

Paging
 Divide physical memory into fixed-sized blocks
called frames (size is power of 2, between 512
bytes and 8192 bytes); Divide logical memory into
blocks of the same size called pages; the backing
store (disk) is also divided into blocks of the same size
 To run a program of size n pages, need to find n
free frames and load program; Keep track of all free
frames
 Last page need not fit exactly in the frame; internal
fragmentation
 Set up a page table to translate logical to physical
addresses.


Paging
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table
which contains base address of each page in physical
memory.
 Page offset (d) – combined with base address to
define the physical memory address that is sent to the
memory unit.




Managing Page Map Table


 Page table is kept in main memory.
 Page-table base register (PTBR) points to the page
table.
 Page-table length register (PTLR) indicates the size of
the page table.
 In this scheme every data/instruction access requires
two memory accesses: one for the page table and one
for the data/instruction.
 The two-memory-access problem can be solved by the
use of a special fast-lookup hardware cache called
associative memory or translation look-aside buffer
(TLB); the search is extremely fast; however, the number
of entries in a TLB is small, often between 64 and 1,024

Translation Look-aside Buffer (TLB)


 The TLB contains only few of the page-table
entries of the process.
 When a logical address is generated by the
CPU, its page no. is presented to the TLB. If
the page no. is found (a Hit), its frame
number is immediately available and memory
can be accessed. If the page no. is not in the
TLB (a Miss), it is accessed from the
memory as usual and also stored in TLB to
speed up next reference to that page.
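The hit/miss logic can be sketched with a plain dictionary standing in for the associative hardware; the page-table contents are illustrative:

```python
tlb = {}                         # page number -> frame number (the fast cache)
page_table = {0: 7, 1: 3, 2: 9}  # full page table kept "in memory"

def lookup(page):
    """Return (frame, hit?); on a miss, fill the TLB for the next reference."""
    if page in tlb:
        return tlb[page], True   # TLB hit: frame available immediately
    frame = page_table[page]     # TLB miss: extra memory access
    tlb[page] = frame            # cache the entry to speed up the next reference
    return frame, False

print(lookup(1))  # (3, False) -> first reference misses
print(lookup(1))  # (3, True)  -> repeat reference hits
```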


Translation Look-aside Buffer (TLB)


 The percentage of times that a page number is
found in the associative registers is called the Hit
Ratio; it is related to the number of associative
registers in the TLB.
 Assuming the associative lookup takes ε (epsilon)
time units, a memory access takes MAT, and the hit
ratio is α, the Effective Access Time (EAT) is given as
EAT = (MAT + ε)·α + (2·MAT + ε)·(1 – α)



Example
 Assume that it takes 20 ns to access the TLB
and MAT is 100 ns and Hit ratio is 80%. What
is Effective Access Time?
EAT = (0.8)*120 + (0.2)*220
= 140 ns

 Now assume that the hit ratio is 98%

EAT = (0.98)*120 + (0.02)*220 = 122 ns
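The two results above can be checked with a small helper implementing the EAT formula:

```python
def eat(mat_ns, tlb_ns, hit_ratio):
    """EAT = (MAT + eps) * alpha + (2*MAT + eps) * (1 - alpha)."""
    return (mat_ns + tlb_ns) * hit_ratio \
        + (2 * mat_ns + tlb_ns) * (1 - hit_ratio)

print(round(eat(100, 20, 0.80), 1))  # 140.0 ns
print(round(eat(100, 20, 0.98), 1))  # 122.0 ns
```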


Protection
 Memory protection implemented by associating
protection bit with each frame.
 Valid-invalid bit attached to each entry in the page
table:
 “valid” indicates that the associated page is in the
process’ logical address space, and is thus a legal
page.
 “invalid” indicates that the page is not in the process’
logical address space
 Additionally, access rights can be associated with
each page, declaring it to be read-only, read-write,
or execute, to allow only the specified type of access.


Sharing
 Sharing of read-only (reentrant) code is simple in paging



Structure of the Page Table


 In most modern computer systems, the logical
address space is quite large. For example,
consider a system with a 32-bit (2^32) logical
address space. If the page size is 4 KB (2^12), then a
page table may contain up to 1 million entries
(2^32 / 2^12 = 2^20). Assuming that each entry consists of
4 bytes, each process will need 4 MB of space
for its PMT.
 Contiguous allocation of space for PMT is not
feasible.
 PMT itself can be in non-contiguous space.
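The arithmetic above can be reproduced directly:

```python
address_bits, offset_bits, entry_bytes = 32, 12, 4  # 4 KB pages -> 12 offset bits
entries = 2 ** (address_bits - offset_bits)  # 2^20 page-table entries
table_bytes = entries * entry_bytes          # space needed for one PMT
print(entries)      # 1048576 (about 1 million entries)
print(table_bytes)  # 4194304 bytes = 4 MB per process
```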

Structure of the Page Table


 Hierarchical Paging

 Hashed Page Tables

 Inverted Page Tables



Hierarchical Paging
 Break up the logical address space into multiple page tables; A simple
technique is a two-level page table.
 A logical address (on 32-bit machine with 4K page size) is divided into:
 a page number consisting of 20 bits.

 a page offset consisting of 12 bits.

 Since the page table is paged, the page number is further divided into:
 a 10-bit page number.

 a 10-bit page offset.

 Thus, a logical address is as follows:

|   page number   | page offset |
|   p1   |   p2   |      d      |
|   10   |   10   |     12      |

where p1 is an index into the outer page table, and p2 is the
displacement within the page of the outer page table.
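The bit-field split above can be sketched with shifts and masks:

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, d) = (10, 10, 12) bits."""
    d = addr & 0xFFF           # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF  # middle 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF  # top 10 bits: index into the outer page table
    return p1, p2, d

print(split(0x00403005))  # (1, 3, 5)
```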



Two-Level Page-Table Scheme


 Address-translation scheme for a two-level
32-bit paging architecture


Hashed Page Tables


 Common in address spaces > 32 bits.
 The virtual page number is hashed into a
page table. This page table contains a chain
of elements hashing to the same location.
 Virtual page numbers are compared in this
chain searching for a match. If a match is
found, the corresponding physical frame is
extracted.
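A sketch of the chained lookup; the bucket count and mappings below are illustrative:

```python
BUCKETS = 8
table = [[] for _ in range(BUCKETS)]  # each slot holds a chain of entries

def insert(vpn, frame):
    table[vpn % BUCKETS].append((vpn, frame))

def lookup(vpn):
    """Hash the virtual page number, then walk the chain comparing VPNs."""
    for v, frame in table[vpn % BUCKETS]:
        if v == vpn:
            return frame             # match found: extract the physical frame
    return None                      # no match in the chain -> page not mapped

insert(3, 42); insert(11, 7)  # VPNs 3 and 11 collide in bucket 3
print(lookup(11))  # 7
print(lookup(19))  # None (hashes to the same bucket, but no match)
```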



Inverted Page Table


 One entry for each real page of memory.
 Entry consists of the virtual address of the page
stored in that real memory location, with
information about the process that owns that
page.
 Decreases memory needed to store each page
table, but increases time needed to search the
table when a page reference occurs.
 Use hash table to limit the search to one — or at
most a few — page-table entries; TLB can
accelerate access
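The search can be sketched as follows; the table contents are illustrative:

```python
# One entry per physical frame: (pid, virtual_page) of the page stored there.
inverted = [None, ("P1", 0), ("P2", 0), ("P1", 2)]

def frame_of(pid, vpn):
    """Search the whole table; the index of the matching entry IS the frame."""
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame
    raise LookupError("page fault")

print(frame_of("P1", 2))  # 3 -> physical address = 3 * page_size + offset
```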


