OS Unit - 4

The document discusses memory management in computer systems, focusing on instruction execution cycles, address binding, and dynamic relocation. It explains the use of base and limit registers for hardware address protection, the concept of swapping processes in and out of memory, and different memory allocation strategies. Additionally, it covers the importance of logical versus physical address spaces and the role of the Memory Management Unit (MMU) in managing these addresses.

Unit-4
MEMORY MANAGEMENT

INSTRUCTION EXECUTION CYCLE

1. First, instructions are fetched from memory.
2. The instructions are then decoded and may cause operands to be fetched from memory.
3. After the instructions have been executed, results are stored back in memory.

HARDWARE ADDRESS PROTECTION USING BASE AND LIMIT REGISTERS

1. Main memory and the registers built into the processor are the only storage that the CPU can access directly.
2. Machine instructions take memory addresses as arguments; none take disk addresses. Instructions must be in one of these direct-access storage devices before the CPU can operate on them.
3. If data are not in memory, they must be moved there before the CPU can operate on them.
4. The OS has to protect itself from access by user processes, and also ensure correct operation by protecting user processes from one another.
5. Protection can be provided by 2 registers, usually a base and a limit register.
   - The base register holds the smallest legal physical memory address.
   - The limit register specifies the size of the range.

6. EX: base register = 300040, limit register = 120900.
   The program can legally address memory from 300040 up to (but not including) 300040 + 120900 = 420940.
   (Figure: memory layout from address 0 to 1024000: the operating system sits at the bottom, followed by user processes; the base register marks 300040 and base + limit marks 420940.)
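The base/limit check described above can be sketched in a few lines. This is a minimal Python illustration (not real hardware), using the register values from the example:

```python
# Minimal sketch of hardware base/limit protection, using the example's
# values: base = 300040, limit = 120900. Legal user addresses lie in
# [base, base + limit); anything else traps to the OS.

BASE = 300040
LIMIT = 120900

def check_address(addr):
    """Return True if a user-mode access to addr is legal."""
    return BASE <= addr < BASE + LIMIT

print(check_address(300040))   # first legal address: True
print(check_address(420939))   # last legal address: True
print(check_address(420940))   # base + limit itself: False (trap)
```

An address that fails this comparison causes a trap to the OS, which treats the attempt as a fatal error.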
7. Any attempt by a program executing in user mode to access OS memory or other users' memory results in a trap to the OS, which treats the attempt as a fatal error.
8. This prevents a user program from modifying the code and data structures of either the OS or other users.
9. The base and limit registers can be loaded only by the OS.
   - The OS, executing in kernel mode, can access both OS memory and users' memory.
   - This allows the OS to load user programs into user memory and to dump out those programs in case of errors.
M.N.Praneswara Rao, Dept. of CSE, RGMCET, NDL


Address binding:
Binding of instructions and data to memory addresses can be done at any step along the way.

Compile time:
1. At compile time, if you know where the process will reside in memory, then absolute code can be generated.
2. EX: if you know the process will reside starting at location R, then the generated compiler code will start at that location and extend up from there.
3. If at some later time the starting location changes, then it will be necessary to recompile this code.
Load time:
1. If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.
2. Final binding is delayed until load time.
3. If the starting address changes, we need only reload the user code to incorporate this changed value.

Execution time:
1. If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
2. Special hardware is required for this scheme, which we will discuss further.

Logical Vs physical address space

1. An address generated by the CPU is referred to as a logical address.
2. An address seen by the memory unit (the one loaded into the memory-address register) is called a physical address.
3. Compile-time and load-time address binding generate identical logical and physical addresses.
4. Execution-time address binding results in differing logical and physical addresses.
5. The set of logical addresses generated by a program is its logical address space.
6. The set of all physical addresses corresponding to this logical address space is the physical address space.
7. Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ.


DYNAMIC RELOCATION USING A RELOCATION REGISTER

1. Run-time mapping from virtual to physical addresses is done by a hardware device called the MMU (memory management unit). Here we consider mapping with a simple MMU scheme.
2. The base register is now called a relocation register: a special register in the CPU that helps in relocation of a program; it contains an integer value.
3. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
EX:
1. If the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000.
2. MS-DOS running on the Intel 8086 family used 4 relocation registers when loading and running processes.
3. The user never sees the real physical addresses.
4. The user program deals with logical addresses.
5. The memory-mapping hardware converts logical addresses to physical addresses.
6. We now have two types of addresses: logical addresses (range 0 to max) and physical addresses (range R+0 to R+max, where R is the base value).
7. The user generates logical addresses and thinks that the process runs in locations 0 to max.
8. A logical address is the address of an instruction or data byte as used in the process.
9. This address is obtained using an index, base, or segment register.
10. A physical address is the effective memory address of an instruction or data byte.

(Figure: the CPU generates logical address 346; the MMU adds the relocation register value 14000 to produce physical address 14346, which is sent to memory.)
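The mapping in the figure can be sketched as follows; a minimal Python illustration of what the MMU does, not a hardware model:

```python
# The MMU adds the relocation (base) register to every logical address
# generated by the user process, producing the physical address.

RELOCATION_REGISTER = 14000

def mmu_translate(logical_addr):
    return logical_addr + RELOCATION_REGISTER

print(mmu_translate(0))    # 14000
print(mmu_translate(346))  # 14346, as in the figure
```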

SWAPPING

(Figure: swapping of two processes between main memory and a backing store: one process is swapped out while another is swapped in.)

1. A process must be in memory to be executed.
2. A process can be temporarily swapped out of memory to a backing store and then brought back into memory for continued execution.
3. Assume multiprogramming with round-robin CPU scheduling.
   a. When each process finishes its quantum, it will be swapped with another process.
   b. The memory manager swaps processes fast enough that some process will be in memory, ready to execute.
   c. The quantum must be large enough to allow reasonable amounts of computing to be done between swaps.
4. A variant of swapping is used for priority-based scheduling algorithms; this variant is sometimes called roll out, roll in.
5. To which location will a swapped-out process be restored when it is swapped back in?
   - A process that is swapped out will be swapped back into the same memory space it occupied previously.
   - This restriction is dictated by the method of address binding.
   - If binding is done at assembly or load time, then the process cannot easily be moved to a different location.
   - If execution-time binding is used, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time.
6. Swapping requires a backing store.
   - It must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.
7. The system keeps a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run.
8. When the CPU scheduler decides to execute a process, it calls the dispatcher.
9. The dispatcher checks whether the next process is in memory.
10. If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process.
11. It then reloads registers and transfers control to the selected process.
12. The context-switch time in such a swapping system is fairly high.
13. EX: find the swap time for the given process.
    - The user process is 10 MB in size; the backing store is a hard disk with a transfer rate of 40 MB/sec.
    - The actual transfer of the 10 MB process to or from main memory takes 10000 KB / 40000 KB per second = 1/4 second = 250 milliseconds.
    - Assuming no head seeks are necessary, and an average latency of 8 milliseconds,
the swap time is 250 + 8 = 258 milliseconds.
For both swap out and swap in, the total swap time is 516 milliseconds.
The major part of the swap time is transfer time; the total transfer time is directly proportional to the amount of memory swapped.
14. EX: 512 MB of main memory, 25 MB for the OS.
    - The maximum size of a user process can be 512 - 25 = 487 MB.
    - A user process may be much smaller than this; assume 10 MB.
    - A 10 MB process can be swapped out in 258 milliseconds, compared with a 256 MB process, which would require about 6.4 seconds.
    - It would be helpful to know exactly how much memory a user process is actually using; then we would need to swap only what is being used, reducing swap time.
15. Swapping is constrained by other factors as well.
    - If we want to swap a process, we must be sure that it is completely idle.
    - A process may be waiting for an I/O operation when we want to swap it.
    - If the I/O is accessing user memory for I/O buffers, then the process cannot be swapped.
    - Assume an I/O operation is queued because the device is busy.
    - If we swap out P1 and swap in P2, the I/O operation might then attempt to use memory that now belongs to P2.
16. 2 main solutions:
    a. Never swap a process with pending I/O, or execute I/O operations only into OS buffers; transfers between OS buffers and process memory then occur only when the process is swapped in.
    b. Standard swapping is used in few systems: it requires too much swapping time and provides too little execution time.
17. A modification of swapping is used in many versions of UNIX.
    - Swapping is normally disabled but will start if many processes are running and using a large amount of memory.
    - Swapping is again halted when the load on the system is reduced.
    - Early PCs ran multiple large processes by using a modified version of swapping.
    - The MS Windows 3.1 OS supports concurrent execution of processes in memory.
    - If a new process is loaded and there is insufficient main memory, an old process is swapped to disk.
    - This OS does not provide full swapping, because the user decides when to preempt one process.
    - A swapped-out process remains swapped out until the user selects that process to run.
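The swap-time arithmetic in the example above can be reproduced directly (a small Python calculation using the text's numbers):

```python
# Swap-time example: 10 MB process, 40 MB/sec disk, 8 ms average latency,
# no head seeks. Total swap time counts both the swap out and the swap in.

process_mb = 10
transfer_rate_mb_per_s = 40
latency_ms = 8

transfer_ms = process_mb / transfer_rate_mb_per_s * 1000   # 250.0 ms
swap_ms = transfer_ms + latency_ms                          # 258.0 ms
total_ms = 2 * swap_ms                                      # 516.0 ms (out + in)

print(transfer_ms, swap_ms, total_ms)
```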

Contiguous memory allocation

1. Main memory must accommodate both the OS and the various user processes.
2. One of the most efficient methods for allocating the parts of main memory is contiguous memory allocation.
3. Memory is divided into 2 portions:
   1. one for the resident OS;
   2. one for the user processes.
4. The OS can be placed in either low or high memory; since the interrupt vector is often in low memory, the OS is usually placed in low memory as well.
5. Several user processes usually want to reside in memory at the same time, so we must consider how to allocate available memory to the processes that are in the input queue.
6. In contiguous allocation, each process is contained in a single contiguous section of memory.
a) Memory mapping & protection

(Figure: the CPU's logical address is checked against the limit register; if it is within range, the relocation register value is added to form the physical address; otherwise a trap to the operating system signals an addressing error.)

1. Memory mapping and protection can be obtained by using a relocation register together with a limit register.
2. The relocation register contains the value of the smallest physical address.
3. The limit register contains the range of logical addresses.
4. Each logical address must be less than the limit register.
5. The MMU maps the logical address dynamically by adding the value in the relocation register.
6. This mapped address is sent to memory.
7. When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers.
8. Every address generated by the CPU is checked against these registers.
9. We can thus protect the OS and the other users' programs and data from being modified by this running process.
10. This relocation-register scheme also allows the OS size to change dynamically.
11. EX: the OS contains space for device drivers. If a device driver is not commonly used, we can use that space for other purposes. Such code is sometimes called transient OS code; it comes and goes as needed.

b) Memory allocation

1. A simple method for allocating memory is to divide memory into several fixed-sized partitions.
2. Each partition may contain exactly one process.
3. The degree of multiprogramming depends on the number of partitions.
4. In this multiple-partition method, when a partition is free, a process from the input queue is selected and loaded into the free partition.
5. When a process terminates, the partition becomes available for another process.
6. In the fixed-size-partition scheme, the OS keeps a table indicating which parts of memory are available and which are occupied.
7. Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
8. When a process arrives and needs memory, we search for a hole large enough for this process.
9. If we find one, we allocate only as much memory as is needed, keeping the rest available for future requests.
10. A process entering the system is put into an input queue.
11. The OS considers the memory requirement of each process and the available memory to determine which processes are allocated memory.
12. When a process is allocated memory, it runs until it completes or waits for the CPU.
13. When a process terminates, it releases its memory back to the OS.
14. This freed memory is then filled with another process from the input queue.
15. The memory requirement of the next process may not be satisfiable, i.e. no available hole is large enough to hold that process.
16. The OS can then wait until a large enough block is available, or it can skip down the input queue to see
17. whether there is some other process with smaller memory requirements.
18. In general, when a process arrives and requests memory, the system searches the set of holes for one that is large enough.
19. If the hole is too large, it is split into two parts:
20. the first part is allocated to the arriving process;
21. the other is returned to the set of holes.
22. When a process terminates, it releases its memory, which is placed back in the set of holes.
23. Adjacent holes are merged to form one larger hole.
24. At this point, the system checks whether there are any processes waiting for memory and whether this freed and recombined memory could satisfy the demands of those processes.

The dynamic storage allocation problem: how to satisfy a request of size n from a list of free holes.

FIRST FIT: allocate the first hole that is big enough.
- Searching can start either at the beginning of the set of holes or where the previous first-fit search ended.
- Searching stops as soon as we find a free hole that is large enough.

BEST FIT: allocate the smallest hole that is big enough.
- We must search the entire list, unless the list is ordered by size.
- This strategy produces the smallest leftover hole.
WORST FIT: allocate the largest hole.
- We must search the entire list, unless it is sorted by size.
- This strategy produces the largest leftover hole, which may be more useful than the smallest leftover hole from the best-fit approach.

EX:
For memory partitions of 100K, 500K, 200K, 300K, and 600K (in that order), place processes of size 212K, 417K, 112K, and 426K (in that order) according to the first-fit, best-fit, and worst-fit algorithms.

Free partitions: 100K, 500K, 200K, 300K, 600K

First Fit:
We allocate each process to the first available partition large enough to accommodate it.
- P1 of size 212K is allocated in the 500K partition, leaving a 288K hole (500K - 212K = 288K).
  Free partitions: 100K, 288K, 200K, 300K, 600K
- P2 of size 417K is allocated in the 600K partition, leaving a 183K hole (600K - 417K = 183K).
  Free partitions: 100K, 288K, 200K, 300K, 183K
- P3 of size 112K is allocated in the 200K partition, leaving an 88K hole (200K - 112K = 88K).
  Free partitions: 100K, 288K, 88K, 300K, 183K
- P4 of size 426K does not find any free partition large enough to accommodate it; hence it must wait until either P2 or P1 is swapped out of memory.


Fragmentation

1. As processes are loaded and removed from memory, the free memory space is broken into little pieces.
2. External fragmentation exists when there is enough total memory space for the request but the spaces are not contiguous: storage is fragmented into a large number of small holes.
   - If all these small pieces of memory were in one big free block, we might be able to run several more processes.
   - Depending on the total amount of memory storage and the average process size, external fragmentation may be a minor or a major problem.
3. 50% rule: one-third of memory may be unusable; this property is known as the 50-percent rule.
4. Internal fragmentation
   - EX: consider a hole of 18,464 bytes, and suppose the next process requests 18,462 bytes.
   - If we allocate exactly 18,462 bytes, we are left with a hole of 2 bytes.
   - The general approach to avoiding this problem is to break memory into fixed-sized blocks and allocate memory in units based on block size.
   - The memory allocated to a process may then be slightly larger than the requested memory.
   - The difference between these two numbers is internal fragmentation:
     memory that is internal to a partition but is not being used.
5. A solution to the problem of external fragmentation is compaction.
   - Compaction shuffles the memory contents so as to place all free memory together in one large block.
   - If relocation is static and done at assembly or load time, compaction cannot be done.
   - Compaction is possible only if relocation is dynamic and done at execution time.
   - The simplest compaction algorithm is to move all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory.
6. Another solution is to permit the logical address space of processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever the latter is available.
   - Two techniques achieve this solution: paging and segmentation.
   - These techniques can also be combined.

PAGING

Paging avoids external fragmentation.
1. Paging permits the physical address space of a process to be noncontiguous.
2. Paging is the most commonly used scheme in most OSs.

BASIC METHOD
3. Break physical memory into fixed-sized blocks called frames.
4. Break logical memory into blocks of the same size called pages.
5. When a process is to be executed, its pages are loaded into any available memory frames from the backing store.
6. The backing store is divided into fixed-sized blocks equal in size to the memory frames.
7. Every address generated by the CPU is divided into 2 parts: a page number (p) and a page offset (d).
8. The page number is used as an index into a page table.
9. The page table contains the base address of each page in physical memory.
10. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

(Figure: paging hardware: the CPU's logical address (p, d) is translated by looking up p in the page table to obtain a frame number, which is combined with d to form the physical address sent to physical memory.)

11. The page size is defined by the hardware.
12. The page size is a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.
13. If the size of the logical address space is 2^m and the page size is 2^n addressing units,
14. then the high-order m-n bits of a logical address designate the page number.
(Figure: paging model of logical and physical memory: each page of logical memory maps through the page table to a frame of physical memory.)

15. The n low-order bits of a logical address designate the page offset.
16. Thus the page number p is given by the high-order m-n bits, and the page offset d by the low-order n bits.

(Figure: paging example with a page size of 4 bytes and a small physical memory.)

17. Using the above diagram:
18. Logical address 0 is page 0, offset 0. Page 0 is in frame 5 (from the page table), so logical address 0 maps to physical address 20 = (5 * 4) + 0.
19. Logical address 3 is page 0, offset 3. Page 0 is in frame 5, so logical address 3 maps to physical address 23 = (5 * 4) + 3.
20. In paging there is no external fragmentation: any free frame can be allocated to a process that needs it.
21. We may, however, have some internal fragmentation, since frames are allocated as units.
EX:
If the page size is 2048 bytes, a process of 72,766 bytes would need 35 pages plus 1086 bytes. It would be allocated 36 frames, resulting in an internal fragmentation of 2048 - 1086 = 962 bytes.
In the worst case, a process would need n pages plus 1 byte; it would be allocated n + 1 frames, resulting in an internal fragmentation of almost an entire frame.
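The translations above can be sketched in Python. The page size of 4 bytes comes from the example; the full page-table contents [5, 6, 1, 2] are an assumption (only page 0 -> frame 5 is stated in the text):

```python
# Paging translation: split the logical address into page number and
# offset, look the page up in the page table, and rebuild the address.

PAGE_SIZE = 4                      # 2^2 bytes, so the offset is 2 bits
page_table = [5, 6, 1, 2]          # frame number for each page (assumed)

def translate(logical):
    page = logical // PAGE_SIZE    # high-order m-n bits
    offset = logical % PAGE_SIZE   # low-order n bits
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))   # page 0, offset 0 -> frame 5 -> 20
print(translate(3))   # page 0, offset 3 -> frame 5 -> 23
print(translate(4))   # page 1, offset 0 -> frame 6 -> 24
```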

22. It might seem better to have small page sizes,
23. but overhead is involved in each page-table entry,
24. and this overhead is reduced as the size of the pages increases.
25. Today pages are typically between 4 KB and 8 KB in size, and some systems support even larger pages.
    EX: Solaris supports both 4 KB and 8 MB page sizes (multiple page sizes).
    If each page-table entry is 4 bytes, the entries can address 2^44 bytes (16 TB) of physical memory.
26. Each page of a process needs one frame.
27. If a process requires n pages, at least n frames must be available in memory.
28. If n frames are available, they are allocated to this arriving process.
29. The first page of the process is loaded into one of the allocated frames, and the frame number is put into the page table.
30. The next page is loaded into another frame, and its frame number is put into the page table, and so on.
(Figure: (a) the free-frame list before allocation and (b) after allocation: frames taken from the free-frame list are recorded in the new process's page table.)
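The allocation step in the figure can be sketched as follows (an illustrative Python snippet; the frame numbers in the free list are only an example):

```python
# Load a new process by taking one free frame per page and recording
# each frame number in the process's page table.

def load_process(num_pages, free_frames):
    if len(free_frames) < num_pages:
        return None                  # not enough frames: the process waits
    return [free_frames.pop(0) for _ in range(num_pages)]

free_list = [14, 13, 18, 20, 15]
page_table = load_process(4, free_list)
print(page_table)   # [14, 13, 18, 20]
print(free_list)    # [15] remains on the free-frame list
```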

31. Since the OS is managing physical memory, it must know the allocation details of physical memory:
32. which frames are allocated, which are free, and how many total frames there are.
33. This information is kept in a data structure called the frame table. The frame table has one entry for each physical page frame, indicating whether it is free or allocated and, if allocated, to which page of which process.
34. The OS maintains a page table for each process.
35. This is used to translate logical addresses to physical addresses whenever the OS must map a logical address to a physical address manually.

PAGING USING A TLB

1. Each OS has its own methods for storing page tables.
2. Most OSs allocate a page table for each process.
3. The use of dedicated registers for the page table is satisfactory if the page table is reasonably small (e.g. 256 entries).
4. Nowadays a page table can be very large (e.g. 1 million entries).
5. For these, the use of registers is not feasible.
6. Instead, the page table is kept in main memory, and a page-table base register (PTBR) points to the page table.
7. Changing page tables then requires changing only this one register, reducing context-switch time.
8. PROBLEM: the time required to access a user memory location. If we want to access a location:
   1. We must first index into the page table, using the value in the PTBR offset by the page number. This requires a memory access.
   2. It provides us with the frame number, which is combined with the page offset to produce the actual address. Then we can access the desired location in memory.
   3. In this scheme, 2 memory accesses are needed to access a byte (one for the page-table entry, one for the byte itself).
   4. Thus memory access is slowed by a factor of 2.
(Figure: paging hardware with TLB: the page number of the logical address is first presented to the TLB; on a TLB hit the frame number is obtained directly and the physical address is formed, while on a TLB miss the page table in memory is consulted.)

A) Translation Look-aside Buffer (TLB)

1. The solution to this problem is to use a special, small, fast-lookup hardware cache called the Translation Look-aside Buffer (TLB).
2. Each entry in the TLB consists of 2 parts: a key and a value.
3. When the associative memory is presented with an item, the item is compared with all keys simultaneously.
4. If the item is found, the corresponding value field is returned.
5. The search is fast, but the hardware is expensive.
6. The number of entries in a TLB is small, often numbering from 64 to 1024.
7. The TLB contains only a few of the page-table entries.
8. When a logical address is generated, its page number is presented to the TLB.
9. If the page number is found in the TLB (a TLB hit), its frame number is immediately available and is used to access memory.
10. The whole task may take less than 10% longer than it would if an unmapped memory reference were used.
11. If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made.
12. When the frame number is obtained, we can use it to access memory.
13. We then add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
14. If the TLB is already full of entries, the OS must select one for replacement.
15. Replacement policies range from LRU (least recently used) to random.
16. Some TLBs store address-space identifiers (ASIDs) in each TLB entry.
17. An ASID uniquely identifies each process and is used to provide address-space protection for that process.
18. When the TLB attempts to resolve virtual page numbers, it ensures that the ASID for the currently running process matches the ASID associated with the virtual page.
19. If the ASIDs do not match, the attempt is treated as a TLB miss.
20. An ASID also allows the TLB to contain entries for several different processes simultaneously. Without ASID support, on each context switch the TLB must be flushed (erased) to ensure that the next executing process does not use the wrong translation information.
21. Otherwise the TLB could include old entries containing valid virtual addresses but incorrect or invalid physical addresses left over from the previous process.
22. The percentage of times that a particular page number is found in the TLB is called the hit ratio.
23. An 80% hit ratio means we find the page number in the TLB 80% of the time.
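The hit/miss behaviour described in points 8-15 can be modelled as a small cache in front of the page table. This is an illustrative Python sketch; the 2-entry capacity, the page-table contents, and the FIFO replacement policy are assumptions for the demo, not properties of any particular hardware:

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table      # backing page table in "memory"
        self.entries = OrderedDict()      # key: page number, value: frame
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:          # TLB hit: frame number at once
            self.hits += 1
            return self.entries[page]
        self.misses += 1                  # TLB miss: consult the page table
        frame = self.page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry (FIFO)
        self.entries[page] = frame        # cache it for the next reference
        return frame

tlb = TLB(capacity=2, page_table={0: 5, 1: 6, 2: 1, 3: 2})
for page in [0, 0, 1, 2, 0]:              # page 0 is evicted before its last use
    tlb.lookup(page)
print(tlb.hits, tlb.misses)               # 1 4
```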
EX:
a) If it takes 20 nanoseconds (ns) to search the TLB and 100 ns to access memory, what is the time taken to access a page on a TLB hit and on a TLB miss?
If it is a TLB hit (the page number is in the TLB):
- the memory-mapped access takes 20 + 100 = 120 ns.
If there is a TLB miss:
- time to search the TLB: 20 ns;
- we must then go to the page table (in memory) to get the frame number: 100 ns;
- we must then access the desired byte from memory: 100 ns;
- total time = 20 + 100 + 100 = 220 ns.
b) Considering hit ratios of 80% and 98%, what is the effective access time?
For an 80% hit ratio:
Effective access time = 0.80 * 120 + 0.20 * 220 = 140 ns.
We suffer a 40% slowdown in memory access time (from 100 to 140 ns).
For a 98% hit ratio:
Effective access time = 0.98 * 120 + 0.02 * 220 = 122 ns.
The increased hit ratio produces only a 22% slowdown in access time.
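The effective-access-time calculation can be reproduced with a short Python sketch using the example's numbers:

```python
# 20 ns TLB search, 100 ns per memory access. A hit costs one memory
# access; a miss costs two (page-table entry, then the byte itself).

TLB_SEARCH = 20
MEM_ACCESS = 100

hit_time = TLB_SEARCH + MEM_ACCESS          # 120 ns
miss_time = TLB_SEARCH + 2 * MEM_ACCESS     # 220 ns

def effective_access_time(hit_ratio):
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80))  # about 140 ns
print(effective_access_time(0.98))  # about 122 ns
```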


B) PROTECTION

1. Memory protection in a paged environment works as follows.
2. Protection bits are associated with each frame.
3. These bits are kept in the page table.
4. One bit can define a page to be read-write or read-only.
5. Every reference to memory goes through the page table to find the correct frame number.
6. The protection bits can be checked to verify that no writes are being made to a read-only page.
7. An attempt to write to a read-only page causes a hardware trap to the OS (a memory-protection violation).
8. Hardware can be created to provide read-only, read-write, or execute-only protection, or, by providing separate protection bits for each kind of access, any combination of these accesses can be allowed.
9. Illegal attempts will be trapped to the OS.
10. One additional bit is attached to each entry in the page table: a valid-invalid bit.
11. When this bit is set to valid, the associated page is in the process's logical address space and is thus a legal (valid) page.
12. When the bit is set to invalid, the page is not in the process's logical address space.
13. Illegal addresses are trapped by use of the valid-invalid bit.
14. The OS sets this bit for each page to allow or disallow access to the page.
15. EX: a system with a 14-bit address space, i.e. the system has addresses from 0 to 16383.
    - We have a program that uses only addresses 0 to 10468.
    - The page size is 2 KB (2048 bytes).
    - For a program of size 10468, we require 5 pages of 2K size plus 228 bytes, so 6 pages in all: page 0 to page 5 (addresses 0-12287).
    - Any attempt to generate an address in page 6 or 7 finds the valid-invalid bit set to invalid, and the attempt is trapped.
    - Notice that we have a problem with this scheme: the program extends only up to address 10468, so any reference beyond that address is illegal.
    - But page 5 is marked valid, so accesses up to address 12287 are treated as valid.
    - Only addresses from 12288 to 16383 are invalid.
    - This is a consequence of the 2 KB page size and reflects the internal fragmentation of paging.
16. It is a rare situation for a process to use all of its address range.
17. Many processes use only a small fraction of the address space available to them.
18. Creating a page table with entries for every page in the address range would be wasteful:
19. most of this table would be unused but would occupy valuable memory space.
20. Some systems provide hardware, in the form of a page-table length register (PTLR), to indicate the size of the page table.
21. This value is checked against every logical address to verify that the address is in the valid range for the process.
22. Failure of this test causes an error trap to the OS.
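The valid-invalid-bit example can be verified with a short calculation (Python; this just reproduces the arithmetic of the example above):

```python
# Reproduces the arithmetic of the valid-invalid-bit example: a 14-bit
# address space (addresses 0-16383), 2 KB pages, and a program that uses
# only addresses 0 to 10468.
import math

PAGE_SIZE = 2048
ADDRESS_SPACE = 2 ** 14            # 16384 addresses -> 8 pages in total
program_top = 10468                # highest address the program uses

pages_total = ADDRESS_SPACE // PAGE_SIZE
pages_valid = math.ceil((program_top + 1) / PAGE_SIZE)
valid_bits = [page < pages_valid for page in range(pages_total)]

print(pages_valid)                     # 6: pages 0-5 are marked valid
print(valid_bits)                      # pages 6 and 7 are invalid
# Because of the 2 KB page size, addresses 10469-12287 still look valid:
print(pages_valid * PAGE_SIZE - 1)     # 12287, last address page 5 covers
```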


(Figure: page table with frame numbers and valid-invalid bits for the example program of size 10468: entries for pages 0 through 5 are valid, while pages 6 and 7 are marked invalid.)

SEGMENTATION

Consider a program with a set of subroutines, procedures, functions, or modules. There may also be various data structures: tables, arrays, stacks, variables, and so on.

1. Segmentation is a memory-management scheme that supports this user view of memory.
2. A logical address space is a collection of segments.
3. Each segment has a name and a length.
4. The addresses specify both the segment name and the offset within the segment.
5. The user therefore specifies each address by two quantities: a segment name and an offset.
6. Segments are numbered and are referred to by a segment number, rather than by a segment name. Thus, a logical address consists of a two tuple:
   <segment-number, offset>
7. Each entry of the segment table has a segment base and a segment limit.

(Figure: segmentation hardware: the segment number s indexes the segment table; the offset d is compared with the segment limit and, if valid, added to the segment base to form the physical address; otherwise a trap to the operating system signals an addressing error.)


8. The segment base contains the starting physical address where the segment resides in memory, whereas the segment limit specifies the length of the segment.
9. A logical address consists of two parts: a segment number, s, and an offset into that segment, d.
10. The segment number is used as an index into the segment table.
11. The offset d of the logical address must be between 0 and the segment limit.
12. If it is not, we trap to the operating system


address in
base to produce the
3 . It this offset is legal, it is added to the segment
physical memory of the desired byte.
Example: 4.
14. We have five segments numbered from 0 through
as shown.
15. The segments are stored in physical memory

subroutine stack 1400


sogment o
sogmont3 2400
symbol
table
segmento
lImit:base
o1000 1400
Sqrt sogmont4 1 400 6300 3200
400 4300
maini segment
program 004700
4 1000 4/0
segment table 4300
4700 ment 2
Seyment
segment.
logical address space

5700
6300
6700 menti
physical memorv

the beginning
16. The segment table has a separate entry for each segment, giving
and the length of that
address of the segment in physical memory (or base)
segment (or limit).
17. For example, segment 2 is 400 bytes long and begins at location 4300.
18. A reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
19. A reference to segment 3, byte 852, is mapped to 3200 (the base of
segment 3) + 852 = 4052.
20. A reference to byte 1222 of segment 0 would result in a trap to the
operating system, as this segment is only 1,000 bytes long.
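The lookups in items 17-20 can be sketched in Python (a minimal illustration;
the table values come from the example above, and the hardware trap is modeled
as a raised exception):

```python
# Segment table from the example: segment number -> (limit, base).
SEGMENT_TABLE = {
    0: (1000, 1400),
    1: (400, 6300),
    2: (400, 4300),
    3: (1100, 3200),
    4: (1000, 4700),
}

def translate(segment: int, offset: int) -> int:
    """Map a logical address <segment, offset> to a physical address."""
    limit, base = SEGMENT_TABLE[segment]
    if not 0 <= offset < limit:
        # The hardware would raise a trap: addressing error.
        raise MemoryError(f"trap: offset {offset} out of range for segment {segment}")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
print(translate(3, 852))  # 3200 + 852 = 4052
```

A reference such as byte 1222 of segment 0 raises the trap, since segment 0 is
only 1,000 bytes long.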


Demand Paging
A valid-invalid bit is associated with each entry in the page table.
1. When this bit is set to "valid," the value indicates that the associated
page is both legal and in memory.
2. If the bit is set to "invalid," the value indicates that the page either is
not valid (that is, not in the logical address space of the process), or is
valid but is currently on the disk.
3. The page-table entry for a page that is brought into memory is set as
usual, but the page-table entry for a page that is not currently in memory is
simply marked invalid, or contains the address of the page on disk.

[Figure: page table with valid-invalid bits; pages marked valid are in
physical memory, while invalid entries are either illegal or reside on disk.]

4. One option is to load the entire program into memory at program execution
time.
5. With demand paging, pages are loaded only as they are needed.
6. If the process tries to access a page that was not brought into memory,
this causes a page-fault trap.
7. The paging hardware, in translating the address through the page table,
will notice that the invalid bit is set, causing a trap to the operating
system. This trap is the result of the operating system's failure to bring
the desired page into memory.

Procedure for handling this page fault:

[Figure: steps in handling a page fault — the reference traps to the
operating system, the page is located on the backing store, the missing page
is brought into a free frame in physical memory, the page table is reset, and
the instruction is restarted.]
1. We check an internal table for this process to determine whether the
reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid,
but we have not yet brought in that page, we now page it in.
3. We find a free frame by taking one from the free-frame list.
4. We schedule a disk operation to read the desired page into the newly
allocated frame.
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address
trap. The process can now access the page as though it had always been in
memory.
Performance of Demand Paging
1. With demand-paged virtual memory, pages are loaded only when they are
needed during execution.
2. This is similar to paging with swapping, where processes reside in
secondary memory.
3. When we want to execute a process, we swap it into memory.
4. A lazy swapper never swaps a page into memory until it is needed.
5. A page fault may occur, and we must be able to restart any instruction
after a page fault.
6. We save the state (registers, instruction counter) of the process where
the page fault occurs.
7. We are then able to restart at exactly the same place and state.
8. If the error occurs at instruction fetch (IF), we restart by fetching the
instruction again.
9. If the error occurs at operand fetch (OF), we must again fetch and decode
the instruction and then fetch the operand.
10. Example: consider the instruction ADD A, B giving C:
a. Fetch and decode the instruction (ADD).
b. Fetch A.
c. Fetch B.
d. Add A and B.
e. Store the sum in C.
11. If a page fault occurs when we are storing in C (because C is in a page
that is not in memory),
12. then we need to restart the instruction from the beginning: fetching the
instruction, decoding it again, fetching the two operands, and adding them.
13. A difficulty arises
14. only when one instruction modifies several different locations.
15. Consider a move instruction that moves 256 bytes from one location to
another.
16. If a page fault occurs when the move is partially done,
17. and the source and destination blocks overlap, the source may have been
modified, in which case we cannot simply restart the instruction.
2 ways to handle this:
1. Access both ends of both blocks first. If a page fault is going to occur,
it will happen before anything is modified. The move can then take place; no
page fault can occur, since all pages are in memory.
2. Use temporary registers to hold the values of overwritten locations. If
there is a page fault, all the old values are written back into memory before
the trap occurs. This action restores memory to its state before the
instruction was started, so that the instruction can be repeated.


Page Replacement
1. If no frame is free, we find one that is not currently being used and
free it.
2. We can free a frame by writing its contents to swap space and changing the
page table (and all other tables) to indicate that the page is no longer in
memory.
3. The page-fault service routine is modified to include page replacement:
a. Find the location of the desired page on the disk.
b. Find a free frame:
1. If there is a free frame, use it.
2. If there is no free frame, use a page-replacement algorithm to
select a victim frame.
3. Write the victim frame to the disk; change the page and frame
tables accordingly.
c. Read the desired page into the newly freed frame; change the page and
frame tables.
d. Restart the user process.
4. If no frames are free, two page transfers (one out and one in) are
required. This situation effectively doubles the page-fault service time and
increases the effective access time.


5. We can reduce this overhead by using a modify bit (or dirty bit).
6. When this scheme is used, each page or frame has a modify bit associated
with it in the hardware.
7. The modify bit for a page is set by the hardware whenever any word or byte in

the page is written into, indicating that the page has been modified.
8. When we select a page for replacement, we examine its modify bit.
9. If the bit is set, we know that the page has been modified since it was read in

from the disk.


10. In this case, we must write that page to the disk. If the modify bit is not set,
however, the page has not been modified since it was read into memory.
11. Therefore, if the copy of the page on the disk has not been overwritten (by some
other page, for example), then we need not write the memory page to the disk: It

is already there.
12. This technique also applies to read-only pages (for example, pages of binary

code)
13. Such pages cannot be modified; thus, they may be discarded when desired.
Page-Replacement algorithm
1. There are many different page-replacement algorithms. How do we select a
particular replacement algorithm? In general, we want the one with the lowest
page-fault rate.
2. Evaluate an algorithm by running it on a particular string of memory references
and computing the number of page faults. The string of memory references is
called a reference string.
3. First, for a given page size (and the page size is generally fixed by the
hardware or system), we need to consider only the page number.
4. Second, if we have a reference to a page p, then any immediately following
references to page p will never cause a page fault. Page p will be in memory after
the first reference, so the immediately following references will not fault.

FIFO Page Replacement


1. A FIFO replacement algorithm associates with each page the time when that
page was brought into memory.
2. When a page must be replaced, the oldest page is chosen.
3. We create a FIFO queue to hold all pages in memory. We replace the page at
the head of the queue. When a page is brought into memory, we insert it at
the tail of the queue.

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (three page frames)

4. The first three references (7, 0, 1) cause page faults and are brought
into the empty frames.
5. The next reference (2) replaces page 7, because page 7 was brought in
first.
6. 0 is the next reference, and 0 is already in memory, so we have no fault
for this reference.
7. The first reference to 3 results in the replacement of page 0.
8. Page 1 is then replaced by page 0.
9. There are 15 faults altogether.
10. Its performance is not always good.
11. A bad replacement choice increases the page-fault rate and slows process
execution.
12. Problems are possible: consider the reference string 1, 2, 3, 4, 1, 2, 5,
1, 2, 3, 4, 5 with a FIFO page-replacement algorithm and three versus four
frames.


The number of faults for four frames (ten) is greater than the number of
faults for three frames (nine). This is a most unexpected result.
Belady's anomaly: for some page-replacement algorithms, the page-fault rate
may increase as the number of allocated frames increases.
We would expect that giving more memory to a process would improve its
performance.
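A short simulation reproduces both the fifteen faults above and Belady's
anomaly (a sketch, not an OS implementation):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in reference_string:
        if page in resident:
            continue                      # no fault: page already in memory
        faults += 1
        if len(resident) == num_frames:   # no free frame: evict the oldest page
            resident.discard(queue.popleft())
        queue.append(page)
        resident.add(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(ref, 3))                             # 15 faults, as in the text
belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 vs 10: Belady's anomaly
```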
Optimal Page Replacement
1. The optimal page-replacement algorithm has the lowest page-fault rate of
all algorithms and will never suffer from Belady's anomaly.
2. Replace the page that will not be used for the longest period of time.

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (three page frames)

3. The first three references cause faults that fill the three empty frames.
4. The reference to page 2 replaces page 7, because page 7 will not be used
until reference 18, whereas page 0 will be used at 5, and page 1 at 14. The
reference to page 3 replaces page 1, as page 1 will be the last of the three
pages in memory to be referenced again.
5. With only nine page faults, optimal replacement is much better than a FIFO
algorithm, which resulted in fifteen faults.
Note: If we ignore the first three faults, which all algorithms must suffer,
then optimal replacement is twice as good as FIFO replacement.
6. The optimal page-replacement algorithm is difficult to implement, because
it requires future knowledge of the reference string. (We encountered a
similar situation with the SJF CPU-scheduling algorithm.)
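The same kind of simulation, with the eviction rule changed to "farthest next
use," reproduces the nine faults (a sketch only; a real system cannot do this,
since it would need future knowledge, as item 6 notes):

```python
def opt_faults(reference_string, num_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    resident, faults = set(), 0
    for i, page in enumerate(reference_string):
        if page in resident:
            continue
        faults += 1
        if len(resident) == num_frames:
            future = reference_string[i + 1:]
            # Evict the resident page whose next use is farthest away;
            # pages never used again count as infinitely far.
            victim = max(resident,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            resident.discard(victim)
        resident.add(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(ref, 3))  # 9 faults: the minimum possible for this string
```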
LRU Page Replacement
1. Replace the page that has not been used for the longest period of time.
This approach is the least-recently-used (LRU) algorithm.
2. LRU replacement associates with each page the time of that page's last use.
3. When a page must be replaced, LRU chooses the page that has not been used
for the longest period of time.

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (three page frames)
4. The LRU algorithm produces 12 faults.
5. The first 5 faults are the same as those for optimal replacement.
6. When the reference to page 4 occurs, however, LRU replacement sees that,
of the three frames in memory, page 2 was used least recently. Thus, the LRU
algorithm replaces page 2, not knowing that page 2 is about to be used.
7. When it then faults for page 2, the LRU algorithm replaces page 3, since
it is now the least recently used of the three pages in memory.
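Tracking only each page's time of last use is enough to simulate LRU and
reproduce the twelve faults (a sketch):

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under least-recently-used replacement."""
    last_use, faults = {}, 0          # page -> time of last use
    for time, page in enumerate(reference_string):
        if page not in last_use:
            faults += 1
            if len(last_use) == num_frames:
                # Evict the page whose last use is oldest.
                victim = min(last_use, key=last_use.get)
                del last_use[victim]
        last_use[page] = time
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(ref, 3))  # 12 faults, as in the text
```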
An LRU page-replacement algorithm may require substantial hardware
assistance. The problem is to determine an order for the frames defined by
the time of last use. Two implementations are feasible:
Two implementations are feasible
Counters
1. Associate with each page-table entry a time-of-use field and add to the
CPU a logical clock or counter.
2. The clock is incremented for every memory reference.
3. Whenever a reference to a page is made, the contents of the clock register
are copied to the time-of-use field in the page-table entry for that page.
4. In this way, we always have the "time" of the last reference to each page.
5. We replace the page with the smallest time value.
6. This scheme requires a search of the page table to find the LRU page and a
write to memory for each memory access.
7. The times must also be maintained when page tables are changed.
Stack
1. Another approach to implementing LRU replacement is to keep a stack of
page numbers.
2. Whenever a page is referenced, it is removed from the stack and put on the
top.
3. The most recently used page is always at the top of the stack, and the
least recently used page is always at the bottom.
4. Because entries must be removed from the middle of the stack, it is best
to implement this approach by using a doubly linked list with a head and tail
pointer.
5. Removing a page and putting it on the top of the stack then requires
changing six pointers at worst.
6. Each update is a little more expensive, but there is no search for a
replacement; the tail pointer points to the bottom of the stack, which is the
LRU page.
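The stack scheme can be sketched with Python's OrderedDict, which is itself
backed by a doubly linked list, so move-to-top and find-bottom mirror the
pointer operations described above (the class and method names are
illustrative):

```python
from collections import OrderedDict

class LRUStack:
    """Stack of page numbers: last item is MRU, first item is LRU."""
    def __init__(self):
        self.stack = OrderedDict()

    def reference(self, page):
        # Remove the page from the middle (if present) and put it on top.
        if page in self.stack:
            self.stack.move_to_end(page)
        else:
            self.stack[page] = True

    def lru_page(self):
        # The "tail pointer": the bottom of the stack is the LRU page.
        return next(iter(self.stack))

s = LRUStack()
for p in [4, 7, 0, 7, 1, 0]:
    s.reference(p)
print(s.lru_page())  # 4: least recently used
```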

COUNTING-BASED PAGE REPLACEMENT
1. Keep a counter of the number of references that have been made to each
page.
2. LFU: the page with the smallest count is replaced.
An actively used page should have a large reference count.
Problem: a page that is heavily used during the initial phase of a process
but is never used again. Because that page has a large count, it remains in
memory even though it is no longer used.
Solution: shift the counts right by 1 bit at regular intervals, which decays
the use count over time.
3. MFU: the page with the largest count is replaced, based on the argument
that the page with the smallest count was probably just brought in and has
yet to be used.

LRU-APPROXIMATION PAGE REPLACEMENT
1. The reference bit for a page is set by the hardware whenever that page is
referenced (either a read or a write to any byte in the page).
2. Reference bits are associated with each entry in the page table.
3. Initially, all bits are cleared (to 0) by the operating system.
4. As a user process executes, the bit associated with each page referenced
is set (to 1) by the hardware.
5. After some time, we can determine which pages have been used and which
have not been used by examining the reference bits.

a) Additional-Reference-Bits Algorithm
1. We can gain additional ordering information by recording the reference
bits at regular intervals.
2. Keep an 8-bit byte for each page in a table in memory.
3. At regular intervals (e.g., every 100 ms), control is transferred to the OS.
4. The OS shifts the reference bit for each page into the high-order bit of
its 8-bit byte, shifting the other bits right.
5. These 8-bit shift registers contain the history of page use for the last
eight time periods.
6. If a register contains 00000000, the page has not been used in the last
eight time periods.
7. If it contains 11111111, the page has been used at least once in each
period.
8. The page with the lowest number is the LRU page, and it can be replaced.
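The shifting scheme can be sketched as follows (the page names and the
tick-driven interface are hypothetical; in a real system the hardware supplies
the reference bits and a timer interrupt drives the shift):

```python
# Aging via additional reference bits: at each timer interval, shift the
# hardware reference bit into the high-order bit of an 8-bit history byte.
history = {p: 0 for p in ["A", "B", "C"]}   # hypothetical pages

def timer_tick(reference_bits):
    for page, ref in reference_bits.items():
        history[page] = (ref << 7) | (history[page] >> 1)

# Page A is referenced in every interval, B in none, C only in the first.
timer_tick({"A": 1, "B": 0, "C": 1})
for _ in range(7):
    timer_tick({"A": 1, "B": 0, "C": 0})

print(f"{history['A']:08b}")  # 11111111: used in each of the last 8 periods
print(f"{history['B']:08b}")  # 00000000: not used in the last 8 periods
victim = min(history, key=history.get)
print(victim)                 # B: the lowest value marks the LRU page
```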
b) Second-Chance Algorithm
1. The basic algorithm is FIFO.
2. When a page has been selected, its reference bit is inspected; if it is 0,
we proceed to replace the page.
3. If it is 1, we give the page a second chance and move on to select the
next FIFO page.
4. If a page gets a second chance, its reference bit is cleared and its
arrival time is reset to the current time.
5. A page given a second chance will not be replaced until all other pages
have been replaced or given second chances.
6. One way to implement the second-chance algorithm is as a circular queue.
7. A pointer indicates which page is to be replaced next.
8. When a frame is needed, the pointer advances until it finds a page with a
0 reference bit.
9. As it advances, it clears the reference bits.
10. Once a victim page is found, that page is replaced and the new page is
inserted in its place.
11. In the worst case, when all bits are set, the pointer cycles through the
whole queue, giving each page a second chance.
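The circular-queue form (often called the clock algorithm) can be sketched as
follows (function and variable names are illustrative):

```python
def clock_select(pages, ref_bits, hand):
    """Advance the hand until a page with reference bit 0 is found.

    pages: list of resident pages (the circular queue)
    ref_bits: dict page -> 0/1
    hand: current pointer position; returns (victim_index, new_hand)
    """
    while True:
        page = pages[hand]
        if ref_bits[page] == 0:
            return hand, (hand + 1) % len(pages)   # victim found
        ref_bits[page] = 0                         # clear bit: second chance
        hand = (hand + 1) % len(pages)

pages = ["A", "B", "C", "D"]
ref_bits = {"A": 1, "B": 0, "C": 1, "D": 1}
victim, hand = clock_select(pages, ref_bits, 0)
print(pages[victim])  # B: first page found with reference bit 0
```

Note that page A, passed over on the way, has its reference bit cleared, so it
becomes a candidate on the next pass.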
c) ENHANCED SECOND-CHANCE ALGORITHM
1. Considers the reference bit and the modify bit as an ordered pair.
2. Four possible classes:
(0,0) - neither recently used nor modified: the best page to replace.
(0,1) - not recently used but modified: not as good, because the page will
need to be written to disk before replacement.
(1,0) - recently used but clean: probably will be used again soon.
(1,1) - recently used and modified: probably will be used again soon, and the
page will need to be written to disk before replacement.
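Because Python compares tuples lexicographically, picking the victim from the
lowest nonempty class is a one-liner (a sketch with hypothetical page names):

```python
# Classify pages by (reference bit, modify bit) and pick a victim from the
# lowest class, following the replace-first ordering (0,0) < (0,1) < (1,0) < (1,1).
def victim_class(pages):
    """pages: dict name -> (ref_bit, modify_bit). Returns the best victim."""
    return min(pages, key=lambda p: pages[p])

pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0)}
print(victim_class(pages))  # B: not recently used, though it is dirty
```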

