Chapter 5
Memory Management
Basic Memory Management
Background :
* Memory is an important resource of the computer system that needs to be managed by the operating system. To execute a program, the user needs to keep it in main memory. Main memory is volatile, therefore a user needs to store the program in some secondary storage, which is non-volatile.
* Every process needs main memory, since a process's code, stack, heap (dynamically allocated structures), and data (variables) must all reside in memory.
* The management of main memory is required to support multiprogramming.
* Many executable processes exist in main memory at any given time.
* Different processes in main memory have different address space. Memory manager
is the module of the operating system responsible for managing memory.
* Programs after completion of execution move out of main memory or processes
suspended for completion of I/O can be swapped out on secondary storage to free
the main memory for other processes.
+ New processes are required to be loaded in main memory. If the available main memory is not sufficient to hold all processes, swapping between main memory and secondary memory is done. Memory managers move processes back and forth between memory and disk during execution.

Mono-programming
* Only one program was resident in main memory at a given point of time.
* The operating system resided in some portion of memory and the other portion was fully devoted to the single executing process.
* This system was simple to design but did not support multiprogramming. In this approach no relocation was needed, as the program could always be loaded at the same location.
Multiprogramming
+ Multiprogramming is required to support multiple processes simultaneously. Since multiple processes are resident in memory at the same time, processor utilization increases: while some processes are waiting for I/O, the CPU does not remain idle.
* CPU utilization increases as the CPU executes multiple processes on a time-sharing basis. Multiprogramming gives the illusion of running multiple processes at once and provides users with interactive response.

Multiprogramming with Fixed sized Partitions
+ In this scheme memory is divided into a number of fixed-sized
partitions.
* Each partition may contain exactly one process.
* The number of partitions and its size would be defined by the
system administrator when the system starts up. The degree of
multiprogramming will be high if the number of partitions
created is more.
+ When a partition is free, a process is selected from the input
queue and is loaded into the free partition. When the process
completes the execution and terminates, the partition becomes
free for another process. The operating system keeps track of free and occupied partitions by keeping a table indicating which parts of memory are free and which are occupied.
* Fixed sized partitions are relatively simple to implement.
+ The problem with fixed-size partitions is that if the program size is larger than the partition size, the program cannot be stored in the partition.
+ E.g., the figure shows fixed-size partitions of size 8 MB.

Multiprogramming with Variable sized Partitions
* In a variable-sized partition scheme, memory is divided into partitions of different sizes.
* Processes are loaded into the partition whose size is closest to their need, which makes it easy to ensure a best fit.
* A queue is organized for each size of partition, where processes wait for that partition. Partitions are allocated to the processes with a best-fit policy: allocate the smallest partition that is still big enough for the process to reside in (see the sketch after this list).
* However, such an allocation may reduce the performance as
processes would queue up in best fit queue even though other
partitions are free. For example, it may have many processes
queued up in a queue of partition of size 8M, and no processes
are queued up in a queue for the partition size 16M.
+ Fig. shows variable size partitions.
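The best-fit selection just described can be illustrated with a small sketch: given the sizes of the free partitions and the size of an incoming process, pick the smallest partition that is still large enough. The partition sizes and the function name below are illustrative, not taken from the text.

```python
def best_fit(free_partitions, process_size):
    """Return the index of the smallest free partition that can hold the process,
    or None if no partition is large enough."""
    best_index = None
    for i, size in enumerate(free_partitions):
        if size >= process_size and (best_index is None or size < free_partitions[best_index]):
            best_index = i
    return best_index

# Hypothetical partitions of 8, 16 and 32 MB; a 10 MB process best fits the 16 MB partition.
print(best_fit([8, 16, 32], 10))   # prints 1
```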
Fragmentation
* In a fixed-size partitioning of memory, if the partition size is larger than the program size, some space within that partition remains unused. This is called internal fragmentation.
+ The best fit policy of variable size of partitioning technique minimizes internal
fragmentation.
+ In both fixed and variable size partitions it may happen that, partition can
remain empty even though processes are waiting in other partitions queue
leading to external fragmentation.
+ Dynamic partition technique can reduce the internal and external
fragmentations.
+ This technique is basically variable partitioning with a variable number of partitions determined dynamically. Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
+ When a process arrives and needs memory, the system searches for a hole large enough to fit this process. This technique ensures efficient use of main memory: there is no internal fragmentation. However, if many small holes are scattered, they cannot be allocated to one large process.

Fragmentation
* Initially 64 MB main memory is available. 8 MB is allocated to operating
system as shown in Fig. (a). For user processes 56 MB memory is available and
considered as one large single hole.
* Processes P1, P2 and P3 are loaded, and sufficient memory space can be allocated to them, creating a hole of size 4 MB at the end of memory. At some point of time none of the allocated processes is ready.
 
+ The operating system swaps out P2 and in its place the new process P4 of size 18 MB is swapped in, as in Fig. (e). A hole of 2 MB is created. Again, after some time, none of the processes in main memory is ready and only P2, which is in the ready-suspend state, is available.
* To swap in P2, sufficient space is not available, so the operating system swaps P1 out and swaps P2 back in, as in Fig. (f).
* The example shows that, there are many scattered small size holes. These holes
cannot be allocated to the process of larger size than these holes. This is called
external fragmentation.
* The solution to this problem is compaction. Compaction is a technique to convert many small scattered holes into one large contiguous hole.

Compaction
* Compaction is the relocation of the various processes so as to accumulate all the free space in one place. Compaction makes inefficient use of the processor, since the CPU is kept busy copying memory. Fig. shows the memory after compaction of the scattered small holes (a rough sketch follows this list).
+ If processes P4 and P3 are moved up, one large hole results at the end of memory, as in the figure.
+ If processes P2, P4 and P3 are moved downward, one large hole of 8 MB results at the start of the user memory area, as in Fig. (c). However, in this case 48 MB of processes must be moved, compared to 28 MB in Fig. (b); Fig. (d) shows the 28 MB shifting of processes.
+ Compaction should move the set of processes whose total size is minimum.
* Compaction increases the degree of multiprogramming, because processes can be loaded into the memory made available by combining many scattered holes. If relocation is done at compile time (static), compaction is not possible. It is possible only if relocation is dynamic and done at execution time.
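As a rough illustration of compaction, the sketch below relocates the allocated blocks toward the low end of memory so that all free space accumulates in one contiguous hole. The process names and sizes are hypothetical; in a real system this only works when relocation is dynamic, as noted above.

```python
def compact(blocks, memory_size):
    """blocks: list of (name, size) for allocated regions, in address order.
    Pack the processes from address 0 upward and return the new layout
    plus the size of the single remaining hole."""
    layout, address = [], 0
    for name, size in blocks:
        layout.append((name, address, size))   # process relocated to 'address'
        address += size
    return layout, memory_size - address

# Hypothetical 64 MB memory holding P1 (16 MB) and P3 (18 MB) with scattered holes.
layout, hole = compact([("P1", 16), ("P3", 18)], 64)
print(layout, "single free hole of", hole, "MB")
```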
Free Space Management Techniques
* The OS must manage memory when it is allocated dynamically. There are two approaches to keep track of memory usage: bitmaps and linked lists of allocated and free memory segments.
Bit Map: In this method, every allocation unit is signified by a 0 or 1 bit. A free allocation unit is represented by bit 0 and an allocated unit is represented by bit 1.
+ In this approach, even with an allocation unit as small as 4 bytes, 32 bits of memory will require only 1 bit of the map. A memory of 32n bits will use n map bits, so the bitmap will take up only 1/32 of memory. If the allocation unit is chosen large, the bitmap will be smaller, but appreciable memory may be wasted in the last unit of the process if the process size is not an exact multiple of the allocation unit.
+ Advantage: A bitmap provides a simple way to keep track of memory words in a
fixed amount of memory because the size of the bitmap depends only on the size
of memory and the size of the allocation unit.
Disadvantages: When it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map. Searching a bitmap for a run of a given length is a slow operation.
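A minimal sketch of that bitmap search, assuming the bitmap is simply a list of 0/1 values (0 = free allocation unit, 1 = allocated): scan for a run of k consecutive 0 bits and mark it allocated.

```python
def allocate(bitmap, k):
    """Find a run of k consecutive free (0) allocation units, mark them
    allocated (1) and return the start index, or None if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                for j in range(run_start, run_start + k):
                    bitmap[j] = 1
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]     # hypothetical map of 8 allocation units
print(allocate(bitmap, 3))            # prints 2: units 2-4 are now allocated
```

The linear scan is exactly why the text calls this a slow operation: in the worst case the whole map must be examined for every allocation.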
Free Space Management Techniques
+ Linked List: Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment either contains a process or is an empty hole between two processes.
The memory of Fig. (a) is represented in Fig. (c) as a linked list of segments. Each entry in the list specifies a hole (H) or process (P), the address at which it starts, the length, and a pointer to the next item.
In this example, the segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward. A terminating process normally has two neighbours (except when it is at the very top or bottom of memory).
These may be either processes or holes, leading to the four combinations shown in the figure: in the first case, updating the list simply requires replacing a P by an H; in Fig. (b) and Fig. (c), two entries are coalesced into one, and the list becomes one entry shorter; in Fig. (d), three entries are merged and two items are removed from the list.
When the processes and holes are kept on a list sorted by address, several algorithms can
be used to allocate memory for created or existing processes.
* Advantages: The segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward.
Disadvantages: This is a less efficient method, as the linked list needs to be maintained and searched (a coalescing sketch follows).
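The coalescing step described above can be sketched with an address-sorted Python list of [kind, start, length] entries, where kind is 'P' or 'H'. The layout below is hypothetical; it only illustrates the neighbour combinations mentioned in the text.

```python
def free_segment(segments, index):
    """Turn the process entry at 'index' into a hole and merge it with any
    neighbouring holes, shortening the list when entries coalesce."""
    kind, start, length = segments[index]
    assert kind == 'P', "only process entries can be freed"
    segments[index] = ['H', start, length]
    # Merge with the following hole, if any.
    if index + 1 < len(segments) and segments[index + 1][0] == 'H':
        segments[index][2] += segments[index + 1][2]
        del segments[index + 1]
    # Merge with the preceding hole, if any.
    if index > 0 and segments[index - 1][0] == 'H':
        segments[index - 1][2] += segments[index][2]
        del segments[index]

segs = [['P', 0, 5], ['H', 5, 3], ['P', 8, 6], ['H', 14, 4]]
free_segment(segs, 2)      # the process at address 8 terminates
print(segs)                # [['P', 0, 5], ['H', 5, 13]]  (both neighbours merged)
```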
 
 
Paging
Paging is a memory management technique which allows the physical address space of a process to be noncontiguous.
In some sense, the paging mechanism is similar to reading a book. When we read a book we only see, and need to open, the current page.
All the other pages are not visible to us and remain closed. In the same manner, even when we have a large program, the processor only needs a small set of instructions to execute at any time.
In fact, all the instructions which the processor needs to execute are within a small proximity of each other.
This is like a page which contains all the statements which we are currently reading.
In this way paging allows us to keep just the parts of a process that we are using in memory and the rest on the disk.
Basic Operation: The problem of external fragmentation is avoided by using paging.
+ Paging is implemented by integrating the paging hardware and operating system.
+ In this technique, logical memory is divided into blocks of the same size called pages.
+ The physical memory is divided into fixed-sized blocks called frames (the size is a power of 2). The page size and frame size are equal.
+ Initially all pages remain on secondary storage (the backing store). When a process needs to be executed, its pages are loaded into any available memory frames from the secondary storage.

Paging Hardware
 
The following basic operations are done in paging:
1. The CPU generates a logical address and it is divided into two parts: a page number (p) and a page offset (d).
2. The page number is used as an index into a page table.
3. The page table maintains the base address of each page in physical memory.
4. The combination of the base address with the page offset defines the physical memory address that is sent to the memory unit.
5. The physical location of the data in memory is therefore at offset d in page frame f.
* Because of paging, the user program views memory as one single contiguous space; it gives the illusion that memory contains only this one program.
* But the user program is actually spread throughout main memory (physical memory). The logical addresses are translated into physical addresses.
+ Fig. shows the operation of the paging hardware.

Paging Example
Consider the following example in Fig. 5.2.3. Let the page size be 4 and the available memory 32 (8 frames).
Page 0 is in frame number 5, page 1 is in frame number 7, and page 2 is in frame number 2. Logical address 4 is in page 1 with offset 0. Page 1 is mapped to frame 7, so the physical address is 28 = (7 * 4) + offset 0.
Logical address 10 is in page 2 with offset 2. Page 2 is mapped to frame 2.
+ So the physical address is 10 = (2 * 4) + offset 2. A small sketch of this translation follows.
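The arithmetic of this example can be checked with a few lines of code: split the logical address into a page number and an offset, look the page up in the page table, and combine the frame number with the offset. The page table below follows the example; the function name is ours.

```python
PAGE_SIZE = 4                        # page/frame size from the example
page_table = {0: 5, 1: 7, 2: 2}      # page number -> frame number

def translate(logical_address):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(4))    # 28  (page 1, offset 0 -> frame 7)
print(translate(10))   # 10  (page 2, offset 2 -> frame 2)
```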
Memory Protection and Sharing
* The page table contains the frame number, so on every reference to a page the page table is consulted to get the correct frame number.
* Protection bit is associated with each page to denote whether page is read-write or
read only.
* If an attempt is made to write to a read-only page, a hardware trap is raised to the operating system.
* A legal page lies in the logical address space of the process and is marked in the page table with a valid (v) bit; otherwise the page is illegal and marked with an invalid (i) bit.
* Paging also provides the advantage of sharing of common code.
* The re-entrant code cannot be modified. If many users share this code, only one copy
needs to be kept in main memory.
* The page tables of the different users point to the same physical memory (frames). The data used by different users can differ, so data pages are mapped to different frames.
* By simply having page table entries of different processes point to the same page frame, the operating system can create regions of memory that are shared by two or more processes.

Segmentation
* In segmentation, a user program and its associated data can be subdivided
into a number of segments.
* Different segments of the programs can have different length.
* Although there is a maximum segment length, the length depends on purpose
of the segment in the program.
* Segments are arbitrarily-sized contiguous ranges of the address space.
* They are a tool for structuring the application. As such, the programmer or compiler may decide to arrange the address space using multiple segments.
* For example, this may be used to create some data segments that are shared
with other processes, while other segments are not.
* Compiler can create the different segment for global variables, code, thread
stack, library etc.
* In segmentation, a program address space may occupy more than one partition, and these partitions need not be contiguous.
* Internal fragmentation is eliminated in segmentation but, like dynamic partitioning, it suffers from external fragmentation.

Segmentation
0 Memory-management scheme that supports user view of memory
0 A program is a collection of segments. A segment is a logical unit such as: the main program, a subroutine, the symbol table, arrays.

User's View of a Program
(Fig.: user's view of a program as a collection of segments.)

Logical View of Segmentation

Segmentation Example
In segmentation:
* The logical address is divided into two parts: a segment number (s) and an offset (d).
* Each entry of the segment table contains a segment base and a segment limit.
* The segment base indicates the starting physical address of the segment in main memory and the segment limit denotes the length of the segment.
* The segment number is used as an index into the segment table.
+ The offset (d) must lie between 0 and the segment limit; otherwise an error occurs, indicating an attempt to access beyond the end of the segment. Fig. indicates 4 segments of the program having different sizes. The segment table contains the base and limit of each segment.
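A minimal sketch of that lookup, assuming a hypothetical segment table: the segment number indexes the table, the offset is checked against the limit, and the physical address is base plus offset.

```python
segment_table = {            # segment number -> (base, limit); illustrative values only
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 400),
    3: (3200, 1100),
}

def translate(segment, offset):
    """Map (segment number, offset) to a physical address, trapping on overflow."""
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        raise MemoryError("trap: attempt to access beyond the end of segment %d" % segment)
    return base + offset

print(translate(2, 53))      # 4353
```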
 
 
 
 
 
(Fig.: segmentation address translation architecture and an example of segmentation over the logical address space.)

Virtual Memory
* There are many cases where entire program is not needed in main memory at a time.
+ In many cases even though entire program is needed, it may not all be needed at the same
time.
+ Application programs always get the impression of a contiguous working address space due to the idea of virtual memory.
Actually, this working memory can be physically fragmented and may even overflow on to disk storage.
This technique makes programming of large applications easier and uses real physical memory more efficiently than systems without virtual memory.
Although an executing process is not entirely present in memory, it can complete its execution due to the virtual memory technique.
Virtual memory permits the execution of larger programs although a smaller physical memory is available.
It means that programs larger than the available physical memory can complete their execution.
Virtual memory concept separates the user logical memory from actual physical memory.
+ This separation offers a very large virtual memory to programmers even though the actual physical memory is small.
+ Can be implemented via Demand Paging and Demand Segmentation.

Virtual Memory That is Larger Than Physical Memory
(Fig.: virtual memory that is larger than physical memory; pages are mapped between frames in physical memory and the backing store on disk.)

Demand Paging
Demand paging is a paging system with swapping where pages are brought into main memory from secondary storage on demand.
* When a process needs to execute, the memory manager swaps it into memory.
+ Instead of swapping the entire process into memory, demand paging allows swapping in only those pages which are required for execution at that time.
+ Access to a page fails if it is not presently mapped to a frame. This causes a page fault, which is a special type of interrupt.
+ To handle this interrupt, a disk operation is initiated by the OS handler to read the required page into memory.
+ After this, mapping of page to free frame is carried out. During all this operation the
process remains blocked.
+ As soon as the page becomes available, the process that was blocked during the disk operation is unblocked and placed on the ready queue.
* When the scheduler schedules this process, it restarts its execution with the instruction that caused the page fault.
* Execution then continues as though all of the instructions were in memory. This approach of fetching pages as they are needed for execution is called demand paging.
+ Before swapping the process into memory, the pager program guesses which pages will be used before the process is swapped out again.

Demand Paging
+ The complete process is not swapped in. Instead, the pager brings only the required pages into memory.
* This restricts the swapping-in of pages to those which are needed for execution.
* It saves swap time and the amount of physical memory needed.
+ In demand paging, valid and invalid bits are used to differentiate between those pages that are in memory and those pages that are on the disk.
* Demand paging can keep more processes in memory than the sum of their full memory needs would otherwise allow, keeping CPU utilization as high as possible.

Demand Paging
0 Demand paging is a paging system with swapping. When we want to execute a program on secondary storage (disk), we swap it into memory. Instead of swapping the entire program into physical memory, we swap in only those pages near the executing instruction.
0 A Page Table is used to keep track of pages that are in physical memory (valid, 1) or on disk (invalid, 0). If the page is not currently in physical memory, then its address in the backing store is also kept in the Page Table.
0 If the program tries to use a page that is not in memory, a Page Fault trap will occur (caused by the invalid bit in the Page Table).
0 Demand paging brings a page into memory only when it is needed:
  - Less I/O needed
  - Less memory needed
  - Faster response
  - More users
0 Page is needed => reference to it:
  - invalid reference => abort
  - not-in-memory => bring to memory

Transfer of a Paged Memory to Contiguous Disk Space

Valid-Invalid Bit
0 With each page table entry a valid—invalid bit is associated
(1 = in-memory, 0 = not-in-memory)
0 Initially the valid-invalid bit is set to 0 on all entries.
0 Example of a page table snapshot: each entry contains a frame # and a valid-invalid bit.
0 During address translation, if the valid-invalid bit in a page table entry is 0 => page fault.

Page Table When Some Pages Are Not in Main Memory

Page Fault and Instruction Restarts
+ If the page is in secondary storage and not mapped to a page frame in main memory, a page fault occurs. (It is a trap to the operating system.)
+ When a page fault occurs, the operating system is responsible either for killing the process, in the case of an invalid memory reference, or for locating the needed page, loading it into an empty page frame, and then restarting the faulting instruction.
+ Following is the procedure carried out by the operating system for handling a page fault for any process (a sketch follows the list):
1. The internal table kept with the PCB of the process is checked to see whether the reference is valid or invalid.
2. For an invalid reference the process is terminated. For a valid reference, if the page is not yet present in main memory, it needs to be brought in from secondary storage.
3. For this new page, find an empty frame from the free-frame list.
4. If the disk is busy, the operating system has to wait; when the disk is free, locate the page on the disk and read it into main memory.
5. When the read of the page from secondary storage completes, the internal table and the page table of the process are updated to indicate that the page is now in main memory.
6. Restart the execution of the interrupted instruction, which now finds the newly transferred page.
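The steps above can be condensed into a tiny, self-contained model of demand paging. The Process class, the free-frame list and the access function are all illustrative names, not part of any real operating-system API.

```python
class Process:
    def __init__(self, num_pages):
        # None means the page is not in memory, i.e. still on the backing store.
        self.page_table = {p: None for p in range(num_pages)}

free_frames = [3, 4, 6]                    # hypothetical free-frame list

def access(process, page):
    """Reference one page, handling a page fault roughly as in steps 1-6 above."""
    if page not in process.page_table:     # steps 1/2: invalid reference
        raise MemoryError("invalid reference: terminate process")
    frame = process.page_table[page]
    if frame is None:                      # page fault (trap to the OS)
        frame = free_frames.pop()          # step 3: take an empty frame
        process.page_table[page] = frame   # steps 4/5: "read" the page, update tables
        print("page fault: page %d loaded into frame %d" % (page, frame))
    return frame                           # step 6: the instruction can be restarted

p = Process(num_pages=3)
access(p, 1)    # page fault, page 1 is loaded
access(p, 1)    # second reference is a hit
```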
Page Fault
0 If there is ever a reference to a page not in memory, the first reference will trap to the OS: a page fault.
0 The OS looks at another table to decide:
  - Invalid reference => abort.
  - Just not in memory => continue below.
0 Get an empty frame.
0 Swap the page into the frame.
0 Reset the tables, set the validation bit = 1.
0 Restart the instruction (instructions such as a block move or an auto increment/decrement of a location need special care when restarted).

Steps in Handling a Page Fault

What happens if there is no free frame?
0 Page replacement: find some page in memory that is not really in use and swap it out.
  - algorithm
  - performance: want an algorithm which will result in the minimum number of page faults
0 The same page may be brought into memory several times.

Page Replacement Algorithms
After page-fault handling, to accommodate the new page it is possible that no frame is free. So it is necessary to replace an existing page in main memory.
* For that purpose, there are many page replacement algorithms.
+ All these algorithms can be evaluated by running them on a particular string of memory references. The number of page faults should be as small as possible to get good performance.
+ A reference string is a string of memory references.
* Reference strings can be generated artificially.
+ A reference string can also be generated by listing the address of each memory reference while tracing the system. The important factor that determines the number of page faults is the number of page frames available.
* These numbers of page faults are for a particular reference string and for a given page replacement algorithm with a given number of available page frames.
* The number of frames available is inversely related to the number of page faults.
* It means that if more frames are available, fewer faults will normally occur.

Need For Page Replacement

Basic Page Replacement
1. Find the location of the desired page on disk
2. Finda free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement
algorithm to select a victim frame
3. Read the desired page into the (newly) free frame. Update the
page and frame tables.
4. Restart the process
FIFO Algorithm
* The simplest page-replacement algorithm works on the basis of first in, first out (FIFO): it throws out pages in the order in which they were brought in.
* The time at which a page was brought into main memory is associated with that page.
+ This algorithm always chooses the oldest page for replacement.
* Since replacement is FIFO, a queue can be maintained to hold all the pages in main memory.
* This algorithm does not care about which pages are accessed frequently and which are not. However, it is used in Windows 2000.
* Consider the reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3 and consider 3 available frames.
+ Since the three frames are initially empty, the first references (5, 0, 2) cause page faults and the pages are brought into these 3 free frames.
* The reference to page 3 again causes a fault. To bring page 3 into memory, as per the FIFO policy, page 5 is replaced. The next reference is to page 0 and it is already in memory.
* To bring page 1 into memory, again as per the FIFO policy, page 0 is replaced. Page 3 is already in memory. The reference to page 4 again causes a fault.
* To bring page 4 into memory, as per the FIFO policy, page 2 is replaced. This process continues as shown in Fig.

FIFO Algorithm
 
+ Whenever a fault occurs, the figure shows which pages are in the 3 frames. In total 11 page faults occur.
+ This algorithm sometimes suffers from a result called Belady's anomaly:
+ the page fault rate may increase even though the number of allocated frames increases.
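The 11 faults counted above can be verified with a short FIFO simulation; the function name is ours.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: evict the oldest resident page."""
    frames = deque()                   # front of the queue = oldest page
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # throw out the page that came in first
            frames.append(page)
    return faults

refs = [5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3]
print(fifo_faults(refs, 3))            # 11, matching the walk-through above
```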
First-in-First-Out (FIFO) Algorithm
0 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
0 3 frames (3 pages can be in memory at a time per process): 9 page faults
0 4 frames: 10 page faults
0 FIFO Replacement shows Belady's Anomaly:
  - more frames => more page faults
FIFO Page Replacement
(Fig.: FIFO page replacement trace for the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames.)

LRU Algorithm
+ The time of page's last use is associated with each page.
+ When a page must be replaced, LRU chooses that page that was used farthest back in the
past.
+ LRU is a good approximation to the optimal algorithm.
+ This algorithm looks backward in time, while the optimal replacement algorithm looks forward in time.
+ This policy suggests replacing the page whose last use is farthest from the current time.
+ This algorithm can be implemented with some hardware support and is considered a good solution for page replacement.
+ This algorithm does not suffer from Belady's anomaly.
+ We will again consider the same reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3.
Following is the result of applying the LRU page replacement algorithm.
Following is the result of applying LRU page replacement algorithm.
+ Since the three frames are initially empty, the first references (5, 0, 2) cause page faults and the pages are brought into these free frames. The reference to page 3 again causes a fault.
+ To bring page 3 into memory, as per the LRU replacement policy, page 5 is replaced because it is the page which was used least recently.
+ The next reference is to page 0 and it is already in memory.
+ To bring page 1 into memory, again as per the LRU replacement policy, page 2 is replaced. The next reference is to page 3 and it is already in memory. This process continues as shown in Fig., causing 11 page faults in total.

LRU Algorithm
 
Implementation of LRU
Using a stack: Initially keep all the page numbers on the stack.
* Whenever a page is referenced, remove it from the stack and place it on top of the stack.
+ At any time the top of the stack holds the most recently referenced page number and the bottom holds the page that has not been used for the longest period of time (see the sketch after this list).
Using counters:
+ In the page table, add a time-of-use field to each page entry.
+ At each reference to a page, a CPU logical clock is incremented.
+ Copy the clock register content to the time-of-use field.
+ The time-of-use field will then show the time of the last reference to the page.
+ Replace the page having the smallest time value.
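The stack idea maps directly onto an ordered structure: on every reference a page moves to the top, and the victim is taken from the bottom. The sketch below uses Python's OrderedDict and reproduces the 11 faults of the earlier example.

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU: evict the page unused for the longest time."""
    stack = OrderedDict()                  # last item = most recently used page
    faults = 0
    for page in reference_string:
        if page in stack:
            stack.move_to_end(page)        # referenced: move it to the top of the stack
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)  # evict the page at the bottom (least recent)
            stack[page] = True
    return faults

refs = [5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3]
print(lru_faults(refs, 3))                 # 11 page faults, as in the example
```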
Least Recently Used (LRU) Algorithm
0 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
0 Counter implementation:
  - Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
  - When a page needs to be changed, look at the counters to determine which page to change.

LRU Page Replacement
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
LRU Algorithm (Cont.)
0 Stack implementation: keep a stack of page numbers in a doubly linked form.
0 Page referenced:
  - move it to the top
  - requires 6 pointers to be changed
0 No search is needed for replacement.

Optimal Page Replacement Algorithm
* Compared to all other page-replacement algorithms, this algorithm has the lowest page-fault rate.
+ This algorithm replaces the page which will not be used for the longest period of time.
+ This delays the unavoidable page fault as much as possible, and thus decreases the total number of page faults.
+ As the algorithm replaces pages on the basis of their future use, it is impossible to implement in practice.
+ The optimal algorithm is unrealizable because, at the time of the fault, it is impossible to determine how many instructions will be executed before each page is referenced again.
+ This algorithm does not suffer from Belady's anomaly.
+ Again consider the same reference string 5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3.
+ Since the three frames are initially empty, the first references (5, 0, 2) cause page faults and the pages are brought into these free frames.
+ The reference to page 3 again causes a fault. To bring page 3 into memory, as per the optimal replacement policy, page 2 is replaced because it is the page which will not be used for the longest period of time.
+ The next reference is to page 0 and it is already in memory. To bring page 1 into memory, again as per the optimal replacement policy, page 0 is replaced.
+ The next reference is to page 3 and it is already in memory.
+ The reference to page 4 again causes a fault. Since page 1 will no longer be referenced, it is replaced. This process continues as shown in Fig., causing 8 page faults in total.
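Because the optimal policy needs the entire future of the reference string, it cannot be implemented online, but it is easy to simulate offline. The sketch below reproduces the 8 faults of the example; the function name is ours.

```python
def optimal_faults(reference_string, num_frames):
    """Count page faults under OPT: evict the resident page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # A page never referenced again counts as infinitely far in the future.
        def next_use(p):
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [5, 0, 2, 3, 0, 1, 3, 4, 5, 4, 2, 0, 3, 4, 3]
print(optimal_faults(refs, 3))     # 8 page faults, matching the example
```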
Optimal Algorithm
0 Replace the page that will not be used for the longest period of time.
0 4 frames example: reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 gives 6 page faults.
0 How do you know this? It requires knowledge of the future, so it is mainly used for measuring how well other algorithms perform.

Optimal Page Replacement
(Fig.: optimal page replacement trace.)
 
 
 
 
 
 
 
 