Unit 3

The document covers memory management in operating systems, detailing concepts such as main memory, address binding, and protection mechanisms. It discusses various memory allocation strategies including contiguous memory allocation, segmentation, and paging, along with their advantages and challenges like fragmentation. Additionally, it explains dynamic loading and linking, as well as the hardware support required for efficient memory management.


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

ANJALAI AMMAL MAHALINGAM ENGINEERING COLLEGE


(NAAC Accredited with "B" Grade)

CS3451-INTRODUCTION TO OPERATING SYSTEMS


UNIT III - MEMORY MANAGEMENT

Syllabus
Main Memory - Swapping - Contiguous Memory Allocation - Paging - Structure of the Page Table - Segmentation, Segmentation with Paging; Virtual Memory - Demand Paging - Copy on Write - Page Replacement - Allocation of Frames - Thrashing.

3.1 Main Memory
Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. These instructions may cause additional loading from and storing to specific memory addresses.
A typical instruction-execution cycle, for example, first fetches an instruction from memory. The instruction is then decoded and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may be stored back in memory.

3.1.1 Basic Hardware
A program must be brought (from disk) into memory and placed within a process for it to be run. Main memory and registers are the only storage the CPU can access directly. The memory unit sees only a stream of addresses and read requests, or addresses with data and write requests. Register access takes one CPU clock cycle (or less), but main memory can take many cycles, causing a stall; a cache therefore sits between main memory and the CPU registers.
Protection of memory is required to ensure correct operation. Each process has a separate memory space. Separate per-process memory space protects the processes from each other and is fundamental to having multiple processes loaded in memory for concurrent execution. To separate memory spaces, we need the ability to determine the range of legal addresses that the process may access and to ensure that the process can access only these legal addresses. This protection can be achieved by using two registers, usually a base and a limit. The base register holds the smallest legal physical memory address; the limit register specifies the size of the range. The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
Protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the registers. Any attempt by a program executing in user mode to access operating-system memory or other users' memory results in a trap to the operating system, which treats the attempt as a fatal error.

Figure 3.1 A base and a limit register define a logical address space.
Figure 3.2 Hardware address protection with base and limit registers.
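The base/limit hardware check described above can be sketched as follows. The register values are illustrative, not taken from the text.

```python
# Sketch of the base/limit protection check: every user-mode address
# is compared against the base register and base + limit.
def check_address(addr: int, base: int, limit: int) -> bool:
    """Return True if addr is legal for the process; a False result
    would cause a trap to the operating system."""
    return base <= addr < base + limit

# A process with base = 300040 and limit = 120900 may touch
# addresses 300040 .. 420939 only (illustrative figures).
assert check_address(300040, 300040, 120900) is True
assert check_address(420940, 300040, 120900) is False  # traps to the OS
```

Note that only the operating system, via a privileged instruction, would be allowed to change `base` and `limit`; user code merely generates addresses that the hardware checks.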

AAMEC - DEPT OF CSE | CS3451 - INTRODUCTION TO OPERATING SYSTEMS | Mrs. G. Kalaiselvi, AP/CSE



3.1.2 Address Binding
A program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process. Depending on the memory management in use, the process may be moved between disk and memory during its execution.
Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic. A compiler binds these symbolic addresses to relocatable addresses. The linkage editor or loader in turn binds the relocatable addresses to absolute addresses. Each binding is a mapping from one address space to another.
Address binding of instructions and data to memory addresses can happen at three different stages:
• Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
• Load time: Relocatable code must be generated if the memory location is not known at compile time.
• Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this requires hardware support for address maps.

3.1.3 Logical Versus Physical Address Space
An address generated by the CPU is referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is referred to as a physical address.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
The logical address space is the set of all logical addresses generated by a program. The physical address space is the set of all physical addresses generated by a program.

Figure 3.3 Multistep processing of a user program.

The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU). This mapping can be performed with a simple MMU scheme that is a generalization of the base-register scheme. The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.

3.1.4 Dynamic Loading
So far, it has been necessary for the entire program and all data of a process to be in physical memory for the process to execute. The size of a process has thus been limited to the size of physical memory. To obtain better memory-space utilization, dynamic loading is used.
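The idea behind dynamic loading can be sketched as follows. The routine registry and loader here are hypothetical illustrations, not real OS interfaces.

```python
# Hedged sketch of dynamic loading: a routine is fetched from "disk"
# only on its first call (the names and the registry are made up).
DISK = {"error_routine": lambda: "handled"}   # relocatable images on disk
LOADED = {}                                   # routines already in memory

def call(name):
    if name not in LOADED:        # routine not loaded yet?
        LOADED[name] = DISK[name] # relocatable linking loader brings it in
    return LOADED[name]()         # control passes to the routine

assert "error_routine" not in LOADED   # nothing loaded at program start
assert call("error_routine") == "handled"
assert "error_routine" in LOADED       # loaded only after the first call
```

The point of the sketch is that the load cost is paid only if the routine is actually invoked, which is exactly why this pays off for rarely used code such as error handlers.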

• A routine is not loaded until it is called. All routines are kept on disk in a relocatable load format.
• The main program is loaded into memory and is executed.
• When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded.
• If it has not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change.
• Then control is passed to the newly loaded routine.

Advantages
• A routine is loaded only when it is needed. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
• Dynamic loading does not require special support from the operating system.

3.1.5 Dynamic Linking and Shared Libraries
Dynamically linked libraries are system libraries that are linked to user programs when the programs are run. With static linking, in contrast, system libraries are treated like any other object module and are combined by the loader into the binary program image. Dynamic linking is similar to dynamic loading.
Dynamic linking is particularly useful for libraries. A library may be replaced by a new version, and all programs that reference the library will automatically use the new version. Without dynamic linking, all such programs would need to be relinked to gain access to the new library. So that programs will not accidentally execute new, incompatible versions of libraries, version information is included in both the program and the library.
More than one version of a library may be loaded into memory, and each program uses its version information to decide which copy of the library to use. Only programs that are compiled with the new library version are affected by any incompatible changes incorporated in it. Other programs linked before the new library was installed will continue using the older library. This system is also known as shared libraries.

3.2 Swapping
A process must be in memory to be executed. A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution (Figure 3.4). Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

Figure 3.4 Swapping of two processes using a disk as a backing store.

The backing store is commonly a fast disk. It must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images. The system maintains a


ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run. Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks to see whether the next process in the queue is in memory. If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process. It then reloads registers and transfers control to the selected process.

Double buffering
Never swap a process with pending I/O, or execute I/O operations only into operating-system buffers. Transfers between operating-system buffers and process memory then occur only when the process is swapped in, and this double buffering itself adds overhead: the data needs to be copied again, from kernel memory to user memory, before the user process can access it.

3.3 Contiguous Memory Allocation
The main memory must accommodate both the operating system and the various user processes. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes.
• The resident operating system is usually held in low memory with the interrupt vector.
• User processes are then held in high memory.
• Each process is contained in a single contiguous section of memory.

3.3.1 Memory Protection
Relocation and limit registers are used to protect user processes from each other, and to keep them from changing operating-system code and data.
o The relocation (base) register contains the value of the smallest physical address.
o The limit register contains the range of logical addresses; each logical address must be less than the limit register.
o The MMU maps the logical address dynamically.
o This scheme also allows actions such as kernel code being transient and the kernel changing size.

Figure 3.5 Hardware support for relocation and limit registers.

3.3.2 Memory Allocation
• Fixed-sized partition allocation
• Multiple partition allocation

Fixed-sized partition scheme
Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions.

Multiple partition schemes
When a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process. In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block of available


memory, a hole. Memory therefore contains a set of holes of various sizes. When a process arrives, it is allocated memory from a hole large enough to accommodate it. A process exiting frees its partition, and adjacent free partitions are combined.
The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
This is a dynamic storage-allocation problem that concerns how to satisfy a request of size n from a list of free holes.

First fit
Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.

Best fit
Allocate the smallest hole that is big enough. The entire list must be searched, unless the list is ordered by size. This strategy produces the smallest leftover hole.

Worst fit
Allocate the largest hole. The entire list is searched, unless it is sorted by size. This strategy produces the largest leftover hole.

3.3.3 Fragmentation
Fragmentation is a general problem in computing that can occur wherever blocks of data are to be managed. When processes are loaded and removed from memory, the free memory space is broken into little pieces, and storage is fragmented into a large number of small holes.
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

Solutions to the external-fragmentation problem
• Compaction: shuffle the memory contents so as to place all free memory together in one large block. It is not always possible:
o if relocation is static and is done at assembly or load time, compaction cannot be done;
o it is possible only if relocation is dynamic and is done at execution time.
• Permit the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available:
o Segmentation
o Paging

3.4 Segmentation
Segmentation is a memory-management scheme that supports the user view of memory. It permits the physical address space of a process to be noncontiguous. A program is a collection of segments, and a logical address space is a collection of segments.

Figure 3.6 Programmer's view of a program.


Each segment has a name and a length. The addresses specify both the segment name and the offset within the segment. A logical address thus consists of a two-tuple:
<segment-number, offset>

3.4.1 Segmentation Hardware
The segment table is the implementation that maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each table entry has:
• Segment base: contains the starting physical address where the segment resides in memory.
• Segment limit: specifies the length of the segment.

Figure 3.7 Segmentation hardware.

A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, a trap is sent to the operating system. When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. The segment table is thus essentially an array of base-limit register pairs.
• The segment-table base register (STBR) points to the segment table's location in memory.
• The segment-table length register (STLR) indicates the number of segments used by a program. A segment number s is legal if s < STLR.

Protection
With each entry in the segment table, associate:
• a validation bit (validation bit = 0 means an illegal segment)
• read/write/execute privileges
Protection bits are associated with segments, and code sharing occurs at segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem.

3.4 Paging
The physical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available. Paging:
• avoids external fragmentation
• avoids the problem of varying-sized memory chunks

3.4.1 Basic Method
Paging breaks physical memory into fixed-sized blocks called frames and breaks logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from their source. The backing store is divided into fixed-sized blocks that are the same size as the memory frames, or clusters of multiple frames.

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
The size of a page is a power of 2, varying between 512 bytes and 1 GB per page. The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m, and the page size is 2^n bytes, then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:

| page number p (m − n bits) | page offset d (n bits) |

Here p is an index into the page table and d is the displacement within the page.
The operating system manages physical memory using a data structure called a frame table, which stores the allocation details of physical memory: which frames are allocated, which frames are available, how many total frames there are, and so on. The frame table has one entry for each physical page frame, indicating whether it is free or allocated and, if it is allocated, to which page of which process or processes.

3.4.2 Hardware Support
The hardware implementation of the page table can be done in several ways:
• A set of dedicated registers, built with very high-speed logic to make the paging-address translation efficient.
• The page table is kept in main memory, and a page-table base register (PTBR) points to the page table. Changing page tables then requires changing only this one register, substantially reducing context-switch time.
• A special, small, fast-lookup hardware cache called a translation look-aside buffer (TLB).

3.4.3 Paging Hardware with TLB
The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; a TLB lookup in modern hardware is part of the instruction pipeline, essentially adding no performance penalty. To be able to execute the search within a pipeline step, however, the TLB must be kept small.
The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory.
If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. Depending on the CPU, this may be done automatically in hardware or via an interrupt to the operating system. When the frame number is obtained, it can be used to access memory. The page number and frame number are added to the TLB, so that they will be found quickly on the next reference.
If the TLB is already full of entries, an existing entry must be selected for replacement. Replacement policies range from least recently


used (LRU) through round-robin to random. Furthermore, some TLBs allow certain entries to be wired down, meaning that they cannot be removed from the TLB. Typically, TLB entries for key kernel code are wired down.
Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and is used to provide address-space protection for that process. When the TLB attempts to resolve virtual page numbers, it ensures that the ASID for the currently running process matches the ASID associated with the virtual page. If the ASIDs do not match, the attempt is treated as a TLB miss. In addition to providing address-space protection, an ASID allows the TLB to contain entries for several different processes simultaneously.
If the TLB does not support separate ASIDs, then every time a new page table is selected (for instance, with each context switch), the TLB must be flushed (or erased) to ensure that the next executing process does not use the wrong translation information. The percentage of times that the page number of interest is found in the TLB is called the hit ratio.
One additional bit is attached to each entry in the page table: a valid-invalid bit.
• When this bit is set to valid, the associated page is in the process's logical address space and is thus a legal (or valid) page.
• When the bit is set to invalid, the page is not in the process's logical address space.
• Some systems provide hardware, in the form of a page-table length register (PTLR), to indicate the size of the page table. This value is checked against every logical address to verify that the address is in the valid range for the process.

Figure 3.8 Paging hardware with TLB.

Shared pages
An advantage of paging is the possibility of sharing common code. This consideration is particularly important in a time-sharing environment. Reentrant code is non-self-modifying code: it never changes during execution. Thus, two or more processes can execute the same code at the same time. Each process has its own copy of registers and data storage to hold the data for the process's execution.

3.5 Virtual Memory - Demand Paging
Virtual memory is a technique that allows the execution of processes that are not completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large, uniform


array of storage, separating logical memory as viewed by the user from physical memory.
Virtual memory can be implemented via:
• Demand paging

3.5.1 Demand Paging
An executable program might be loaded from disk into memory in the following ways:
• Load the entire program in physical memory at program execution time.
o The problem with this approach is that we may not initially need the entire program in memory.
• Load pages only as they are needed. This technique is known as demand paging and is commonly used in virtual memory systems.
o Pages are loaded only when they are demanded during program execution.
A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk) and must be brought into memory to execute. Rather than swapping the entire process into memory, though, a lazy swapper is used.
A lazy swapper never swaps a page into memory unless that page will be needed. The term "swapper" is replaced with "pager" because a swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.

3.5.2 Basic Concepts
When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the pages that are on the disk. The valid-invalid bit scheme is used for this purpose:
• When this bit is set to "valid," the associated page is both legal and in memory.
• If the bit is set to "invalid," the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk.
The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk. While the process executes and accesses pages that are memory resident, execution proceeds normally.

3.5.3 Page Fault
Access to a page marked invalid causes a page fault. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system.
Procedure for handling this page fault:
1. Check an internal table (kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, the process is terminated. If it was valid but that page has not yet been brought in, it must be paged in.
3. A free frame is identified.


4. A disk operation is scheduled to read the desired page into the newly allocated frame.
5. When the disk read is complete, the internal table kept with the process and the page table are modified to indicate that the page is now in memory.
6. The instruction that was interrupted by the trap is restarted.

Pure demand paging
Never bring a page into memory until it is required. The process starts executing with no pages in memory. When the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for the page. After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults.

The hardware to support demand paging
• Page table: this table has the ability to mark an entry invalid through a valid-invalid bit or a special value of protection bits.
• Secondary memory: this memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.

3.5.4 Performance of Demand Paging
Programs tend to have locality of reference, which results in reasonable performance from demand paging; even so, demand paging can significantly affect the performance of a computer system.
The page-fault rate p satisfies 0 ≤ p ≤ 1:
• if p = 0, there are no page faults
• if p = 1, every reference is a fault
Effective access time (EAT), where ma is the memory-access time:
effective access time = (1 − p) × ma + p × page-fault time

Figure 3.9 Steps in handling a page fault.

A page fault causes the following sequence to occur:
1. Trap to the operating system.
2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
a. Wait in a queue for this device until the read request is serviced.


b. Wait for the device seek and/or latency time.
c. Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user.
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction.

There are three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
The first and third tasks can be reduced, with careful coding, to several hundred instructions.

Copy on Write
• Copy-on-write (COW) allows both parent and child processes to initially share the same pages in memory.
o If either process modifies a shared page, only then is the page copied.
• COW allows more efficient process creation, as only modified pages are copied.
• In general, free pages are allocated from a pool of zero-fill-on-demand pages.
o The pool should always have free frames for fast demand-page execution; we do not want to have to free a frame, on top of the other processing, on a page fault.
o Pages are zeroed out before being allocated, so a process never sees another process's old data.
• vfork(), a variation on the fork() system call, has the parent suspend and the child use the copy-on-write address space of the parent.
o It is designed to have the child call exec().
o It is very efficient.

Figure: the shared pages before process 1 modifies page C, and after process 1 modifies page C (page C has been copied).

3.6 Page Replacement
• Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.


• Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.
• Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory.
Two major problems must be solved to implement demand paging; we must develop:
• a frame-allocation algorithm
o how many frames to give each process
• a page-replacement algorithm
o which frames to replace

3.6.1 Basic Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred.

Figure 3.10 Page replacement.

3.6.2 FIFO Page Replacement
• The simplest page-replacement algorithm.
• Associates with each page the time when that page was brought into memory.
• When a page must be replaced, the oldest page is chosen.
• Suffers from Belady's anomaly: the page-fault rate may increase as the number of allocated frames increases.

[Figure: FIFO example on the reference string; 15 page faults]

3.6.3 Optimal Page Replacement
• Replace the page that will not be used for the longest period of time.


• This is the algorithm with the lowest page-fault rate of all algorithms, and it never suffers from Belady's anomaly.
• Such an algorithm is called OPT or MIN.

[Figure: optimal-replacement example on the reference string; 9 page faults]

3.6.4 LRU Page Replacement
• Replace the page that has not been used for the longest period of time.

[Figure: LRU example on the reference string; 12 page faults]

Two implementations are feasible:
1. Counter implementation
• Every page entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter.
• When a page needs to be replaced, look at the counters to find the smallest value; a search through the table is needed.
2. Stack implementation
• Keep a stack of page numbers.
• Whenever a page is referenced, it is removed from the stack and put on the top.
• The stack is implemented as a doubly linked list with a head pointer and a tail pointer.
• Removing a page and putting it on the top of the stack then requires changing six pointers at worst.
LRU and OPT are both stack algorithms, which do not suffer from Belady's anomaly.

3.6.5 LRU-Approximation Page Replacement
• Use past knowledge rather than the future.
• True LRU needs special hardware and is still slow.
• Associate the time of last use with each page.
Reference bit
o With each page, associate a bit, initially = 0.
o When the page is referenced, the bit is set to 1.
o Replace any page with reference bit = 0 (if one exists).
o The order of use, however, is not known.

3.6.5.1 Additional-Reference-Bits Algorithm
• Additional ordering information can be gained by recording the reference bits at regular intervals.
• Keep an 8-bit byte for each page in a table in memory.

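The OPT and LRU policies above can be sketched in a few lines (our own illustration; the function names are ours). On the standard reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with three frames they produce the 9 and 12 faults quoted above.

```python
def lru_faults(refs, nframes):
    """LRU: evict the page whose last use lies furthest in the past."""
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = t
    return faults

def opt_faults(refs, nframes):
    """OPT/MIN: evict the page whose next use lies furthest in the future."""
    frames, faults = set(), 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                def next_use(p):
                    future = refs[t + 1:]
                    # pages never used again rank worst (evicted first)
                    return future.index(p) if p in future else len(refs)
                frames.remove(max(frames, key=next_use))
            frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
print(opt_faults(refs, 3))   # 9
```

OPT needs future knowledge, so it is usable only as a yardstick for comparing real algorithms; LRU is the realizable approximation.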
 At regular intervals (say, every 100 milliseconds), a timer interrupt
transfers control to the operating system. At each interrupt:
o Shift the reference bits to the right by 1.
o Place the current reference bit (1 if the page was referenced
since the last interrupt, 0 otherwise) into the high-order bit.
o The page with the lowest reference-bits value is the one that
is least recently used, and thus the one to be replaced.
 E.g., the page with ref bits 11000100 is more recently used than
the page with ref bits 01110111.
3.6.5.2 Second-Chance Algorithm
 Generally FIFO, plus a hardware-provided reference bit.
 If the page to be replaced has
o reference bit = 0 -> replace it
o reference bit = 1 then:
 set the reference bit to 0, leave the page in memory
 replace the next page, subject to the same rules
 Implementation is done as the clock algorithm in a circular
queue.
 A pointer (that is, a hand on the clock) indicates which page is to be
replaced next.
 When a frame is needed, the pointer advances until it finds a page
with a 0 reference bit.
 As it advances, it clears the reference bits.
 Once a victim page is found, the page is replaced, and the new page
is inserted in the circular queue in that position.
Figure 3.11 Second-chance (clock) page-replacement algorithm.
3.6.5.3 Enhanced Second-Chance Algorithm
 Improve the algorithm by using the reference bit and modify bit
together. With these two bits, there are four possible classes:
1. (0, 0) neither recently used nor modified—best page to replace
2. (0, 1) not recently used but modified—not quite as good,
because the page will need to be written out before replacement
3. (1, 0) recently used but clean—probably will be used again soon
4. (1, 1) recently used and modified—probably will be used again
soon, and the page will need to be written out to disk before it
can be replaced
 When page replacement is called for, use the clock scheme but
replace a page in the lowest non-empty class.
o Might need to search the circular queue several times.
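The clock mechanics of second chance can be sketched as follows (a simplified model, not from the notes; we insert new pages with their reference bit cleared, which is one of several common conventions):

```python
def clock_faults(refs, nframes):
    """Second-chance (clock) replacement; returns (faults, resident pages)."""
    frames = []            # each entry is [page, reference_bit]
    ptr = 0                # the clock hand
    faults = 0
    for p in refs:
        pages = [f[0] for f in frames]
        if p in pages:
            frames[pages.index(p)][1] = 1    # hit: set reference bit
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append([p, 0])
            continue
        # advance the hand, clearing bits (second chances), until a 0 bit
        while frames[ptr][1] == 1:
            frames[ptr][1] = 0
            ptr = (ptr + 1) % nframes
        frames[ptr] = [p, 0]                 # replace the victim in place
        ptr = (ptr + 1) % nframes
    return faults, [f[0] for f in frames]

print(clock_faults([1, 2, 3, 1, 4], 3))   # (4, [1, 4, 3]): page 1 was spared
```

Note that when no page is re-referenced between faults, every bit stays 0 and the clock degenerates to plain FIFO, exactly as the notes state.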
3.6.6 Counting-Based Page Replacement
 Keep a counter of the number of references that have been made to
each page.
o Not common
 Least Frequently Used (LFU) Algorithm: replaces the page with the
smallest count.
 Most Frequently Used (MFU) Algorithm: based on the
argument that the page with the smallest count was probably just
brought in and has yet to be used.
3.6.7 Page-Buffering Algorithms
 Keep a pool of free frames, always.
o Then a frame is available when needed, rather than being found
at fault time.
o Read the page into a free frame, then select a victim to evict and
add to the free pool.
o When convenient, evict the victim.
 Possibly, keep a list of modified pages.
o When the backing store is otherwise idle, write the pages there
and mark them non-dirty.
 Possibly, keep free-frame contents intact and note what is in them.
o If referenced again before being reused, there is no need to load the
contents again from disk.
o Generally useful to reduce the penalty if the wrong victim frame was
selected.
3.7 Allocation of Frames
3.7.1 Minimum Number of Frames
The allocation of frames is constrained in various ways. A process cannot
be allocated more than the total number of available frames (unless there
is page sharing). Also, at least a minimum number of frames must be
allocated.
The minimum number of frames per process is defined by the
architecture; the maximum number is defined by the amount of
available physical memory.
One reason for allocating at least a minimum number of frames
involves performance. As the number of frames allocated to each process
decreases, the page-fault rate increases, slowing process execution. In
addition, when a page fault occurs before an executing instruction is
complete, the instruction must be restarted. Consequently, there must be
enough frames to hold all the different pages that any single instruction
can reference.
3.7.2 Allocation Algorithms
 Equal allocation
Splitting m frames among n processes gives everyone an equal
share, m/n frames (ignoring frames needed by the operating
system for the moment).
 Proportional allocation
Allocate available memory to each process according to its
size. Various processes will need differing amounts of memory.
Let the size of the virtual memory for process pi be si, and
S = ∑ si
Then, if the total number of available frames is m, ai frames are allocated
to process pi, where ai is approximately
ai = si/S × m.
Each ai must be an integer greater than the minimum number of frames
required by the instruction set, with a sum not exceeding m.
Example
Split 62 frames between two processes, one of 10 pages and one
of 127 pages, by allocating 4 frames and 57 frames, respectively:
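The proportional-allocation formula can be checked directly (a small sketch of our own; the truncation to an integer here ignores the minimum-frame adjustment mentioned above):

```python
def proportional_allocation(sizes, m):
    """ai = si / S * m, truncated to an integer for each process."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# the 62-frame example from the notes: processes of 10 and 127 pages
print(proportional_allocation([10, 127], 62))   # [4, 57]
```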
P1 = 10/137 × 62 ≈ 4
P2 = 127/137 × 62 ≈ 57.
In this way, both processes share the available frames according to
their “needs,” rather than equally.
3.7.3 Global versus Local Allocation
With multiple processes competing for frames, page-replacement
algorithms can be classified into two broad categories:
 Global replacement
Global replacement allows a process to select a
replacement frame from the set of all frames, even if that frame is
currently allocated to some other process; that is, one process can
take a frame from another.
 Local replacement
Local replacement requires that each process select from
only its own set of allocated frames.
3.7.4 Non-Uniform Memory Access
So far, all memory has been treated as if it were accessed equally. Many
systems are NUMA – the speed of access to memory varies. Consider
system boards containing CPUs and memory, interconnected over a
system bus. Optimal performance comes from allocating memory “close
to” the CPU on which the thread is scheduled, and from modifying the
scheduler to schedule the thread on the same system board when possible.
3.8 Thrashing
If a process does not have “enough” pages, the page-fault rate is very high:
 Page fault to get a page
 Replace an existing frame
 But quickly need the replaced frame back
 This leads to:
o Low CPU utilization
o The operating system thinking that it needs to increase the degree
of multiprogramming
o Another process added to the system
Definition
A process is thrashing if it is spending more time paging than
executing. A high paging activity is called thrashing.
3.8.1 Cause of Thrashing
Thrashing results in severe performance problems. If CPU
utilization is too low, the degree of multiprogramming is increased by
introducing a new process to the system.
If a global page-replacement algorithm is used, it replaces pages
without regard to the process to which they belong.
 A process that needs more frames starts faulting and taking frames
away from other processes. These processes need those pages,
however, and so they also fault, taking frames from other processes.
 These faulting processes must use the paging device to swap pages
in and out. As they queue up for the paging device, the ready
queue empties.
 As processes wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and
increases the degree of multiprogramming as a result.
 The new process tries to get started by taking frames from running
processes, causing more page faults and a longer queue for the
paging device. Thrashing has occurred, and system throughput
plunges. The page-fault rate increases tremendously.
The effects of thrashing can be limited by using a local replacement
algorithm (or priority replacement algorithm). If one process starts
thrashing, it cannot steal frames from another process and cause the latter
to thrash as well.
 If processes are thrashing, they will be in the queue for the paging
device most of the time.
 The average service time for a page fault will increase because of
the longer average queue for the paging device.
 Thus, the effective access time will increase even for a process that
is not thrashing.
Figure 3.12 Thrashing.
To prevent thrashing, a process must be provided with as many frames as
it needs. There are several techniques to know how many frames a process
“needs”:
 The working-set strategy
 Page-fault frequency
3.8.2 Working-Set Model
The working-set model is based on the assumption of locality. The
locality model states that, as a process executes, it moves from locality
to locality. A locality is a set of pages that are actively used together. A
program is generally composed of several different localities, which may
overlap.
A parameter, ∆, is used to define the working-set window. The idea is to
examine the most recent ∆ page references. The set of pages in the most
recent ∆ page references is the working set. If a page is in active use, it
will be in the working set. If it is no longer being used, it will drop
from the working set ∆ time units after its last reference. Thus, the working
set is an approximation of the program’s locality.
The accuracy of the working set depends on the selection of ∆. If ∆ is too
small, it will not encompass the entire locality; if ∆ is too large, it
may overlap several localities. In the extreme, if ∆ is infinite, the working
set is the set of pages touched during the process execution.
The most important property of the working set is its size WSSi,
for each process in the system. Then
D = ∑ WSSi,
where D is the total demand for frames. Each process is actively using the
pages in its working set. If the total demand is greater than the total
number of available frames (D > m), thrashing will occur, because some
processes will not have enough frames.
3.8.3 Page-Fault Frequency
Page-fault frequency (PFF) takes a more direct approach. Thrashing has a
high page-fault rate. Thus, the page-fault rate is to be controlled.
 When it is too high, the process needs more frames.
 If the page-fault rate is too low, then the process may have too
many frames.
An upper and lower bound is established on the desired page-fault rate.
 If the actual page-fault rate exceeds the upper limit, then allocate
the process another frame.
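The working set over a window of ∆ references is just the set of distinct pages in the last ∆ entries of the reference string, which can be computed directly (our own sketch; the function name is ours):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the most recent `delta`
    references ending at time t (0-based index into the string)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 1, 2]
ws = working_set(refs, len(refs) - 1, 3)   # last three references: 3, 1, 2
print(ws)                                  # {1, 2, 3}

# D = sum of working-set sizes over all processes; thrashing if D > m
D = sum(len(working_set(r, len(r) - 1, 3)) for r in [refs])
```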
 If the page-fault rate falls below the lower limit, then remove a
frame from the process.
Thus, the page-fault rate can be directly measured and controlled to
prevent thrashing.
Figure 3.13 Page-fault frequency.
3.9 Structure of the Page Table
The common techniques for structuring the page table include:
 Hierarchical paging
 Hashed page tables
 Inverted page tables
3.9.1 Hierarchical Paging
 Break up the logical address space into multiple page tables.
 A simple technique is a two-level page table.
 A logical address on a 32-bit machine (with a 1 KB page size) is
divided into:
o a page number consisting of 22 bits
o a page offset consisting of 10 bits
 Since the page table is paged, the page number is further divided
into:
o a 12-bit page number
o a 10-bit page offset
Figure 3.14 A two-level page-table scheme.
 Thus, a logical address is as follows:
p1 (12 bits) | p2 (10 bits) | d (10 bits)
 p1 is an index into the outer page table, and p2 is the displacement
within the page of the inner page table; this is known as a
forward-mapped page table.
Figure 3.15 Address-Translation Scheme
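The p1/p2/d split above is a matter of shifting and masking, which can be demonstrated with the 12/10/10-bit layout of the example (a sketch of our own; the sample field values 5, 7, 9 are arbitrary):

```python
P1_BITS, P2_BITS, D_BITS = 12, 10, 10   # the 32-bit, 1 KB-page example

def split(addr):
    """Decompose a 32-bit logical address into (p1, p2, d)."""
    d  = addr & ((1 << D_BITS) - 1)
    p2 = (addr >> D_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (D_BITS + P2_BITS)
    return p1, p2, d

# build an address from known fields, then recover them
addr = (5 << 20) | (7 << 10) | 9
print(split(addr))   # (5, 7, 9)
```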
 A logical address on a 64-bit machine can be divided similarly.
o With a two-level scheme, the outer page table consists of 2^42
entries, or 2^44 bytes. The obvious way to avoid such a large
table is to divide the outer page table into smaller pieces.
 Three-level paging scheme
3.9.2 Hashed Page Tables
A common approach for handling address spaces larger than 32
bits is to use a hashed page table, with the hash value being the virtual
page number. Each entry in the hash table contains a linked list of
elements that hash to the same location (to handle collisions). Each
element consists of three fields as in figure 3.16:
(1) The virtual page number,
(2) The value of the mapped page frame,
(3) A pointer to the next element in the linked list.
The algorithm works as follows:
o The virtual page number in the virtual address is hashed into the
hash table.
o The virtual page number is compared with field 1 in the first
element in the linked list.
o If there is a match, the corresponding page frame (field 2) is used
to form the desired physical address.
o If there is no match, subsequent entries in the linked list are
searched for a matching virtual page number.
Figure 3.16 Hashed page table
A variation of this scheme that is useful for 64-bit address spaces has been
proposed. This variation uses clustered page tables, which are similar
to hashed page tables except that each entry in the hash table refers to
several pages (such as 16) rather than a single page. Therefore, a single
page-table entry can store the mappings for multiple physical-page
frames. Clustered page tables are particularly useful for sparse address
spaces, where memory references are noncontiguous and scattered
throughout the address space.
3.9.3 Inverted Page Tables
 Rather than each process having a page table and keeping track of
all possible logical pages, track all physical pages.
 One entry for each real page of memory.
 An entry consists of the virtual address of the page stored in that
real memory location, with information about the process that owns
that page.
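The lookup algorithm above can be sketched with buckets of chained (virtual-page, frame) pairs (our own illustration; the class name and the trivial modulo hash are assumptions, real hardware uses a stronger hash):

```python
class HashedPageTable:
    def __init__(self, nbuckets):
        # each bucket holds a chain of (vpn, frame) pairs (fields 1 and 2)
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, vpn, frame):
        self.buckets[vpn % len(self.buckets)].append((vpn, frame))

    def lookup(self, vpn):
        for entry_vpn, frame in self.buckets[vpn % len(self.buckets)]:
            if entry_vpn == vpn:   # field 1 matches -> use field 2
                return frame
        return None                # no match anywhere in the chain

pt = HashedPageTable(8)
pt.insert(3, 42)
pt.insert(11, 7)       # 11 % 8 == 3: collides with vpn 3, chained in-bucket
print(pt.lookup(11))   # 7
print(pt.lookup(4))    # None
```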
 Decreases the memory needed to store each page table, but increases
the time needed to search the table when a page reference occurs.
 Use a hash table to limit the search to one — or at most a few —
page-table entries.
o A TLB can accelerate access.
Figure 3.17 Inverted page table.
 Each inverted page-table entry is a pair <process-id, page-number>,
where the process-id assumes the role of the address-space
identifier.
 When a memory reference occurs, part of the virtual address,
consisting of <process-id, page number>, is presented to the
memory subsystem.
 The inverted page table is then searched for a match. If a match is
found—say, at entry i—then the physical address <i, offset> is
generated. If no match is found, then an illegal address access has
been attempted.
PART A
1. Why are page sizes always a power of 2?
a. The selection of a power of 2 as a page size makes the
translation of a logical address into a page number and page
offset particularly easy.
b. If the size of the logical address space is 2^m, and the page size
is 2^n bytes, then the high-order m − n bits of a logical address
designate the page number, and the n low-order bits designate
the page offset.
2. What is the purpose of paging the page table?
a. In certain situations the page tables could become large
enough that by paging the page tables, one could simplify
the memory allocation problem and also enable the
swapping of portions of the page table that are not currently
used.
3. Define external fragmentation.
a. When the total memory space exists to satisfy the request
of a process, but it is not contiguous, it is called external
fragmentation.
b. A solution to the external-fragmentation problem is
compaction.
4. What are the counting-based page replacement algorithms?
a. Least Frequently Used (LFU) Algorithm: replaces the page
with the smallest count.
b. Most Frequently Used (MFU) Algorithm: based on the
argument that the page with the smallest count was
probably just brought in and has yet to be used.
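The inverted-table search above maps directly onto a linear scan (a minimal sketch of our own; the function name and sample table are illustrative):

```python
def translate(inverted_table, pid, vpn, offset):
    """inverted_table[i] holds the (pid, vpn) stored in physical frame i;
    a match at entry i yields the physical address <i, offset>."""
    for i, entry in enumerate(inverted_table):
        if entry == (pid, vpn):
            return (i, offset)
    raise ValueError("illegal address access")

table = [(1, 0), (2, 5), (1, 3)]      # one entry per physical frame
print(translate(table, 1, 3, 100))    # (2, 100): vpn 3 of process 1 is in frame 2
```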
5. Name two differences between logical and physical addresses.
a. A logical address does not refer to an actual existing
address; rather, it refers to an abstract address in an abstract
address space.
b. A logical address is generated by the CPU and is translated
into a physical address by the memory management unit
(MMU).
c. A physical address refers to an actual physical location
in memory.
d. Therefore, physical addresses are generated by the MMU.
6. State the effects of thrashing in an operating system.
a. Thrashing results in severe performance problems.
b. Low CPU utilization.
c. If processes are thrashing, they will be in the queue for the
paging device most of the time.
d. The average service time for a page fault will increase because
of the longer average queue for the paging device.
e. Thus, the effective access time will increase even for a
process that is not thrashing.
7. Mention the significance of the LDT and GDT in segmentation.
a. The logical address space of a process is divided into two
partitions.
b. The first partition consists of up to 8 K segments that are
private to that process; this information is kept in the local
descriptor table (LDT).
c. The second partition consists of up to 8 K segments that are
shared among all the processes; this information is kept in
the global descriptor table (GDT).
8. How does the system detect thrashing?
The system can detect thrashing by evaluating the level of CPU
utilization as compared to the level of multiprogramming. It can be
eliminated by reducing the level of multiprogramming.
9. What is a lazy swapper?
A lazy swapper never swaps a page into memory unless that page
will be needed.
10. Define demand paging in memory management. What steps are
required to handle a page fault in demand paging?
Pages are loaded only when they are demanded during program
execution; this technique is known as demand paging and is
commonly used in virtual memory systems.
a. Reference
b. Trap to the OS
c. Page is on the backing store
d. Bring in the missing page.
e. Reset the page table.
f. Restart the instruction.
11. What do you mean by thrashing?
A process is thrashing if it is spending more time paging than
executing. A high paging activity is called thrashing.
12. Define pure demand paging.
Never bring a page into memory until it is required. Start executing a
process with no pages in memory. When the operating system sets the
instruction pointer to the first instruction of the process, which is on a
non-memory-resident page, the process immediately faults for the
page.
13. What is Belady’s anomaly?
A situation in which the page-fault rate may increase as the number of
allocated frames increases. FIFO page replacement suffers from
Belady’s anomaly.
14. Distinguish between internal fragmentation and external
fragmentation.
Internal fragmentation:
 The memory allocated to a process may be slightly larger than the
requested memory; the unused difference is internal to a partition.
 Occurs with fixed partitioning, which therefore uses memory
inefficiently, and with paging.
External fragmentation:
 Total memory space is enough for the new process, but it is not
contiguous.
 First fit and best fit suffer from external fragmentation.
 Occurs with dynamic partitioning and segmentation; compaction is
needed to recover the scattered holes.
15. Write about swapping. Why is it needed?
A process must be in memory to be executed. A process can be
swapped temporarily out of memory to a backing store and then
brought back into memory for continued execution. Swapping
increases the degree of multiprogramming in a system.
16. Differentiate local and global page replacement algorithms.
Global replacement
Global replacement allows a process to select a replacement frame
from the set of all frames, even if that frame is currently allocated to
some other process; that is, one process can take a frame from another.
Local replacement
Local replacement requires that each process select from only its own
set of allocated frames.
17. Describe a mechanism by which one segment could belong to
the address space of two different processes.
a. Since segment tables are a collection of base–limit registers,
segments can be shared when entries in the segment tables of two
different jobs point to the same physical location.
b. The two segment tables must have identical base pointers, and the
shared segment number must be the same in the two processes.
18. Consider a logical address space of 64 pages of 1024 words
each, mapped onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
Logical address: 16 bits
Physical address: 15 bits
19. Define equal allocation and proportional allocation algorithms.
a. Equal allocation
Splitting m frames among n processes gives everyone an
equal share, m/n frames.
b. Proportional allocation
Allocate available memory to each process according to its
size. Various processes will need differing amounts of memory:
ai = si/S × m.
20. Define virtual memory.
Virtual memory is a technique that allows the execution of
processes that are not completely in memory. One major advantage of
this scheme is that programs can be larger than physical memory.
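The bit counts in question 18 follow from taking log2 of each quantity, which can be verified directly (a small sketch of our own):

```python
from math import log2

pages, page_size, frames = 64, 1024, 32

# logical address  = page-number bits + offset bits = 6 + 10
logical_bits = int(log2(pages)) + int(log2(page_size))
# physical address = frame-number bits + offset bits = 5 + 10
physical_bits = int(log2(frames)) + int(log2(page_size))

print(logical_bits, physical_bits)   # 16 15
```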
21. Define compaction.
A technique used to overcome external fragmentation. The goal is
to shuffle the memory contents so as to place all free memory together
in one large block.
22. Under what circumstances do page faults occur? Describe the
actions taken by the operating system when a page fault occurs. (8)
a. A page fault occurs when an access to a page that has not been
brought into main memory takes place.
b. The operating system verifies the memory access, aborting the
program if it is invalid.
c. If it is valid, a free frame is located and I/O is requested to read
the needed page into the free frame.
d. Upon completion of the I/O, the process table and page table are
updated and the instruction is restarted.
23. Why are segmentation and paging sometimes combined into one
scheme? Explain in detail with an example. (8)
Segmentation and paging are often combined in order to
improve upon each other. Segmented paging is helpful when the page
table becomes very large. A large contiguous section of the page
table that is unused can be collapsed into a single segment-table
entry with a page-table address of zero. Paged segmentation handles
the case of having very long segments that require a lot of time for
allocation. By paging the segments, we reduce wasted memory due to
external fragmentation as well as simplify the allocation.
24. Explain why sharing a reentrant module is easier when
segmentation is used than when pure paging is used, with an example.
Since segmentation is based on a logical division of memory rather
than a physical one, segments of any size can be shared with only one
entry in the segment table of each user. With paging there must be a
common entry in the page tables for each page that is shared.
PART B
1. Page replacement algorithm problems – FIFO, LRU, OPTIMAL.
2. Draw the diagram of the segmentation memory management scheme
and explain its principle.
3. With a neat sketch, explain how a logical address is translated into a
physical address using the paging mechanism.
4. Explain the concept of demand paging and its performance issues in
detail with a diagram.
5. Explain the effect of thrashing.
6. Discuss the concept of buddy system allocation and the slab allocator
with a neat sketch.
7. Explain why sharing a reentrant module is easier when segmentation is
used than when pure paging is used, with an example.
8. Discuss the situation under which the most frequently used page
replacement algorithm generates fewer page faults than the least
recently used page replacement algorithm. Also discuss under which
circumstances the opposite holds.
9. Compare paging with segmentation in terms of the amount of memory
required by the address translation structures in order to convert
virtual addresses to physical addresses.
10. Most systems allow programs to allocate more memory to their
address space during execution. Data allocated in the heap segments of
programs is an example of such allocated memory. What is required to
support dynamic memory allocation in the following schemes?
a. Contiguous memory allocation (4)
b. Pure segmentation (5)
c. Pure paging (4)
Problems
1. Discuss the various partition allocation methods using the
following example: given memory partitions of 100 K, 500 K, 200
K, 300 K and 600 K (in order), how would each of the partition
methods place the processes of 212 K, 417 K, 112 K and 426
K (in order)? Which method makes the most efficient use of
memory?
2. Given six memory partitions of 300 KB, 600 KB, 350 KB, 200
KB, 750 KB, and 125 KB (in order), how would the first-fit, best-
fit, and worst-fit algorithms place processes of size 115 KB, 500
KB, 358 KB, 200 KB, and 375 KB (in order)?
3. Given six memory partitions of 100 MB, 170 MB, 40 MB, 205
MB, 300 MB, and 185 MB (in order), how would the first-fit, best-
fit, and worst-fit algorithms place processes of size 200 MB, 15
MB, 185 MB, 75 MB, 175 MB, and 80 MB (in order)? Indicate
which—if any—requests cannot be satisfied.
4. Consider the following page reference string:
1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,4,5,3. How many page
faults would occur for the following page replacement algorithms?
Assume four frames and all frames are initially empty.
(i) LRU replacement
(ii) FIFO replacement
(iii) Optimal replacement
5. Consider the following page reference string:
1,2,3,2,5,6,3,4,6,3,7,3,1,5,3,6,3,4,2,4,3,4,5,1. Indicate the page
faults and calculate the total number of page faults and the success
ratio for the FIFO, optimal and LRU algorithms. Assume there are
four frames and initially all the frames are empty.
6. When do page faults occur? Consider the following page reference
string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. How many page
faults occur for the FIFO, LRU and optimal replacement
algorithms, assuming one, two, three, and four page frames?
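Problems 1–3 can be checked mechanically. The sketch below (our own, assuming dynamic partitioning where each process carves its request out of a hole) runs Problem 2 and reports, for each process, the index of the hole chosen, or None when the request cannot be satisfied — note that worst fit fails on the 375 KB process:

```python
def place(holes, procs, choose):
    holes = holes[:]                      # remaining size of each hole
    placed = []
    for p in procs:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placed.append(None)           # request cannot be satisfied
            continue
        i = choose(candidates, holes)
        holes[i] -= p                     # carve the request out of hole i
        placed.append(i)
    return placed

first_fit = lambda c, h: c[0]                           # lowest index
best_fit  = lambda c, h: min(c, key=lambda i: h[i])     # tightest hole
worst_fit = lambda c, h: max(c, key=lambda i: h[i])     # largest hole

holes = [300, 600, 350, 200, 750, 125]    # Problem 2 partitions, in KB
procs = [115, 500, 358, 200, 375]
print(place(holes, procs, first_fit))   # [0, 1, 4, 2, 4]
print(place(holes, procs, best_fit))    # [5, 1, 4, 3, 4]
print(place(holes, procs, worst_fit))   # [4, 4, 1, 2, None] -- 375 KB fails
```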