Module 4
Contents
• Memory Management: Concept of address
spaces, Swapping, Contiguous memory
allocation, fixed and variable partitions,
• Segmentation, Paging. Virtual memory,
Demand paging, Page replacement
algorithms.
Background
• Program must be brought (from disk) into memory
• The CPU follows the fetch-decode-execute cycle
• Memory unit only sees a stream of addresses +
read requests, or address + data and write requests
• Sequence of memory addresses generated by
running program
Logical vs. Physical Address Space
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
• Logical address space is the set of all logical
addresses generated by a program
• Physical address space is the set of all physical
addresses generated by a program
Background
Multiple processes reside in memory
• Protection of memory required to ensure
correct operation
1. Protect OS
2. Protect user processes
Base and Limit Registers
• A pair of base and limit registers define
the logical address space
Hardware Address Protection with Base and Limit Registers
• The OS loads the base and limit registers
• Loading these registers is a privileged instruction (kernel mode only)
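Conceptually, the check works as in the short Python sketch below (the register values are illustrative, not from the slides): any user-mode address outside [base, base + limit) causes a trap to the operating system.

```python
# Sketch of base/limit address protection (illustrative register values).
BASE = 300040    # loaded by the OS with a privileged instruction
LIMIT = 120900   # size of the process's address space

def check_access(address: int) -> int:
    """Return the address if it is legal, otherwise trap to the OS."""
    if BASE <= address < BASE + LIMIT:
        return address                      # access proceeds to memory
    raise MemoryError("trap: addressing error, OS handles the fault")

print(check_access(300040 + 500))           # legal access
# check_access(420941)                      # would trap: beyond base + limit
```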
Address Binding
• Process resides in main memory
• Associate each data element with memory address
• Further, addresses represented in different ways at different
stages of a program’s life
– Source code addresses usually symbolic
– Compiled code addresses bind to relocatable addresses
• e.g. “14 bytes from the beginning of this module”
– Linker or loader will bind relocatable addresses to absolute addresses
• e.g. absolute address 74014
Multistep Processing of a User Program
Binding of Instructions and Data to
Memory
• Address binding of instructions and data to memory addresses
can happen at three different stages
– Compile time: If memory location known a priori, absolute code can
be generated; must recompile code if starting location changes
– Load time: Must generate relocatable code if memory location is not
known at compile time
– Execution time: If the process can be moved during its execution from
one memory segment to another
• Binding delayed until run time
• Need hardware support for address maps (e.g., base and limit registers)
Logical vs. Physical Address Space
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
• Logical and physical addresses are the same in compile-time and load-
time address-binding schemes;
• logical (virtual) and physical addresses differ in execution-time address-
binding scheme
• Logical address space is the set of all logical addresses generated by a
program
• Physical address space is the set of all physical addresses generated by a
program
Memory-Management Unit (MMU)
• Hardware device that at run time maps virtual to physical address
• Many methods possible
• To start, consider simple scheme where the value in the relocation register is added to
every address generated by a user process at the time it is sent to memory
– the base register is here called a relocation register
– MS-DOS on Intel 80x86 used 4 relocation registers
• The user program deals with logical addresses (0 to max); it never sees the real
physical addresses (R to R+max)
– e.g. logical address 25 is mapped to physical address R + 25
– Execution-time binding occurs when reference is made to location in memory
– Logical address bound to physical addresses
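A minimal sketch of this simple scheme, assuming a single relocation register with an illustrative value of 14000:

```python
# Sketch of dynamic relocation with a single relocation register.
RELOCATION_REGISTER = 14000   # illustrative value loaded by the OS

def mmu_map(logical_address: int) -> int:
    """Map a logical address (0..max) to a physical address (R..R+max)."""
    return RELOCATION_REGISTER + logical_address

print(mmu_map(0))    # 14000
print(mmu_map(25))   # 14025 -- the user program only ever sees 25
```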
Dynamic relocation using a relocation register
[Figure: the MMU adds the relocation register value (e.g. 14000) to every logical address generated by the relocatable code to form the physical address]
Contiguous Allocation
Multiple processes reside in memory
Contiguous Allocation
• Main memory usually divided into two
partitions:
– Resident operating system, usually held in low
memory
– User processes then held in high memory
– Each process contained in single contiguous
section of memory
Contiguous Allocation (Cont.)
• Multiple-partition allocation
– Divide memory into several fixed-size partitions
– Each partition stores one process
– Degree of multiprogramming limited by number of
partitions
– If a partition is free, load process from job queue
Contiguous Allocation (Cont.)
• Multiple-partition allocation
– Variable partition scheme
– Hole – block of available memory; holes of various size are scattered throughout memory
– Keeps a table of free memory
– When a process arrives, it is allocated memory from a hole large enough to accommodate it
– Process exiting frees its partition, adjacent free partitions combined
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
[Figure: successive memory snapshots under variable partitioning: process 8 exits and leaves a hole; processes 9 and 10 are later allocated into the free partitions]
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free
holes?
• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
– Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole
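A small Python sketch of the three strategies, assuming the free holes are kept simply as a list of sizes (a real allocator would also track hole addresses and split the chosen hole):

```python
# Sketch of first-fit, best-fit and worst-fit hole selection.
# 'holes' is a list of free-hole sizes; return the index of the chosen hole.

def first_fit(holes, n):
    """First hole that is big enough."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None                       # no hole can satisfy the request

def best_fit(holes, n):
    """Smallest hole that is big enough (smallest leftover hole)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Largest hole (largest leftover hole)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]     # illustrative free-hole sizes (KB)
print(first_fit(holes, 212))          # index 1 (500 KB hole)
print(best_fit(holes, 212))           # index 3 (300 KB hole)
print(worst_fit(holes, 212))          # index 4 (600 KB hole)
```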
Fragmentation
• External Fragmentation – total memory space exists to
satisfy a request, but it is not contiguous
• Internal Fragmentation – allocated memory may be
slightly larger than requested memory; this size
difference is memory internal to a partition, but not
being used
• Statistical analysis of first fit reveals that, given N allocated blocks,
another 0.5 N blocks will be lost to fragmentation
– i.e. one-third of memory may be unusable -> the 50-percent rule
Hardware Support for Relocation
and Limit Registers
• Relocation registers used to protect user processes from each other, and from changing
operating-system code and data
• Relocation register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical address must be less
than the limit register
• On a context switch, the dispatcher loads the relocation and limit registers for the new process
• MMU maps logical address dynamically
Swapping
• A process must be in memory to be executed.
• A process can be swapped temporarily out of memory to a
backing store and then brought back into memory for
continued execution.
• Swapping makes it possible for the total physical address
space of all processes to exceed the real physical memory of
the system, thus increasing the degree of multiprogramming
in a system.
Swapping
Standard Swapping
• Standard swapping involves moving processes between main
memory and a backing store.
• The backing store is commonly a fast disk. It must be large
enough to accommodate copies of all memory images for all
users, and it must provide direct access to these memory images.
• The system maintains a ready queue consisting of all processes
whose memory images are on the backing store or in memory
and are ready to run.
• Whenever the CPU scheduler decides to execute a process, it calls
the dispatcher.
• The dispatcher checks to see whether the next process in the
queue is in memory. If it is not, and if there is no free memory
region, the dispatcher swaps out a process currently in memory
and swaps in the desired process.
SEGMENTATION
Segmentation
• Segmentation is a memory-
management scheme that supports
programmer view of memory.
• A logical address space is a collection
of segments.
• Each segment has a name and a
length.
• Segments are numbered and are
referred to by a segment number,
rather than by a segment name.
• Thus, a logical address consists of a
two-tuple:
<segment-number, offset>.
Segmentation
• Eg: A C compiler might create separate segments
for the following:
1. The code
2. Global variables
3. The heap
4. The stacks
5. The standard C library
Segmentation Hardware
Segment table
• Each entry in the segment table has a segment base and a segment limit.
• The segment base contains the starting physical address where the segment
resides in memory, and the segment limit specifies the length of the segment.
Address Translation
• A logical address consists of two parts: a segment number, s, and an offset
into that segment, d.
• The segment number is used as an index to the segment table.
• The offset d of the logical address must be between 0 and the segment limit.
If it is not, trap to the operating system (logical addressing attempt beyond
end of segment).
• When an offset is legal, it is added to the segment base to produce the
address in physical memory of the desired byte.
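A minimal sketch of these translation steps, assuming the segment table is a list of (base, limit) pairs with illustrative values:

```python
# Sketch of segmentation address translation using a segment table.
# Each entry holds (segment base, segment limit); values are illustrative.
segment_table = [
    (1400, 1000),   # segment 0
    (6300,  400),   # segment 1
    (4300, 1100),   # segment 2
]

def translate(s: int, d: int) -> int:
    """Translate logical address <s, d> to a physical address."""
    base, limit = segment_table[s]
    if d < 0 or d >= limit:
        raise MemoryError("trap: offset beyond end of segment")
    return base + d

print(translate(2, 53))    # 4353
print(translate(1, 399))   # 6699 (last legal byte of segment 1)
# translate(1, 400)        # would trap: offset equals the segment limit
```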
Segmentation-Example
PAGING
Paging
• Paging is a memory-management scheme that permits the physical address space
of a process to be noncontiguous.
• Paging avoids the considerable problem of fitting memory chunks of varying sizes
onto the backing store.
Basic Method
• Break physical memory into fixed-sized blocks called frames and break
logical memory into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available
memory frames from the backing store.
• The backing store is divided into fixed-sized blocks that are of the same size as the
memory frames.
Paging
Paging
• Every address generated by the CPU is divided into two
parts: a page number (p) and a page offset (d).
• The page number is used as an index into a page table.
• The page table contains the base address of each page in
physical memory.
• This base address is combined with the page offset to
define the physical memory address that is sent to the
memory unit.
Paging
Paging
• The page size (like the frame size) is defined by the hardware.
• The size of a page is typically a power of 2, varying between
512 bytes and 16 MB per page, depending on the computer
architecture.
• If the size of the logical address space is 2^m and the page size is 2^n
addressing units (bytes or words), then the high-order m – n
bits of a logical address designate the page number, and the n
low-order bits designate the page offset.
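A minimal sketch of this translation, using the small configuration of the next slide (m = 4, n = 2, 4-byte pages, 32-byte physical memory); the page-table contents follow that worked example:

```python
# Sketch of paging address translation for m = 4, n = 2
# (16-byte logical address space, 4-byte pages, 32-byte physical memory).
PAGE_SIZE = 4                      # 2**n
page_table = [5, 6, 1, 2]          # page number -> frame number (illustrative)

def translate(logical_address: int) -> int:
    p = logical_address // PAGE_SIZE        # high-order m-n bits: page number
    d = logical_address % PAGE_SIZE         # low-order n bits: page offset
    frame = page_table[p]                   # page-table lookup
    return frame * PAGE_SIZE + d            # physical address

for addr in (0, 3, 4, 13):
    print(addr, "->", translate(addr))
# 0 -> 20, 3 -> 23, 4 -> 24, 13 -> 9
```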
Paging Example
• n = 2 and m = 4: a 32-byte physical memory with 4-byte pages
(logical address space = 16 bytes, page size = 4 bytes, physical memory = 32 bytes)
• The user's view is the logical memory; address binding happens at run time
through the page table (page 0 -> frame 5, page 1 -> frame 6, page 2 -> frame 1, page 3 -> frame 2)
• Logical address 0 is page 0, offset 0: physical address (5*4)+0 = 20
• Logical address 3 is page 0, offset 3: physical address (5*4)+3 = 23
• Logical address 4 is page 1, offset 0: physical address (6*4)+0 = 24
• Logical address 13 is page 3, offset 1: physical address (2*4)+1 = 9
Paging-Disadvantage
• When we use a paging scheme, there is no external fragmentation:
Any free frame can be allocated to a process that needs it.
• Paging has internal fragmentation.
• If the memory requirements of a process do not coincide with page
boundaries, the last frame allocated may not be completely full.
– For example, if page size is 2,048 bytes, a process of 72,766 bytes would need
35 pages plus 1,086 bytes. It would be allocated 36 frames, resulting in an
internal fragmentation of 2,048-1,086 = 962 bytes.
• In the worst case, a process would need n pages plus 1 byte. It would
be allocated n + 1 frames, resulting in an internal fragmentation of
almost an entire frame.
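The arithmetic of the example above, as a short sketch:

```python
# Internal fragmentation for the example: 72,766-byte process, 2,048-byte pages.
page_size = 2048
process_size = 72766

full_pages, remainder = divmod(process_size, page_size)   # 35 pages + 1,086 bytes
frames_allocated = full_pages + (1 if remainder else 0)   # 36 frames
internal_fragmentation = page_size - remainder if remainder else 0

print(full_pages, remainder)            # 35 1086
print(frames_allocated)                 # 36
print(internal_fragmentation)           # 962 bytes wasted in the last frame
```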
Hardware implementation of page table
1)Using Hardware registers
• A single page table consisting of an array of fast hardware registers,
with one entry for each virtual page, indexed by virtual page number.
• When a process is started up, the operating system loads the registers
with the process's page table, taken from a copy kept in main memory.
Advantages: It is straightforward and requires no memory references
during mapping.
Disadvantage: Potentially expensive (if the page table is large). Having to
load the full page table at every context switch hurts performance.
Page Tables-Implementation
2)Page table stored entirely in main memory.
• Page table base register (PTBR) points to the start of the
page table.
• Changing the page table requires changing only this register
thereby reducing context switch time.
Disadvantage
• This scheme requires two memory accesses to access a
byte (one for the page-table entry, one for the byte itself).
3) Translation Lookaside Buffer (TLB)
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is
compared with all keys simultaneously.
• If the item is found, the corresponding value field is returned.
• The search is fast; the hardware, however, is expensive.
• The TLB contains only a few of the page-table entries.
• When a logical address is generated by the CPU, its page number is
presented to the TLB.
• If the page number is found, its frame number is immediately available
and is used to access memory.
• If the page number is not in the TLB (known as a TLB miss), a
memory reference to the page table must be made.
• Then add the page number and frame number to the TLB.
• If the TLB is already full of entries, an existing entry must be
selected for replacement.
• Replacement policies range from least recently used (LRU)
through round-robin to random.
• Some TLBs allow certain entries to be wired down, meaning
that they cannot be removed from the TLB. Typically, TLB
entries for kernel code are wired down.
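A software sketch of the behaviour described above; a real TLB is associative hardware searched in parallel, and the LRU policy and capacity used here are only illustrative:

```python
from collections import OrderedDict

# Software sketch of a small TLB with LRU replacement (illustrative).
class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()        # key: page number, value: frame number
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:            # TLB hit: frame number immediately available
            self.hits += 1
            self.entries.move_to_end(page)  # mark as most recently used
            return self.entries[page]
        self.misses += 1                    # TLB miss: go to the page table in memory
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry
        self.entries[page] = frame          # add the new translation to the TLB
        return frame

page_table = {0: 5, 1: 6, 2: 1, 3: 2, 4: 7, 5: 0}
tlb = TLB(capacity=4)
for page in [0, 1, 2, 0, 1, 3, 0, 4, 5, 0]:
    tlb.lookup(page, page_table)
print("hit ratio:", tlb.hits / (tlb.hits + tlb.misses))
```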
• Some TLBs store address-space identifiers (ASIDs) in each TLB
entry.
• An ASID uniquely identifies each process and is used to provide
address-space protection for that process.
• When the TLB attempts to resolve virtual page numbers, it ensures
that the ASID for the currently running process matches the ASID
associated with the virtual page.
• If the ASIDs do not match, the attempt is treated as a TLB miss.
• ASID allows the TLB to contain entries for several different processes
simultaneously.
• The percentage of times that the page number
of interest is found in the TLB is called the hit
ratio.
E.g. an 80-percent hit ratio means that we find
the desired page number in the TLB 80
percent of the time.
Protection
• Memory protection in a paged environment is accomplished
by protection bits associated with each frame.
• Normally, these bits are kept in the page table.
• One bit can define a page to be read–write or read-only.
• These protection bits can be checked to verify that no writes are
being made to a read-only page.
• An attempt to write to a read-only page causes a hardware
trap to the operating system (or memory-protection
violation).
• A finer level of protection can be provided by using
hardware that supports read-only, read–write, or execute-only protection.
Valid–invalid bit
• When this bit is set to valid, the associated page is in the
process’s logical address space and is thus a legal (or valid) page.
• When the bit is set to invalid, the page is not in the process’s
logical address space.
• Illegal addresses are trapped by use of the valid–invalid bit.
• The operating system sets this bit for each page to allow or
disallow access to the page.
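A small sketch combining the protection bit and the valid–invalid bit in a page-table entry; the entry layout and field names are illustrative, not a specific hardware format:

```python
# Sketch of a page-table entry with a valid-invalid bit and a read-write bit.
class PageTableEntry:
    def __init__(self, frame, valid=False, writable=False):
        self.frame = frame          # frame number
        self.valid = valid          # valid-invalid bit set by the OS
        self.writable = writable    # protection bit: read-write vs read-only

def access(entry: PageTableEntry, write: bool):
    if not entry.valid:
        raise MemoryError("trap: page not in the process's logical address space")
    if write and not entry.writable:
        raise MemoryError("trap: memory-protection violation (write to read-only page)")
    return entry.frame              # access proceeds with this frame number

code_page = PageTableEntry(frame=3, valid=True, writable=False)
print(access(code_page, write=False))   # legal read of a read-only page
# access(code_page, write=True)         # would trap: write to read-only page
```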
Virtual Memory
• Virtual memory is a storage scheme that gives the user the illusion
of having a very large main memory. This is done by treating a part of
secondary memory as if it were main memory.
• In this scheme, the user can run processes larger than the
available main memory, under the illusion that enough memory is
available to load the whole process.
• Instead of loading one big process in the main memory, the
Operating System loads the different parts of more than one
process in the main memory.
• By doing this, the degree of multiprogramming will be increased
and therefore, the CPU utilization will also be increased.
Virtual Memory
Demand Paging
• A demand paging system is quite similar to a paging system
with swapping where processes reside in secondary memory
and pages are loaded only on demand, not in advance.
• When a context switch occurs, the operating system does not
copy any of the old program's pages out to the disk or any of
the new program's pages into main memory. Instead, it
just begins executing the new program after loading its first
page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page that is
not available in main memory because it was swapped out a little
while ago, the processor treats this invalid memory reference as a
page fault and transfers control from the program to the operating
system, which demands the page back into memory.
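A high-level sketch of this flow; the data structures and the random victim choice are illustrative placeholders (page-replacement policies follow in the next section):

```python
# Illustrative sketch of demand paging: pages are loaded only when referenced.
import random

page_table = {}          # page number -> frame number, filled on demand
free_frames = [0, 1, 2, 3]
backing_store = {p: f"contents of page {p}" for p in range(8)}

def reference(page):
    if page in page_table:                      # page already in memory
        return page_table[page]
    # Page fault: trap to the OS, which brings the page in from the backing store.
    if not free_frames:
        victim = random.choice(list(page_table))    # placeholder victim selection
        free_frames.append(page_table.pop(victim))  # (page replacement, next section)
    frame = free_frames.pop()
    _ = backing_store[page]                     # read the page from disk
    page_table[page] = frame                    # update the page table, restart access
    return frame

for p in [0, 1, 0, 5, 2, 7, 0]:
    print("page", p, "-> frame", reference(p))
```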
Advantages
Following are the advantages of Demand Paging −
⮚ Large virtual memory.
⮚ More efficient use of memory.
⮚ There is no limit on degree of multiprogramming.
Disadvantages
The number of tables and the amount of processor overhead
for handling page interrupts (page faults) are greater than in the case of
simple paged management techniques.
Page Replacement policies
• FIRST-IN, FIRST-OUT (FIFO)
• LEAST RECENTLY USED (LRU)
• OPTIMAL PAGE REMOVAL (OPT)
FIFO PAGE REMOVAL POLICY
Belady’s anomaly
• Belady's anomaly is the situation in which increasing the number of
frames (the available memory capacity) also increases the number
of page faults, as can happen with FIFO replacement.
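A minimal FIFO simulator; run on the classic reference string it also demonstrates Belady's anomaly (9 faults with 3 frames, 10 faults with 4):

```python
from collections import deque

# FIFO page replacement: count page faults for a reference string.
def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                         # hit: nothing to do under FIFO
        faults += 1                          # page fault
        if len(frames) >= num_frames:
            victim = queue.popleft()         # evict the oldest page
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady reference string
print(fifo_faults(ref, 3))   # 9 page faults
print(fifo_faults(ref, 4))   # 10 page faults -- more frames, more faults
```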
Optimal Page replacement
In this algorithm, the page replaced is the one that will not be used
for the longest period of time in the future.
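A minimal sketch of OPT: on a fault, evict the resident page whose next use lies farthest in the future (or that is never used again). Because it needs future knowledge, OPT is used as a benchmark rather than a practical policy.

```python
# Optimal (OPT) page replacement: evict the page not needed for the longest time.
def opt_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) >= num_frames:
            future = reference_string[i + 1:]
            # Distance to next use; pages never used again get infinity.
            next_use = {p: (future.index(p) if p in future else float("inf"))
                        for p in frames}
            victim = max(frames, key=lambda p: next_use[p])
            frames.remove(victim)
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))   # 9 faults on this textbook reference string
```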
Least Recently Used
In this algorithm, the page replaced is the one that has not been
used for the longest time, i.e. the least recently used page.
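A minimal LRU sketch using an ordered dictionary to track recency; on the same textbook reference string it gives 12 faults, compared with 9 for OPT.

```python
from collections import OrderedDict

# LRU page replacement: evict the page that has not been used for the longest time.
def lru_faults(reference_string, num_frames):
    frames = OrderedDict()                 # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)       # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) >= num_frames:
            frames.popitem(last=False)     # evict the least recently used page
        frames[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))   # 12 faults (OPT gives 9 on the same string)
```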
OPTIMAL PAGE-REMOVAL POLICY(OPT)
LEAST -RECENTLY- USED(LRU)
Thrashing
• Page fault and swapping: every program is divided
into pages. When a program needs a page that is not in
RAM, that is called a page fault.
• Whenever a page fault happens, the operating system
fetches that page from secondary memory and swaps it
with one of the pages in RAM. This is called swapping.
What is Thrashing ?
• If page faults and the resulting swapping happen very frequently, the operating
system has to spend more time swapping pages than doing useful work. This state is called thrashing.
• Because of this, CPU utilization is reduced.
Given
Logical address space = 4 GB
Physical address space = 64 MB
Page size = 4 KB
Find the number of pages, the number of frames, the number of
entries in the page table, and the size of the page table.
1 KB = 2^10 bytes
1 MB = 2^20 bytes
1 GB = 2^30 bytes
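A worked sketch of the computation; the 2-byte page-table entry size is an assumption for illustration (a 14-bit frame number rounded up to whole bytes), since the problem does not state an entry size:

```python
# Worked solution sketch (2-byte entry size is an assumption for illustration).
GB, MB, KB = 2**30, 2**20, 2**10

logical_space  = 4 * GB        # 2^32 bytes
physical_space = 64 * MB       # 2^26 bytes
page_size      = 4 * KB        # 2^12 bytes

num_pages   = logical_space // page_size    # 2^20 = 1,048,576 pages
num_frames  = physical_space // page_size   # 2^14 = 16,384 frames
num_entries = num_pages                     # one page-table entry per page

frame_bits = num_frames.bit_length() - 1    # 14 bits needed for a frame number
entry_size = (frame_bits + 7) // 8          # 2 bytes (assumed; real entries add flag bits)
page_table_size = num_entries * entry_size  # 2 MB

print(num_pages, num_frames, num_entries, page_table_size // MB, "MB")
```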