
COMPUTER ORGANIZATION

UNIT-2
MEMORY ORGANIZATION

ANOOP KUMAR PIPERSAHANIYAN
ASSISTANT PROFESSOR, ECE DEPARTMENT
Introduction to memory and memory units

 Memories are made up of registers. Each register in the memory is one storage location, also called a memory location. Memory locations are identified by addresses. The total number of bits a memory can store is its capacity.
 A storage element is called a cell. Each register is made up of cells, each of which stores one bit of data. Data are stored in and retrieved from memory by the processes of writing and reading, respectively.
Introduction to memory and memory units
 A word is a group of bits in which a memory unit stores binary information. A word of 8 bits is called a byte.
 A memory unit has data lines, address selection lines, and control lines that specify the direction of transfer. The block diagram of a memory unit is shown below:
Introduction to memory and memory units

[Figure: block diagram of a memory unit]
Introduction to memory and memory units
 Data lines provide the information to be stored in memory. The control inputs specify the direction of transfer. The k address lines select the word to be accessed.
 With k address lines, 2^k memory words can be addressed.
Memory Hierarchy Design and its Characteristics

 In computer system design, the memory hierarchy is an organization of memory that minimizes access time.
 The memory hierarchy was developed based on a program behavior known as locality of reference.
 The figure below shows the different levels of the memory hierarchy:
[Figure: levels of the memory hierarchy]
Memory Characteristics
 This memory hierarchy design is divided into two main types:
 External memory or secondary memory –
Comprising magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices that are accessible to the processor via an I/O module.
 Internal memory or primary memory –
Comprising main memory, cache memory, and CPU registers. This is directly accessible by the processor.
Memory Characteristics
 Capacity:
It is the total volume of information the memory can store. As we move from top to bottom in the hierarchy, the capacity increases.
 Access time:
It is the time interval between the read/write request and the availability of the data. As we move from top to bottom in the hierarchy, the access time increases.
Memory Characteristics
 Performance:
Earlier, when computer systems were designed without a memory hierarchy, the speed gap between CPU registers and main memory kept growing because of the large difference in access time. This lowered system performance, so an enhancement was required. That enhancement was the memory hierarchy design, which increases the performance of the system. One of the most significant ways to increase system performance is to minimize how far down the memory hierarchy one has to go to reach the data.
 Cost per bit:
As we move from bottom to top in the hierarchy, the cost per bit increases, i.e. internal memory is costlier than external memory.
Memory Characteristics
 Read and write operations in memory
 A memory unit stores binary information in groups of bits called words. Data input lines provide the information to be stored into memory, and data output lines carry the information out of memory. The Read and Write control lines specify the direction of transfer. In the memory organization, memory locations are indexed from 0 to 2^l - 1, where l is the number of address lines. We can describe the memory size in bytes using the formula:

N = 2^l

where
l is the total number of address lines
N is the memory size in bytes

For example, with l = 10 address lines, N = 2^10 = 1024 bytes (1 KB).
Memory Characteristics
 The Memory Address Register (MAR) is the address register used to hold the address of the memory location on which the operation is being performed.
The Memory Data Register (MDR) is the data register used to hold the data on which the operation is being performed.
 Memory read operation:
A memory read operation transfers the address of the desired word to the address lines and activates the Read control line. A description of the memory read operation is given below:
Memory Characteristics

[Figure: memory read operation using MAR and MDR]
Memory Characteristics
 In the diagram above, the MDR initially contains an arbitrary (garbage) value and the MAR contains the memory address 2003. After the read instruction executes, the data at memory location 2003 is read and the MDR is updated with the value stored at that location (3D).
Memory Write Operation

A memory write operation transfers the address of the desired word to the address lines, transfers the data bits to be stored in memory to the data input lines, and then activates the Write control line. A description of the write operation is given below:
Memory Write Operation

[Figure: memory write operation using MAR and MDR]
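The read and write cycles described above can be illustrated with a minimal sketch of a word-addressable memory modeled in Python. The MAR/MDR names and the 2003/3D example follow the slides; the class itself and its parameters are only illustrative.

# A toy word-addressable memory with MAR and MDR registers (illustrative only).
class MemoryUnit:
    def __init__(self, k):
        self.size = 2 ** k              # k address lines -> 2^k addressable words
        self.words = [0] * self.size
        self.mar = 0                    # Memory Address Register
        self.mdr = 0                    # Memory Data Register

    def read(self, address):
        self.mar = address              # place the address on the address lines
        self.mdr = self.words[self.mar] # activate Read: selected word -> MDR
        return self.mdr

    def write(self, address, data):
        self.mar = address              # place the address on the address lines
        self.mdr = data                 # place the data on the data input lines
        self.words[self.mar] = self.mdr # activate Write: MDR -> selected word

mem = MemoryUnit(k=12)                  # 12 address lines -> 4096 words
mem.write(2003, 0x3D)                   # store 3D at location 2003
print(hex(mem.read(2003)))              # -> 0x3d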
Random Access Memory (RAM) and Read Only Memory (ROM)

 Memory is the most essential element of a computing


system because without it computer can’t perform
simple tasks. Computer memory is of two basic types

 Primary memory(RAM and ROM) and Secondary
memory (hard drive, CD, etc). Random Access
Memory (RAM) is primary-volatile memory and
Read-Only Memory (ROM) is primary-non-volatile
memory. 
Types of Memory

[Figure: classification of computer memory]
Random Access Memory (RAM)

 Random Access Memory (RAM) –
 It is also called read-write memory, main memory, or primary memory.
 The programs and data that the CPU requires during the execution of a program are stored in this memory.
 It is a volatile memory, as the data is lost when the power is turned off.
RAM (Random Access Memory )

 RAM (Random Access Memory) is the part of a computer's main memory that is directly accessible by the CPU.
 RAM is used to read and write data, which the CPU accesses randomly. RAM is volatile in nature: if the power goes off, the stored information is lost. RAM is used to store the data that is currently being processed by the CPU. Most of the programs and data that are modifiable are stored in RAM.
 Integrated RAM chips are available in two forms:
 SRAM (Static RAM)
 DRAM (Dynamic RAM)
Block diagram
 The block diagram of a RAM chip is given below.

[Figure: block diagram of a RAM chip]
SRAM
 1. SRAM:
SRAM memories consist of circuits capable of retaining the stored information as long as power is applied; that is, this type of memory requires constant power. SRAM memories are used to build cache memory.
 SRAM memory cell: Static memories (SRAM) consist of circuits capable of retaining their state as long as the power is on. Thus this type of memory is volatile.
 The figure below shows the cell diagram of an SRAM. A latch is formed by two inverters connected as shown in the figure. Two transistors, T1 and T2, connect the latch to two bit lines.
 These transistors act as switches that can be opened or closed under the control of the word line, which is driven by the address decoder.
 When the word line is at the 0 level, the transistors are turned off and the latch retains its information.
 For example, the cell is in state 1 if the logic value at point A is 1 and at point B is 0. This state is retained as long as the word line is not activated.
SRAM

[Figure: SRAM memory cell]
SRAM
 For a read operation, the word line is activated by the address input to the address decoder. The activated word line closes both transistors (switches) T1 and T2, and the bit values at points A and B are transmitted to their respective bit lines. The sense/write circuit at the end of the bit lines sends the output to the processor.
 For a write operation, the address provided to the decoder activates the word line to close both switches. The bit value to be written into the cell is then provided through the sense/write circuit, and the signals on the bit lines are stored in the cell.
DRAM
 2. DRAM:
DRAM stores binary information in the form of electric charge on capacitors. The stored charge tends to leak away over time, so the capacitors must be periodically recharged to retain the data. Main memory is generally made up of DRAM chips.
 DRAM memory cell: Although SRAM is very fast, it is expensive because every cell requires several transistors.
 DRAM is relatively less expensive because each cell uses one transistor and one capacitor, as shown in the figure below, where C is the capacitor and T is the transistor.
 Information is stored in a DRAM cell as a charge on the capacitor, and this charge needs to be periodically refreshed.
 To store information in this cell, transistor T is turned on and an appropriate voltage is applied to the bit line. This causes a known amount of charge to be stored in the capacitor. After the transistor is turned off, the capacitor begins to discharge.
 Hence, the information stored in the cell can be read correctly only if it is read before the charge on the capacitor drops below some threshold value.
 Random Access Memory (RAM) is used to store the programs and data being used by the CPU in real time. The data in random access memory can be read, written, and erased any number of times. RAM is the hardware element where the data currently being used is stored. It is a volatile memory. Types of RAM:
 Static RAM (SRAM), which stores a bit of data using the state of a six-transistor memory cell.
 Dynamic RAM (DRAM), which stores a bit of data using a transistor and capacitor pair, which together constitute a DRAM memory cell.
Read Only Memory (ROM)
 Read-Only Memory (ROM) is a type of memory where the data has been prerecorded. Data stored in ROM is retained even after the computer is turned off, i.e. it is non-volatile. Types of ROM:
 Programmable ROM (PROM), where the data is written after the memory chip has been created. It is non-volatile.
 Erasable Programmable ROM (EPROM), where the data on this non-volatile memory chip can be erased by exposing it to high-intensity UV light.
 Electrically Erasable Programmable ROM (EEPROM), where the data on this non-volatile memory chip can be electrically erased using field electron emission.
 Mask ROM, in which the data is written during the manufacturing of the memory chip.
Difference between RAM and ROM

 Difference between RAM and ROM
 Data retention: RAM is a volatile memory that holds data only as long as power is supplied. ROM is a non-volatile memory that retains data even when power is turned off.
 Working type: Data stored in RAM can be retrieved and altered. Data stored in ROM can only be read.
 Use: RAM temporarily stores the data that the CPU is currently processing. ROM stores the instructions required during bootstrap of the computer.
 Speed: RAM is a high-speed memory. ROM is much slower than RAM.
 CPU interaction: The CPU can access data stored in RAM directly. The CPU cannot access data stored in ROM unless it is first copied into RAM.
 Size and capacity: RAM is larger, with higher capacity. ROM is smaller, with less capacity.
 Used as/in: RAM is used as CPU cache and as primary memory. ROM is used for firmware and in microcontrollers.
 Accessibility: The data stored in RAM is easily accessible. The data stored in ROM is not as easily accessible as in RAM.
 Cost: RAM is costlier; ROM is cheaper than RAM.

Difference between SRAM and DRAM

[Table: comparison of SRAM and DRAM]
MEMORY MAPPING
Cache and Main Memory

Cache Memory in Computer Organization

 Cache memory is a special, very high-speed memory. It is used to speed up the CPU and to synchronize with the high-speed CPU.
 Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
 Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
 Cache memory is used to reduce the average time to access data from main memory.
 The cache is a smaller and faster memory that stores copies of the data from frequently used main memory locations.
 There are various independent caches in a CPU, which store instructions and data.
Levels of memory

 Level 1 or registers –
These hold the data that is immediately stored in and operated on by the CPU. The most commonly used registers are the accumulator, program counter, and address registers.
 Level 2 or cache memory –
It is a very fast memory with a short access time, where data is temporarily stored for faster access.
 Level 3 or main memory –
It is the memory on which the computer currently works. It is small in size, and once power is off the data no longer stays in this memory.
 Level 4 or secondary memory –
It is external memory, which is not as fast as main memory, but data stays in it permanently.
Cache Memory in Computer Organization
 Cache performance:
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.
 If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
Cache Memory in Computer Organization
 If the processor does not find the memory location in the cache, a cache miss has occurred. On a cache miss, the cache allocates a new entry and copies in the data from main memory; the request is then fulfilled from the contents of the cache.
 The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.
 Hit ratio = hits / (hits + misses) = number of hits / total accesses
 We can improve cache performance by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
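As a quick illustration of the hit-ratio formula above, here is a minimal sketch; the access counts are invented for the example.

def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses)."""
    return hits / (hits + misses)

# Hypothetical figures: 450 hits out of 500 total accesses.
print(hit_ratio(hits=450, misses=50))   # -> 0.9, i.e. a 90% hit ratio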
Cache Memory in Computer Organization
 Cache mapping:
There are three different types of mapping used for cache memory, which are as follows:
 Direct mapping
 Associative mapping
 Set-associative mapping
Direct Mapping
 The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. In direct mapping, each memory block is assigned to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is discarded.
 An address is split into two parts: an index field and a tag field.
 The tag field is stored in the cache, while the index field selects the cache line.
 Direct mapping's performance is directly proportional to the hit ratio.
Direct Mapping
 For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s - r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache.
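A small sketch of how the tag, line, and word fields just described can be extracted from an address. The field widths used in the example (w = 2, r = 4) are arbitrary illustrative values, not taken from the slides.

def split_direct_mapped(address, w, r):
    """Split an address into tag, line and word fields for a
    direct-mapped cache (w word bits, r line bits, tag = the rest)."""
    word = address & ((1 << w) - 1)   # least significant w bits: word within the block
    block = address >> w              # remaining s bits: main-memory block number
    line = block & ((1 << r) - 1)     # low r bits of the block number select the cache line
    tag = block >> r                  # remaining (s - r) bits form the tag
    return tag, line, word

# Example with w = 2 (4 words per block) and r = 4 (2^4 = 16 cache lines):
print(split_direct_mapped(0b1011001110_01, w=2, r=4))   # -> (44, 14, 1)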
Direct Mapping

[Figure: direct-mapped cache organization]
Associative Mapping

In this type of mapping, associative memory is used to store both the content and the address of the memory word. Any block can go into any line of the cache.
 This means that the word-id bits are used to identify which word in the block is needed, and the tag becomes all of the remaining bits.
 This enables the placement of any word at any place in the cache memory. It is considered the fastest and most flexible form of mapping.
Associative Mapping

[Figure: associative-mapped cache organization]
Set-associative Mapping
 Set-associative mapping –
This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct mapping method.
 It does this by grouping a few cache lines together into a set, instead of having exactly one line that a block can map to. A block in memory can then map to any one of the lines of a specific set.
 Set-associative mapping allows each word that is present in the cache to have two or more words in main memory for the same index address.
 Set-associative cache mapping combines the best of the direct and associative cache mapping techniques.
 In this case, the cache consists of a number of sets, each of which consists of a number of lines. The relationships are given below.
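The slide does not state the relationships explicitly; in the standard formulation they are m = v × k and i = j mod v, where i is the cache set number, j is the main memory block number, m is the number of lines in the cache, v is the number of sets, and k is the number of lines in each set. A minimal sketch of this set-index computation follows; all the numbers used are illustrative examples.

def set_for_block(block_number, num_sets):
    """Return the cache set a main-memory block maps to: i = j mod v."""
    return block_number % num_sets

# Example: m = 16 lines organised as v = 4 sets of k = 4 lines each.
num_sets = 4
for block in (0, 5, 9, 13):
    print(f"block {block} -> set {set_for_block(block, num_sets)}")
# block 0 -> set 0, block 5 -> set 1, block 9 -> set 1, block 13 -> set 1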
Set-associative Mapping

[Figure: k-way set-associative cache organization]
Application of Cache Memory

 Usually, the cache memory can store a reasonable number of blocks at any given time, but this number is small compared to the total number of blocks in main memory.
 The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
Types of Cache

Primary cache –
A primary cache is always located on the processor chip. This cache is small, and its access time is comparable to that of processor registers.
 Secondary cache –
The secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2) cache.
 Often, the level 2 cache is also housed on the processor chip.
Locality of reference

Since the size of cache memory is small compared to main memory, which part of main memory should be given priority and loaded into the cache is decided based on locality of reference.
 Types of locality of reference
 Spatial locality of reference –
This says that an element is likely to be found in close proximity to the point of reference, and the next access is likely to be even closer to that point. For this reason, when a word is referenced, the complete block containing it is loaded into the cache rather than the single word alone.
 Temporal locality of reference –
This says that a recently used word is likely to be used again soon, so recently used words are kept in the cache, and a least recently used (LRU) algorithm is applied when something must be replaced.
Cache Coherence
 Cache coherence is the regularity or consistency of data stored in cache memory. Maintaining cache and memory consistency is imperative for multiprocessors or distributed shared memory (DSM) systems.
 Cache management is structured to ensure that data is not overwritten or lost. When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand that is changed in any cache is changed throughout the entire system.
Virtual memory
 Virtual memory is a feature of an operating system that enables a computer to compensate for shortages of physical memory by transferring pages of data from random access memory to disk storage.
 This process is done temporarily and is designed to work as a combination of RAM and space on the hard disk.
 This means that when RAM runs low, virtual memory can move data from it to a space called a paging file. This frees up RAM so that the computer can complete the task.
Virtual memory
 Occasionally a user might be shown a message saying that virtual memory is running low; this means that either more RAM needs to be added or the size of the paging file needs to be increased.
 What is virtual memory?
 Virtual memory is a technique used in computing to optimize memory management by transferring data between different storage systems, such as random access memory (RAM) and disk storage. A virtual memory system has many advantages, including:
Virtual memory
 Freeing applications from having to compete for shared
memory space and allowing multiple applications to run at the
same time
 Allowing processes to share memory between libraries (a
collection of code that provides the foundation for a program's
operations)
 Improving security by isolating and segmenting where the
computer stores information
 Increasing the amount of memory available by working
outside the limits of a computer's physical memory space
Virtual memory
 Optimizing central processing unit (CPU) usage.
 Virtual memory is a built-in component of most modern desktop computers. It is incorporated into a computer's CPU and is a more cost-effective method for managing memory than expanding the computer's physical memory storage system.
 However, some specialized computers might not rely on virtual memory because it could cause inconsistencies in the computer's processing.
 Professionals in certain industries, such as scientific or statistical modeling, may avoid using virtual memory for specific tasks that require stability and predictability.
 Most everyday personal or business computers don't need this level of consistency and benefit from the advantages of virtual memory far more than from the predictability of other types of memory systems.
Types of virtual memory
 The two ways computers handle virtual memory are through paging and segmenting. Here are the differences between these types of virtual memory:
Paging

 This type of virtual memory works by separating memory into sections called paging files. When a computer reaches its RAM limits, it transfers any currently unused pages into the part of its hard drive used for virtual memory. The computer performs this process using a swap file, which is a designated space within its hard drive for extending the virtual memory of the computer's RAM. By moving unused files into its hard drive, the computer frees its RAM space for other memory tasks and ensures that it never runs out of real memory.
 As part of this process, the computer uses page tables, which translate virtual addresses into the physical addresses that the computer's memory management unit (MMU) uses to process instructions. The MMU communicates between the computer's OS and its page tables. When the user performs a task, the OS searches its RAM for the processes needed to carry it out. If it can't find them in RAM, the MMU prompts the OS to move the required pages into RAM and uses a page table to note the new storage location of the pages.
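The page-table translation described above can be sketched with a small example. The 4 KB page size, the table contents, and the function name are all illustrative assumptions, not details from the slides.

PAGE_SIZE = 4096          # assume 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table."""
    page = virtual_address // PAGE_SIZE      # virtual page number
    offset = virtual_address % PAGE_SIZE     # offset within the page
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    frame = page_table[page]                 # physical frame holding the page
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 9 -> 0x9234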
Segmenting
 Segmentation is another method of managing virtual memory. A segmentation system divides virtual memory into segments of varying lengths and moves any segments not in use from the computer's virtual memory space to its hard drive.
 Similar to page tables, segment tables track whether the computer is storing the segment in memory or at a physical address.
 Segmentation differs from paging because it divides memory into sections of varying lengths, while paging divides memory into sections of equal size.
 With paging, the hardware determines the size of a section, but the user can determine the length of a segment in a segmentation system.
 Although segmentation is often slower than paging, it offers the user more control over how to divide memory and may make it easier to share data between processes. However, many casual computer users may prefer a paging system because it automatically handles memory divisions.
Limitations of virtual memory
 Although virtual memory has many advantages, here are some of its limitations:

 Virtual memory is often slower than physical memory, so most computers prioritize using physical memory
when possible.

 It requires additional hardware support to move data between a computer's virtual and physical memory.

 The amount of storage virtual memory can provide depends on the amount of secondary storage a computer
has.
 If the computer only has a small amount of RAM, virtual memory can cause "thrashing," which is when the
computer must constantly swap data between virtual and physical memory, resulting in significant
performance delays.
 It can take longer for applications to load or for a computer to switch between applications when using virtual
memory.

Virtual memory vs Physical memory
 The two most significant differences between virtual memory and physical memory are speed and cost. Computers that rely on physical memory tend to be faster than computers relying on virtual memory.
 However, increasing a computer's physical memory capacity is more expensive than implementing a virtual memory system.
 For these reasons, most computers use their physical memory system (their RAM) to maintain the speed of their processing before using their virtual memory system.
 The computer only uses its virtual memory when it runs out of RAM for storage.
Virtual memory vs Physical memory
 However, users also have the option to expand their RAM.
Installing more RAM can resolve computer delays caused by
frequent memory swaps.
 The amount of RAM a computer has depends on how much the
user or manufacturer installs. Comparatively, the size of the
computer's hard drive determines its virtual memory capacity.
 When choosing a computer, you may prefer one with more
RAM if you're someone who runs many applications at a time.
 If you work with only one or two computer applications most of the time, you may not need as much RAM.
Virtual memory vs Physical memory
 A computer can address more memory than the amount physically
installed on the system. This extra memory is actually called virtual
memory and it is a section of a hard disk that's set up to emulate the
computer's RAM.
 The main visible advantage of this scheme is that programs can be
larger than physical memory. Virtual memory serves two purposes.
 First, it allows us to extend the use of physical memory by using disk.
 Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.
 The following are situations in which the entire program is not required to be loaded fully in main memory:
Virtual memory vs Physical memory
 • User-written error handling routines are used only when an error occurs in the data or computation.
 • Certain options and features of a program may be used rarely.
 • Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
 • The ability to execute a program that is only partially in memory would confer many benefits:
 • Fewer I/O operations would be needed to load or swap each user program into memory.
 • A program would no longer be constrained by the amount of physical memory that is available.
 • Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Virtual Address Translation
 In modern microprocessors intended for general-purpose use, a memory management unit (MMU) is built into the hardware.
 The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below:
Virtual Address Translation

[Figure: virtual-to-physical address translation by the MMU]
Virtual Address Translation
 Virtual memory is commonly implemented by
demand paging. It can also be implemented in
a segmentation system. Demand segmentation
can also be used to provide virtual memory.

Demand Paging
 A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance.
 When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
Demand Paging
 While executing a program, if the program references a page that is not available in main memory because it was swapped out a little earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.
Page Fault
 A page fault occurs when a program attempts to access a block of memory
that is not stored in the physical memory, or RAM. The fault notifies the
operating system that it must locate the data in virtual memory, then
transfer it from the storage device, such as an HDD or SSD, to the system
RAM.
 Though the term "page fault" sounds like an error, page faults are common
and are part of the normal way computers handle virtual memory. In
programming terms, a page fault generates an exception, which notifies the
operating system that it must retrieve the memory blocks or "pages" from
virtual memory in order for the program to continue.

Demand Paging
 Once the data is moved into physical memory, the program continues
as normal. This process takes place in the background and usually
goes unnoticed by the user.
 Most page faults are handled without any problems. However, an
invalid page fault may cause a program to hang or crash.
 This type of page fault may occur when a program tries to access a
memory address that does not exist. Some programs can handle these
types of errors by finding a new memory address or relocating the
data.
 However, if the program cannot handle the invalid page fault, it will
get passed to the operating system, which may terminate the process.
Page Fault

 This can cause the program to unexpectedly quit. While page faults are common when
working with virtual memory, each page fault requires transferring data from secondary
memory to primary memory. This process may only take a few milliseconds, but that can still
be several thousand times slower than accessing data directly from memory.
 Therefore, installing more system memory can increase your computer's performance, since it
will need to access virtual memory less often.
 Advantages
 Following are the advantages of Demand Paging:
 • Large virtual memory.
 • More efficient use of memory.
 • There is no limit on degree of multiprogramming.
 Disadvantages
 • Number of tables and the amount of processor overhead for handling page interrupts are
greater than in the case of the simple paged management techniques.

Page Replacement Algorithm

 Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than the number of required pages.
Page Replacement Algorithm
 A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary storage and of the processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
Page Replacement Algorithm
 Reference string
 The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
 The latter choice produces a large amount of data, about which we note two things:
 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference, so the immediately following references will not fault.
Page Replacement Algorithms
 Page replacement algorithms:
 1. First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Page Replacement Algorithms
 Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Page Replacement Algorithms
 Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults.
When 3 comes, it is already in memory, so -> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 -> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 -> 1 page fault.
Finally, when 3 comes it is not available, so it replaces 0 -> 1 page fault.
Total: 6 page faults.
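A small FIFO page-replacement simulator that reproduces the count above; the code is only an illustrative sketch, not part of the original slides.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under first-in-first-out replacement."""
    frames = set()
    queue = deque()                # insertion order, oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], num_frames=3))   # -> 6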
Page Replacement Algorithms
 2. Optimal page replacement –
In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.
Page Replacement Algorithms
 Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Page Replacement Algorithms
 Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots -> 4 page faults.
0 is already there, so -> 0 page faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future -> 1 page fault.
0 is already there, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
The remaining references cause 0 page faults because those pages are already available in memory. Total: 6 page faults.
 Optimal page replacement is perfect, but it is not possible in practice because the operating system cannot know future requests. Optimal page replacement is used to set up a benchmark against which other replacement algorithms can be analyzed.
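A sketch of the optimal (farthest-future-use) policy used above; since the policy needs future knowledge, the simulator simply looks ahead in the reference string. The code is illustrative only.

def optimal_page_faults(refs, num_frames):
    """Count page faults under optimal replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the future
        # (or which is never used again).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # -> 6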
Page Replacement Algorithms
 3. Least Recently Used (LRU) –
In this algorithm, the page that has been least recently used is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Page Replacement Algorithms
 Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots -> 4 page faults.
0 is already there, so -> 0 page faults.
When 3 comes it takes the place of 7, because 7 is the least recently used page -> 1 page fault.
0 is already in memory, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
The remaining references cause 0 page faults because those pages are already available in memory. Total: 6 page faults.
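A matching sketch of the LRU policy, keeping pages ordered from least to most recently used; again, the code is only an illustrative example.

def lru_page_faults(refs, num_frames):
    """Count page faults under least-recently-used replacement."""
    frames = []                    # ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)    # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)        # this page is now the most recently used
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # -> 6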