Computer Memory System
Module 3
What is Computer Memory?
• Computer memory is analogous to the human brain.
• It is used to store data/information and instructions.
• It is a data storage unit or device where the data to be processed and the
instructions required for processing are stored.
• It can store both the input data and the output results.
Characteristics of Primary Memory
•It is faster than secondary memory.
•It is a semiconductor memory.
•It is usually volatile, and serves as the main memory of the computer.
•A computer system cannot run without primary memory.
CHARACTERISTICS OF MEMORIES
Volatility
o Volatile {RAM}
o Non-volatile {ROM, Flash memory}
Mutability
o Read/Write {RAM, HDD, SSD, Cache, Registers…}
o Read Only {Optical ROM (CD/DVD…), Semiconductor ROM}
Accessibility
o Random Access {RAM, Cache}
o Direct Access {HDD, Optical Disks}
o Sequential Access {Magnetic Tapes}
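The three axes above can be restated as a small lookup table. The following Python snippet is purely illustrative (the device list is not exhaustive):

```python
# Classification of a few memories along the three axes above.
memories = {
    "RAM":           {"volatile": True,  "writable": True,  "access": "random"},
    "ROM":           {"volatile": False, "writable": False, "access": "random"},
    "HDD":           {"volatile": False, "writable": True,  "access": "direct"},
    "magnetic tape": {"volatile": False, "writable": True,  "access": "sequential"},
}

for name, props in memories.items():
    print(name, props)
```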
How Does Computer Memory Work?
When you open a program, it is loaded from secondary memory into primary memory;
for example, it may be moved from a solid-state drive (SSD) into RAM.
Because primary memory is accessed more quickly, the opened program can communicate with the computer's
processor with less delay. Primary memory sits in memory slots close to the processor, unlike other storage
sites.
Primary memory is volatile, which means that data is kept there only temporarily. Data saved in volatile memory
is automatically lost when the computing device is turned off. When you save a file, it is sent to secondary
memory for permanent storage.
There are various kinds of memory available, and operation depends on the type of primary memory used,
but primary memory is normally semiconductor-based. Semiconductor memory is made up
of integrated circuits (ICs) with silicon-based metal-oxide-semiconductor (MOS) transistors.
MEMORY REPRESENTATION
The computer memory stores different kinds of data like input data, output data, intermediate results, etc., and
the instructions.
Binary digit or bit is the basic unit of memory. A bit is a single binary digit, i.e., 0 or 1.
A bit is the smallest unit of representation of data in a computer. However, the data is handled by the computer
as a combination of bits.
A group of 8 bits form a byte. One byte is the smallest unit of data that is handled by the computer.
One byte (8 bits) can store 2^8 = 256 different combinations of bits, and thus can be used to represent 256
different symbols. In a byte, the different combinations of bits fall in the range 00000000 to 11111111. A
group of bytes can be further combined to form a word. A word can be a group of 2, 4 or 8 bytes.
1 bit = 0 or 1
1 Byte (B) = 8 bits
1 Kilobyte (KB) = 2^10 bytes = 1024 bytes
1 Megabyte (MB) = 2^20 bytes = 1024 KB
1 Gigabyte (GB) = 2^30 bytes = 1024 MB
1 Terabyte (TB) = 2^40 bytes = 1024 GB
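The powers-of-two relationships above are easy to verify programmatically. A quick Python check (2^10 is written `2 ** 10`):

```python
# Each unit is a power of two; each step up multiplies by 1024.
KB = 2 ** 10          # 1024 bytes
MB = 2 ** 20          # 1024 KB
GB = 2 ** 30          # 1024 MB
TB = 2 ** 40          # 1024 GB

print(MB // KB)       # 1024 -> 1 MB is 1024 KB
print(GB // MB)       # 1024 -> 1 GB is 1024 MB
print(TB // GB)       # 1024 -> 1 TB is 1024 GB
print(2 ** 8)         # 256 distinct bit patterns in one byte
```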
MEMORY HIERARCHY
The memory is characterized on the basis of two key factors: capacity and access
time.
• Capacity is the amount of information (in bits) that a memory can store.
• Access time is the time interval between the read/write request and the
availability of data.
The smaller the access time, the faster the memory.
Ideally, we want the memory with fastest speed and largest capacity.
However, the cost of fast memory is very high. The computer uses a hierarchy of
memory that is organized in a manner to enable the fastest speed and largest
capacity of memory.
Characteristics of Memory Hierarchy
•Capacity: It is the global volume of information the memory can store. As we move from top to bottom in
the Hierarchy, the capacity increases.
•Access Time: It is the time interval between the read/write request and the availability of the data. As we
move from top to bottom in the Hierarchy, the access time increases.
•Performance: Early computer systems designed without a memory hierarchy suffered from a large speed gap
between the CPU registers and main memory because of the difference in access time. This lowered system
performance, and the memory hierarchy design was the enhancement introduced to address it. One of the most
significant ways to increase system performance is minimizing how far down the memory hierarchy one has to
go to manipulate data.
•Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases
i.e. Internal Memory is costlier than External Memory.
Advantages of Memory Hierarchy
•It helps in managing memory more effectively and reduces the risk of data loss.
•It helps in distributing data across the different levels of the computer system.
•It saves the user both cost and time.
The Internal Memory and External Memory are the two broad categories of memory
used in the computer. The Internal Memory consists of the CPU registers, cache memory
and primary memory. The internal memory is used by the CPU to perform the
computing tasks. The External Memory is also called the secondary memory. The
secondary memory is used to store large amounts of data and software.
Internal memory vs External memory:
• Internal memory is also known as primary storage or main memory; external memory is also known as
secondary storage.
• Internal memory is volatile in the case of RAM (ROM is non-volatile); external memory is non-volatile.
• Internal memory is used to store data temporarily (in the case of RAM); external memory is used to store
data permanently.
• Internal memory is a working memory; external memory is not a working memory.
• Examples of internal memory are RAM and ROM; examples of external memory are hard disks, CDs, DVDs,
flash drives, etc.
In general, referring to the computer memory usually means the internal memory.
Internal Memory
The key features of internal memory are:
1. Limited storage capacity.
2. Temporary storage.
3. Fast access.
4. High cost.
Registers, cache memory, and primary memory constitute the internal memory.
The primary memory is further of two kinds: RAM and ROM.
Registers are the fastest and the most expensive among all the memory types.
The registers are located inside the CPU, and are directly accessible by the CPU.
The speed of registers is between 1-2 ns (nanosecond).
The total size of the registers is about 200 bytes.
Cache memory is next in the hierarchy and is placed between the CPU and the main memory.
The speed of cache is between 2-10 ns.
The cache size varies between 32 KB to 4MB.
Any program or data that has to be executed must be brought into RAM from the secondary
memory.
Primary memory is relatively slower than the cache memory.
The speed of RAM is around 60 ns. The RAM size varies from 512 KB to 64 GB.
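The access times quoted above make the gap between hierarchy levels concrete. The snippet below uses assumed midpoint values taken from the figures given (1-2 ns, 2-10 ns, 60 ns) to compare each level against a register access:

```python
# Representative access times from the text (midpoints are assumptions).
access_ns = {
    "register": 1.5,   # midpoint of 1-2 ns
    "cache": 6.0,      # midpoint of 2-10 ns
    "RAM": 60.0,
}

base = access_ns["register"]
for level, t in access_ns.items():
    # Show how many register-access times one access at this level costs.
    print(f"{level}: {t} ns (~{t / base:.0f}x a register access)")
```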
Secondary Memory
The key features of secondary memory storage devices are:
1. Very high storage capacity.
2. Permanent storage (non-volatile), unless erased by user.
3. Relatively slower access.
4. Stores data and instructions that are not currently being used by CPU but may be required
later for processing.
5. Cheapest among all memory.
To get the fastest speed of memory with largest capacity and least cost, the fast memory is
located close to the processor. The secondary memory, which is not as fast, is used to store
information permanently, and is placed farthest from the processor.
With respect to CPU, the memory is organized as follows:
• Registers are placed inside the CPU (small capacity, high cost, very high speed)
• Cache memory is placed next in the hierarchy (inside and outside the CPU)
• Primary memory is placed next in the hierarchy
• Secondary memory is the farthest from CPU (large capacity, low cost, low speed)
The speed of memories is dependent on the kind of technology used for the memory.
The registers, cache memory and primary memory are semiconductor memories.
They do not have any moving parts and are fast memories.
The secondary memory is magnetic or optical memory; it has moving parts and is
slower.
CPU REGISTERS
Registers are very high-speed storage areas located inside the CPU.
After CPU gets the data and instructions from the cache or RAM, the data and instructions are moved to the
registers for processing.
Registers are manipulated directly by the control unit of CPU during instruction execution.
That is why registers are often referred to as the CPU’s working memory.
Since CPU uses registers for the processing of data, the number of registers in a CPU and the size of each
register affect the power and speed of a CPU.
CPU Registers and Their Functions
Registers in CPU are of different types that each do a specific job when the computer follows instructions.
Let’s explore the primary types of CPU registers and their respective functions:
•Data Registers: Data registers, also called general-purpose registers, hold information while the computer
works on it. They are really important for doing math and logical tasks on the CPU.
•Address Registers: Address registers keep track of where information is stored in memory, making it easier
for the computer to find and use data. They help move information between the CPU and
memory smoothly.
•Control Registers: Control registers in the CPU help manage things like interrupts, how programs run,
and the status of the system. They also make sure all parts of the CPU work together smoothly.
•Special Purpose Registers: Some registers have specific jobs, like the Program Counter (PC), which
remembers where the next instruction is stored in memory for the CPU to use.
Types of CPU Registers
CPU registers are sorted into different types based on what they do and how they are used inside the CPU. So,
here are some common types of CPU registers:
1. Data Registers
•Accumulator (ACC): Holds numbers for math and logic.
•General-Purpose Registers (GPRs): Can hold different types of info during programs, like numbers or
addresses.
2. Address Registers
•Memory Address Register (MAR): Keeps track of where data or instructions are stored in memory.
•Index Registers (IX and IY): Help find specific memory locations by adding an offset to a base address.
3. Control Registers
•Program Counter (PC): Points to the next instruction for the CPU to use.
•Instruction Register (IR): Holds the current instruction the CPU is working on.
•Stack Pointer (SP): Manages the stack, like when functions call each other.
4. Status Registers
•Flags Register in CPU: This shows if math or logic operations have certain outcomes, like if a number is zero
or if there’s been an overflow.
5. Floating-Point Registers
They are specialized hardware components within a computer’s central processing unit (CPU) designed to
efficiently handle floating-point arithmetic operations.
These are some of the common types of CPU registers. The specific registers available and their functions may
vary depending on the CPU architecture and design.
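To make the status-register idea concrete, here is a hypothetical 8-bit adder in Python that sets zero and carry flags the way a flags register would. The function name and the flag set are illustrative, not tied to any real CPU:

```python
# Toy 8-bit ALU add: the result wraps to 8 bits, and two "flags"
# record the outcome, as a status register would.
def add8(a, b):
    result = (a + b) & 0xFF            # keep only the low 8 bits
    flags = {
        "zero": result == 0,           # did the result wrap to zero?
        "carry": (a + b) > 0xFF,       # did the sum overflow 8 bits?
    }
    return result, flags

print(add8(200, 100))   # (44, {'zero': False, 'carry': True})
print(add8(128, 128))   # (0, {'zero': True, 'carry': True})
```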
Purpose of Registers in CPU
Registers in computing serve several crucial purposes:
•Fast Data Access: Registers are the quickest memory in a computer since they’re right inside the CPU. This
means getting data from them is almost instant, unlike getting it from RAM.
•Instruction Execution: Registers hold onto data that the CPU is working with, like numbers for math or
memory addresses for moving data around.
•Control and Status: Registers keep track of how the CPU and the whole computer system are running. For
example, the program counter remembers where the next instruction is, and flags signal things like whether a
math operation overflowed or whether the result is zero.
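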
•Data Storage: Registers keep important data temporarily, like stuff taken from RAM or devices, or numbers
being used in calculations.
•Addressing: Some registers help the CPU find specific spots in memory to read from or write to.
Why Do We Need CPU Registers?
• For fast execution of instructions, CPU registers are highly useful; without them,
CPU operation is unimaginable. They are the fastest of the different memories and
hold the top position in the memory hierarchy.
• A register can hold an instruction, an address, or any other sort of data.
• There are different types of registers available, and we have seen the most used ones in the above
part of the article.
• Thus, registers make the operations of the CPU smooth, efficient and meaningful.
• A register must be large enough for its requirements and specifications.
Advantages and Disadvantages
Advantages
Below are the advantages:
•These are the fastest memory blocks, so instructions execute faster than from main memory.
•Since each register has a distinct purpose, instructions are handled gracefully and smoothly by the
CPU with the help of registers.
•There is rarely any CPU in the digital world that does not have registers.
Disadvantages
Since the capacity of the registers is finite, if an instruction or its data is too large, the CPU needs to use
cache or main memory along with the registers for the operation.
What is Primary Memory
Primary memory is a segment of computer memory that can be accessed directly by the processor. In a
hierarchy of memory, primary memory has access time less than secondary memory and greater than cache
memory. Generally, primary memory has a storage capacity lesser than secondary memory and greater than
cache memory.
Need of primary memory
In order to enhance the efficiency of the system, memory is organized in such a way that access time for the
ready process is minimized.
The following approach is followed to minimize access time for the ready process.
•All programs, files, and data are stored in secondary storage that is larger and hence has greater access time.
•Secondary memory can not be accessed directly by a CPU or processor.
•In order to execute any process, the operating system loads the process into primary memory, which is
smaller and can be accessed directly by the CPU.
•Since only those processes that are ready to be executed are loaded into primary memory, the CPU can
access them efficiently, and this optimizes the performance of the system.
PRIMARY MEMORY (Main Memory)
Primary memory is categorized into two main types:
Random access memory (RAM) and read only memory (ROM).
RAM is used for the temporary storage of input data, output data and intermediate results. The input
data entered into the computer using the input device, is stored in RAM for processing. After processing,
the output data is stored in RAM before being sent to the output device. Any intermediate results
generated during the processing of program are also stored in RAM. Unlike RAM, the data once stored in
ROM either cannot be changed or can only be changed using some special operations. Therefore, ROM is
used to store the data that does not require a change.
RAM vs Virtual Memory
RAM: RAM is a physical device or memory installed in the computer or laptop and is used by the CPU to
store data temporarily during program execution.
Virtual Memory: Virtual memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of the main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites, and
program-generated addresses are translated automatically to the corresponding machine addresses.
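The automatic address translation described above can be sketched in a few lines. This is a deliberately simplified model, assuming 4 KB pages and a tiny hand-made page table; in real systems the MMU hardware performs this translation:

```python
# Minimal sketch of virtual-to-physical address translation.
PAGE_SIZE = 4096                    # assume 4 KB pages

# Maps virtual page number -> physical frame number (made-up values).
page_table = {0: 7, 1: 3, 2: 9}

def translate(vaddr):
    """Turn a program-generated (virtual) address into a machine address."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpage]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))   # virtual page 1, offset 4 -> frame 3 -> 12292
```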
ROM vs RAM
• ROM is non-volatile memory used for permanent storage; RAM is volatile memory used for temporary storage.
• ROM is generally slower than RAM; RAM offers high-speed access.
• ROM is primarily read-only; RAM supports both read and write operations.
• ROM generally holds only megabytes of storage; RAM can store gigabytes.
• Data in ROM is not easily accessible; data in RAM is easily accessible.
• ROM is cheaper than RAM; RAM has a higher cost compared to ROM.
Types of Primary Memory
1. RAM (Random Access Memory)
The word “RAM” stands for “random access memory”; it may also be described as short-term memory. It is called
“random” because you can read or write data at any time and from any physical location. It is a
temporary storage memory. RAM is volatile and retains data only as long as the computer is powered. It
is among the fastest types of memory.
RAM stores the data currently being processed by the CPU and supplies it to other units, such as the graphics
unit.
What is RAM (Random Access Memory)?
It is one of the parts of the main memory, also commonly known as Read/Write Memory. Random Access
Memory is present on the motherboard, and the computer's data is temporarily stored in RAM. As the name
says, RAM supports both reading and writing. RAM is a volatile memory, which means its contents persist only
as long as the computer is in the ON state; as soon as the computer turns OFF, the memory is erased.
1. SRAM (Static Random Access memory)
SRAM is used for cache memory; it holds its data as long as power is available, without needing to be
refreshed. It is made with CMOS (Complementary Metal Oxide
Semiconductor) technology. Each memory cell contains 4 to 6 transistors, and it also uses clocks.
It does not require a periodic refresh cycle because its transistor circuit latches the stored bit. Although
SRAM is faster, it requires more power and is more expensive. Since SRAM requires more power, more heat is
dissipated as well; another drawback of SRAM is that it cannot store as many bits per chip: for the same
amount of memory stored in DRAM, SRAM would require one more chip.
Function of SRAM
The function of SRAM is that it provides a direct interface with the Central Processing Unit at
higher speeds.
Characteristics of SRAM
•SRAM is used as the Cache memory inside the computer.
•SRAM is known to be the fastest among all memories.
•SRAM is costlier.
•SRAM has a lower density (number of memory cells per unit area).
•The power consumption of SRAM is less, but when it is operated at higher frequencies, the power
consumption of SRAM is comparable with DRAM.
2. DRAM (Dynamic Random Access memory)
DRAM is used for the main memory. It has a different construction than SRAM: it uses one transistor and
one capacitor per memory bit, and the capacitor needs to be recharged (refreshed) every few milliseconds
because its charge leaks away. Dynamic RAM was the first memory integrated circuit to be sold. DRAM is the
second most compact technology in production (the first is flash memory). Although DRAM is slower, it can
store more bits per chip; for the same amount of memory stored in SRAM, DRAM requires one less chip. DRAM
requires less power and hence, less heat is produced.
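The capacitor leakage that forces DRAM refreshing can be illustrated with a toy model. The leak rate and read threshold below are invented numbers for demonstration only:

```python
# Toy model of a DRAM cell: a capacitor whose charge leaks over time,
# so the memory controller must rewrite (refresh) it periodically.
LEAK_PER_MS = 0.2     # fraction of remaining charge lost per millisecond (assumed)
THRESHOLD = 0.5       # below this the cell can no longer be read as a 1 (assumed)

def charge_after(ms, charge=1.0):
    """Remaining charge after `ms` milliseconds without a refresh."""
    return charge * (1 - LEAK_PER_MS) ** ms

print(charge_after(2) > THRESHOLD)   # True: still readable after 2 ms
print(charge_after(5) > THRESHOLD)   # False: a refresh was needed sooner
```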
Function of DRAM
The function of DRAM is to hold the program code and data that a computer processor needs in order to
function. It is used in our PCs (Personal Computers).
Characteristics of DRAM
•DRAM is used as the Main Memory inside the computer.
•DRAM is known to be a fast memory but not as fast as SRAM.
•DRAM is cheaper as compared to SRAM.
•DRAM has a higher density (number of memory cells per unit area)
•The power consumption by DRAM is higher.
Types of DRAM
•SDRAM: Synchronous DRAM, which increases performance by synchronizing, through its pins, with the data
connection between the main memory and the microprocessor.
•DDR SDRAM: (Double Data Rate) It has features of SDRAM also but with double speed.
•ECC DRAM: (Error Correcting Code) This RAM can find corrupted data easily and sometimes can
fix it.
•RDRAM: It stands for Rambus DRAM. It used to be popular in the late 1990s and early 2000s, developed by
the company Rambus Inc.; at that time it competed with SDRAM. Its latency was higher at the beginning, but
it was more stable than SDRAM; consoles like the Nintendo 64 and Sony PlayStation 2 used it.
•DDR2, DDR3, AND DDR4: These are successor versions of DDR SDRAM with upgrades in performance
Difference Between SRAM and DRAM
• SRAM stands for Static Random Access Memory; DRAM stands for Dynamic Random Access Memory.
• SRAM requires more power; DRAM requires less power.
• SRAM is more expensive; DRAM is less expensive.
• SRAM is faster; DRAM is slower.
Advantages of Using RAM
•Speed: RAM is faster than other types of storage like ROM, hard drives or SSDs, allowing for quick access to data and smooth
performance of applications.
•Multitasking: More RAM allows a computer to handle multiple applications simultaneously without slowing down.
•Flexibility: RAM can be easily upgraded, enhancing a computer’s performance and extending its usability.
•Volatile Storage: RAM automatically clears its data when the computer is turned off, reducing the risk of unwanted data
accumulation.
Disadvantages of Using RAM
•Volatility: Data stored in RAM is lost when the computer is turned off, which means important data must be saved to permanent
storage.
•Cost: RAM can be more expensive per gigabyte compared to other storage options like hard drives or SSDs.
•Limited Storage: RAM has a limited capacity, so it cannot store large amounts of data permanently.
•Power Consumption: RAM requires continuous power to retain data, contributing to overall power consumption of the device.
•Physical Space: Increasing RAM requires physical space in the computer, which might be limited in smaller devices like laptops
and tablets.
ROM (Read Only Memory)
ROM is the long-term internal memory.
ROM is “Non-Volatile Memory” that retains data without the flow of electricity.
ROM is an essential chip with permanently written data or programs.
Like RAM, it is accessed by the CPU. ROM comes pre-written by the computer
manufacturer to hold the instructions for booting up the computer.
What is Read-Only Memory (ROM)?
ROM stands for Read-Only Memory. It is a non-volatile memory that is used to store important information
which is used to operate the system. As its name refers to read-only memory, we can only read the programs
and data stored on it. It is also a primary memory unit of the computer system. It contains some electronic
fuses that can be programmed for a piece of specific information. The information is stored in the ROM in
binary format. It is also known as permanent memory.
Block Diagram of ROM
As shown in the diagram below, there are k input lines and n output lines. The input address from which we
wish to retrieve the ROM content is supplied on the k input lines. Since each of the k input lines can have a
value of 0 or 1, there are a total of 2^k addresses that can be referred to by these input lines, and each of
these addresses contains n bits of information that is output from the ROM.
A ROM of this type is designated as a 2^k x n ROM.
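The 2^k x n organization can be modeled as a simple lookup table. In this sketch, k = 6 and n = 4 to match the 64 x 4 example below, and the stored contents are dummy values:

```python
# A 2^k x n ROM as a lookup table: k address lines select one of
# 2**k words, each n bits wide.
k, n = 6, 4                      # the 64 x 4 ROM used as an example
words = 2 ** k                   # number of addressable words
print(words)                     # 64 addresses
print(words * n)                 # 256 total bits stored

rom = [0b1010] * words           # dummy contents: one 4-bit word per address

def read(address):
    return rom[address]          # address decoding = indexing the table

print(format(read(5), "04b"))    # '1010'
```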
Internal Structure of ROM:
The internal structure comprises two basic components: a decoder and OR gates. A decoder is a circuit
that decodes an encoded form (such as binary-coded decimal, BCD) to a decimal form. So, the input is
in binary form, and the output is its decimal equivalent. All the OR gates present in the ROM have the
outputs of the decoder as their inputs. Let us take the example of a 64 x 4 ROM. The structure is shown in the
following image.
How does ROM work?
ROM retains its contents without any power. It contains two basic
components: the decoder and the OR logic gates. In ROM, the decoder receives input
in binary form; the output will be the decimal equivalent. The OR gates in ROM use the
decoder's decimal output as their input.
ROM is organized like a grid of rows and columns whose intersections can each be
switched on or off. Every element of the array corresponds to a specific
memory element on the ROM chip. A diode is used to connect the corresponding
elements. When a request is received, the address input is used to find the specific
memory location. The value that is read from the ROM chip matches the
contents of the chosen array element.
Features of ROM
•ROM is a non-volatile memory.
•Information stored in ROM is permanent.
•Information and programs stored on it can only be read and cannot be modified.
•Information and programs are stored on ROM in binary format.
•It is used in the start-up process of the computer.
•Non-Volatile Memory: ROM is a non-volatile memory type; thus, it keeps its data even when the power is
switched off. This makes it suitable for storing permanent instructions and data since it guarantees that the
recorded information will remain intact and may be accessed whenever necessary.
•Read-Only Nature: Read-only memory, or ROM, as its name implies, prevents data from being readily
modified or wiped. This characteristic provides stability and prevents accidental alterations, ensuring the
integrity and reliability of the stored information.
•Permanent Storage: ROM offers permanent storage of data and instructions. Once the data is programmed into
ROM during manufacturing, it remains fixed and cannot be changed without physically replacing the ROM chip.
This permanence guarantees the consistency and stability of the stored information.
•Firmware Storage: ROM is commonly used for storing firmware containing essential instructions for operating
electronic devices. ROM's non-volatile and read-only nature ensures that the firmware remains unchanged,
providing reliable and consistent functionality to the device.
•Data Security: ROM offers inherent data security. Since the data stored in ROM cannot be modified or
erased, it protects against unauthorized alterations or tampering. This feature enhances the security and
authenticity of the stored information, making ROM suitable for critical instructions and sensitive data.
•Instant Read Access: ROM provides instant read access to the stored instructions and data. The information
can be accessed directly without time-consuming loading, enabling quick retrieval and execution of essential
instructions.
•Compatibility: ROM is compatible with various systems and architectures, allowing seamless integration
into different electronic devices and systems. This compatibility ensures that ROM can be utilized in various
applications.
•Reliability: Due to its read-only nature, ROM offers high reliability. The data stored in ROM is not
susceptible to accidental modifications or loss, ensuring consistent and predictable performance over time.
Such dependability is crucial for important systems where stability and data integrity are of the utmost
importance.
•Booting and Initialization: ROM plays a crucial role in electronic systems' booting and initialization
processes. The firmware stored in ROM contains the initial instructions required to start the system, load the
operating system, and initiate the hardware components. This ensures a smooth and controlled startup
sequence for the device.
•Cost-Effectiveness: ROM is generally more cost-effective than other memory types, making it an economical
choice for many applications. Production costs are cheaper since the manufacturing procedures used to
produce ROMs are well-established.
Types of ROM:
1) Masked Read Only Memory (MROM):
It is the oldest type of read only memory (ROM). It has become obsolete so it is not used anywhere in today's world. It is a
hardware memory device in which programs and instructions are stored at the time of manufacturing by the
manufacturer. So it is programmed during the manufacturing process and can't be modified, reprogrammed, or erased
later. The MROM chips are made of integrated circuits. Chips send a current through a particular input-output
pathway determined by the location of fuses among the rows and columns on the chip. The current has to pass along a
fuse-enabled path, so it can return only via the output the manufacturer chooses. This is the reason rewriting
and any other modification are not possible in this memory.
Programmable Read Only Memory (PROM):
PROM is a blank version of ROM. It is manufactured as blank memory and programmed after
manufacturing. We can say that it is kept blank at the time of manufacturing. You can purchase and then
program it once using a special tool called a programmer.
The programmer can choose one particular path for the current by burning unwanted fuses by sending a high
voltage through them. The user has the opportunity to program it or to add data and instructions as per his
requirement. Due to this reason, it is also known as the user-programmed ROM as a user can program it.
To write data onto a PROM chip, a device called a PROM programmer or PROM burner is used. The process
of programming a PROM is known as burning the PROM. Once it is programmed, the data cannot be modified
later, so it is also called a one-time programmable device.
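One-time programmability can be sketched as a bit array where a write can only blow a fuse (1 to 0) and never restore it. The class below is a toy illustration, not a model of any real device:

```python
# Toy PROM: every bit starts as 1 (fuse intact); "burning" a bit
# blows its fuse to 0 and there is no way to set it back to 1.
class PROM:
    def __init__(self, size):
        self.bits = [1] * size        # blank chip: all fuses intact

    def burn(self, index):
        self.bits[index] = 0          # blowing a fuse is irreversible

chip = PROM(8)
chip.burn(3)
print(chip.bits)    # [1, 1, 1, 0, 1, 1, 1, 1]
```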
Erasable and Programmable Read Only Memory
(EPROM):
EPROM is a type of ROM that can be reprogrammed and erased many times. The method to erase the data is
very different: it comes with a quartz window through which a specific frequency of ultraviolet light is passed
for around 40 minutes to erase the data. So, it retains its content until it is exposed to ultraviolet light.
You need a special device called a PROM programmer or PROM burner to reprogram the EPROM.
Uses: It is used in some micro-controllers to store programs, e.g., some versions of the Intel 8048 and the
Freescale 68HC11.
Electrically Erasable and Programmable Read Only Memory (EEPROM):
EEPROM is a type of read only memory that can be erased and reprogrammed repeatedly, up to about 10,000
times. It is erased and reprogrammed electrically, without using ultraviolet light. Access time is between 45
and 200 nanoseconds.
The data in this memory is written or erased one byte at a time, whereas in flash memory data
is written and erased in blocks, so flash is faster than EEPROM. EEPROM is used for storing small amounts of
data in computer and electronic systems and devices such as circuit boards.
Uses: The BIOS of a computer is stored in this memory.
FLASH ROM:
It is an advanced version of EEPROM. It stores information in an arrangement or array of memory cells made
from floating-gate transistors. The advantage of using this memory is that you can delete or write blocks of
data around 512 bytes at a particular time. Whereas, in EEPROM, you can delete or write only 1 byte of data at
a time. So, this memory is faster than EEPROM.
It can be reprogrammed without removing it from the computer. Its access time is low (fast), around 45 to 90
nanoseconds. It is also highly durable, as it can bear high temperatures and intense pressure.
Uses: It is used for storage and transferring data between a personal computer and digital devices. It is used in
USB flash drives, MP3 players, digital cameras, modems and solid-state drives (SSDs). The BIOS of many
modern computers is stored on a flash memory chip, called flash BIOS.
Uses of ROM:
ROM (Read-Only Memory) is used in various electronic devices. Let's explore the numerous applications of
ROM in these devices.
Computers:
In computer systems, ROM is essential. The Basic Input/Output System (BIOS) and first startup instructions are
stored as part of the computer's firmware. The firmware included in ROM is in charge of initializing the
hardware elements, running self-tests, and loading the operating system into memory when you switch on your
computer.
Video Games:
ROM is widely used in video games. Game data was previously stored on ROM cartridges in earlier gaming
consoles and portable devices. These cartridges carried the game's code, graphics, sound, and other components
on ROM chips. A gaming console loads the game when you insert a game cartridge by reading the data from the
ROM chip. Using ROM in video games allowed for easy distribution and ensured that the game data remained
intact without the risk of accidental modifications.
Smartphones:
• ROM is essential in smartphones for storing firmware, such as the operating system and built-in applications.
To maintain consistency throughout the device's existence, manufacturers program the firmware into the
ROM during the device's construction. The bootloader, which starts the booting process and loads the
operating system, is also included in ROM. By utilizing ROM, smartphones can provide stable and reliable
performance and protect the firmware from potential corruption or tampering.
Digital Speed Meters:
• In the automotive industry, ROM is used in digital speed meters or speedometers. The ROM chip in these
devices stores the calibration data and conversion tables needed to measure and display the vehicle's speed
accurately. This ensures that the speed meter operates consistently and provides accurate readings. The
non-volatile nature of ROM ensures that the calibration data remains intact even if the power is disconnected
or the vehicle is turned off.
Programmable Electronics:
• ROM is used in programmable electronic devices, microcontrollers, and programmable logic devices (PLDs). These
devices frequently use programmable read-only memory (PROM) or erasable programmable read-only memory
(EPROM). Users can program these ROM chips to preserve certain information or instructions that the device can
access and carry out. This allows for customization in various digital applications, including robotics,
automation, and control systems.
Advantages of ROM:
1.Data Retention: ROM maintains data even without power, ensuring that crucial data is retained and accessible
whenever necessary.
2.Permanent Storage: ROM's non-modifiable nature assures that the information stored inside stays intact, making
it a reliable and consistent source of data and instructions.
3.Reliable Performance: As ROM is read-only, unintentional modifications are prevented, ensuring that stored
data will work reliably and consistently over time.
4.Non-Volatile Memory: ROM is an option for storing important instructions, firmware, and data that shouldn't be
changed since it can preserve data without a constant power source.
5.Stability: The ROM offers a strong basis for the booting process and overall system function by storing crucial
instructions and calibration data, assuring consistent and predictable performance.
6.Data Security: Read-only memory (ROM) protects against unauthorized alterations, strengthening the
security of data held within and preventing unauthorized access.
7.Instant Accessibility: The ability to instantly access data and instructions stored in ROM reduces the need
for time-consuming data loading procedures, allowing for speedier system operation.
8.Simple Design and Manufacturing: The design of ROM chips makes it simple to integrate them into
electrical equipment.
9.Cost-Effectiveness: ROM is often less expensive than other memory types, making it a cost-effective
option for many applications without compromising performance.
10.Compatibility: ROM may easily be integrated into various electronic systems and devices since it is
compatible with various architectures and systems.
Disadvantages of ROM:
1.Immutability: The main disadvantage of ROM is its inability to be modified or updated. Once data is
programmed into ROM, it cannot be changed, limiting its flexibility and adaptability in certain applications.
2.Limited Flexibility: Unlike writable memory, such as RAM or flash memory, ROM does not allow for dynamic
changes or updates to the stored data, restricting its use in situations that require frequent modifications.
3.Manufacturing Challenges: Manufacturing ROM chips requires special processes, making them less flexible and
potentially more expensive to produce than other types of memory.
4.Design Constraints: The fixed nature of ROM imposes design constraints as the data programmed into it cannot
be easily altered or expanded. This can be limiting when system requirements change, or additional functionality
is desired.
5.Time-Consuming Development: Creating and programming ROM requires significant time and effort during the
development phase, which may slow down the overall product development cycle.
6.Higher Costs for Small-Scale Production: The initial costs associated with ROM production, such as mask
creation, can be relatively high, making it less cost-effective for small-scale or customized production runs.
7.Lack of Upgradability: ROM can only be upgraded or replaced with newer versions by physically replacing
the entire chip, which can be costly and impractical in many situations.
8.Storage Inefficiency: ROM is read-only; unused space within the ROM chip cannot be utilized, resulting in
potential storage inefficiencies.
9.Limited Error Correction: Unlike other memory types, ROM does not provide built-in error correction
mechanisms, which can be a disadvantage in applications where data integrity is critical.
10.Reduced Versatility: The fixed nature of ROM makes it less versatile for applications that require dynamic
storage and frequent changes to the stored data.
You know that processor memory, also known as primary memory, is expensive as well as limited. The
faster primary memories are also volatile. If we need to store large amounts of data or programs permanently,
we need a cheaper, permanent memory. Such memory is called secondary memory. Here we will
discuss secondary memory devices that can be used to store large amounts of data, audio, video and
multimedia files.
Characteristics of Secondary Memory
These are some characteristics of secondary memory, which distinguish it from primary memory −
•It is non-volatile, i.e. it retains data when power is switched off
•It has large capacities, to the tune of terabytes
•It is cheaper as compared to primary memory
Depending on whether the secondary memory device is built into the computer or can be detached, there are
two types of secondary memory – fixed and removable.
Let us look at some of the secondary memory devices available.
Hard Disk Drive
A hard disk drive is made up of a series of circular disks called platters arranged one over the other, about half an
inch apart, around a spindle. Platters are made of non-magnetic material like aluminium alloy and coated with
10-20 nm of magnetic material.
Common platter diameters are 3.5 inches for desktop drives and 2.5 inches for
laptop drives, and they rotate at speeds varying from 4200 rpm (rotations per
minute) for personal computers to 15000 rpm for servers. Data is stored by
magnetizing or demagnetizing the magnetic coating. A magnetic read/write
head on an actuator arm is used to read data from and write data to the
platters. A typical modern HDD has a capacity in terabytes (TB).
Function of Hard disk
The hard disk is a secondary storage device designed to store data permanently.
Secondary storage devices have a much larger storage capacity than primary storage devices.
The data stored in a hard disk is retained even when the computer system shuts down.
The data stored on the hard disk can be of many types, such as the operating system, installed software,
documents, and other computer files.
The hard disk was introduced by IBM in 1956. Early hard drives held only a few megabytes, while modern
computers commonly ship with drives of 1 terabyte or more.
Every computer contains at least one hard drive to store data and software.
In the Windows operating system, the primary hard drive is usually known as the C: drive,
and on a Mac it is simply called the hard drive. Desktop computers often use external hard drives
for backup purposes or additional storage.
Advantages of the hard disk
The advantages of a Hard Disk Drive are given as follows:
•One of the significant advantages of a hard disk drive is its low cost.
•Another advantage is that hard disks are readily available in the market.
•A hard disk is faster than optical disks.
•HDDs offer large storage capacity.
Disadvantages of the hard disk
The disadvantages or limitations of Hard Disk Drive are given as follows:
The speed of reading and writing in an HDD is slower than RAM.
HDDs are noisy.
HDDs are energy-inefficient and consume more power.
The form factor of HDDs is heavier and bulkier than SSDs.
CD Drive
CD stands for Compact Disk. CDs are circular disks that use optical rays, usually lasers, to read and write
data. They are very cheap as you can get 700 MB of storage space for less than a dollar. CDs are inserted in
CD drives built into the CPU cabinet. They are portable, as you can open the drive, remove the CD and carry it
with you. There are three types of CDs −
•CD-ROM (Compact Disk – Read Only Memory) − The data on these CDs are recorded by the
manufacturer. Proprietary Software, audio or video are released on CD-ROMs.
•CD-R (Compact Disk – Recordable) − Data can be written by the user once on the CD-R. It cannot be
deleted or modified later.
•CD-RW (Compact Disk – Rewritable) − Data can be written and deleted on these optical disks again and
again.
DVD Drive
DVD stands for Digital Versatile Disc (originally Digital Video Disc). DVDs are optical devices that can store
several times the data held by CDs – about 4.7 GB for a single-layer DVD versus 700 MB for a CD, and up to
around 17 GB for double-sided dual-layer discs. They are usually used to store rich multimedia files that need
high storage capacity. DVDs also come in three varieties – read only, recordable and rewritable.
Pen Drive
Pen drive is a portable memory device that uses solid state memory rather than magnetic
fields or lasers to record data. It uses a technology similar to RAM, except that it is
nonvolatile. It is also called USB drive, key drive or flash memory.
Blu Ray Disk
Blu Ray Disk (BD) is an optical storage medium used to store high definition (HD) video and
other multimedia files. BD uses a shorter-wavelength (blue-violet) laser compared to CD/DVD. This
enables the optical head to focus more tightly on the disk and hence pack in more data. BDs can
store up to 128 GB of data.
Cache Memory
Cache memory is a high-speed memory, which is small in size but faster than the main
memory (RAM). The CPU can access it more quickly than the primary memory. So, it is
used to synchronize with high-speed CPU and to improve its performance.
What is cache memory?
Cache memory is a chip-based computer component that makes retrieving data from the computer's memory more
efficient.
It acts as a temporary storage area that the computer's processor can retrieve data from easily.
This temporary storage area, known as a cache, is more readily available to the processor than the computer's main
memory source, typically some form of dynamic random access memory (DRAM).
Cache memory is sometimes called CPU (central processing unit) memory because it is typically integrated
directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.
Therefore, it is more accessible to the processor, and able to increase efficiency, because it's physically close to the
processor.
In order to be close to the processor, cache memory needs to be much smaller than main memory. Consequently, it
has less storage space. It is also more expensive than main memory, as it is a more complex chip that yields higher
performance.
• Cache memory can only be accessed by CPU.
• It can be a reserved part of the main memory or a storage device outside the CPU.
• It holds the data and programs which are frequently used by the CPU. So, it makes
sure that the data is instantly available for CPU whenever the CPU needs this data.
• In other words, if the CPU finds the required data or instructions in the cache memory,
it doesn't need to access the primary memory (RAM).
• Thus, by acting as a buffer between RAM and CPU, it speeds up the system
performance.
Importance of Caching
Caching plays a crucial role in optimizing system performance and improving overall efficiency in various computing
environments. Its importance lies in several key aspects:
•Faster Data Access: By storing frequently accessed data closer to the processor, caching reduces the time required to fetch
data from slower storage mediums, such as disk drives or network servers. This results in faster data retrieval times and
improved system responsiveness.
•Reduced Latency: Caching helps mitigate the latency associated with accessing data from distant or slower storage devices.
By keeping frequently used data readily available in cache memory, systems can minimize the delay experienced by users
when accessing resources.
•Bandwidth Conservation: Caching conserves network bandwidth by serving frequently requested content locally instead of
fetching it repeatedly from remote servers. This reduces the load on network infrastructure and improves overall network
performance.
•Scalability: Caching enhances system scalability by offloading processing and storage burdens from backend systems. By
caching frequently accessed data, systems can efficiently handle increasing user loads without experiencing performance
degradation.
•Improved User Experience: Faster data access and reduced latency contribute to a smoother and more responsive user
experience. Whether it’s web browsing, accessing files, or running applications, caching helps deliver content and services
quickly, leading to higher user satisfaction.
•Enhanced Reliability: Caching can improve system reliability by reducing the risk of service disruptions or downtime. By
serving cached content during periods of high demand or network instability, systems can maintain service availability and
mitigate the impact of potential failures.
Types of Cache Memory:
L1:
• It is the first level of cache memory, which is called Level 1 cache or L1 cache.
• In this type of cache memory, a small amount of memory is present inside the CPU
itself.
• If a CPU has four cores (quad core cpu), then each core will have its own level 1
cache.
• As this memory is present in the CPU, it can work at the same speed as the CPU.
• The size of this memory ranges from 2KB to 64 KB.
• The L1 cache further has two types of caches:
• Instruction cache, which stores instructions required by the CPU,
• and the data cache that stores the data required by the CPU.
L2:
• This cache is known as Level 2 cache or L2 cache.
• This level 2 cache may be inside the CPU or outside the CPU.
• All the cores of a CPU can have their own separate level 2 cache, or they can share one
L2 cache among themselves.
• In case it is outside the CPU, it is connected with the CPU with a very high-speed bus.
• The memory size of this cache is in the range of 256 KB to 512 KB.
• In terms of speed, they are slower than the L1 cache.
L3:
• It is known as Level 3 cache or L3 cache.
• This cache is not present in all the processors; some high-end processors may have this
type of cache.
• This cache is used to enhance the performance of Level 1 and Level 2 cache.
• It is located outside the CPU and is shared by all the cores of a CPU.
• Its memory size ranges from 1 MB to 8 MB.
• Although it is slower than L1 and L2 cache, it is faster than Random Access Memory
(RAM).
Use Cases and Benefits
•Improved Performance: CPU cache dramatically reduces the time required to access
frequently used data and instructions, resulting in faster program execution and improved
system responsiveness.
•Reduced Memory Latency: By storing frequently accessed data and instructions closer to
the CPU cores, cache minimizes the latency associated with fetching data from main
memory.
•Higher Throughput: Cache enables processors to execute instructions more efficiently by
reducing the frequency of stalls caused by waiting for data from main memory.
•Optimized Multithreading: In multithreaded applications, CPU cache helps improve thread
performance by reducing contention for shared resources and minimizing cache thrashing.
Need for Cache Mapping
Cache mapping is needed to identify where a main memory block is placed in the cache.
Mapping provides the cache line number where the content is present in the case of a cache
hit, or where to bring the content from main memory in the case of a cache miss.
Cache mapping is a technique used to bring main memory content into the
cache, or to identify the cache block in which the required content is present.
What is Cache Mapping?
Cache mapping is the procedure that decides in which cache line a main memory block
will be placed.
In other words, the pattern used to copy the required main memory content to a
specific location in cache memory is called cache mapping.
The process of extracting, from the main memory address, the cache memory location and
other related information in which the required content is present is also called cache
mapping.
Cache mapping is done on collections of bytes called blocks.
In the mapping, a block of main memory is moved to a line of the cache memory.
Primary Terminologies
Some primary terminologies related to cache mapping are listed below:
•Main Memory Blocks: The main memory is divided into equal-sized partitions called the
main memory blocks.
•Cache Line: The cache is divided into equal partitions called the cache lines.
•Block Size: The number of bytes or words in one block is called the block size.
•Tag Bits: Tag bits are the identification bits that are used to identify which block of main
memory is present in the cache line.
•Number of Cache Lines: The number of cache lines equals the cache size divided by the
block (line) size.
•Number of Cache Sets: The number of cache sets equals the number of cache lines divided
by the associativity of the cache.
Some important points related to cache mappings are listed below.
•The number of bytes in main memory block is equal to the number of bytes in cache line
i.e., the main memory block size is equal to the cache line size.
•Number of blocks in cache = Cache Size / line or Block Size
•Number of sets in cache = Number of blocks in cache / Associativity
•The main memory address is divided into two parts i.e., main memory block number and
byte number.
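The formulas above can be checked with a short sketch; the cache size, block size, and associativity below are hypothetical values chosen only for illustration:

```python
# Hypothetical cache parameters (illustrative values only)
cache_size = 32 * 1024      # 32 KB cache
block_size = 64             # 64-byte blocks (line size)
associativity = 4           # 4-way set-associative

# Number of blocks (lines) in cache = Cache Size / Block Size
num_lines = cache_size // block_size       # -> 512 lines

# Number of sets in cache = Number of blocks in cache / Associativity
num_sets = num_lines // associativity      # -> 128 sets

print(num_lines, num_sets)  # 512 128
```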
Direct Mapping
In direct mapping physical address is divided into three parts
i.e., Tag bits, Cache Line Number and Byte offset.
The bits in the cache line number represents the cache line in which the content is present
whereas the bits in tag are the identification bits that represents which block of main
memory is present in cache.
The bits in the byte offset decides in which byte of the identified block the required content is
present.
Cache Line Number = Main Memory block Number %
Number of Blocks in Cache
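The direct-mapping rule above can be sketched in a few lines of Python; the cache size and block size here are made-up values, not taken from any particular processor:

```python
num_cache_lines = 8    # hypothetical cache with 8 lines
block_size = 16        # 16 bytes per block

def direct_map(address):
    """Split a physical address into (tag, line, byte offset)."""
    block_number = address // block_size
    byte_offset = address % block_size
    line = block_number % num_cache_lines   # Cache Line Number = block % lines
    tag = block_number // num_cache_lines   # remaining high-order bits
    return tag, line, byte_offset

# Blocks 3 and 11 collide: both map to line 3, since 11 % 8 == 3.
print(direct_map(3 * 16))    # (0, 3, 0)
print(direct_map(11 * 16))   # (1, 3, 0) -- same line, different tag
```

The collision in the example is exactly why direct mapping can suffer a reduced hit ratio when two frequently used blocks share a line.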
Fully Associative Mapping
In fully associative mapping address is divided into two parts
i.e., Tag bits and Byte offset.
The tag bits identify which memory block is present, and the bits in the byte offset field
decide in which byte of the block the required content is present.
Tag Byte Offset
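Because a block may occupy any line, a fully associative lookup must compare the tag against every line; a minimal sketch, with hypothetical cache contents:

```python
block_size = 16

# Fully associative cache: a block may sit in any line, so the control
# logic compares the tag against ALL lines. Tags here are block numbers.
cache_tags = [5, 42, 7, 19]   # hypothetical tags currently in the cache

def lookup(address):
    tag = address // block_size       # tag = main memory block number
    offset = address % block_size     # byte offset within the block
    hit = tag in cache_tags           # compare against every stored tag
    return hit, offset

print(lookup(42 * 16 + 3))   # (True, 3)  -> cache hit
print(lookup(9 * 16))        # (False, 0) -> cache miss
```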
Set Associative Mapping
In set associative mapping the cache blocks are divided in sets.
It divides address into three parts i.e., Tag bits, set number and byte offset.
The bits in the set number decide in which set of the cache the required block is present,
and the tag bits identify which block of main memory is present.
The bits in the byte offset field give the byte of the block in which the content is present.
Tag Set Number Byte Offset
Cache Set Number = Main Memory block number % Number
of sets in cache
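The set-number formula can be sketched the same way; the geometry below (8 lines, 2-way, so 4 sets) is invented for illustration:

```python
num_sets = 4       # hypothetical: 8 lines, 2-way set-associative -> 4 sets
block_size = 16

def set_assoc_map(address):
    """Split an address into (tag, set number, byte offset)."""
    block_number = address // block_size
    offset = address % block_size
    set_number = block_number % num_sets   # Cache Set Number = block % sets
    tag = block_number // num_sets
    return tag, set_number, offset

# Blocks 6 and 10 fall into the same set (both % 4 == 2), but they can
# coexist because each set holds more than one line.
print(set_assoc_map(6 * 16))    # (1, 2, 0)
print(set_assoc_map(10 * 16))   # (2, 2, 0)
```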
How does cache memory work with CPU?
When the CPU needs data, it first looks inside the L1 cache. If it does not find it in L1, it looks
inside the L2 cache. If it again does not find the data in the L2 cache, it looks in the L3 cache. If the data is
found in cache memory, it is known as a cache hit. On the contrary, if the data is not found inside the cache, it is
called a cache miss.
If data is not available in any of the cache memories, it looks inside the Random Access Memory (RAM). If
RAM also does not have the data, then it will get that data from the Hard Disk Drive.
So, when a computer is started for the first time, or an application is opened for the first time, the data is not
available in cache memory or in RAM. In this case, the CPU gets the data directly from the hard disk drive.
Thereafter, when you restart your computer or reopen that application, the CPU can get the data from cache
memory or RAM.
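The L1 → L2 → L3 → RAM → disk search order described above can be sketched as a fastest-first lookup; all of the stored contents here are hypothetical:

```python
# Hypothetical memory hierarchy, searched fastest-first as described above.
l1, l2, l3 = {"a": 1}, {"b": 2}, {"c": 3}
ram = {"d": 4}
disk = {"e": 5}

def fetch(key):
    """Return (level found at, value), searching L1 first, disk last."""
    for name, level in (("L1 hit", l1), ("L2 hit", l2), ("L3 hit", l3),
                        ("RAM", ram), ("disk", disk)):
        if key in level:
            return name, level[key]
    raise KeyError(key)

print(fetch("a"))   # ('L1 hit', 1)  -> cache hit at the first level
print(fetch("d"))   # ('RAM', 4)     -> cache miss, found in RAM
```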
Direct Mapping vs Fully Associative Mapping vs Set-Associative Mapping
Comparisons required
• Direct – needs only one comparison, because a direct formula gives the exact cache line to check.
• Associative – needs comparison with all tag bits: the cache control logic must examine every block's tag at the same time to determine whether a block is in the cache or not.
• Set-associative – needs comparisons equal to the number of blocks per set, as the set can contain more than one block.
Address division
• Direct – the main memory address is divided into 3 fields: TAG, BLOCK and WORD. BLOCK and WORD together make the index; the least significant WORD bits identify a unique word within a block of main memory, the BLOCK bits specify one of the cache lines, and the TAG bits are the most significant bits.
• Associative – the address is divided into 2 fields: TAG and WORD.
• Set-associative – the address is divided into 3 fields: TAG, SET and WORD.
Placement of a main memory block
• Direct – there is exactly one possible location in the cache for each block from main memory, because the mapping formula is fixed.
• Associative – the block can be placed in any cache line.
• Set-associative – the block maps to a particular set, but can occupy any line within that set.
Effect on hit ratio
• Direct – if the processor frequently accesses the same cache line from 2 different main memory pages, the cache hit ratio decreases.
• Associative – frequent access to 2 different main memory pages has no effect on the hit ratio.
• Set-associative – the hit ratio reduces only when more frequently accessed blocks map to one set than the set can hold.
Search time
• Direct – least, because there is only one possible location in the cache to check.
• Associative – most, as the cache control logic examines every block's tag.
• Set-associative – increases with the number of blocks per set.
Advantages
• Direct – simplest type of mapping; fast and easy to implement.
• Associative – fast, as only tag field matching is required while searching for a word.
• Set-associative – gives better performance than direct mapping and is comparatively less expensive than associative mapping.
Disadvantages
• Direct – low performance, because contending blocks force frequent replacement of the data-tag pair in a line.
• Associative – expensive, because the full address (tag) must be stored along with the data.
• Set-associative – most expensive, as cost increases with set size.
Index size
• Direct – the index is given by the number of lines in the cache.
• Associative – the index is zero.
• Set-associative – the index is given by the number of sets in the cache.
Tag bits
• Direct – least number of tag bits.
• Associative – greatest number of tag bits.
• Set-associative – more tag bits than direct mapping, fewer than associative mapping.
What is virtual memory?
Virtual memory is a memory management technique where secondary memory can be
used as if it were a part of the main memory.
Virtual memory is a common technique used in a computer's operating system (OS).
Virtual memory uses both hardware and software to enable a computer to compensate
for physical memory shortages, temporarily transferring data from random access
memory (RAM) to disk storage.
Mapping chunks of memory to disk files enables a computer to treat secondary memory as
though it were main memory.
Today, most personal computers (PCs) come with at least 8 GB (gigabytes) of RAM.
But, sometimes, this is not enough to run several programs at one time.
This is where virtual memory comes in.
Virtual memory frees up RAM by swapping data that has not been used recently over to a
storage device, such as a hard drive or solid-state drive (SSD).
Virtual memory is important for improving system performance, multitasking and using
large programs.
However, users should not overly rely on virtual memory, since it is considerably slower than
RAM.
If the OS has to swap data between virtual memory and RAM too often, the computer will
begin to slow down -- this is called thrashing.
How virtual memory works?
Virtual memory uses both hardware and software to operate.
When an application is in use, data from that program is stored in a physical address using RAM.
A memory management unit (MMU) maps the address to RAM and automatically translates
addresses.
The MMU can, for example, map a logical address space to a corresponding physical address.
While copying virtual memory into physical memory, the OS divides memory with a fixed
number of addresses into either pagefiles or swap files.
Each page is stored on a disk, and when the page is needed, the OS copies it from the disk to main
memory and translates the virtual addresses into real addresses.
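The translation step described above can be sketched as a toy page-table lookup; the page size matches the 4 KB pages discussed later, but the table contents are invented:

```python
PAGE_SIZE = 4096   # 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number.
# None means the page currently lives on disk (accessing it is a page fault).
page_table = {0: 7, 1: 3, 2: None}

def translate(virtual_address):
    """Translate a virtual address into a physical address via the page table."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        return "page fault: OS must copy the page from disk into main memory"
    return frame * PAGE_SIZE + offset   # physical address

print(translate(1 * PAGE_SIZE + 100))  # page 1 -> frame 3: 3*4096 + 100 = 12388
print(translate(2 * PAGE_SIZE))        # page 2 is on disk -> page fault
```

In a real system this translation is performed in hardware by the MMU, with the OS handling only the page faults.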
What are the benefits of using virtual memory?
The advantages to using virtual memory include:
•It can handle more addresses than are physically available in main memory.
•It enables more applications to be used at once.
•It frees applications from managing shared memory and saves users from having to add memory modules when
RAM space runs out.
•It has increased speed when only a segment of a program is needed for execution.
•It has increased security because of memory isolation.
•It enables multiple larger applications to run simultaneously.
•Allocating memory is relatively inexpensive.
•It avoids external fragmentation.
•CPU use is effective for managing logical partition workloads.
•Data can be moved automatically.
•Pages in the original process can be shared during a fork system call operation that creates a copy of itself.
Two types of virtual memory
Memory management systems use two types of virtual memory methods to improve application performance.
Paging
• In a system that uses paging, RAM is divided into a number of blocks called pages, usually
4 KB in size.
• Processes are then allocated just enough pages to meet their memory requirements.
• This means there will always be a small amount of memory wasted, except in the unusual
case where a process requires exactly a whole number of pages.
• During the normal course of operations, pages are swapped between RAM and a page file,
which represents the virtual memory.
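The "small amount of memory wasted" in the bullets above is internal fragmentation, and it is easy to quantify; the process size below is a hypothetical example:

```python
import math

PAGE_SIZE = 4096   # 4 KB pages, as in the paging description above

def pages_needed(process_bytes):
    """A process is allocated whole pages, so round up."""
    return math.ceil(process_bytes / PAGE_SIZE)

def internal_fragmentation(process_bytes):
    # Unused space in the last page; zero only when the process size
    # is an exact multiple of the page size.
    return pages_needed(process_bytes) * PAGE_SIZE - process_bytes

print(pages_needed(10_000))             # 3 pages
print(internal_fragmentation(10_000))   # 2288 bytes wasted in the last page
```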
Segmentation
• Segmentation is an alternative approach to memory management:
• Instead of pages of a fixed size, the memory management system allocates segments of differing length to
processes to exactly meet their requirements.
• Unlike in a paged system, no memory is wasted in a segment.
• Segmentation also allows applications to be split up into logically independent address spaces, which can
make them easier and more secure to share.
• One downside to segmentation is that because each segment is a different length, it can lead to memory
fragmentation.
• As segments are continually allocated and de-allocated, small chunks of memory are scattered within the
memory space. They’re too small to be useful.
• As these small chunks build up, fewer and fewer segments of useful size can be allocated.
• It’s difficult for the OS to keep track of all these segments, and each process will need to use multiple
segments.
• This is inefficient and can reduce overall application performance.
Paging vs Segmentation
1. Paging uses non-contiguous memory allocation; segmentation also uses non-contiguous memory allocation.
2. Paging divides a program into fixed-size pages; segmentation divides a program into variable-size segments.
3. The OS is responsible for paging; the compiler is responsible for segmentation.
4. Paging is faster than segmentation.
5. Paging is closer to the operating system; segmentation is closer to the user.
6. Paging suffers from internal fragmentation; segmentation suffers from external fragmentation.
7. There is no external fragmentation in paging; there is no internal fragmentation in segmentation.
8. In paging, the logical address is divided into page number and page offset; in segmentation, it is divided into segment number and segment offset.
9. A page table is used to maintain page information; a segment table maintains segment information.
How to manage the virtual memory
Managing virtual memory within an OS can be straightforward, as there are default settings that determine the
amount of hard drive space to allocate for virtual memory.
Those settings will work for most applications and processes, but there may be times when it is necessary to
manually reset the amount of hard drive space allocated to virtual memory -- for example, with applications that
depend on fast response times or when the computer has multiple hard disk drives (HDDs).
When manually resetting virtual memory, the minimum and maximum amount of hard drive space to be used for
virtual memory must be specified.
Allocating too little HDD space for virtual memory can result in a computer running out of RAM.
If a system continually needs more virtual memory space, it may be wise to consider adding RAM.
Common OSes generally recommend that users not increase virtual memory beyond 1½ times the amount of
RAM.
Managing virtual memory differs by OS.
For this reason, IT professionals should understand the basics when it comes to managing physical memory,
virtual memory and virtual addresses.
Flash memory cells in SSDs also have a limited lifespan. Each cell supports a limited number of writes, so using
an SSD for virtual memory often reduces the lifespan of the drive.
What are the limitations of using virtual memory?
Although the use of virtual memory has its benefits, it also comes with some tradeoffs worth considering, such
as:
•Applications run slower if they are running from virtual memory.
•Data must be mapped between virtual and physical memory, which requires extra hardware support for address
translations, slowing down a computer further.
•The size of virtual storage is limited by the amount of secondary storage, as well as the addressing scheme with
the computer system.
•Thrashing can occur if there is not enough RAM, which will make the computer perform slower.
•It may take time to switch between applications using virtual memory.
•It lessens the amount of available hard drive space.
Virtual Memory vs Physical Memory (RAM)
•Definition: Virtual memory is an abstraction that extends the available memory by using disk storage; physical memory is the actual hardware (RAM) that stores the data and instructions currently being used by the CPU.
•Location: Virtual memory resides on the hard drive or SSD; physical memory sits on the computer's motherboard.
•Speed: Virtual memory is slower, due to disk I/O operations; physical memory is faster, as it is accessed directly by the CPU.
•Capacity: Virtual memory is larger, limited by disk space; physical memory is smaller, limited by the amount of RAM installed.
•Cost: Virtual memory is cheaper (the cost of additional disk storage); physical memory is more expensive (the cost of RAM modules).
•Data Access: Virtual memory is accessed indirectly, via paging and swapping; physical memory is accessed directly by the CPU.
•Volatility: Virtual memory is non-volatile (data persists on disk); physical memory is volatile (data is lost when power is off).
Applications of Virtual memory
Virtual memory has the following important characteristics that increase the capabilities of the
computer system.
•Increased Effective Memory: Virtual memory enables a computer to appear to have more
memory than is physically installed by using disk space. This allows larger applications and
numerous programs to run at one time without needing an equivalent amount of DRAM.
•Memory Isolation: Virtual memory allocates a unique address space to each process. This
separation increases safety and reliability, because one process cannot read or modify
another process's memory space, whether by mistake or by a deliberate attack.
•Efficient Memory Management: Virtual memory improves utilization of physical memory
through techniques such as paging and segmentation. Memory pages that are not frequently
used can be moved to disk, freeing RAM for active processes when required, which aids both
efficient use of memory and overall system performance.
•Simplified Program Development: Programmers do not have to consider how much physical
memory is available in a system when virtual memory is present. They can program as if
there were one large block of memory, which makes development of larger, more complex
applications easier.
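The interplay of these characteristics can be sketched with a toy demand-paging model (a hypothetical illustration for teaching, not how a real operating system is implemented): a small number of RAM frames backs a larger virtual address space, and the least recently used page is evicted to "disk" whenever a page fault occurs.

```python
from collections import OrderedDict

class TinyVM:
    """Toy demand-paging model: a few RAM frames back many virtual pages (LRU eviction)."""
    def __init__(self, frames=2):
        self.frames = frames
        self.ram = OrderedDict()   # resident pages, oldest first
        self.faults = 0

    def touch(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)        # hit: mark page recently used
        else:
            self.faults += 1                  # page fault: "load from disk"
            if len(self.ram) >= self.frames:
                self.ram.popitem(last=False)  # evict least recently used page
            self.ram[page] = f"data-{page}"

vm = TinyVM(frames=2)
for p in [0, 1, 0, 2, 0, 3]:                  # reference string over 4 virtual pages
    vm.touch(p)
print(vm.faults)                              # 4: pages 0,1,2,3 each faulted once
```

With only 2 frames, the model still services references to 4 distinct pages, which is the sense in which virtual memory makes the effective memory larger than the physical memory.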
Requirements of memory management
A memory management system must satisfy a few basic requirements for processing and storage, which are discussed
below.
Sharing of memory
In a multi-process environment, several processes may access the same part of main memory. Protection
measures are required during this sharing, and memory management takes care of them. Sharing is advantageous
because each process does not need a separate copy of the data to be created; all of them access the shared copy
already available in memory. Shared memory is considered an efficient approach for inter-process
communication.
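As a minimal sketch of the shared-copy idea, Python's standard `multiprocessing.shared_memory` module lets two handles attach to one named region of memory; here both handles live in one script for brevity, standing in for a writer process and a reader process:

```python
from multiprocessing import shared_memory

# Create a 16-byte shared region; a second handle attaches to it by name.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    # "Writer" fills the shared buffer
    shm.buf[:5] = b"hello"
    # "Reader" attaches to the same region and sees the data without a copy
    reader = shared_memory.SharedMemory(name=shm.name)
    data = bytes(reader.buf[:5])
    print(data)  # b'hello'
    reader.close()
finally:
    shm.close()
    shm.unlink()  # release the region
```

The protection concern mentioned above is real here too: any process that knows the region's name may attach, so the operating system's access controls must guard it.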
Mapping of Address / Relocation
During process execution, the user may not know which other programs reside in main memory. A process may be
swapped out to disk and later returned to main memory at a different location. Relocation is needed after
swapping because the memory location where the process previously resided may now be used by another process.
Once the process is loaded into memory, its logical addresses must be translated into physical addresses; this is
done by the hardware together with the operating system. A physical address is obtained by combining the
logical address with the contents of the relocation register.
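The relocation step above can be sketched in a few lines (the register value is an assumption chosen for illustration): the hardware adds the relocation (base) register to every logical address the process generates.

```python
RELOCATION_REGISTER = 0x4000  # assumed load address after the process was swapped back in

def to_physical(logical_addr: int) -> int:
    # Hardware adds the relocation (base) register to every logical address
    return RELOCATION_REGISTER + logical_addr

print(hex(to_physical(0x01F4)))  # logical 0x01F4 maps to physical 0x41F4
```

If the process is swapped out and reloaded elsewhere, only RELOCATION_REGISTER changes; the program's logical addresses stay the same.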
Memory protection
Protection mechanisms must ensure that a process in execution is not interfered with by other, unauthorized
processes performing write operations on the same region of shared memory. This checking must be done by the
processor rather than by the operating system alone: the OS cannot practically validate every memory reference a
running process makes, so the hardware checks each reference against the process's valid memory range at run
time. In simple terms, protection is the method of securing memory from unauthorized processes.
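A common hardware scheme for this check is a base/limit register pair, sketched below with assumed register values: every logical address is compared against the limit before relocation, and an out-of-range reference raises a protection fault.

```python
BASE, LIMIT = 0x4000, 0x1000  # assumed base and limit register values

def checked_access(logical_addr: int) -> int:
    # The processor compares every reference against the limit register;
    # references outside [0, LIMIT) trap to the OS as a protection fault.
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError("protection fault: address outside process space")
    return BASE + logical_addr

print(hex(checked_access(0x0FFF)))  # last valid address in this process
```

Any attempt to read or write past the limit (for example `checked_access(0x1000)`) faults before it can touch another process's memory.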
Logical addressing
Programs stored in memory are organized into modules; some modules are modifiable, with read and execute
permission, while others do not have permission to be modified. Main memory is represented as a linear,
one-dimensional sequence of bytes or words. As said above, logical addresses are generated by the CPU during the
run time of a process. User programs are divided into modules that are compiled independently, and references
between them are resolved at run time. Protection can vary at each level of the logical address space where
different modules reside.
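In a paged system, the linear logical address described above decomposes into a page number and an offset within the page; a minimal sketch (assuming 4 KiB pages, a common but not universal size):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def split_address(logical: int) -> tuple[int, int]:
    # A linear logical address decomposes into (page number, offset within page)
    return logical // PAGE_SIZE, logical % PAGE_SIZE

print(split_address(8195))  # (2, 3): byte 3 of page 2
```

The page number is what gets translated through the page table; the offset is carried over unchanged into the physical frame.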
Physical space
The memory of a system has two divisions: main memory and secondary memory. Main memory (volatile) stores and
handles the programs currently in execution and offers better performance, whereas secondary memory (nonvolatile)
stores data for long periods but provides lower performance than main memory. The flow of information and the
swapping process can be difficult for the user to follow.
When main memory is not sufficient for the data to be stored, the overlaying approach can be used. This method
allows different modules of a user program to be allocated to the same memory space at different times. The major
drawback is that in a multiprogramming environment, the user does not know the details of where a program is
located in memory during execution.
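The overlay idea can be sketched as follows (a toy model with made-up module contents): two phases of a program are loaded, one at a time, into the same fixed region of memory, so their combined size exceeds the region they share.

```python
REGION_SIZE = 8
region = bytearray(REGION_SIZE)  # the single overlay area in "main memory"

def load_overlay(module: bytes) -> None:
    # Loading a new module overwrites whichever phase occupied the region before
    region[:REGION_SIZE] = module.ljust(REGION_SIZE, b"\0")

load_overlay(b"pass1")           # phase 1 runs from the region...
load_overlay(b"pass2")           # ...then phase 2 reuses the same addresses
print(bytes(region[:5]))         # b'pass2'
```

Historically this technique let large programs run on small memories before virtual memory automated the same trick transparently.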