Memory Organization
Chapter Seven
Computer Organization & Architecture
Memory Organization
• Memory Hierarchy
• Main Memory
• Cache memory
• Mapping Functions
• Direct Mapping
• Associative Mapping
• Set Associative Mapping
Memory Hierarchy
The memory unit is an essential component in any digital
computer since it is needed for storing programs and
data
Not all accumulated information is needed by the CPU
at the same time
Therefore, it is more economical to use low-cost storage
devices to serve as a backup for storing the information
that is not currently used by the CPU
Memory Hierarchy
The memory unit that directly communicates with the CPU
is called the main memory
Devices that provide backup storage are called auxiliary
memory
The memory hierarchy system consists of all storage
devices employed in a computer system, from the slow but
high-capacity auxiliary memory, to a relatively faster
main memory, to an even smaller and faster cache
memory
Memory Hierarchy
The main memory occupies a central position by being able to communicate
directly with the CPU and with auxiliary memory devices through an I/O
processor
A special very-high-speed memory called cache is used to increase the speed of
processing by making current programs and data available to the CPU at a rapid
rate
Memory Hierarchy
CPU logic is usually faster than main memory access time, with
the result that processing speed is limited primarily by the speed
of main memory
The cache is used for storing segments of programs currently
being executed in the CPU and temporary data frequently needed
in the present calculations
The typical access time ratio between cache and main memory is
about 1 to 7
Auxiliary memory access time is usually about 1000 times that of
main memory
Memory Hierarchy
[Diagram: magnetic tapes and magnetic disks (auxiliary memory) connect to main memory through an I/O processor; the CPU communicates with both main memory and the cache.]
Latency and Bandwidth
The speed and efficiency of data transfers among memory,
processor, and disk have a large impact on the performance of a
computer system.
Memory latency – the amount of time it takes to transfer a
word of data to or from the memory.
Memory bandwidth – the number of bits or bytes that can be
transferred in one second. It is used to measure how much time
is needed to transfer an entire block of data.
Bandwidth is not determined solely by memory. It is the product
of the rate at which data are transferred (and accessed) and the
width of the data bus.
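As a rough illustration of this relation, the sketch below multiplies an assumed transfer rate by an assumed bus width and then estimates the time to move one block. The 200 MHz rate, 64-bit bus, and 4096-byte block size are illustrative assumptions, not figures from these slides.

# Minimal sketch: bandwidth as the product of transfer rate and data bus width.
# All numbers below are illustrative assumptions.

transfer_rate_hz = 200_000_000   # 200 million transfers per second (assumed)
bus_width_bits = 64              # width of the data bus in bits (assumed)

bandwidth_bits_per_s = transfer_rate_hz * bus_width_bits
bandwidth_bytes_per_s = bandwidth_bits_per_s // 8

block_size_bytes = 4096          # size of one data block (assumed)
transfer_time_s = block_size_bytes / bandwidth_bytes_per_s

print(f"bandwidth: {bandwidth_bytes_per_s / 1e6:.0f} MB/s")
print(f"time for a {block_size_bytes}-byte block: {transfer_time_s * 1e6:.2f} us")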
Main Memory
Most of the main memory in a general-purpose
computer is made up of RAM integrated circuit chips,
but a portion of the memory may be constructed with
ROM chips
RAM – Random Access Memory
Integrated RAM chips are available in two possible operating modes,
static and dynamic
ROM – Read Only Memory
Random-Access Memory (RAM)
Static RAM (SRAM)
Each cell stores a bit with a six-transistor circuit.
Retains value indefinitely, as long as it is kept powered.
Relatively insensitive to disturbances such as electrical noise.
Faster and more expensive than DRAM.
Dynamic RAM (DRAM)
Each cell stores a bit with a capacitor and a transistor.
Value must be refreshed every 10-100 ms.
Sensitive to disturbances.
Slower and cheaper than SRAM.
SRAM vs DRAM Summary
       Trans.   Access
       per bit  time   Persist?  Sensitive?  Cost   Applications
SRAM   6        1X     Yes       No          100X   Cache memories
DRAM   1        10X    No        Yes         1X     Main memories, frame buffers
ROM
ROM is used for storing programs that are
PERMANENTLY resident in the computer and for tables of
constants that do not change in value once the
production of the computer is completed
The ROM portion of main memory is needed for storing
an initial program called the bootstrap loader, which is used
to start the computer software operating when power is turned
on
Main Memory
A RAM chip is better suited for communication with the
CPU if it has one or more control inputs that select the
chip when needed
The block diagram of a RAM chip is shown on the next slide;
the capacity of the memory is 128 words of 8 bits (one
byte) per word
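A minimal sketch of such a chip is given below, assuming the interface described here: a chip-select control input, a read/write control, a 7-bit address for the 128 words, and 8-bit data. The class and method names are made up for illustration.

# Minimal sketch of a 128 x 8 RAM chip with a chip-select (CS) control input.
# Interface details beyond the slide's description are assumptions.

class RAMChip128x8:
    def __init__(self):
        self.cells = [0] * 128          # 128 words of 8 bits each

    def access(self, cs, rd_wr, address, data_in=0):
        """cs: chip select; rd_wr: 'read' or 'write'; address: 7-bit word address."""
        if not cs:
            return None                 # chip not selected: no response on the data bus
        address &= 0x7F                 # keep only the 7 address lines
        if rd_wr == "write":
            self.cells[address] = data_in & 0xFF   # store one byte
            return None
        return self.cells[address]      # read one byte

chip = RAMChip128x8()
chip.access(cs=True, rd_wr="write", address=0x15, data_in=0xA6)
print(hex(chip.access(cs=True, rd_wr="read", address=0x15)))   # 0xa6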
RAM
[Block diagram of a 128 x 8 RAM chip with chip-select control inputs, a 7-bit address, and 8-bit data lines.]
Cache memory
When the CPU needs to access memory, the cache is examined first
If the word is found in the cache, it is read from the fast
memory
If the word addressed by the CPU is not found in the cache,
the main memory is accessed to read the word
Cache memory
If the active portions of the program and data are placed
in a fast small memory, the average memory access time
can be reduced,
thus reducing the total execution time of the program
Such a fast small memory is referred to as cache memory
The cache is the fastest component in the memory
hierarchy and approaches the speed of the CPU components
Cache memory
The performance of cache memory is frequently measured in
terms of a quantity called hit ratio
When the CPU refers to memory and finds the word in
cache, it is said to produce a hit
Otherwise, it is a miss
Hit ratio = hits / (hits + misses)
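A small sketch of these quantities is shown below; the hit and miss counts and the 10 ns cache access time are assumed for illustration, with main memory taken as roughly seven times slower, per the access time ratio quoted earlier.

# Minimal sketch: hit ratio and average memory access time.
# Counts and timings are illustrative assumptions.

hits, misses = 920, 80                  # assumed reference counts
hit_ratio = hits / (hits + misses)      # hit ratio = hits / (hits + misses)

cache_time_ns = 10                      # assumed cache access time
main_time_ns = 70                       # ~7x slower main memory, per the slide's ratio

avg_access_ns = hit_ratio * cache_time_ns + (1 - hit_ratio) * main_time_ns
print(f"hit ratio = {hit_ratio:.2f}, average access time = {avg_access_ns:.1f} ns")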
Cache memory
The basic characteristic of cache memory is its fast access time;
therefore, very little or no time must be wasted when searching
for words in the cache
The transformation of data from main memory to cache memory
is referred to as a mapping process. There are three types of
mapping:
Associative mapping
Direct mapping
Set-associative mapping
Cache Memory
[Diagram: a small cache with addresses 00000–FFFFF alongside a much larger main memory with addresses 00000000–3FFFFFFF; address mapping determines where each main-memory word is placed in the cache.]
Memory Address Map
Memory Address Map is a pictorial representation of assigned
address space for each chip in the system
To demonstrate an example, assume that a computer system needs
512 bytes of RAM and 512 bytes of ROM
Each RAM chip has 128 bytes and needs seven address lines, while the
ROM has 512 bytes and needs nine address lines
Memory Address Map
[Table: address bus lines 1–10 and the hexadecimal address range assigned to each of the four RAM chips and the ROM.]
Memory Address Map
The hexadecimal address column assigns a range of hexadecimal
addresses to each chip
Lines 8 and 9 represent four distinct binary combinations
that specify which of the four RAM chips is selected
When line 10 is 0, the CPU selects a RAM chip; when it is 1,
it selects the ROM
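The sketch below decodes an address along these lines. The exact bit layout (lines 1–7 addressing a byte within a chip, lines 8–9 selecting one of four 128-byte RAM chips, line 10 choosing RAM or ROM) is an assumption kept consistent with the 512-byte RAM / 512-byte ROM system described above.

# Minimal sketch of the chip-select decoding implied by the memory address map.
# The bit layout follows the slide's line numbering; it is an assumption.

def decode(address):
    line10 = (address >> 9) & 1          # address line 10: 0 = RAM, 1 = ROM
    if line10 == 0:
        chip = (address >> 7) & 0b11     # lines 8-9: which of the 4 RAM chips
        offset = address & 0x7F          # lines 1-7: byte within the 128-byte chip
        return f"RAM chip {chip + 1}, offset 0x{offset:02X}"
    offset = address & 0x1FF             # lines 1-9: byte within the 512-byte ROM
    return f"ROM, offset 0x{offset:03X}"

print(decode(0x085))   # RAM chip 2, offset 0x05
print(decode(0x2F0))   # ROM, offset 0x0F0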
Associative mapping
The fastest and most flexible cache organization uses an associative
memory
The associative memory stores both the address and data of the
memory word
This permits any location in cache to store any word from main
memory
The address value of 15 bits is shown as a five-digit octal number
and its corresponding 12-bit word is shown as a four-digit octal
number
Associative mapping
Associative mapping
A CPU address of 15 bits is placed in the argument
register and the associative memory is searched for a
matching address
If the address is found, the corresponding 12-bit data word is
read and sent to the CPU
If not, the main memory is accessed for the word
If the cache is full, an address-data pair must be displaced
to make room for a pair that is needed and not presently
in the cache
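A minimal sketch of this behaviour is given below. The first-in first-out replacement choice and the octal address-data pairs are illustrative assumptions (the figure with the actual values is not reproduced here).

# Minimal sketch of a fully associative cache: (address, data) pairs are stored,
# and any main-memory word can occupy any cache entry.

from collections import OrderedDict

class AssociativeCache:
    def __init__(self, size, main_memory):
        self.size = size
        self.main_memory = main_memory       # dict: 15-bit address -> 12-bit word
        self.entries = OrderedDict()         # address (key) -> data

    def read(self, address):
        if address in self.entries:          # compare the address against all stored tags
            return self.entries[address]     # hit: word comes from the cache
        data = self.main_memory[address]     # miss: fetch the word from main memory
        if len(self.entries) >= self.size:
            self.entries.popitem(last=False) # cache full: displace the oldest pair (FIFO, assumed)
        self.entries[address] = data         # store the new address-data pair
        return data

memory = {0o01000: 0o3450, 0o02777: 0o6710}  # illustrative 15-bit addresses, 12-bit words (octal)
cache = AssociativeCache(size=2, main_memory=memory)
print(oct(cache.read(0o01000)))              # miss, then loaded into the cache
print(oct(cache.read(0o01000)))              # hit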
Associative Mapping
[Figure 5.16. Associative-mapped cache. Main memory consists of 4096 = 2^12 blocks of 16 = 2^4 words each; the cache holds 128 blocks, and any main-memory block can be loaded into any cache block position. The main memory address is divided into a 12-bit tag, identifying which of the 4096 blocks is resident in the cache, and a 4-bit word field selecting one of the 16 words in the block.]
Associative Memory
[Diagram: the associative cache stores address-data pairs, with the address acting as the key. Cache locations span 00000–FFFFF, while main memory spans 00000000–3FFFFFFF; the annotated main-memory regions include bootloader or low-level system memory, application memory or reserved regions, and extended regions or external memory.]
Direct Mapping
Associative memory is expensive compared to RAM
In the general case, there are 2^k words in cache memory and
2^n words in main memory (in this example, k = 9 and n = 15)
The n-bit memory address is divided into two fields: k bits
for the index and n - k bits for the tag (here, 15 - 9 = 6 tag bits)
Direct Mapping
[Figure 5.15. Direct-mapped cache. Main memory consists of 4096 blocks of 16 = 2^4 words each, and block j of main memory maps onto block j modulo 128 of the cache. The main memory address is divided into a 5-bit tag, compared with the tag stored at that cache location to identify which of the 32 blocks (4096/128) that map there is currently resident, a 7-bit block field pointing to a particular cache block (128 = 2^7), and a 4-bit word field selecting one of the 16 words in the block.]
Direct Mapping
Tag Block Word
5 7 4 Main memory address
11101, 1111111, 1100
Tag: 11101
Block: 1111111 = 127, in the 127th block of the cache
Word: 1100 = 12, the 12th word of the 127th block in the cache
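The field extraction for this example can be sketched as below, following the 5 + 7 + 4 bit split of the figure; the function name is made up for illustration.

# Minimal sketch: splitting a main-memory address into the tag, block, and
# word fields of the direct-mapped cache above (5 + 7 + 4 bits).

def split_direct(address):
    word = address & 0xF                 # 4 bits: one of 16 words in the block
    block = (address >> 4) & 0x7F        # 7 bits: one of 128 cache blocks
    tag = (address >> 11) & 0x1F         # 5 bits: which of the 32 mapped blocks is resident
    return tag, block, word

address = 0b11101_1111111_1100           # the example address from the slide
tag, block, word = split_direct(address)
print(f"tag={tag:05b}, block={block}, word={word}")
# tag=11101, block=127, word=12 -- matching the worked example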
Set-Associative Mapping
The disadvantage of direct mapping is that two words
with the same index in their address but with different
tag values cannot reside in cache memory at the same
time
Set-Associative Mapping is an improvement over
direct mapping in that each word of cache can store two
or more words of main memory under the same index address
Set-Associative Mapping
Set-Associative Mapping
In the slide, each index address refers to two data words and
their associated tags
Each tag requires six bits and each data word has 12 bits, so
the word length is 2 x (6 + 12) = 36 bits
Set-Associative Mapping
[Figure 5.17. Set-associative-mapped cache with two blocks per set. Main memory consists of 4096 blocks of 16 = 2^4 words each. The 128 cache blocks are grouped into 64 sets of two blocks, and each main-memory block maps onto one particular set. The main memory address is divided into a 6-bit tag, used to check whether the desired block is present in the set (4096/64 = 64 = 2^6), a 6-bit set field pointing to a particular set (128/2 = 64 = 2^6), and a 4-bit word field selecting one of the 16 words in the block.]
Set-Associative Mapping
[Diagram: two-way set-associative cache. Each index holds two tag-data pairs (Tag1/Data1 and Tag2/Data2); the tag field of the incoming address is compared with both stored tags in parallel, producing a match (hit) in one way or no match (miss) in either. In this illustration the address is 20 bits, each tag 10 bits, and each data word 16 bits.]
Set-Associative Mapping
Tag Set Word
6 6 4 Main memory address
111011, 111111, 1100
Tag: 111011
Set: 111111 = 63, in the 63rd set of the cache
Word: 1100 = 12, the 12th word of the 63rd set in the cache
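A minimal sketch of a two-way set-associative lookup using the 6 + 6 + 4 split above; the class layout and the replacement choice (oldest block displaced) are assumptions for illustration.

# Minimal sketch of a two-way set-associative cache: each of the 64 sets holds
# two (tag, block) entries, and the incoming tag is compared with both.

class TwoWaySetAssociativeCache:
    def __init__(self, num_sets=64):
        self.sets = [[] for _ in range(num_sets)]   # each set: list of (tag, block_data)

    def lookup(self, address):
        word = address & 0xF                        # 4-bit word field
        set_index = (address >> 4) & 0x3F           # 6-bit set field
        tag = (address >> 10) & 0x3F                # 6-bit tag field
        for stored_tag, block in self.sets[set_index]:
            if stored_tag == tag:                   # compare against both tags in the set
                return "hit", block[word]
        return "miss", None                         # would be fetched from main memory

    def fill(self, address, block_data):
        set_index = (address >> 4) & 0x3F
        tag = (address >> 10) & 0x3F
        entries = self.sets[set_index]
        if len(entries) >= 2:                       # set full: displace one block (assumed policy)
            entries.pop(0)
        entries.append((tag, block_data))

cache = TwoWaySetAssociativeCache()
cache.fill(0b111011_111111_0000, list(range(16)))   # load a 16-word block
print(cache.lookup(0b111011_111111_1100))           # ('hit', 12)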
Auxiliary Memory
Magnetic Disk
Optical Disk
RAID Disk
Magnetic Tape
Thanks!
Questions?
End