
CACHE MEMORY

The principle of locality made it possible to speed up main
memory access by introducing small, fast memories known as
CACHE MEMORIES, which hold blocks of the most recently
referenced instructions and data items.
A cache is a small, fast storage device that holds the
operands and instructions most likely to be used by the CPU.
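
As a minimal illustration (not from the original slides; the array size is arbitrary), the C loop below benefits from both kinds of locality: the row-major traversal touches consecutive addresses, so each cache line fetched from main memory serves several subsequent accesses (spatial locality), while sum and the loop indices are reused on every iteration (temporal locality).

    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int a[N][N];   /* zero-initialized; about 4 MB of data */
        long sum = 0;

        /* Row-major traversal: a[i][0], a[i][1], ... lie at consecutive
           addresses, so one fetched cache line serves many accesses. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];

        printf("%ld\n", sum);
        return 0;
    }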
Memory hierarchy of early computers: 3 levels
– CPU registers
– DRAM memory
– Disk storage
Due to the increasing speed gap between the CPU and main
memory, a small SRAM memory called the L1 cache was inserted
between them.

L1 caches can be accessed almost as fast as the registers,
typically in 1 or 2 clock cycles.

As the gap between the CPU and main memory grew even further,
an additional cache, the L2 cache, was inserted between the
L1 cache and main memory; it is accessed in more clock cycles
than L1, but in far fewer than main memory.
• The L2 cache may be attached to the memory bus or to its
own cache bus.

• Some high-performance systems also include an additional
L3 cache, which sits between the L2 cache and main memory.
Its arrangement differs, but the principle is the same.

• The cache is placed both physically closer and logically
closer to the CPU than main memory is.
CACHE LINES / BLOCKS
• Cache memory is subdivided into cache lines.
• Cache lines / blocks: the smallest unit of memory that can
be transferred between the main memory and the cache.
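
As a minimal sketch, assuming a 64-byte line size (a common but not universal choice), a byte address decomposes into a block number and an offset within that block:

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SIZE 64   /* assumed cache line (block) size in bytes */

    int main(void) {
        uint32_t addr   = 0x1A2B3C;          /* arbitrary example byte address */
        uint32_t block  = addr / LINE_SIZE;  /* which line-sized block it falls in */
        uint32_t offset = addr % LINE_SIZE;  /* position of the byte inside that block */
        printf("block %u, offset %u\n", (unsigned)block, (unsigned)offset);
        return 0;
    }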
CACHE HITS / MISSES
• Cache hit: a request to read from memory that can be
satisfied from the cache without using the main memory.
• Cache miss: a request to read from memory that cannot be
satisfied from the cache, for which the main memory has to
be consulted.
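
To make the two cases concrete, here is a toy C model (not from the slides) of a cache that holds just one block; real caches hold many lines, but the hit/miss decision has the same shape:

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SIZE 64                        /* assumed block size in bytes */

    static uint32_t cached_block = UINT32_MAX;  /* UINT32_MAX means "cache empty" */
    static int hits = 0, misses = 0;

    static void access_addr(uint32_t addr) {
        uint32_t block = addr / LINE_SIZE;
        if (block == cached_block) {
            hits++;                   /* hit: satisfied without main memory */
        } else {
            misses++;                 /* miss: main memory must be consulted */
            cached_block = block;     /* the fetched block replaces the old one */
        }
    }

    int main(void) {
        uint32_t addrs[] = { 0x100, 0x104, 0x108, 0x200, 0x204, 0x100 };
        for (int i = 0; i < 6; i++)
            access_addr(addrs[i]);
        printf("hits=%d misses=%d\n", hits, misses);  /* prints hits=3 misses=3 */
        return 0;
    }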
CACHE MEMORY: PLACEMENT POLICY
There are three commonly used methods to
translate main memory addresses to cache
memory addresses.
• Associative Mapped Cache
• Direct-Mapped Cache
• Set-Associative Mapped Cache

The choice of cache mapping scheme affects cost and
performance, and there is no single best method that is
appropriate for all situations.
Associative Mapping
• A block in main memory can be mapped to any available
(not already occupied) block in cache memory.
• Advantage: flexibility. A main memory block can be mapped
anywhere in cache memory.
• Disadvantage: slow or expensive. A search through all the
cache memory blocks is needed to check whether the address
matches any of the tags, as the sketch below illustrates.
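
A minimal sketch of the associative lookup; NUM_LINES and LINE_SIZE are illustrative assumptions, not values from the slides, and real hardware compares all tags in parallel rather than in a loop:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LINES 8    /* illustrative cache size */
    #define LINE_SIZE 64   /* assumed block size in bytes */

    typedef struct { bool valid; uint32_t tag; } Line;
    static Line cache[NUM_LINES];

    /* Fully associative lookup: every tag must be compared. */
    static int lookup(uint32_t tag) {
        for (int i = 0; i < NUM_LINES; i++)
            if (cache[i].valid && cache[i].tag == tag)
                return i;
        return -1;
    }

    static void access_addr(uint32_t addr) {
        uint32_t tag = addr / LINE_SIZE;  /* block number doubles as the tag */
        if (lookup(tag) >= 0) {
            printf("hit  0x%x\n", (unsigned)addr);
            return;
        }
        printf("miss 0x%x\n", (unsigned)addr);
        for (int i = 0; i < NUM_LINES; i++)      /* flexibility: any free line */
            if (!cache[i].valid) {
                cache[i] = (Line){ true, tag };
                return;
            }
        cache[0] = (Line){ true, tag };          /* all full: naive replacement */
    }

    int main(void) {
        access_addr(0x000);  /* miss */
        access_addr(0x040);  /* miss */
        access_addr(0x000);  /* hit  */
        return 0;
    }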
Direct Mapping
• To avoid the search through all cache memory blocks needed
by associative mapping, this method allows only

    (# blocks in main memory) / (# blocks in cache memory)

main memory blocks to be mapped to each cache memory block.
• Advantage: direct mapping is faster than associative
mapping, as it avoids searching through all the cache memory
tags for a match.
• Disadvantage: it lacks mapping flexibility. For example, if
two main memory blocks mapped to the same cache memory block
are needed repeatedly (e.g., in a loop), they will keep
replacing each other, even though all other cache memory
blocks may be available, as the sketch below demonstrates.
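
A minimal sketch of the direct-mapped case, under the same illustrative sizes as above: the index is computed as block mod NUM_LINES, so no tag search is needed, but blocks sharing an index evict each other, reproducing the loop problem just described:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LINES 8
    #define LINE_SIZE 64

    typedef struct { bool valid; uint32_t tag; } Line;
    static Line cache[NUM_LINES];

    static void access_addr(uint32_t addr) {
        uint32_t block = addr / LINE_SIZE;
        uint32_t index = block % NUM_LINES;  /* the only line this block may use */
        uint32_t tag   = block / NUM_LINES;  /* distinguishes blocks sharing the line */
        if (cache[index].valid && cache[index].tag == tag) {
            printf("hit  0x%x\n", (unsigned)addr);
        } else {
            printf("miss 0x%x\n", (unsigned)addr);  /* conflicting block is evicted */
            cache[index] = (Line){ true, tag };
        }
    }

    int main(void) {
        /* 0x000 and 0x200 are 512 bytes apart: blocks 0 and 8 both map to
           index 0, so alternating between them misses every single time. */
        access_addr(0x000);
        access_addr(0x200);
        access_addr(0x000);
        access_addr(0x200);
        return 0;
    }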
Set-Associative Mapping
• This is a trade-off between associative and direct mapping,
where each address is mapped to a certain set of cache
locations.
• The cache is broken into sets, each containing "N" cache
lines, say 4. Each memory address is assigned to a set and
can be cached in any one of those 4 locations within the set
it is assigned to. In other words, within each set the cache
is associative, hence the name.
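
Finally, a minimal sketch of a 4-way set-associative lookup (sizes again illustrative): the set index is computed directly, as in direct mapping, and only the N = 4 lines within that set are searched associatively:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SETS  4
    #define WAYS      4    /* "N" cache lines per set */
    #define LINE_SIZE 64

    typedef struct { bool valid; uint32_t tag; } Line;
    static Line cache[NUM_SETS][WAYS];

    static void access_addr(uint32_t addr) {
        uint32_t block = addr / LINE_SIZE;
        uint32_t set   = block % NUM_SETS;   /* direct-mapped step: pick the set */
        uint32_t tag   = block / NUM_SETS;
        for (int w = 0; w < WAYS; w++)       /* associative step: search the set */
            if (cache[set][w].valid && cache[set][w].tag == tag) {
                printf("hit  0x%x\n", (unsigned)addr);
                return;
            }
        printf("miss 0x%x\n", (unsigned)addr);
        for (int w = 0; w < WAYS; w++)       /* any free way within the set */
            if (!cache[set][w].valid) {
                cache[set][w] = (Line){ true, tag };
                return;
            }
        cache[set][0] = (Line){ true, tag }; /* set full: naive replacement */
    }

    int main(void) {
        /* Blocks 0 (addr 0x000) and 16 (addr 0x400) map to the same set but
           can now coexist, unlike in the direct-mapped example above. */
        access_addr(0x000);  /* miss */
        access_addr(0x400);  /* miss */
        access_addr(0x000);  /* hit  */
        access_addr(0x400);  /* hit  */
        return 0;
    }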
