
CACHE MEMORY

INTRODUCTION
• Cache Memory is a special, very high-speed memory. It is used to speed up data access and to keep pace with the
high-speed CPU.
• Cache memory is costlier than main memory or disk memory but more economical than CPU registers.
• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds
frequently requested data and instructions so that they are immediately available to the CPU when needed.
• Cache memory is used to reduce the average time to access data from the main memory. The cache is a smaller and
faster memory which stores copies of the data from frequently used main memory locations. A CPU typically contains
several independent caches, which store instructions and data separately.
Levels of memory:
• Level 1 or Registers
Registers hold the data the CPU is operating on at that moment. Commonly used registers include the
accumulator, the program counter, and address registers.
• Level 2 or Cache memory
It is the fastest memory after the registers; data is temporarily stored here for faster access.
• Level 3 or Main Memory
It is the memory the computer is currently working on. It is smaller than secondary memory and volatile: once
power is off, data no longer stays in this memory.
• Level 4 or Secondary Memory
It is external memory which is not as fast as main memory, but data stays in it permanently.
Cache Performance
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the
cache -
• If the processor finds that the memory location is in the cache, a cache hit has occurred, and the data is read from the cache
• If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the
cache allocates a new entry and copies in data from main memory, then the request is fulfilled from the contents of
the cache.

The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit ratio = hits / (hits + misses) = number of hits / total memory accesses
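As a quick worked example (the access counts below are hypothetical, chosen only to illustrate the formula):

# Hypothetical counts, purely for illustration of the hit ratio formula.
hits = 950
misses = 50
hit_ratio = hits / (hits + misses)      # hits / total memory accesses
print(f"Hit ratio = {hit_ratio:.2f}")   # prints: Hit ratio = 0.95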

Cache performance can be improved by using a larger cache block size, higher associativity, reducing the miss rate,
reducing the miss penalty, and reducing the time to hit in the cache.
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which are as follows:
• Direct mapping
• Associative mapping
• Set-Associative mapping.

These are explained below.

1. Direct Mapping – The simplest technique, known as direct mapping, maps each block of main memory into only
one possible cache line. In direct mapping, each memory block is assigned to a specific line in the cache. If that line is
already occupied by a memory block when a new block needs to be loaded, the old block is overwritten. The memory
address is split into two parts, an index field and a tag field; the tag is stored in the cache along with the data of the
block. Direct mapping's performance is directly proportional to the hit ratio.
For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least
significant w bits identify a unique word or byte within a block of main memory. In most contemporary machines, the
address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic
interprets these s bits as a tag of s - r bits (the most significant portion) and a line field of r bits. This latter field
identifies one of the m = 2^r lines of the cache.
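A minimal sketch of this address split in Python; the field widths (w = 2 word bits, r = 14 line bits, a 24-bit address)
are assumed values for illustration, not taken from any particular machine:

# Split a main memory address into tag, line and word fields,
# assuming w = 2 word bits and r = 14 line bits (illustrative only).
W_BITS = 2           # w: identifies a word/byte within a block
R_BITS = 14          # r: selects one of m = 2^r cache lines

def split_address(addr):
    word = addr & ((1 << W_BITS) - 1)
    line = (addr >> W_BITS) & ((1 << R_BITS) - 1)
    tag  = addr >> (W_BITS + R_BITS)   # the most significant s - r bits
    return tag, line, word

print(split_address(0x16339C))         # -> (22, 3303, 0)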
2. Associative Mapping –
• In this type of mapping, associative memory is used to store both the content and the address of the memory word.
Any block can go into any line of the cache. The word ID bits identify which word in the block is needed, and all of
the remaining bits form the tag. This enables the placement of any block at any place in the cache memory (a small
lookup sketch is given after this list).
• It is considered to be the fastest and the most flexible mapping form.
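A minimal sketch of such a fully associative lookup, with the cache modelled as a Python dict keyed by tag; the block
size and the toy main memory are assumptions for illustration:

# Fully associative lookup sketch: the block's tag is compared against
# the tags of all occupied lines; any block may be placed in any line.
BLOCK_WORDS = 4                 # assumed block size (words per block)
cache = {}                      # tag -> list of words in that block

def read(addr, memory):
    tag, offset = divmod(addr, BLOCK_WORDS)    # tag = block number, offset = word ID bits
    if tag in cache:                           # hit: tag matched a cached line
        return cache[tag][offset]
    start = tag * BLOCK_WORDS                  # miss: load the whole block
    cache[tag] = memory[start:start + BLOCK_WORDS]
    return cache[tag][offset]

memory = list(range(64))        # stand-in for main memory
print(read(10, memory))         # miss, loads block 2, prints 10
print(read(11, memory))         # hit in the same block, prints 11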
3. Set-associative Mapping –
• This form of mapping is an enhanced form of direct mapping where the drawbacks of direct mapping are removed.
• Set associative addresses the problem of possible thrashing in the direct mapping method. It does this by saying
that instead of having exactly one line that a block can map to in the cache, we will group a few lines together
creating a set. Then a block in memory can map to any one of the lines of a specific set.
• Set-associative mapping allows each index address in the cache to hold two or more words from main memory at
the same time.
• Set associative cache mapping combines the best of direct and associative cache mapping techniques.

In this case, the cache consists of a number of sets, each of which consists of a number of lines. The relationships are:
m = v × k and i = j mod v, where i is the cache set number, j is the main memory block number, m is the number of
lines in the cache, v is the number of sets, and k is the number of lines in each set.
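A small sketch of these relationships with assumed sizes (m = 8 lines, k = 2 lines per set, hence v = 4 sets):

# Set-associative relationships with assumed sizes:
# m = 8 lines, k = 2 lines per set, so v = m // k = 4 sets.
M_LINES = 8
K_WAYS = 2
V_SETS = M_LINES // K_WAYS       # m = v * k

def set_index(j):
    return j % V_SETS            # i = j mod v

for j in (0, 4, 9, 13):
    print(f"main memory block {j} -> set {set_index(j)}")   # 0->0, 4->0, 9->1, 13->1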
Application of Cache Memory –
• Usually, the cache memory can store a reasonable number of blocks at any given time, but this number is small
compared to the total number of blocks in the main memory.
• The correspondence between the main memory blocks and those in the cache is specified by a mapping function.

Types of Cache –
• Primary Cache –
A primary cache is always located on the processor chip. This cache is small and its access time is comparable to
that of processor registers.
• Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2)
cache. Often, the Level 2 cache is also housed on the processor chip.
Locality of reference –
Since the size of cache memory is small compared to main memory, which part of main memory should be given
priority and loaded into the cache is decided based on locality of reference.

Types of Locality of reference

1. Spatial Locality of reference - This says that if a memory location is referenced, locations in close proximity to
that reference point are likely to be referenced soon, so on a miss the neighbouring words are brought into the cache
along with the requested one.

2. Temporal Locality of reference - This says that a recently referenced word is likely to be referenced again soon,
which is why the Least Recently Used (LRU) algorithm is used for replacement. In addition, whenever a miss occurs
for a word, the complete block containing that word is loaded rather than only the word itself, because the spatial
locality rule says that if you refer to a word, the neighbouring words are likely to be referred to next.
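A minimal sketch of LRU replacement exploiting temporal locality; the capacity of three blocks is a toy assumption:

from collections import OrderedDict

CAPACITY = 3                 # assumed toy capacity, in blocks
cache = OrderedDict()        # block number -> block data, ordered by recency of use

def access(block):
    if block in cache:
        cache.move_to_end(block)              # hit: mark block as most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)             # evict the least recently used block
    cache[block] = f"data of block {block}"   # miss: load the complete block
    return "miss"

for b in (1, 2, 3, 1, 4, 2):
    print(b, access(b))      # 1 miss, 2 miss, 3 miss, 1 hit, 4 miss, 2 miss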
QUIZ

What is the high speed memory between the main memory and the
CPU called?
a) Register Memory
b) Cache Memory
c) Storage Memory
d) Virtual Memory
QUIZ

Answer: b
Explanation: It is called the Cache Memory. The cache memory is
the high speed memory between the main memory and the CPU.
QUIZ

Whenever the data is found in the cache memory it is called _________


a) HIT
b) MISS
c) FOUND
d) ERROR
QUIZ

Answer: a
Explanation: Whenever the data is found in the cache memory,
it is called a cache HIT. The CPU checks the cache memory first
since it is closest to the CPU.
QUIZ

In ____________ mapping, the data can be mapped anywhere in the Cache Memory.
a) Associative
b) Direct
c) Set Associative
d) Indirect
QUIZ

Answer: a
Explanation: This happens in the associative mapping. In
this case, a block of data from the main memory can be
mapped anywhere in the cache memory.
QUIZ

The transfer between CPU and Cache is ______________


a) Block transfer
b) Word transfer
c) Set transfer
d) Associative transfer
QUIZ

Answer: b
Explanation: The transfer is a word transfer. In the memory
subsystem, a word is transferred over the memory data bus,
which typically has a width of a word or a half-word.
THANK YOU!
