Direct Mapping:
As the name suggests, direct mapping maps each block of main memory to exactly one cache line. It is also the simplest of all mapping techniques. Each main memory block can occupy only its assigned cache line; if another main memory block already occupies that line, the old block is evicted and replaced by the new one. The memory address is split into two parts, a tag field and an index (line) field: the index selects which cache line to check, and only the tag is stored alongside the data in that line. Its performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
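The formula above can be sketched in a few lines of code. This is a minimal illustration, not a hardware model; the cache size of 8 lines is an assumed example value.

```python
# Direct mapping: main memory block j always lands in cache line i = j mod m.
CACHE_LINES = 8  # m (hypothetical cache with 8 lines)

def cache_line(block: int) -> int:
    """Return the cache line i that main memory block j maps to."""
    return block % CACHE_LINES

# Blocks 3, 11, and 19 all contend for the same line,
# since 3 mod 8 == 11 mod 8 == 19 mod 8 == 3.
print(cache_line(3), cache_line(11), cache_line(19))  # 3 3 3
```

The last line shows the drawback of direct mapping: distinct blocks whose numbers differ by a multiple of m keep evicting each other from the same line.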
Associative Mapping:
In this type of mapping, associative memory is used to store both the address and the content of the memory word, so any main memory block can be stored in any line of the cache. The word-id bits identify which word is needed within a block, and all the remaining address bits form the tag. Because the tag is compared against every line at once, any block can be placed anywhere in the cache. It is the most flexible mapping technique.
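The address split described above can be sketched as follows. The geometry is an assumption for illustration: 16-word blocks, so the low 4 bits are the word-id and everything else is the tag.

```python
# Fully associative mapping: the address has only two fields, tag and word-id.
WORD_BITS = 4  # hypothetical: 16 words per block -> 4 word-id bits

def split_address(addr: int) -> tuple[int, int]:
    """Split an address into (tag, word) for a fully associative cache."""
    word = addr & ((1 << WORD_BITS) - 1)  # low bits select the word in the block
    tag = addr >> WORD_BITS               # all remaining bits form the tag
    return tag, word

tag, word = split_address(0xABCD)
print(hex(tag), hex(word))  # 0xabc 0xd
```

Note there is no index field: the tag alone identifies the block, which is why it must be compared against every cache line.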
Set Associative Mapping:
This is an upgraded form of direct mapping, as it removes the main drawback of direct mapping. It targets the problem of thrashing: under direct mapping, two main memory blocks that map to the same cache line keep evicting each other, even when the rest of the cache is empty. Set associative mapping solves this by grouping the cache lines into sets: instead of mapping each memory block to a single cache line, the block maps to a set, and it can be placed in any of the k lines within that set. Each set can therefore hold blocks from more than one main memory location at the same time.
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
k = number of lines in each set
m = number of lines in the cache
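The set-mapping formulas can be sketched the same way as the direct-mapping one. The 2-way geometry below (v = 4 sets, k = 2 lines per set) is an assumed example.

```python
# Set associative mapping: m = v * k lines grouped into v sets of k lines each.
V_SETS = 4                 # v (hypothetical)
K_WAYS = 2                 # k (hypothetical, "2-way set associative")
M_LINES = V_SETS * K_WAYS  # m = v * k

def cache_set(block: int) -> int:
    """Return the set i that main memory block j maps to: i = j mod v."""
    return block % V_SETS

# Blocks 1 and 5 map to the same set (1 mod 4 == 5 mod 4 == 1), but since the
# set has 2 lines, both can reside in the cache at once instead of thrashing.
print(cache_set(1), cache_set(5), M_LINES)  # 1 1 8
```

With k = 1 this degenerates to direct mapping, and with v = 1 (a single set holding all m lines) it becomes fully associative, which is why set associative mapping sits between the two techniques.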