
LRU Cache – Complete Tutorial

Last Updated : 26 Sep, 2024

What is LRU Cache?

Cache replacement algorithms decide which entry to evict when the cache is full. Least Recently Used (LRU) is one such algorithm: as the name suggests, when the cache is full, LRU picks the data that was least recently used and removes it to make space for new data. The priority of data in the cache changes as it is accessed, i.e. if some data is fetched or updated, it is promoted to the highest priority, and its priority keeps decreasing the longer it remains unused, operation after operation.

Operations on LRU Cache:

  • LRUCache(capacity c): Initializes the LRU cache with a positive capacity c.
  • get(key): Returns the value of the key if it is present in the cache, otherwise returns -1. Also updates the priority of the key, marking it as the most recently used.
  • put(key, value): Updates the value of the key if the key already exists; otherwise, adds the key-value pair to the cache. If the number of keys exceeds the capacity of the LRU cache, the least recently used key is evicted.

Understanding the LRU cache is key to optimizing memory use in operating systems.

Working of LRU Cache:

Let’s suppose we have an LRU cache of capacity 3, and we would like to perform the following operations:

  • put (key=1, value=A) into the cache
  • put (key=2, value=B) into the cache
  • put (key=3, value=C) into the cache
  • get (key=2) from the cache
  • get (key=4) from the cache
  • put (key=4, value=D) into the cache
  • put (key=3, value=E) into the cache
  • get (key=4) from the cache
  • put (key=1, value=A) into the cache

The above operations are performed one after another as shown in the image below:

[Figure: Working of LRU Cache — the nine operations above applied one after another]

Detailed Explanation of each operation:

  1. put (key 1, value A): The cache has 3 empty slots, so no replacement is needed. {1:A} is inserted at the top, i.e. {1:A} has the highest priority.
  2. put (key 2, value B): The cache still has 2 empty slots, so again no replacement is needed, but now {2:B} has the highest priority and the priority of {1:A} decreases.
  3. put (key 3, value C): There is still 1 empty slot in the cache, so {3:C} is inserted without any replacement. Notice that the cache is now full and the current order of priority from highest to lowest is {3:C}, {2:B}, {1:A}.
  4. get (key 2): The value of key=2 is returned. Since key=2 was just used, the new priority order is {2:B}, {3:C}, {1:A}.
  5. get (key 4): Key 4 is not present in the cache, so -1 is returned for this operation.
  6. put (key 4, value D): The cache is FULL, so the LRU algorithm determines which key is least recently used. Since {1:A} has the lowest priority, {1:A} is removed from the cache and {4:D} is inserted. The new priority order is {4:D}, {2:B}, {3:C}.
  7. put (key 3, value E): Key=3 is already present in the cache with value C, so this operation does not remove any key; it updates the value of key=3 to E. The new priority order is {3:E}, {4:D}, {2:B}.
  8. get (key 4): The value of key=4 is returned. The new priority order is {4:D}, {3:E}, {2:B}.
  9. put (key 1, value A): The cache is FULL, so the LRU algorithm again determines the least recently used key. Since {2:B} has the lowest priority, {2:B} is removed from the cache and {1:A} is inserted. The new priority order is {1:A}, {4:D}, {3:E}.

How to design your own LRU Cache?

Input: [LRUCache cache = new LRUCache(2), put(1, 1), put(2, 2), get(1), put(3, 3), get(2), put(4, 4), get(1), get(3), get(4)]
Output: [1, -1, -1, 3, 4]
Explanation: The values in the output are the values returned by the get operations.

  • Initialize the LRUCache class with capacity = 2.
  • cache.put(1, 1): (key, value) = (1, 1) is inserted and has the highest priority.
  • cache.put(2, 2): (key, value) = (2, 2) is inserted and has the highest priority.
  • cache.get(1): For key 1 the value is 1, so 1 is returned and (1, 1) moves to the highest priority.
  • cache.put(3, 3): The cache is full, so the least recently used pair (2, 2) is removed; (3, 3) is inserted with the highest priority.
  • cache.get(2): Returns -1 (key 2 not found).
  • cache.put(4, 4): The cache is full, so the least recently used pair (1, 1) is removed; (4, 4) is inserted with the highest priority.
  • cache.get(1): Returns -1 (not found).
  • cache.get(3): Returns 3; (3, 3) moves to the highest priority.
  • cache.get(4): Returns 4; (4, 4) moves to the highest priority.
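
As a quick way to reproduce this trace, the behaviour can be sketched in Python using collections.OrderedDict, which keeps keys in insertion order and can move a key to the end in O(1). This is only an illustrative sketch; the class and driver below are not part of the original article:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()          # most recently used key sits at the end

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)         # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)     # refresh priority before updating
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used key

# Driving the example above
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # evicts key 2
print(cache.get(2))   # -1
cache.put(4, 4)       # evicts key 1
print(cache.get(1))   # -1
print(cache.get(3))   # 3
print(cache.get(4))   # 4
```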

Thoughts about Implementation Using Arrays, Hashing and/or Heap

A first idea is to use an array of nodes, where each node holds a key-value pair and a timestamp of its last use. The primary operations, get and put, then take O(n) time because they may need to search through the array. The size of the array equals the given capacity of the cache.

put(int key, int value): If the cache is full, find the node with the oldest timestamp (the least recently used) and replace it with the new key and value. Otherwise, simply add the new node to the end of the array with the timestamp of insertion.
Time Complexity: O(n), because finding the oldest node may require scanning the whole array.

get(int key): Search the array for the node with the matching key. If found, update its timestamp and return its value; otherwise return -1.
Time Complexity: O(n), because every node might have to be checked.
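
A rough sketch of this array-plus-timestamp idea is shown below, in Python, with a simple logical clock standing in for timestamps. The names are our own and this is only an illustration of the O(n) approach, not a reference implementation:

```python
class ArrayLRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = []        # each entry is [key, value, timestamp]
        self.clock = 0           # logical clock; a larger value means more recently used

    def get(self, key: int) -> int:
        for entry in self.entries:               # O(n) scan for the key
            if entry[0] == key:
                self.clock += 1
                entry[2] = self.clock            # refresh its timestamp
                return entry[1]
        return -1

    def put(self, key: int, value: int) -> None:
        self.clock += 1
        for entry in self.entries:               # key already present: update in place
            if entry[0] == key:
                entry[1], entry[2] = value, self.clock
                return
        if len(self.entries) == self.capacity:   # cache full: overwrite the oldest entry
            oldest = min(self.entries, key=lambda e: e[2])
            oldest[0], oldest[1], oldest[2] = key, value, self.clock
        else:
            self.entries.append([key, value, self.clock])
```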

Can we make both operations O(1)? Hashing comes to mind: with a hash table we can insert, look up and delete in O(1) time, but maintaining priorities would still take linear time. We could pair the hash table with a heap keyed on priority, but finding and removing the least recently used (lowest priority) entry then takes O(log n) time, which is more than O(1), and every access would also require a priority update in the heap.

Please refer to Design LRU Cache for details of all the approaches.

Efficient Solution – Using Doubly Linked List and Hashing

The basic idea behind implementing an LRU (Least Recently Used) cache as a key-value store is to manage element access and eviction efficiently by combining a doubly linked list with a hash map; a minimal sketch follows the list below.

  • When adding a new key-value pair, insert it as a new node at the head of the doubly linked list. This marks the newly added pair as the most recently used.
  • If the key is already present in the cache, locate its node in the doubly linked list via the hash map, update its value and move the node to the head of the list. The hash map entry keeps pointing to the same node, so only the list order changes. This ensures the accessed key-value pair is treated as the most recently used.
  • The priority of nodes in the doubly linked list is given by their distance from the head: key-value pairs closer to the head are more recently used and have higher priority, while pairs closer to the tail are less recently used and have lower priority.
  • When the cache is at full capacity and a new key-value pair must be added, remove the tail node from the doubly linked list and delete its key from the hash map. The tail node represents the least recently used key-value pair and is evicted to make space for the new entry.
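
Below is a minimal sketch of this design in Python, assuming integer keys and values. The helper names (Node, _remove, _add_front) and the dummy head/tail nodes are implementation choices of this sketch; the full implementation in the linked article may differ in detail:

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}                      # key -> node in the doubly linked list
        self.head = Node(0, 0)             # dummy head: most recently used side
        self.tail = Node(0, 0)             # dummy tail: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):               # unlink a node from the list in O(1)
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):            # insert a node right after the dummy head
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)                 # move the accessed node to the front
        self._add_front(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.map:                # existing key: update value, refresh priority
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_front(node)
            return
        if len(self.map) == self.capacity: # full: evict the node just before the dummy tail
            lru = self.tail.prev
            self._remove(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
```

The dummy head and tail nodes avoid special cases for an empty list, and every get and put touches only a constant number of pointers and one hash map entry, which is what gives the O(1) bounds discussed below.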

Please refer to LRU Cache Implementation using Doubly Linked List and Hashing for details.

Complexity Analysis of the Efficient Solution

  • Time Complexity:
    • put() operation: O(1), i.e. the time required to insert or update a key-value pair is constant.
    • get() operation: O(1), i.e. the time required to get the value of a key is constant.
  • Auxiliary Space: O(c), where c is the capacity of the cache.

Advantages of LRU cache:

  • Accessing data takes O(1) time.
  • Updating a key-value pair also takes O(1) time.
  • Removing the least recently used item takes O(1) time.
  • LRU plays a crucial role in page replacement strategies to minimize page faults.

Disadvantages of LRU cache:

  • It requires a large cache size to be effective.
  • It requires additional data structures to implement.
  • In LRU, error detection is harder than in some other algorithms.
  • It has limited applicability.

Real-World Application of LRU Cache:

  • In database systems, to serve query results quickly.
  • In operating systems, to minimize page faults.
  • In text editors and IDEs, to cache frequently used files or code snippets.
  • In network routers and switches, to increase the efficiency of computer networks.
  • In compiler optimizations.
  • In text prediction and autocompletion tools.

