Design LRU Cache
What is the LRU Cache Problem?
Design and implement a Least Recently Used (LRU) cache that supports get and put operations in O(1) time complexity. The cache maintains a fixed capacity and evicts the least recently used item when the capacity is exceeded. The implementation combines a HashMap for O(1) lookups with a Doubly Linked List for O(1) insertion and deletion, and should support thread-safe operation.
This problem is a staple in interviews because it requires deep knowledge of how pointers and hash tables work together to manage memory and access patterns.
Problem Overview
Design a cache that stores a limited number of items. When the capacity is reached, it should automatically remove the item that hasn’t been used for the longest time.
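To make the expected behavior concrete, here is a minimal sketch of the required semantics using Python’s `collections.OrderedDict` (the class name `SimpleLRU` is hypothetical; the full HashMap + Doubly Linked List design expected in the interview is covered later in this article):

```python
from collections import OrderedDict

class SimpleLRU:
    """Minimal LRU cache built on OrderedDict, for illustrating the semantics only."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = SimpleLRU(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1      # key 1 is now most recently used
cache.put(3, 3)               # capacity exceeded: evicts key 2, not key 1
assert cache.get(2) == -1
```

Note how accessing key 1 before inserting key 3 protects it from eviction; key 2, untouched since its insertion, is removed instead.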
Core Requirements
Functional Requirements:
- Get Operation: Retrieve the value associated with a key in O(1) time. Mark the accessed item as most recently used. Return -1 if the key is not found.
- Put Operation: Insert or update a key-value pair in O(1) time. Mark the item as most recently used.
- Eviction Logic: When at capacity, evict the least recently used item (the tail of the list) before adding a new item.
- Fixed Capacity: Maintain a fixed, positive capacity specified during initialization.
- Order Tracking: Use a Doubly Linked List to maintain access order (Head = Most Recent, Tail = Least Recent).
- Data Combination: Use a HashMap to map keys to Doubly Linked List nodes for O(1) access.
Non-Functional Requirements:
- Time Complexity: Both get and put must execute in O(1) time on average.
- Thread Safety: Ensure the cache works correctly when accessed by multiple threads, using appropriate synchronization.
- Memory Efficiency: Manage memory efficiently by properly linking and unlinking nodes.
- Modularity: Design the system to be extensible for different eviction policies or integration into larger frameworks.
What’s Expected?
1. System Architecture
To achieve $O(1)$ for both operations, you must combine a HashMap (for fast lookup) and a Doubly Linked List (for fast reordering).
2. Key Classes to Design
classDiagram
class LRUCache {
-int capacity
-Map~int, Node~ map
-Node head
-Node tail
+get(key) int
+put(key, value) void
-moveToHead(node)
-removeNode(node)
}
class Node {
-int key
-int value
-Node prev
-Node next
}
LRUCache o-- Node
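One possible Python sketch of the classes in the diagram above (the helper names `_remove` and `_add_to_head` correspond to the diagram’s removeNode and moveToHead; this is one valid layout, not the only one):

```python
class Node:
    """Doubly linked list node; stores the key so eviction can clean up the map."""
    def __init__(self, key: int = 0, value: int = 0):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}                       # key -> Node
        # Dummy head and tail sentinels simplify all pointer logic.
        self.head, self.tail = Node(), Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node: Node) -> None:
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_head(self, node: Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)                  # move to head = most recently used
        self._add_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_to_head(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev            # least recently used sits before the dummy tail
            self._remove(lru)
            del self.map[lru.key]

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1
cache.put(3, 3)       # evicts key 2
assert cache.get(2) == -1
```

Storing the key inside each Node is essential: when the node at the tail is evicted, the key is needed to delete the corresponding map entry.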
System Flow
Put Operation Logic
Key Design Challenges
1. Why two data structures?
- A HashMap gives $O(1)$ lookup but has no concept of order.
- A doubly linked list allows $O(1)$ insertion and deletion when you hold a reference to the node, and it maintains order.
- Together, the HashMap points directly to the Nodes in the list, allowing you to jump to any node, remove it, and move it to the head in constant time.
2. Edge Cases in Pointer Logic
Managing the head and tail can be error-prone, especially with 0 or 1 items.
Solution: Use Dummy Head and Tail nodes. By initializing the list with two empty nodes connected to each other, you eliminate null checks and simplify the add and remove logic.
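The sentinel trick in isolation: with dummy nodes on both ends, every real node always has non-None neighbors, so add and remove never need special cases (a standalone sketch; names like `add_after_head` are illustrative):

```python
class Node:
    def __init__(self, key: int = 0, value: int = 0):
        self.key, self.value = key, value
        self.prev = self.next = None

# Sentinel setup: the list is "empty" when head and tail point at each other.
head, tail = Node(), Node()
head.next, tail.prev = tail, head

def add_after_head(node: Node) -> None:
    node.prev, node.next = head, head.next
    head.next.prev = node
    head.next = node

def remove(node: Node) -> None:
    # node.prev and node.next are always real objects thanks to the sentinels,
    # so no None checks are needed, even for a single-element list.
    node.prev.next = node.next
    node.next.prev = node.prev

n = Node(1, 10)
add_after_head(n)
remove(n)                     # removing the only item needs no edge-case code
assert head.next is tail      # the list is empty again
```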
3. Thread Safety
A simple HashMap is not thread-safe. If two threads put at the same time, the pointers in the Linked List could get corrupted.
Solution: Use Synchronization. Wrap the get and put methods in a synchronized block or use a ReentrantLock. Alternatively, for higher performance, use a ConcurrentHashMap combined with fine-grained locking on the nodes, though a global lock is usually sufficient for this interview problem.
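A sketch of the global-lock approach (the class name `ThreadSafeLRU` is hypothetical, and an `OrderedDict` stands in for the manual linked list to keep the example short; the locking pattern is the same either way):

```python
import threading
from collections import OrderedDict

class ThreadSafeLRU:
    """Coarse-grained locking: one lock makes each get/put atomic."""
    def __init__(self, capacity: int):
        self._lock = threading.Lock()
        self._capacity = capacity
        self._data = OrderedDict()

    def get(self, key: int) -> int:
        with self._lock:
            if key not in self._data:
                return -1
            self._data.move_to_end(key)
            return self._data[key]

    def put(self, key: int, value: int) -> None:
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self._capacity:
                self._data.popitem(last=False)

# Hammer the cache from several threads; the capacity invariant must hold.
cache = ThreadSafeLRU(64)

def worker(base: int) -> None:
    for i in range(1000):
        cache.put(base + i % 100, i)
        cache.get(base + i % 100)

threads = [threading.Thread(target=worker, args=(t * 1000,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(cache._data) <= 64   # never exceeds capacity under concurrency
```

Without the lock, interleaved updates could leave the internal ordering inconsistent or let the size exceed capacity; with it, the invariant holds at the cost of serializing all operations.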
What You’ll Learn
By solving this problem, you’ll master:
- ✅ Hybrid Data Structures - Combining primitives for advanced goals.
- ✅ Pointer Management - Handling complex linked structures.
- ✅ Efficiency Optimization - Stripping logic down to $O(1)$.
- ✅ Cache Eviction Policies - Understanding how real-world caches (like Redis or Memcached) work.
View Complete Solution & Practice
Ready to see the full implementation? Open the interactive playground to access:
- 🎯 Step-by-step guidance through the 8-step LLD approach
- 📊 Interactive UML builder to visualize your design
- 💻 Complete Code Solutions in Python, Java, C++, TypeScript, JavaScript, C#
- 🤖 AI-powered review of your design and code
Related Problems
After mastering LRU Cache, try these similar problems:
- LFU Cache - Evicting based on frequency (Hard).
- Hash Map - Designing the underlying storage.
- Browser History - Another application of linked lists.