Imagine your cache is a parking lot with 100 spaces. When it’s full and a new car arrives, you need to decide: which car leaves?
That’s cache eviction - deciding what to remove when the cache is full.
The goal of any eviction policy: maximize cache hit rate - keep the data you’ll access soon, evict the data you won’t need.
LRU (Least Recently Used) is the most popular policy. It evicts the item that hasn’t been accessed for the longest time.
Think of it like a stack of books on your desk. When you use a book, you put it on top. When your desk is full, you remove books from the bottom (least recently used).
Algorithm: On every get or put, mark the item as most recently used. When the cache is full, evict the item that has gone the longest without being accessed.
When to use: General-purpose caching where recently accessed data is likely to be accessed again (temporal locality).
Trade-offs: High hit rate for most workloads, but needs extra bookkeeping (a hash map plus a doubly linked list) to keep get and put at O(1).
This is a classic interview problem. Here’s how to implement it efficiently:
Python:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        # OrderedDict maintains insertion order:
        # most recent at the end, least recent at the beginning
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1

        # Move to end (most recent)
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            # Update existing
            self.cache.move_to_end(key)
        else:
            # Check if full
            if len(self.cache) >= self.capacity:
                # Evict least recent (first item)
                self.cache.popitem(last=False)

        self.cache[key] = value
        # Ensure it's at the end (most recent)
        self.cache.move_to_end(key)
```

Java:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache {
    private final int capacity;
    private final LinkedHashMap<Integer, Integer> cache;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        // LinkedHashMap with accessOrder=true maintains LRU order
        this.cache = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    public int get(int key) {
        return cache.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        cache.put(key, value);
        // LinkedHashMap automatically maintains order
    }
}
```

More Efficient Implementation (HashMap + Doubly Linked List for O(1) operations):
Python:

```python
class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}  # key -> node

        # Dummy head and tail for easier operations
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_node(self, node: Node):
        # Add after head
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node: Node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _move_to_head(self, node: Node):
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self) -> Node:
        last = self.tail.prev
        self._remove_node(last)
        return last

    def get(self, key: int) -> int:
        node = self.cache.get(key)
        if not node:
            return -1

        self._move_to_head(node)
        return node.value

    def put(self, key: int, value: int):
        node = self.cache.get(key)

        if not node:
            if len(self.cache) >= self.capacity:
                tail = self._pop_tail()
                del self.cache[tail.key]

            node = Node(key, value)
            self.cache[key] = node
        else:
            node.value = value

        self._move_to_head(node)
```

Java:

```java
import java.util.HashMap;
import java.util.Map;

class Node {
    int key, value;
    Node prev, next;

    Node() {}
    Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

class LRUCache {
    private final int capacity;
    private final Map<Integer, Node> cache;
    private final Node head, tail;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.cache = new HashMap<>();
        this.head = new Node();
        this.tail = new Node();
        head.next = tail;
        tail.prev = head;
    }

    private void addNode(Node node) {
        node.prev = head;
        node.next = head.next;
        head.next.prev = node;
        head.next = node;
    }

    private void removeNode(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void moveToHead(Node node) {
        removeNode(node);
        addNode(node);
    }

    private Node popTail() {
        Node last = tail.prev;
        removeNode(last);
        return last;
    }

    public int get(int key) {
        Node node = cache.get(key);
        if (node == null) return -1;

        moveToHead(node);
        return node.value;
    }

    public void put(int key, int value) {
        Node node = cache.get(key);

        if (node == null) {
            if (cache.size() >= capacity) {
                Node last = popTail();
                cache.remove(last.key);
            }

            node = new Node(key, value);
            cache.put(key, node);
        } else {
            node.value = value;
        }

        moveToHead(node);
    }
}
```

LFU (Least Frequently Used) is frequency-based eviction. It evicts the item with the lowest access count.
Think of it like a popularity contest. Items with more “votes” (accesses) stay longer. When the cache is full, the least popular item leaves.
Algorithm: Keep an access count for each item and increment it on every access. When the cache is full, evict the item with the lowest count.
When to use: Workloads where a stable set of items is consistently popular, so access frequency predicts future use better than recency.
Trade-offs: High hit rate on frequency-skewed workloads, but more complex to implement, and items with a large historical count can linger after they stop being useful. (A minimal sketch follows below.)
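The section doesn’t include an LFU implementation, so here is a minimal, illustrative Python sketch. It is not the O(1) frequency-bucket design used by production LFU caches - eviction here scans all keys - but it keeps the idea easy to see:

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU sketch: evicts the key with the smallest access count.
    Ties are broken by least-recent use (tracked with a tick counter)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}                 # key -> value
        self.counts = defaultdict(int)  # key -> access count
        self.last_used = {}             # key -> tick of last access (tie-breaker)
        self.tick = 0

    def get(self, key):
        if key not in self.cache:
            return -1
        self._touch(key)
        return self.cache[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key not in self.cache and len(self.cache) >= self.capacity:
            # Evict the least frequently used key; break ties by recency.
            victim = min(self.cache, key=lambda k: (self.counts[k], self.last_used[k]))
            del self.cache[victim]
            del self.counts[victim]
            del self.last_used[victim]
        self.cache[key] = value
        self._touch(key)

    def _touch(self, key):
        # Record one access: bump the count and the recency tick.
        self.tick += 1
        self.counts[key] += 1
        self.last_used[key] = self.tick
```

A production-grade LFU usually keeps per-frequency buckets so eviction is O(1) instead of a scan.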
FIFO (First In, First Out) is simple queue-based eviction. It evicts the oldest item regardless of usage.
Like a queue at a store - first in, first out. When the cache is full, the oldest item leaves, even if it was just accessed.
Algorithm: Add new items to the back of a queue. When the cache is full, evict the item at the front - the oldest insertion - no matter how often or recently it was accessed.
When to use: Simple cases where implementation simplicity matters more than hit rate, or where access patterns are roughly uniform.
Trade-offs: Trivial to implement (just a queue), but because it ignores usage entirely, hit rates are typically lower than LRU or LFU. (A minimal sketch follows below.)
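As a contrast with the LRU code above, here is a minimal FIFO sketch in Python. The only difference that matters is that get never reorders anything, so eviction order is pure insertion order:

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO sketch: evicts in insertion order, ignoring accesses."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()  # insertion order = eviction order

    def get(self, key):
        # No reordering on access -- this is what separates FIFO from LRU.
        return self.cache.get(key, -1)

    def put(self, key, value):
        if key not in self.cache and len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the oldest insertion
        self.cache[key] = value  # updating an existing key keeps its position
```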
TTL (Time To Live) is time-based expiration. Items expire after a fixed time period.
Like milk with an expiration date. After a certain time, items are automatically removed, regardless of usage.
Algorithm: Store an expiration timestamp with each item. On read, check whether the item has expired; expired items are removed and treated as misses.
When to use: Time-sensitive data where freshness matters - session tokens, DNS records, API responses that go stale.
Trade-offs: Simple and bounds staleness, but expiration is unrelated to demand, so hot items can expire while cold ones stay until their deadline; it is often combined with a size-based policy such as LRU. (A minimal sketch follows below.)
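A minimal TTL sketch in Python, using lazy expiration (entries are checked when they are read). A real system would usually add a background sweep or rely on the cache store’s built-in expiry:

```python
import time

class TTLCache:
    """Minimal TTL sketch: each entry expires ttl_seconds after it is written."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.cache = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self.cache.get(key)
        if entry is None:
            return -1
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.cache[key]  # lazy expiration: clean up on read
            return -1
        return value

    def put(self, key, value):
        self.cache[key] = (value, time.monotonic() + self.ttl)
```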
| Policy | Complexity | Hit Rate | Use Case | Implementation |
|---|---|---|---|---|
| LRU | Medium | High | General purpose | HashMap + Doubly Linked List |
| LFU | High | High | Frequency-based | HashMap + Frequency tracking |
| FIFO | Low | Low | Simple cases | Queue |
| TTL | Low | Medium | Time-sensitive | Timestamp tracking |
At the code level, eviction policies are strategies you can swap:
Python:

```python
from abc import ABC, abstractmethod
from typing import Any

class EvictionStrategy(ABC):
    @abstractmethod
    def evict(self, cache: dict) -> Any:
        """Return key to evict"""
        pass

class LRUStrategy(EvictionStrategy):
    def evict(self, cache: dict) -> Any:
        # Return the least recently used key (assumes the cache keeps keys
        # in recency order, e.g. an OrderedDict updated on each access)
        return next(iter(cache))

class LFUStrategy(EvictionStrategy):
    def __init__(self):
        self.access_counts = {}  # key -> access count, updated by the cache

    def evict(self, cache: dict) -> Any:
        # Return the least frequently used key
        return min(self.access_counts.items(), key=lambda x: x[1])[0]

class Cache:
    def __init__(self, capacity: int, strategy: EvictionStrategy):
        self.capacity = capacity
        self.strategy = strategy
        self.cache = {}

    def evict_if_needed(self):
        if len(self.cache) >= self.capacity:
            key = self.strategy.evict(self.cache)
            del self.cache[key]
```

Java:

```java
import java.util.HashMap;
import java.util.Map;

interface EvictionStrategy<K> {
    K evict(Map<K, ?> cache);
}

class LRUStrategy<K> implements EvictionStrategy<K> {
    public K evict(Map<K, ?> cache) {
        // Return the first key (least recent, assuming access-ordered keys)
        return cache.keySet().iterator().next();
    }
}

class LFUStrategy<K> implements EvictionStrategy<K> {
    private final Map<K, Integer> accessCounts = new HashMap<>();

    public K evict(Map<K, ?> cache) {
        // Return the key with the minimum access count
        return accessCounts.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse(null);
    }
}

class Cache<K, V> {
    private final int capacity;
    private final EvictionStrategy<K> strategy;
    private final Map<K, V> cache;

    public Cache(int capacity, EvictionStrategy<K> strategy) {
        this.capacity = capacity;
        this.strategy = strategy;
        this.cache = new HashMap<>();
    }

    private void evictIfNeeded() {
        if (cache.size() >= capacity) {
            K key = strategy.evict(cache);
            cache.remove(key);
        }
    }
}
```

Most production systems use combinations of these policies - for example, size-based eviction (LRU) layered with time-based expiration (TTL).
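As one illustration of such a combination, here is a minimal sketch that layers TTL on top of the OrderedDict-based LRU shown earlier: capacity pressure evicts the least recently used entry, and stale entries are dropped on read. This is a sketch of the general idea, not any particular library’s implementation:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Sketch of a combined policy: LRU for capacity, TTL for freshness."""

    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.cache = OrderedDict()  # key -> (value, expires_at), most recent at end

    def get(self, key):
        entry = self.cache.get(key)
        if entry is None:
            return -1
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.cache[key]       # expired: treat as a miss
            return -1
        self.cache.move_to_end(key)   # refresh recency
        return value

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = (value, time.monotonic() + self.ttl)
```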
Cache warming: pre-populate the cache with data that’s likely to be accessed (for example, the most popular items) so the first requests after a restart don’t all miss.
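A minimal warming sketch; `loader` and `keys` are hypothetical placeholders for however your application fetches its hot data:

```python
def warm_cache(cache, loader, keys):
    """Pre-populate the cache before it takes traffic.

    `cache` is any object with a put(key, value) method (like the ones above);
    `loader` and `keys` are hypothetical stand-ins for your data source.
    """
    for key in keys:
        cache.put(key, loader(key))

# Hypothetical usage at startup:
# warm_cache(cache, loader=fetch_product, keys=top_product_ids)
```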
Monitor the cache in production and track these metrics: hit rate, miss rate, eviction count, and memory usage.
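A minimal counter sketch for those metrics; in practice you would export them to whatever metrics backend you already run (Prometheus, StatsD, etc.) rather than keep them in-process:

```python
class CacheMetrics:
    """Minimal hit/miss/eviction counters for a cache."""

    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.evictions = 0

    def record_get(self, found: bool):
        # Call this from the cache's get(): found=True on a hit.
        if found:
            self.hits += 1
        else:
            self.misses += 1

    def record_eviction(self):
        self.evictions += 1

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```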
🎯 LRU is Default
Use LRU for most cases. It’s simple, effective, and handles temporal locality well.
📊 Frequency vs Recency
LRU = recency, LFU = frequency. Choose based on your access patterns.
⚡ O(1) Operations
Proper LRU implementation uses HashMap + Doubly Linked List for O(1) get/put.
🔄 Strategy Pattern
Make eviction policy swappable using strategy pattern in your code.