LRU cache
Time O(1) get / put · Space O(capacity)
Problem
Cache with a fixed capacity that evicts the least-recently-used entry on insert when full. `get` and `put` must be O(1).
Clarify first
- Is capacity fixed at construction or dynamic?
- Must `get`/`put` be thread-safe? If so, is a coarse lock acceptable, or is lock-free required?
- Are we optimizing for hit latency (in-memory) or hit rate (tiered cache)?
Approach
Pair a hash map (key → node) with a doubly linked list ordered by recency: the node just behind the head is the most recently used, the node just before the tail the least. `get` finds the node via the map in O(1) and splices it to the front; `put` inserts at the front and, when over capacity, unlinks the node before the tail and deletes its key from the map. Sentinel head/tail nodes eliminate null checks at the list ends.
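Before the hand-rolled linked list, the same behavior can be sketched with `collections.OrderedDict`, which maintains insertion order and gives O(1) `move_to_end` and `popitem` (names like `LRUSketch` are illustrative, not part of the reference implementation below):

```python
from collections import OrderedDict

class LRUSketch:
    """Minimal LRU on top of OrderedDict: same O(1) ops, less manual bookkeeping."""
    def __init__(self, cap: int):
        self.cap = cap
        self.od: OrderedDict[int, int] = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.od:
            return -1
        self.od.move_to_end(key)          # mark as most recently used
        return self.od[key]

    def put(self, key: int, val: int) -> None:
        if key in self.od:
            self.od.move_to_end(key)
        self.od[key] = val
        if len(self.od) > self.cap:
            self.od.popitem(last=False)   # evict least recently used
```

An interviewer will usually ask for the explicit map + doubly-linked-list version anyway, since that is what `OrderedDict` does under the hood.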
Reference implementation
```python
class Node:
    __slots__ = ('key', 'val', 'prev', 'next')

    def __init__(self, k, v):
        self.key, self.val, self.prev, self.next = k, v, None, None


class LRU:
    def __init__(self, cap: int):
        self.cap = cap
        self.map: dict[int, Node] = {}
        # Sentinel head/tail: removes null-checks
        self.head, self.tail = Node(0, 0), Node(0, 0)
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, n: Node) -> None:
        n.prev.next, n.next.prev = n.next, n.prev

    def _add_to_front(self, n: Node) -> None:
        n.prev, n.next = self.head, self.head.next
        self.head.next.prev = n
        self.head.next = n

    def get(self, key: int) -> int:
        n = self.map.get(key)
        if not n:
            return -1
        self._remove(n)
        self._add_to_front(n)
        return n.val

    def put(self, key: int, val: int) -> None:
        if key in self.map:
            self._remove(self.map[key])
        n = Node(key, val)
        self.map[key] = n
        self._add_to_front(n)
        if len(self.map) > self.cap:
            lru = self.tail.prev
            self._remove(lru)
            del self.map[lru.key]
```

Follow-up questions
- Make it thread-safe. What concurrency primitive is cheapest: a single lock, striped locks, or lock-free?
- Implement LFU instead. Why is it harder to get O(1)?
- How would you extend this to a distributed cache?