Cache Eviction Policy

Michael Hakimi

Why do some apps feel lightning-fast while others lag, even on the same device? A big part of that speed comes down to cache management, and at the heart of it is the cache eviction policy.

Caching stores frequently used data in a faster storage layer so that your system doesn’t have to fetch it from a slower source every time. But since cache storage is limited, at some point old data needs to be removed to make space for new data.

That’s where cache eviction policies come in—they decide what stays and what goes.

What Is Cache Eviction?

Think of your cache like a small fridge in your room. It can only hold so much, so when you buy new snacks, you might have to throw out the old ones. But how do you decide what to throw out? That’s exactly what cache eviction policies do.

Cache eviction is the process of removing data from the cache when it reaches its storage limit. The key question is:
👉 Which data should be removed to make space for new data?

The answer depends on the cache eviction policy in place.

What Is a Cache Eviction Policy?

A cache eviction policy is a set of rules that determine which data gets removed when the cache is full. Different systems use different policies based on their needs. Some prioritize recently used data, while others focus on how often data is accessed.

Choosing the right cache eviction strategy can make or break the performance of an application, database, or operating system.

Types of Cache Eviction Policies

There are several cache eviction policies, each suited for different scenarios. Let’s explore the most commonly used ones.

1. Least Recently Used (LRU)

👉 Removes the data that hasn’t been used for the longest time.

  • Think of your phone’s recent apps list—if you haven’t used an app in a while, the system might remove it from memory to make space for something new.
  • Best for workloads where old data is less likely to be accessed again.

🔹 Used in: Operating systems, databases, and web caching.
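
To make this concrete, here’s a minimal LRU sketch in Python. The class name and structure are illustrative rather than taken from any particular library; it leans on `collections.OrderedDict`, which remembers entry order and can move an entry to the end in O(1).

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the entry that was accessed least recently."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        # Mark as most recently used by moving it to the end.
        self._data.move_to_end(key)
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            # Evict the least recently used entry (the front of the dict).
            self._data.popitem(last=False)
```

Because every `get` refreshes an entry’s position, data that is never re-read drifts toward the front and gets evicted first.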

2. Least Frequently Used (LFU)

👉 Removes data that has been used the least number of times.

  • If a piece of data has rarely been accessed, it’s probably not important, so it gets evicted first.
  • Best for scenarios where popular data should stay in the cache longer.

🔹 Used in: Content delivery networks (CDNs) and database caching.
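
Here’s a deliberately simple LFU sketch in Python (names are illustrative; it picks the victim with a linear scan, whereas production implementations keep frequency buckets or a heap for cheaper eviction):

```python
class LFUCache:
    """Evicts the entry with the smallest access count (ties broken arbitrarily)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = {}    # key -> value
        self._counts = {}  # key -> number of accesses

    def get(self, key):
        if key not in self._data:
            return None
        self._counts[key] += 1
        return self._data[key]

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self.capacity:
            # Evict the least frequently used key. This linear scan is O(n);
            # real LFU caches use frequency lists or a heap instead.
            victim = min(self._counts, key=self._counts.get)
            del self._data[victim], self._counts[victim]
        self._data[key] = value
        self._counts[key] = self._counts.get(key, 0) + 1
```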

3. First In, First Out (FIFO)

👉 Removes the oldest data first, regardless of how often it was used.

  • Just like a queue in a grocery store, the first item that enters the cache is the first to leave.
  • Works well for simple, time-sensitive caching needs but doesn’t consider usage patterns.

🔹 Used in: Simple caching systems where recency doesn’t matter.
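
A minimal FIFO sketch in Python (illustrative names): a deque records insertion order, and reads never change who gets evicted next.

```python
from collections import deque

class FIFOCache:
    """Evicts entries strictly in insertion order."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = {}
        self._order = deque()  # keys in the order they were inserted

    def get(self, key):
        # Reads do not affect eviction order in FIFO.
        return self._data.get(key)

    def put(self, key, value):
        if key not in self._data:
            if len(self._data) >= self.capacity:
                oldest = self._order.popleft()
                del self._data[oldest]
            self._order.append(key)
        self._data[key] = value
```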

4. Random Replacement (RR)

👉 Randomly selects a cache entry to remove.

  • Sometimes, randomness is the easiest solution—especially when precise tracking is too expensive.
  • Best for systems where tracking usage data is impractical.

🔹 Used in: Hardware caches and some network devices.
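
Random replacement needs almost no bookkeeping, as this illustrative Python sketch shows: when the cache is full, the victim is picked uniformly at random.

```python
import random

class RandomCache:
    """Evicts a uniformly random entry when full; no usage tracking at all."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self.capacity:
            victim = random.choice(list(self._data))
            del self._data[victim]
        self._data[key] = value
```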

5. Adaptive Replacement Cache (ARC)

👉 Combines LRU and LFU for smarter cache eviction.

  • ARC dynamically adjusts between recency-based and frequency-based eviction based on actual workload patterns.
  • It’s more efficient than basic LRU or LFU alone.

🔹 Used in: High-performance storage systems such as ZFS.


How Cache Eviction Strategies Work

Now that you know the types of cache eviction policies, let’s talk about strategies—the broader approach to managing cache eviction.

1. Write-Through Caching

  • Writes data to both the cache and the main storage at the same time.
  • Ensures data is always up-to-date but comes with a performance cost.

🔹 Used in: Databases and cloud storage systems.

2. Write-Back Caching

  • Writes data to the cache first and updates the main storage later.
  • Faster performance but has a risk of data loss if the system crashes before syncing.

🔹 Used in: CPU caches and SSDs.
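
The difference between the two write strategies is easiest to see side by side. In the illustrative Python sketch below, a plain dict stands in for the slow backing store (a database or disk): write-through persists every write immediately, while write-back marks entries dirty and persists them later via flush().

```python
class WriteThroughCache:
    def __init__(self, backing_store: dict):
        self.cache = {}
        self.backing_store = backing_store

    def put(self, key, value):
        # Write to the cache and the backing store in the same operation:
        # always consistent, but every write pays the slow-store cost.
        self.cache[key] = value
        self.backing_store[key] = value


class WriteBackCache:
    def __init__(self, backing_store: dict):
        self.cache = {}
        self.backing_store = backing_store
        self._dirty = set()  # keys written to cache but not yet persisted

    def put(self, key, value):
        # Fast path: only the cache is updated now.
        self.cache[key] = value
        self._dirty.add(key)

    def flush(self):
        # Persist dirty entries later (on eviction, on a timer, or at shutdown).
        # A crash before flush() loses these writes.
        for key in self._dirty:
            self.backing_store[key] = self.cache[key]
        self._dirty.clear()
```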

3. Lazy Eviction

  • Only removes data when necessary, rather than proactively.
  • Can be more efficient but may lead to occasional slowdowns.

🔹 Used in: Web browsers and distributed systems.
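
One common form of lazy eviction is expiry-on-read: entries carry a time-to-live, and an expired entry is only removed when a lookup stumbles over it. Here is a minimal Python sketch, with illustrative names and TTL handling:

```python
import time

class LazyTTLCache:
    """Removes expired entries only when they are looked up."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry_timestamp)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            # Evict lazily: the entry sat in memory until this read noticed it.
            del self._data[key]
            return None
        return value
```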

Cache Eviction Algorithms

Cache eviction policies set the rules, but algorithms determine how those rules are applied. Let’s look at a few well-known cache eviction algorithms:

1. LRU Algorithm (Least Recently Used)

  • Maintains a list of cache entries and evicts the one that was accessed the longest time ago.
  • Typically implemented with a hash map combined with a doubly linked list, so lookups and reordering both run in O(1).

🔹 Example: Operating systems use LRU-style approximations for memory paging.

2. LFU Algorithm (Least Frequently Used)

  • Keeps a count of how many times each entry is accessed and evicts the one with the lowest count.
  • Requires more memory to track usage but works well for long-term caching needs.

🔹 Example: Content recommendation systems use LFU to keep popular items in cache.

3. Clock Algorithm (Enhanced LRU)

  • Uses a circular queue and a "clock hand" to track which data is least used.
  • Cheaper per access than strict LRU because it doesn’t reorder a list on every hit; a single reference bit per entry approximates recency.

🔹 Example: Used in operating systems for virtual memory management.
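
Here’s a hedged sketch of the clock (second-chance) idea in Python, assuming a fixed number of slots: every slot carries a reference bit, and the "hand" sweeps the circle, clearing bits until it finds one that is already clear, which marks the victim.

```python
class ClockCache:
    """Second-chance eviction over a fixed number of slots."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._slots = [None] * capacity      # (key, value) or None
        self._ref_bits = [False] * capacity  # "recently used" flags
        self._index = {}                     # key -> slot number
        self._hand = 0

    def get(self, key):
        slot = self._index.get(key)
        if slot is None:
            return None
        self._ref_bits[slot] = True          # give the entry a second chance
        return self._slots[slot][1]

    def put(self, key, value):
        if key in self._index:
            slot = self._index[key]
            self._slots[slot] = (key, value)
            self._ref_bits[slot] = True
            return
        # Sweep the hand until we find an empty slot or a clear reference bit.
        while True:
            slot = self._hand
            self._hand = (self._hand + 1) % self.capacity
            if self._slots[slot] is None:
                break
            if self._ref_bits[slot]:
                self._ref_bits[slot] = False  # clear the bit and keep sweeping
            else:
                del self._index[self._slots[slot][0]]  # evict this victim
                break
        self._slots[slot] = (key, value)
        self._ref_bits[slot] = True
        self._index[key] = slot
```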

4. Two-Queue Algorithm (2Q)

  • Splits cache into two queues:
    • One for new entries.
    • One for long-term entries.
  • If an entry is accessed frequently, it moves from the first queue to the second.

🔹 Example: 2Q-style designs have been used in database buffer caches, including early PostgreSQL releases.
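
A simplified 2Q sketch in Python (the full algorithm also keeps a "ghost" list of recently evicted keys, which is omitted here; names and sizes are illustrative): new entries land in a FIFO probation queue, and only keys that are accessed again get promoted to the long-term LRU queue.

```python
from collections import OrderedDict

class TwoQueueCache:
    """Simplified 2Q: a FIFO probation queue plus an LRU queue for proven keys."""

    def __init__(self, probation_size: int, main_size: int):
        self.probation_size = probation_size
        self.main_size = main_size
        self._probation = OrderedDict()  # newly inserted entries (FIFO order)
        self._main = OrderedDict()       # entries that were re-accessed (LRU order)

    def get(self, key):
        if key in self._main:
            self._main.move_to_end(key)          # standard LRU touch
            return self._main[key]
        if key in self._probation:
            # Second access: promote from probation to the main queue.
            value = self._probation.pop(key)
            self._main[key] = value
            if len(self._main) > self.main_size:
                self._main.popitem(last=False)   # evict the LRU entry from main
            return value
        return None

    def put(self, key, value):
        if key in self._main:
            self._main[key] = value
            self._main.move_to_end(key)
            return
        self._probation[key] = value
        if len(self._probation) > self.probation_size:
            self._probation.popitem(last=False)  # evict the oldest probationer
```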

Performance Trade-offs of Eviction Policies

Choosing the wrong cache eviction policy can severely impact cache efficiency, CPU usage, and response times. Here’s how different strategies affect system performance:

🔹 Time Complexity of Eviction Algorithms

Each eviction policy has a different computational cost, impacting system speed:

| Eviction Policy | Access Time Complexity | Eviction Complexity | Memory Overhead |
| --- | --- | --- | --- |
| LRU (Least Recently Used) | O(1) (via HashMap + Linked List) | O(1) | Medium |
| LFU (Least Frequently Used) | O(1) (via HashMap) | O(log n) (heap-based) | High |
| FIFO (First In, First Out) | O(1) | O(1) | Low |
| Random Replacement (RR) | O(1) | O(1) | Very Low |
| ARC (Adaptive Replacement Cache) | O(1) | O(1) | High |

  • LRU is great for quick access but has memory overhead from linked lists.
  • LFU improves efficiency but needs extra tracking and sorting, making it computationally expensive.
  • FIFO is simple but doesn’t consider usage patterns, which can lead to frequent cache misses.
  • Random Replacement has low overhead but can lead to unpredictable performance.

🔹 Memory Overhead and Cache Efficiency

  • Tracking Access History:
    • LRU maintains linked lists or counters, requiring extra memory.
    • LFU keeps usage counts, increasing storage costs.
  • Inefficient Evictions Can Increase Cache Misses:
    • FIFO doesn’t consider recency or frequency, often evicting useful data.
    • Random Replacement might throw away high-priority data, leading to unnecessary recomputation.

🔸 For high-performance systems, a hybrid approach (like ARC) is often better than a single eviction policy.

Choosing the Right Cache Eviction Policy

So, how do you pick the best cache eviction policy? It depends on your needs:

✔️ Need to keep recently used data around? → Use LRU
✔️ Need to keep frequently used data around? → Use LFU
✔️ Want to evict the oldest data first? → Use FIFO
✔️ Need a balance between recency and frequency? → Use ARC
✔️ Prefer simplicity and minimal tracking? → Use Random Replacement (RR)

The right strategy depends on your workload, hardware, and performance goals.

Cache Eviction in Distributed Systems

Handling cache eviction in distributed caching is much more complex than in single-node caches.

🔹 Why Is Distributed Cache Eviction Harder?

  • Multiple nodes manage cache independently, leading to inconsistencies.
  • Evicting data on one node doesn’t mean it’s evicted everywhere.
  • Coordination overhead: If eviction isn't well-managed, it can result in stale data, race conditions, and performance bottlenecks.

🔹 Strategies for Distributed Cache Eviction

1️⃣ Centralized Eviction

  • A single node or master decides which data should be evicted across all cache nodes.
  • Pros: Ensures consistent eviction, prevents cache poisoning.
  • Cons: High dependency on a central authority (can be a bottleneck).
  • Example: clusters that funnel all cache traffic through a single coordinating node effectively centralize eviction decisions.

2️⃣ Decentralized Eviction (Local Decision-Making)

  • Each cache node evicts data independently based on local policies.
  • Pros: More scalable and avoids single points of failure.
  • Cons: Can lead to data inconsistencies (one node may evict a key still needed by another).
  • Example: Redis eviction policies (such as allkeys-lru) are configured and applied per node, meaning a key may be evicted on one node while related keys survive on another (see the configuration sketch below).
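
As a concrete illustration of per-node configuration, here is a small sketch using the redis-py client. It assumes a Redis server running on localhost; in a cluster, each node would need the same settings applied individually.

```python
import redis

# Connect to one cache node (assumed to be a local Redis server).
node = redis.Redis(host="localhost", port=6379)

# Cap this node's memory and tell it to evict least-recently-used keys
# from the whole keyspace once the cap is reached. Other nodes in the
# cluster are unaffected by this call.
node.config_set("maxmemory", "100mb")
node.config_set("maxmemory-policy", "allkeys-lru")

print(node.config_get("maxmemory-policy"))
```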

3️⃣ Cooperative Eviction (Consensus-Based)

  • Nodes share eviction metadata to coordinate removal across the system.
  • Uses gossip protocols or distributed leader election to maintain consistency.
  • Pros: Balances scalability and eviction accuracy.
  • Cons: Requires more network communication.
  • Example: Apache Ignite and Amazon DynamoDB use cooperative eviction techniques.

Conclusion

A cache eviction policy is crucial for maintaining fast and efficient caching. Different cache eviction strategies like LRU, LFU, and FIFO help systems decide which data to remove when space runs out. Choosing the right policy can significantly improve performance in databases, web applications, and even operating systems.

At the end of the day, there’s no one-size-fits-all approach. It all comes down to how frequently your data changes, how often it’s accessed, and what level of performance you need.

Published on:
March 19, 2025
