🌐 Shared Cache
Distributed caching provides a shared cache accessible to all application servers.
When your application runs on one server, caching is simple - just use local memory. But what happens when you scale to multiple servers?
The problem: each server has its own cache. Data cached on Server 1 isn't available on Server 2, so you waste memory and get inconsistent results.
The solution: distributed caching - a shared cache accessible to all servers.
Distributed caching means using a cache that runs on separate servers and is shared by all your application servers.
| Benefit | Description |
|---|---|
| Shared Cache | All servers see the same cached data |
| Larger Capacity | Sum of all cache servers, not limited to one machine |
| High Availability | Cache survives individual server failures |
| Consistency | Updates visible to all servers immediately |
| Memory Efficiency | One copy instead of N copies |
The two most popular distributed caching solutions are Redis and Memcached.

Redis: a feature-rich in-memory data store - more than just a cache.

Redis Strengths:
- Rich data structures (strings, hashes, lists, sets, sorted sets)
- Optional persistence (RDB snapshots, AOF log)
- Built-in replication and pub/sub

Redis Use Cases:
- Session storage, counters, and leaderboards
- Rate limiting and lightweight messaging
- Caching where you also need persistence or richer operations

Memcached: a simple, fast key-value store - a pure caching solution.

Memcached Strengths:
- Simple protocol and very fast lookups
- Multithreaded, so it scales well on multi-core machines
- Low per-item memory overhead

Memcached Use Cases:
- Caching database query results
- Caching rendered pages, fragments, or API responses
| Feature | Redis | Memcached |
|---|---|---|
| Data Structures | Rich (strings, lists, sets, etc.) | Key-value only |
| Persistence | Yes (RDB, AOF) | No |
| Performance | Fast | Faster (simpler) |
| Memory Overhead | Higher per key | Lower per key |
| Replication | Built-in | None (distribution via client-side sharding) |
| Use Case | Feature-rich caching | Simple caching |
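To make the data-structures row concrete, here is a minimal sketch (assuming a Redis server on localhost:6379, a Memcached server on 127.0.0.1:11211, and the same redis / python-memcached client libraries used later in this section). Redis can keep a leaderboard in a sorted set and update it atomically; Memcached only stores opaque values, so the application has to read, modify, and rewrite the whole structure:

```python
import redis
import memcache

r = redis.Redis(host='localhost', port=6379, decode_responses=True)
mc = memcache.Client(['127.0.0.1:11211'])

# Redis: a sorted set is a ready-made leaderboard
r.zadd("game:leaderboard", {"alice": 120, "bob": 95})
r.zincrby("game:leaderboard", 10, "bob")  # atomic score update
top10 = r.zrevrange("game:leaderboard", 0, 9, withscores=True)

# Memcached: plain key-value only - read, modify, write back the whole dict
scores = mc.get("game:leaderboard") or {}
scores["bob"] = scores.get("bob", 0) + 10
mc.set("game:leaderboard", scores, time=300)
```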
Problem: when Server 1 updates the cache, how do Servers 2 and 3 find out?

Solutions (sketched in the example below):
- Write-Through to Shared Cache: every write updates the database and the shared cache, so all servers read the same fresh value.
- Cache Invalidation: when data changes, delete the cached entry so the next read on any server repopulates it from the database.
- Short TTL: give entries a short time-to-live so stale data expires quickly even if an invalidation is missed.
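A minimal sketch of the first two options, assuming a shared Redis instance and a hypothetical save_user_to_db helper (not part of any library):

```python
import json
import redis

cache = redis.Redis(host='localhost', port=6379, decode_responses=True)

def update_user_write_through(user_id: int, user: dict) -> None:
    # Write-through: update the database, then write the fresh value to the
    # shared cache (with a TTL) so every server sees it on the next read.
    save_user_to_db(user_id, user)  # hypothetical DB helper
    cache.setex(f"user:{user_id}", 300, json.dumps(user))

def update_user_invalidate(user_id: int, user: dict) -> None:
    # Invalidation: update the database and delete the cached entry; the
    # next read on any server falls through to the database and re-caches.
    save_user_to_db(user_id, user)  # hypothetical DB helper
    cache.delete(f"user:{user_id}")
```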
At the code level, design cache client wrappers that hide Redis/Memcached behind a common interface - first in Python, then the Java equivalent:
```python
from abc import ABC, abstractmethod
from typing import Optional, Any
import redis
import memcache

class CacheClient(ABC):
    @abstractmethod
    def get(self, key: str) -> Optional[Any]:
        pass

    @abstractmethod
    def set(self, key: str, value: Any, ttl: int = 300) -> bool:
        pass

    @abstractmethod
    def delete(self, key: str) -> bool:
        pass

class RedisCacheClient(CacheClient):
    def __init__(self, host: str = 'localhost', port: int = 6379):
        self.client = redis.Redis(host=host, port=port, decode_responses=True)

    def get(self, key: str) -> Optional[Any]:
        try:
            return self.client.get(key)
        except redis.RedisError:
            # Handle failure gracefully
            return None

    def set(self, key: str, value: Any, ttl: int = 300) -> bool:
        try:
            return self.client.setex(key, ttl, value)
        except redis.RedisError:
            return False

    def delete(self, key: str) -> bool:
        try:
            return bool(self.client.delete(key))
        except redis.RedisError:
            return False

class MemcachedCacheClient(CacheClient):
    def __init__(self, servers: list = None):
        self.client = memcache.Client(servers or ['127.0.0.1:11211'])

    def get(self, key: str) -> Optional[Any]:
        try:
            return self.client.get(key)
        except Exception:
            return None

    def set(self, key: str, value: Any, ttl: int = 300) -> bool:
        try:
            return self.client.set(key, value, time=ttl)
        except Exception:
            return False

    def delete(self, key: str) -> bool:
        try:
            return self.client.delete(key)
        except Exception:
            return False

# Usage - application code doesn't care about implementation
class UserService:
    def __init__(self, cache: CacheClient):
        self.cache = cache

    def get_user(self, user_id: int):
        # Cache-aside pattern
        cache_key = f"user:{user_id}"
        user = self.cache.get(cache_key)

        if user:
            return user

        # Cache miss - fetch from DB
        user = self._fetch_from_db(user_id)

        # Store in cache
        if user:
            self.cache.set(cache_key, user, ttl=300)

        return user
```
The same abstraction in Java, assuming the Jedis and spymemcached client libraries:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Optional;

import net.spy.memcached.MemcachedClient;
import redis.clients.jedis.Jedis;

interface CacheClient {
    Optional<String> get(String key);
    boolean set(String key, String value, int ttlSeconds);
    boolean delete(String key);
}

class RedisCacheClient implements CacheClient {
    private final Jedis jedis;

    public RedisCacheClient(String host, int port) {
        this.jedis = new Jedis(host, port);
    }

    public Optional<String> get(String key) {
        try {
            String value = jedis.get(key);
            return Optional.ofNullable(value);
        } catch (Exception e) {
            // Handle failure gracefully
            return Optional.empty();
        }
    }

    public boolean set(String key, String value, int ttlSeconds) {
        try {
            return "OK".equals(jedis.setex(key, ttlSeconds, value));
        } catch (Exception e) {
            return false;
        }
    }

    public boolean delete(String key) {
        try {
            return jedis.del(key) > 0;
        } catch (Exception e) {
            return false;
        }
    }
}

class MemcachedCacheClient implements CacheClient {
    private final MemcachedClient client;

    public MemcachedCacheClient(String host, int port) throws IOException {
        this.client = new MemcachedClient(
            new InetSocketAddress(host, port));
    }

    public Optional<String> get(String key) {
        try {
            return Optional.ofNullable((String) client.get(key));
        } catch (Exception e) {
            return Optional.empty();
        }
    }

    public boolean set(String key, String value, int ttlSeconds) {
        try {
            return client.set(key, ttlSeconds, value).get();
        } catch (Exception e) {
            return false;
        }
    }

    public boolean delete(String key) {
        try {
            return client.delete(key).get();
        } catch (Exception e) {
            return false;
        }
    }
}

// Usage - application code doesn't care about implementation
class UserService {
    private final CacheClient cache;

    public UserService(CacheClient cache) {
        this.cache = cache;
    }

    public Optional<User> getUser(int userId) {
        // Cache-aside pattern
        String cacheKey = "user:" + userId;
        Optional<String> cached = cache.get(cacheKey);

        if (cached.isPresent()) {
            return Optional.of(deserialize(cached.get()));
        }

        // Cache miss - fetch from DB
        Optional<User> user = fetchFromDb(userId);

        // Store in cache
        user.ifPresent(u -> cache.set(cacheKey, serialize(u), 300));

        return user;
    }

    // fetchFromDb, serialize, and deserialize are app-specific helpers (omitted)
}
```

Important: Don't create new connections for each request. Use connection pooling:
```python
import redis
from redis.connection import ConnectionPool

# Create connection pool
pool = ConnectionPool(
    host='localhost',
    port=6379,
    max_connections=50,  # Max connections in pool
    decode_responses=True
)

# Reuse pool across requests
class CacheService:
    def __init__(self):
        self.redis = redis.Redis(connection_pool=pool)

    def get(self, key: str):
        return self.redis.get(key)
```
The same with Jedis in Java:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Create connection pool (once, at application startup)
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(50); // Max connections
poolConfig.setMaxIdle(10);  // Max idle connections

JedisPool pool = new JedisPool(
    poolConfig, "localhost", 6379);

// Reuse pool across requests
class CacheService {
    private final JedisPool pool;

    public CacheService(JedisPool pool) {
        this.pool = pool;
    }

    public String get(String key) {
        // Borrow a connection and return it to the pool automatically
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}
```

Handle cache failures gracefully:
```python
import time
from typing import Optional, Any

class CacheWithRetry:
    def __init__(self, cache: CacheClient, max_retries: int = 3):
        self.cache = cache
        self.max_retries = max_retries

    def get_with_retry(self, key: str) -> Optional[Any]:
        for attempt in range(self.max_retries):
            try:
                return self.cache.get(key)
            except Exception:
                if attempt == self.max_retries - 1:
                    # Last attempt failed - return None (cache miss)
                    return None
                # Exponential backoff
                time.sleep(2 ** attempt)
        return None
```
The Java version:

```java
import java.util.Optional;

class CacheWithRetry {
    private final CacheClient cache;
    private final int maxRetries;

    public CacheWithRetry(CacheClient cache, int maxRetries) {
        this.cache = cache;
        this.maxRetries = maxRetries;
    }

    public Optional<String> getWithRetry(String key) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                return cache.get(key);
            } catch (Exception e) {
                if (attempt == maxRetries - 1) {
                    // Last attempt failed - return empty (cache miss)
                    return Optional.empty();
                }
                // Exponential backoff
                try {
                    Thread.sleep((long) Math.pow(2, attempt) * 1000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return Optional.empty();
                }
            }
        }
        return Optional.empty();
    }
}
```

For very large caches, shard data across multiple cache nodes:
Sharding Strategy: hash each key to a node. The simplest approach is hash(key) % N, but consistent hashing is usually preferred because adding or removing a node only remaps a small fraction of keys instead of almost all of them. A minimal sketch follows below.
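A minimal sketch of both approaches, assuming three illustrative node addresses; the ShardedCache class is a hypothetical example, not part of the Redis or Memcached client libraries:

```python
import hashlib

def _hash(value: str) -> int:
    # Stable hash (unlike Python's built-in hash(), which varies per process)
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ShardedCache:
    def __init__(self, nodes: list):
        self.nodes = nodes
        # Consistent hashing: place each node at many points on a hash ring
        self.ring = sorted(
            (_hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(100)  # virtual nodes per physical node
        )

    def node_modulo(self, key: str) -> str:
        # Naive sharding: adding/removing a node remaps almost every key
        return self.nodes[_hash(key) % len(self.nodes)]

    def node_consistent(self, key: str) -> str:
        # Walk clockwise to the first node at or after the key's position;
        # only ~1/N of keys move when a node is added or removed
        key_hash = _hash(key)
        for point, node in self.ring:
            if key_hash <= point:
                return node
        return self.ring[0][1]  # wrap around the ring

shards = ShardedCache(["cache-1:6379", "cache-2:6379", "cache-3:6379"])
print(shards.node_modulo("user:42"))
print(shards.node_consistent("user:42"))
```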
Key Takeaways

🌐 Shared Cache
Distributed caching provides a shared cache accessible to all application servers.
🔴 Redis vs Memcached
Redis = feature-rich, Memcached = simple and fast. Choose based on needs.
🏗️ Abstract Implementation
Design cache interfaces that abstract Redis/Memcached; this makes switching easier.
🔌 Connection Pooling
Always use connection pooling. Don’t create connections per request.