GCP Cloud Memorystore
Cloud Memorystore is GCP's fully managed in-memory data store service. It supports two popular open-source caching engines: Redis and Memcached. In-memory stores keep data entirely in RAM instead of on disk, so reads and writes complete in microseconds rather than milliseconds. Memorystore handles provisioning, replication, failover, and patching automatically.
Think of an application's main database as a library. Every time a user visits a popular page, the application walks to the library, finds the book, and brings it back — this takes time. Memorystore is a small bookshelf right next to the developer's desk. The most popular books sit there, ready to grab instantly. The library (database) is only visited when the bookshelf does not have what is needed.
Why Use a Cache?
Without Cache:
User requests product page
│
▼
Application queries Cloud SQL: "SELECT * FROM products WHERE id=123"
│ (5–20ms database query)
▼
Database returns results → Application renders page → User sees response
(Every request hits the database, even for the same data)
With Memorystore Cache:
User requests product page
│
▼
Application checks Redis: GET product:123
│
├── Cache Hit → Returns in 0.1ms ✓ (database not touched)
└── Cache Miss → Queries Cloud SQL → Stores result in Redis → Returns
(Next request will be a cache hit)
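A quick back-of-envelope calculation shows why the hit rate matters. The 0.1 ms and 15 ms figures below are the illustrative numbers from the diagram, not measured values:

```python
# Average read latency as a function of cache hit rate.
# Illustrative numbers: 0.1 ms per cache hit, 15 ms per database query.
def avg_latency_ms(hit_rate, hit_ms=0.1, miss_ms=15.0):
    """Expected latency = hit_rate * hit cost + miss_rate * miss cost."""
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

for rate in (0.0, 0.90, 0.99):
    print(f"{rate:.0%} hit rate -> {avg_latency_ms(rate):.2f} ms average")
# 0% hit rate -> 15.00 ms average
# 90% hit rate -> 1.59 ms average
# 99% hit rate -> 0.25 ms average
```

Going from a 90% to a 99% hit rate cuts average latency by another ~6x, which is why cache key design and TTL tuning pay off.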
Redis vs Memcached
| Feature | Redis | Memcached |
|---|---|---|
| Data types | Strings, Lists, Sets, Sorted Sets, Hashes, Streams | Strings only |
| Persistence | Optional (AOF, RDB snapshots) | None — volatile only |
| Replication | Yes (primary + replica) | No |
| Pub/Sub | Yes | No |
| Use case | Cache, sessions, queues, leaderboards, rate limiting | Simple object caching, distributed cache |
| Recommendation | Most applications should choose Redis | Multi-threaded cache for very simple use cases |
Creating a Memorystore Redis Instance
# Create a Redis instance (Basic tier — no replication)
gcloud redis instances create my-cache \
  --size=1 \
  --region=us-central1 \
  --redis-version=redis_7_0 \
  --tier=basic

# Create a Standard tier Redis instance (with replication and automatic failover)
gcloud redis instances create my-ha-cache \
  --size=5 \
  --region=us-central1 \
  --redis-version=redis_7_0 \
  --tier=standard

# Get the Redis instance IP and port
gcloud redis instances describe my-cache --region=us-central1
# Output includes: host: 10.0.0.3, port: 6379
Important: Memorystore Redis instances are only accessible from within the same VPC network. They have no public IP. Applications must run inside the same VPC (Compute Engine, GKE, Cloud Run with VPC connector) to connect.
Connecting to Redis
# Connect from a Compute Engine VM (inside the same VPC)
redis-cli -h 10.0.0.3 -p 6379

# Basic Redis commands
SET user:session:abc123 "user_data_json" EX 3600   # Store with 1-hour expiry
GET user:session:abc123                            # Retrieve
DEL user:session:abc123                            # Delete
EXISTS user:session:abc123                         # Check existence (1=yes, 0=no)
TTL user:session:abc123                            # Check remaining TTL in seconds
Using Redis in a Python Application
import redis
import json
import time
# Connect to Memorystore Redis
r = redis.Redis(host='10.0.0.3', port=6379, decode_responses=True)
def get_product(product_id):
    """Fetch a product with Redis caching."""
    cache_key = f"product:{product_id}"

    # Check cache first
    cached = r.get(cache_key)
    if cached:
        print("Cache HIT")
        return json.loads(cached)

    # Cache miss — query the database
    print("Cache MISS — querying database")
    product = query_database(product_id)  # Simulated DB call (takes 15ms)

    # Store in cache for 10 minutes
    r.setex(cache_key, 600, json.dumps(product))
    return product

def query_database(product_id):
    """Simulated database query."""
    time.sleep(0.015)  # Simulate 15ms DB latency
    return {"id": product_id, "name": "Laptop", "price": 49999, "stock": 25}
# First call — cache miss
product = get_product("P001") # Takes ~15ms
# Second call — cache hit
product = get_product("P001") # Takes ~0.1ms ✓
Common Redis Data Patterns
Session Storage
# Store user session data
session_data = json.dumps({"user_id": "U123", "role": "admin", "cart": []})
r.setex("session:tok_abc123", 86400, session_data) # Expires in 24 hours
# Retrieve session on each request
session = r.get("session:tok_abc123")
if not session:
    # Session expired or invalid → redirect to login
    pass
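The token `tok_abc123` above is a placeholder. In practice, an unguessable session token might be generated with Python's standard `secrets` module (the `tok_` prefix is just a naming convention, not a Redis requirement):

```python
import secrets

# Generate a cryptographically random, URL-safe session token
token = "tok_" + secrets.token_urlsafe(24)  # 24 random bytes, base64-encoded
session_key = f"session:{token}"
print(session_key)  # e.g. session:tok_Xq3... (random each run)
```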
Rate Limiting
def check_rate_limit(user_ip, limit=100, window=60):
    """Allow max 100 requests per minute per IP."""
    key = f"ratelimit:{user_ip}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # Start the 60-second window on first request
    if count > limit:
        return False, f"Rate limit exceeded. Retry after {r.ttl(key)} seconds."
    return True, f"Request {count}/{limit}"
allowed, message = check_rate_limit("192.168.1.100")
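The intended fixed-window semantics can be modeled in pure Python. `FixedWindow` below is an illustrative stand-in for the INCR + EXPIRE pair, not a Redis API; it shows how the counter resets once the window elapses:

```python
class FixedWindow:
    """Pure-Python model of the INCR + EXPIRE fixed-window counter."""
    def __init__(self, window):
        self.window = window
        self.count = 0
        self.start = None

    def hit(self, now):
        # A new or expired key restarts the window, like EXPIRE does
        if self.start is None or now - self.start >= self.window:
            self.start, self.count = now, 0
        self.count += 1
        return self.count

w = FixedWindow(window=60)
print(w.hit(now=0))   # 1: first request opens the window
print(w.hit(now=30))  # 2: still inside the same 60 s window
print(w.hit(now=61))  # 1: window expired, counter restarts
```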
Leaderboard with Sorted Sets
# Add/update player scores
r.zadd("leaderboard:quiz-001", {"rahul": 950, "priya": 1100, "ali": 875})
# Get top 5 players (highest scores first)
top5 = r.zrevrange("leaderboard:quiz-001", 0, 4, withscores=True)
for rank, (player, score) in enumerate(top5, 1):
    print(f"{rank}. {player}: {score}")
# Output:
# 1. priya: 1100.0
# 2. rahul: 950.0
# 3. ali: 875.0
# Get a specific player's rank
rank = r.zrevrank("leaderboard:quiz-001", "rahul") # Returns 1 (0-indexed)
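What ZREVRANGE and ZREVRANK compute can be modeled with a plain dict and a sort. This is only a sketch of the semantics; Redis maintains the ordering incrementally in O(log N) rather than re-sorting:

```python
# Model of the leaderboard above: sort descending by score
scores = {"rahul": 950, "priya": 1100, "ali": 875}
board = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

top5 = board[:5]                              # like ZREVRANGE ... 0 4
rank = [p for p, _ in board].index("rahul")   # like ZREVRANK ... rahul

print(top5[0])  # ('priya', 1100)
print(rank)     # 1 (0-indexed, matching Redis)
```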
Pub/Sub with Redis
# Publisher — sends real-time notifications
r.publish("notifications:user_001", json.dumps({
    "type": "order_shipped",
    "order_id": "ORD-9001",
    "message": "Your order has been shipped!"
}))
# Subscriber — listens for real-time notifications
pubsub = r.pubsub()
pubsub.subscribe("notifications:user_001")
for message in pubsub.listen():
    if message["type"] == "message":
        data = json.loads(message["data"])
        print(f"Notification: {data['message']}")
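One caveat worth modeling: Redis Pub/Sub is fire-and-forget, so a message published while nobody is subscribed is simply lost, not queued. The in-process sketch below (all names illustrative) mimics that fan-out behavior:

```python
from collections import defaultdict

# Each channel maps to a list of subscriber callbacks
channels = defaultdict(list)
received = []

def subscribe(channel, callback):
    channels[channel].append(callback)

def publish(channel, message):
    for callback in channels[channel]:
        callback(message)
    return len(channels[channel])  # like PUBLISH: number of receivers

early = publish("notifications:user_001", "too early")  # no subscribers yet
subscribe("notifications:user_001", received.append)
publish("notifications:user_001", "Your order has been shipped!")

print(early)     # 0: the early message was dropped, not queued
print(received)  # ['Your order has been shipped!']
```

If delivery must survive subscriber downtime, Redis Streams (or GCP's own Pub/Sub service) are the usual alternatives.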
Cache Eviction Policies
When Redis reaches its memory limit, it needs to decide which keys to remove. This is controlled by the eviction policy:
| Policy | Behavior | Best For |
|---|---|---|
| noeviction | Return errors on writes when memory is full (no keys deleted) | Session store (data must never be lost) |
| allkeys-lru | Remove least recently used keys across all keys | General caching |
| volatile-lru | Remove least recently used keys that have an expiry set | Mixed store (some cached, some permanent) |
| allkeys-lfu | Remove least frequently used keys across all keys | Access pattern–aware caching |
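The allkeys-lru policy can be sketched with a small in-memory model. Note that real Redis approximates LRU by sampling a few keys rather than tracking exact recency; the toy below illustrates the policy, not the implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of allkeys-lru: on overflow, evict the least recently used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # touch "a", so "b" is now least recently used
cache.set("c", 3)        # over capacity: "b" is evicted
print(list(cache.data))  # ['a', 'c']
```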
Key Takeaways
- Cloud Memorystore is a fully managed Redis or Memcached service for sub-millisecond data access.
- Redis supports rich data types: strings, lists, sets, sorted sets, hashes, and streams.
- Memorystore instances live inside a VPC — accessible only from within the same network.
- Common use cases include database query caching, session storage, rate limiting, and leaderboards.
- Redis Sorted Sets natively support leaderboard-style ranking with O(log N) operations.
- Eviction policies control which keys are removed when Redis memory fills up.
