
Redis Reference — redis-cli, data types, SCAN, pub/sub, transactions, and production patterns

Redis is an in-memory data structure store used for caching, sessions, pub/sub, rate limiting, and distributed locks. Its command loop is single-threaded, so each individual command executes atomically.

1. Core Data Types

String, Hash, List, Set, Sorted Set — when to use each
Type        Use case                   Key commands
String      Cache, counters, flags     GET, SET, INCR, EXPIRE, SETNX
Hash        User objects, config maps  HGET, HSET, HMGET, HGETALL, HINCRBY
List        Task queues, recent items  LPUSH, RPOP, LRANGE, BLPOP
Set         Unique visitors, tags      SADD, SISMEMBER, SUNION, SCARD
Sorted Set  Leaderboards, rate limiting  ZADD, ZRANGE, ZRANK, ZINCRBY
# String:
SET user:123:name "Alice" EX 3600
GET user:123:name
INCR page:views
SETNX lock:job:42 1      # set-if-absent; bare SETNX sets no TTL (for locks prefer SET key val NX EX ttl)

# Hash (atomic field updates — no deserialize needed):
HSET user:123 name Alice email alice@example.com age 30
HGET user:123 name
HINCRBY user:123 login_count 1
HGETALL user:123

# List (task queue):
LPUSH queue:emails '{"to":"alice@example.com"}'
RPOP queue:emails
BRPOP queue:emails 30    # blocking pop, wait up to 30s

# Sorted Set (leaderboard):
ZADD leaderboard 1500 "alice"
ZADD leaderboard 2200 "bob"
ZRANGE leaderboard 0 -1 REV WITHSCORES    # highest first
ZINCRBY leaderboard 100 "alice"            # atomic score increment
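The blocking-pop queue above is usually driven by a long-running worker loop. A minimal sketch, assuming the async redis-py client used later in this reference; `email_worker` and `handle` are hypothetical names, not part of the doc:

```python
import json

async def email_worker(redis, handle, queue="queue:emails"):
    """Consume queue:emails with BRPOP; `handle` is a hypothetical job coroutine."""
    while True:
        item = await redis.brpop(queue, timeout=30)  # (key, payload) or None on timeout
        if item is None:
            continue  # nothing arrived in 30s; block again
        _key, payload = item
        await handle(json.loads(payload))
```

BRPOP blocks server-side, so idle workers cost no polling traffic.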

2. Caching Patterns

Cache-aside, stampede prevention, invalidation and TTL strategies
# Cache-aside (most common):
async def get_user(user_id: int):
    cached = await redis.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)
    user = await db.get_user(user_id)
    if user:
        await redis.set(f"user:{user_id}", json.dumps(user), ex=3600)
    return user

# Invalidate on update:
async def update_user(user_id: int, data: dict):
    user = await db.update_user(user_id, data)
    await redis.delete(f"user:{user_id}")
    return user

# Stampede prevention — lock while populating:
async def get_user_safe(user_id: int):
    key = f"user:{user_id}"
    cached = await redis.get(key)
    if cached:
        return json.loads(cached)
    # Only one worker populates the cache
    acquired = await redis.set(f"lock:{key}", "1", nx=True, ex=10)
    if not acquired:
        # Another worker is populating; back off briefly, then retry
        # (bound the retries in production to avoid unbounded recursion)
        await asyncio.sleep(0.05)
        return await get_user_safe(user_id)
    try:
        user = await db.get_user(user_id)
        await redis.set(key, json.dumps(user), ex=3600)
        return user
    finally:
        await redis.delete(f"lock:{key}")

# TTL tips:
# Short TTL (60-300s): user profiles, session-like data
# Medium TTL (1h): product catalog, config
# Use jitter: ex = int(3600 * random.uniform(0.9, 1.1))
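The jitter tip above can be wrapped in a tiny helper so every cache write gets it for free; a sketch (`jittered_ttl` is a hypothetical name, not a redis-py API):

```python
import random

def jittered_ttl(base_seconds: int, spread: float = 0.1) -> int:
    """Randomize TTL by ±spread so keys cached together don't all expire at once."""
    return int(base_seconds * random.uniform(1 - spread, 1 + spread))

# e.g. await redis.set(key, json.dumps(user), ex=jittered_ttl(3600))
```

Without jitter, a burst of cache fills at deploy time expires simultaneously an hour later and re-stampedes the database.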

3. Pub/Sub and Streams

Fire-and-forget pub/sub vs durable Streams with consumer groups
# Pub/Sub (subscriber misses messages while offline):
await redis.publish("notifications:123", json.dumps({"type": "order_shipped"}))

async def subscribe():
    pubsub = redis.pubsub()
    await pubsub.subscribe("notifications:123")
    async for msg in pubsub.listen():
        if msg["type"] == "message":
            await handle(json.loads(msg["data"]))

# Pattern subscribe:
await pubsub.psubscribe("notifications:*")

# Streams (durable — messages persist, consumer groups track acks):
await redis.xadd("orders", {"order_id": "ord_abc", "status": "shipped"})

await redis.xgroup_create("orders", "processors", id="0", mkstream=True)

messages = await redis.xreadgroup("processors", "worker-1", {"orders": ">"})
for stream, entries in messages:
    for entry_id, data in entries:
        await process(data)
        await redis.xack("orders", "processors", entry_id)

# Find unacknowledged messages (for recovery):
pending = await redis.xpending("orders", "processors", "-", "+", count=10)
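For entries a crashed worker claimed but never acked, Redis 6.2+ provides XAUTOCLAIM to transfer ownership so another consumer can reprocess them. A sketch with the redis-py asyncio client; note the reply shape gained a third element (deleted IDs) in Redis 7, so only the first two fields are unpacked here:

```python
async def reclaim_stale(redis, stream="orders", group="processors",
                        consumer="worker-1", min_idle_ms=60000):
    """Claim pending entries idle longer than min_idle_ms for this consumer."""
    reply = await redis.xautoclaim(stream, group, consumer,
                                   min_idle_time=min_idle_ms, start_id="0-0")
    claimed = reply[1]  # list of (entry_id, fields) pairs now owned by `consumer`
    return claimed
```

Run this periodically (or on worker startup) and feed the claimed entries back through the normal process-then-XACK path.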

4. Distributed Locks and Rate Limiting

Lua scripts for atomic lock release and sliding window rate limiting
import uuid, time

# Acquire lock (SET NX with TTL):
async def acquire_lock(key: str, ttl_ms: int = 5000):
    token = str(uuid.uuid4())
    acquired = await redis.set(f"lock:{key}", token, nx=True, px=ttl_ms)
    return token if acquired else None

# Release — atomic check-then-delete (Lua):
# Never DEL without verifying token — you'd release someone else's lock
RELEASE_LUA = (
    "if redis.call('get', KEYS[1]) == ARGV[1] then "
    "return redis.call('del', KEYS[1]) "
    "else return 0 end"
)

async def release_lock(key: str, token: str) -> bool:
    result = await redis.eval(RELEASE_LUA, 1, f"lock:{key}", token)
    return result == 1

# Usage:
async def process_job(job_id: str):
    token = await acquire_lock(f"job:{job_id}", ttl_ms=10000)
    if not token:
        return   # another worker has it
    try:
        await do_work(job_id)
    finally:
        await release_lock(f"job:{job_id}", token)
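When waiting briefly for the lock is acceptable, the acquire step can be retried with a short delay instead of returning immediately. A sketch reusing the doc's `lock:` key convention; `acquire_lock_retry` and its parameters are hypothetical:

```python
import asyncio
import uuid

async def acquire_lock_retry(redis, key: str, ttl_ms: int = 5000,
                             retries: int = 20, delay_s: float = 0.05):
    """Retry SET NX up to `retries` times; return the token, or None on failure."""
    for _ in range(retries):
        token = str(uuid.uuid4())
        if await redis.set(f"lock:{key}", token, nx=True, px=ttl_ms):
            return token
        await asyncio.sleep(delay_s)  # someone else holds it; wait and retry
    return None
```

Keep `retries * delay_s` well under the lock TTL, or the waiter may acquire a lock the original holder still believes it owns.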

# Sliding window rate limit (Lua — atomic):
RATE_LUA = (
    "local window = tonumber(ARGV[1]) "
    "local limit = tonumber(ARGV[2]) "
    "local now = tonumber(ARGV[3]) "
    "redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', now - window) "
    "local count = redis.call('ZCARD', KEYS[1]) "
    "if count < limit then "
    "  redis.call('ZADD', KEYS[1], now, now) "
    "  redis.call('EXPIRE', KEYS[1], math.ceil(window / 1000)) "
    "  return 1 "
    "else return 0 end"
)

async def check_rate_limit(user_id: str, limit=100, window_ms=60000) -> bool:
    now = int(time.time() * 1000)
    result = await redis.eval(RATE_LUA, 1, f"ratelimit:{user_id}", window_ms, limit, now)
    return result == 1
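If one sorted-set entry per request is too heavy, a fixed-window counter (a single INCR per request) is a common lighter trade-off; it is coarser at window boundaries, where up to 2x the limit can slip through. A sketch with hypothetical names:

```python
import time

def window_bucket(user_id: str, now_s: float, window_s: int) -> str:
    """Key for the fixed window containing now_s."""
    return f"ratelimit:fixed:{user_id}:{int(now_s // window_s)}"

async def check_rate_limit_fixed(redis, user_id: str, limit: int = 100,
                                 window_s: int = 60) -> bool:
    bucket = window_bucket(user_id, time.time(), window_s)
    count = await redis.incr(bucket)          # atomic; creates the key at 1
    if count == 1:
        await redis.expire(bucket, window_s)  # set the TTL on first hit only
    return count <= limit
```

Old buckets expire on their own, so no cleanup script is needed.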

5. Connection Pooling and Production Config

redis-py async pool, RDB vs AOF persistence, memory eviction policies
# redis-py async connection pool:
import redis.asyncio as aioredis

pool = aioredis.ConnectionPool.from_url(
    "redis://localhost:6379/0",
    max_connections=50,
    decode_responses=True,
)
redis_client = aioredis.Redis(connection_pool=pool)

# ioredis (Node.js):
# const redis = new Redis({
#   host: "localhost", port: 6379,
#   retryStrategy: (times) => Math.min(times * 50, 2000),
# });

# redis.conf persistence:
# RDB snapshots (some data loss, fast restart):
#   save 900 1
#   save 300 10
# AOF (minimal data loss):
#   appendonly yes
#   appendfsync everysec

# Memory eviction when maxmemory is hit:
# maxmemory 2gb
# maxmemory-policy allkeys-lru     # pure cache: evict any LRU key
# maxmemory-policy volatile-lru    # mixed: evict TTL-keyed LRU only
# maxmemory-policy noeviction      # durable: return error when full

# Slow command logging:
# slowlog-log-slower-than 10000    # in microseconds (10ms)
# redis-cli SLOWLOG GET 10
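Alongside SLOWLOG, a quick health signal is the cache hit rate derived from the `keyspace_hits` / `keyspace_misses` counters in `INFO stats` (real INFO fields; the helper itself is a hypothetical sketch):

```python
def cache_hit_rate(info: dict) -> float:
    """Hit rate from the keyspace_hits / keyspace_misses counters in INFO stats."""
    hits = int(info.get("keyspace_hits", 0))
    misses = int(info.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0

# e.g. rate = cache_hit_rate(await redis.info("stats"))
```

A sustained drop in hit rate often precedes a database load spike, so it is worth alerting on.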
