RabbitMQ Reference: Exchanges, Dead-Letter Queues, Consumer Patterns & Kafka Comparison
RabbitMQ is the most widely deployed open-source message broker. Unlike Kafka (an append-only log), RabbitMQ is a traditional message queue with flexible routing, acknowledgements, dead-letter queues, and per-message TTL. This reference covers the patterns that matter in production.
1. Core Concepts
Exchanges, queues, bindings, and routing patterns
| Exchange Type | Routing | Use for |
|---|---|---|
| direct | Exact routing key match | Task queues, worker pools |
| topic | Pattern match (* and # wildcards, e.g. *.warn.#) | Log routing, event filtering |
| fanout | Broadcast to all bound queues | Notifications, cache invalidation |
| headers | Match on message header attributes | Complex routing without routing keys |
# Message flow:
# Producer → Exchange → Binding (routing key) → Queue → Consumer

# Key concepts:
# - Exchange: receives messages from producers, routes to queues
# - Queue: buffers messages until consumed
# - Binding: rule connecting exchange to queue (with routing key)
# - Routing key: string attached to message (direct/topic matching)
# - Ack: consumer confirms message processed → RabbitMQ removes it
# - Nack: consumer rejects → requeue or dead-letter

# Default exchange:
# Every queue has an implicit binding to the default exchange ("")
# Routing key = queue name → publish directly to a queue by name
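The topic wildcards in the table above follow two rules: * matches exactly one dot-separated word, and # matches zero or more words. A broker-free sketch of that matching logic (the topic_matches helper is hypothetical, for illustration only; it is not part of pika or RabbitMQ):

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Check a routing key against a topic-exchange binding pattern.
    '*' matches exactly one word; '#' matches zero or more words."""
    def match(pw, kw):
        if not pw:
            return not kw                 # pattern exhausted: key must be too
        if pw[0] == '#':                  # '#' absorbs zero or more words
            return any(match(pw[1:], kw[i:]) for i in range(len(kw) + 1))
        if not kw:
            return False                  # pattern has words left, key does not
        if pw[0] == '*' or pw[0] == kw[0]:
            return match(pw[1:], kw[1:])
        return False
    return match(pattern.split('.'), routing_key.split('.'))

print(topic_matches('*.warn.#', 'db.warn.disk.full'))  # True
print(topic_matches('*.warn.#', 'warn'))               # False: '*' needs one word
```

A binding of *.warn.# would therefore receive db.warn and db.warn.disk.full but not warn alone, which is why topic exchanges suit severity-based log routing.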
2. rabbitmqctl & Management CLI
Essential CLI commands for operations and debugging
# Status and health:
rabbitmqctl status                    # node status (version, memory, file descriptors)
rabbitmqctl node_health_check         # basic health check (deprecated; prefer rabbitmq-diagnostics check_running)
rabbitmqctl cluster_status            # cluster members + disk/RAM nodes

# Queues:
rabbitmqctl list_queues name messages consumers memory   # queue stats
rabbitmqctl list_queues name messages_ready messages_unacknowledged
rabbitmqctl purge_queue my-queue      # delete all messages (DANGEROUS)
rabbitmqctl delete_queue my-queue     # delete queue entirely

# Consumers and connections:
rabbitmqctl list_consumers            # queue → consumer_tag → channel
rabbitmqctl list_connections user peer_host state
rabbitmqctl list_channels consumer_count messages_unacknowledged

# Exchanges and bindings:
rabbitmqctl list_exchanges name type durable
rabbitmqctl list_bindings source_name source_kind destination_name routing_key

# User and vhost management:
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_user_tags myuser administrator
rabbitmqctl add_vhost /production
rabbitmqctl set_permissions -p /production myuser ".*" ".*" ".*"   # configure/write/read
rabbitmqctl list_users
rabbitmqctl list_vhosts

# Message tracing (debugging):
rabbitmq-plugins enable rabbitmq_tracing   # UI-based tracing via the management plugin
rabbitmqctl trace_on                       # firehose: copy all messages to amq.rabbitmq.trace
3. Queue and Message Patterns
Durable queues, TTL, dead-letter exchanges, priority queues
# Durable queue + persistent messages (survive broker restart):
# When declaring queue — durable:true means queue survives restart
# delivery_mode:2 (persistent) means message survives restart
# BOTH must be set — durable queue + persistent message
# Queue with dead-letter exchange (DLX):
# When messages are rejected (nack with requeue=false) or expire,
# they're routed to the DLX instead of being dropped
# First declare the DLX itself and a queue to collect dead-lettered messages
# (the names 'dlx' and 'failed-messages' are examples):
channel.exchange_declare(exchange='dlx', exchange_type='direct', durable=True)
channel.queue_declare(queue='failed-messages', durable=True)
channel.queue_bind(queue='failed-messages', exchange='dlx', routing_key='failed')

channel.queue_declare(
    queue='my-queue',
    durable=True,
    arguments={
        'x-dead-letter-exchange': 'dlx',        # route rejected/expired messages here
        'x-dead-letter-routing-key': 'failed',  # with this routing key
        'x-message-ttl': 30000,                 # messages expire after 30s → DLX
        'x-max-length': 10000,                  # queue length cap
        'x-overflow': 'reject-publish',         # reject new publishes when full
                                                # (default 'drop-head' dead-letters the oldest)
    }
)
# Priority queue (0-255, higher = higher priority):
channel.queue_declare(
queue='priority-queue',
arguments={'x-max-priority': 10} # max priority level
)
# Publish with priority:
channel.basic_publish(
exchange='',
routing_key='priority-queue',
body='important message',
properties=pika.BasicProperties(priority=8) # 8 out of 10
)
# Quorum queues (Raft-based, replicated — recommended for production):
channel.queue_declare(
queue='my-queue',
durable=True,
arguments={'x-queue-type': 'quorum'} # replicated across cluster nodes; a majority must confirm
)
# Quorum queues replace classic mirrored queues (deprecated since 3.9, removed in 4.0)
4. Consumer Patterns
Acknowledgements, prefetch, competing consumers
# Prefetch count — critical for fair dispatch:
# Without prefetch: RabbitMQ dispatches all messages to first available consumer
# With prefetch: each consumer has at most N unacknowledged messages at once
channel.basic_qos(prefetch_count=1) # fair dispatch — no consumer overwhelmed
# prefetch_count=10 → better throughput, slight unfairness
# prefetch_count=1 → perfectly fair, lower throughput (wait for each ack)
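To build intuition for that tradeoff, here is a toy, broker-free simulation of dispatch under a prefetch limit (everything here, including simulate_dispatch, is made up for illustration; it is not a pika API):

```python
def simulate_dispatch(num_messages, speeds, prefetch):
    """Toy time-stepped model of broker dispatch.
    speeds[i] = ticks consumer i needs per message; the broker only
    delivers to consumers holding fewer than `prefetch` unacked messages.
    Returns how many messages each consumer processed."""
    pending = num_messages
    n = len(speeds)
    unacked = [0] * n        # in-flight (delivered, not yet acked)
    busy_until = [0] * n     # tick at which the current message gets acked
    done = [0] * n
    tick = 0
    while sum(done) < num_messages:
        for i in range(n):
            # ack the message whose processing time has elapsed
            if unacked[i] and tick >= busy_until[i]:
                unacked[i] -= 1
                done[i] += 1
                busy_until[i] = tick + speeds[i]   # start next buffered message
            # deliver only while the consumer is under its prefetch limit
            if pending and unacked[i] < prefetch:
                unacked[i] += 1
                pending -= 1
                if unacked[i] == 1:                # consumer was idle: start now
                    busy_until[i] = tick + speeds[i]
        tick += 1
    return done

print(simulate_dispatch(20, [1, 3], 1))     # fast consumer does most of the work
print(simulate_dispatch(20, [1, 3], 1000))  # pre-assigned evenly: [10, 10]
```

With prefetch=1 the fast consumer keeps pulling new work as soon as it acks; with an effectively unlimited prefetch the broker pre-assigns messages evenly and half of them sit waiting behind the slow consumer.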
# Manual acknowledgement (default mode for reliability):
def callback(ch, method, properties, body):
try:
process_message(body)
ch.basic_ack(delivery_tag=method.delivery_tag) # success → remove from queue
except Exception:
ch.basic_nack(
delivery_tag=method.delivery_tag,
requeue=False # False → route to DLX; True → requeue (can cause infinite loops!)
)
channel.basic_consume(queue='my-queue', on_message_callback=callback, auto_ack=False)
channel.start_consuming()  # blocks; dispatches deliveries to callback
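When a rejected message lands on the dead-letter queue, RabbitMQ attaches an x-death header recording which queue it came from, why, and how many times. A sketch of a retry-limit check a DLQ consumer might use (death_count and the commented helpers are hypothetical names; the x-death header and its count/reason/queue fields are real):

```python
def death_count(headers, queue):
    """Times this message was dead-lettered from `queue`, read from
    the x-death header entries RabbitMQ adds on dead-lettering."""
    for entry in (headers or {}).get('x-death', []):
        if entry.get('queue') == queue:
            return entry.get('count', 0)
    return 0

# In the DLQ consumer callback (properties is pika.BasicProperties):
#   if death_count(properties.headers, 'my-queue') >= 3:
#       park_message(body)        # give up: store for manual inspection
#   else:
#       republish_for_retry(body)

example = {'x-death': [{'count': 2, 'reason': 'rejected', 'queue': 'my-queue'}]}
print(death_count(example, 'my-queue'))  # 2
```

This is how you bound retries: without such a check, republishing from the DLQ back to the work queue can loop forever.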
# Competing consumers (worker pool):
# Spin up N consumers on the same queue
# RabbitMQ dispatches each message to exactly one consumer (round-robin with prefetch)
# Scale workers independently of producers
# Publisher confirms (ensure messages reach broker):
channel.confirm_delivery()                 # put the channel in confirm mode
try:
    channel.basic_publish(exchange='', routing_key='my-queue',
                          body='payload', mandatory=True)
except pika.exceptions.UnroutableError:
    pass  # mandatory=True and no queue matched: broker returned the message
# basic_publish also raises pika.exceptions.NackError if the broker nacks
Gotcha: auto_ack=True with slow processing. Messages are removed from the queue the moment they're delivered, so if the consumer crashes mid-processing, the message is lost permanently. Use manual ack with auto_ack=False.
5. RabbitMQ vs Kafka
When to use each — they solve different problems
| Dimension | RabbitMQ | Kafka |
|---|---|---|
| Model | Message queue — messages are consumed and removed | Append-only log — messages persist with replay |
| Retention | Until consumed (or TTL) | Configurable (days/weeks/forever) |
| Consumer groups | Each consumer gets each message once | Consumer group reads from partitions, replay possible |
| Routing | Exchanges + routing keys (complex routing) | Topic partitions only |
| Throughput | Lower (but sufficient for most use cases) | Very high (millions of messages/sec per cluster) |
| Ordering | Per-queue FIFO, with priority queues | Per-partition ordering |
| Best for | Task queues, RPC, complex routing, DLQ patterns | Event streams, audit logs, event sourcing, analytics |
# Rule of thumb:
# RabbitMQ: you need to DISPATCH work to workers and CONFIRM it's done
# Kafka: you need to LOG events and let multiple consumers READ the same stream
# Both: event-driven architectures where you don't want tight coupling
6. Docker + K8s Deployment
Docker Compose for dev, Helm for K8s
# Docker Compose (dev):
services:
  rabbitmq:
    image: rabbitmq:3.13-management-alpine  # -management includes the HTTP API + UI
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # Management UI (http://localhost:15672, credentials below)
    environment:
      RABBITMQ_DEFAULT_USER: myuser
      RABBITMQ_DEFAULT_PASS: mypassword
      RABBITMQ_DEFAULT_VHOST: /production
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  rabbitmq_data:  # declare the named volume referenced above
# Production K8s: use the Bitnami Helm chart (or the RabbitMQ Cluster Operator):
helm repo add bitnami https://charts.bitnami.com/bitnami
# 3-node cluster for quorum queues; the Erlang cookie must match on all nodes
helm install rabbitmq bitnami/rabbitmq \
  --set replicaCount=3 \
  --set auth.username=myuser --set auth.password=mypassword \
  --set auth.erlangCookie=secret \
  --set persistence.size=20Gi
# Connection string format:
# amqp://user:password@host:5672/vhost
# amqps://user:password@host:5671/vhost (TLS)
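A sketch of how these URLs decompose, using only the standard library (parse_amqp_url is a hypothetical helper; real clients such as pika's URLParameters handle more edge cases). Note that a / inside the vhost name must be percent-encoded as %2F:

```python
from urllib.parse import urlsplit, unquote

def parse_amqp_url(url: str) -> dict:
    """Break an amqp:// or amqps:// URL into connection parameters."""
    u = urlsplit(url)
    return {
        'tls': u.scheme == 'amqps',
        'user': unquote(u.username) if u.username else 'guest',
        'password': unquote(u.password) if u.password else 'guest',
        'host': u.hostname or 'localhost',
        'port': u.port or (5671 if u.scheme == 'amqps' else 5672),
        # the path is the vhost, percent-decoded; empty path = default vhost '/'
        'vhost': unquote(u.path[1:]) if len(u.path) > 1 else '/',
    }

print(parse_amqp_url('amqp://myuser:mypassword@rabbit/%2Fproduction'))
```

So amqp://myuser:mypassword@rabbit/%2Fproduction connects to vhost /production on the default AMQP port, while a bare amqp://host falls back to guest/guest on the default vhost.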