
HashiCorp Consul Reference: Service Discovery, KV Store, Connect Mesh & ACLs

HashiCorp Consul provides service discovery, health checking, a distributed KV store, and service mesh. This covers the CLI and integration patterns you need in production.

1. Core Concepts

Services, health checks, agents, and the catalog
Component                What it does
Agent (client mode)      Runs on every node, registers local services, forwards to servers
Agent (server mode)      Stores state, participates in Raft consensus (3 or 5 servers for HA)
Service catalog          Registry of all services with health status
Health check             HTTP/TCP/script/TTL checks; unhealthy services removed from DNS
KV store                 Distributed key-value store (not for secrets; use Vault for that)
Connect / service mesh   mTLS between services, intentions-based access control
# Key insight: Consul DNS removes unhealthy instances automatically
# App queries: my-service.service.consul → only healthy endpoints
# No load balancer update, no manual intervention needed

consul members              # list all agents
consul info                 # local agent info + stats
consul catalog services     # all registered services
consul catalog nodes        # all registered nodes
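The same catalog data is exposed over the HTTP API on port 8500. A minimal Python sketch (stdlib only, assuming a local agent at the default address) that lists registered service names:

```python
import json
import urllib.request

CONSUL_ADDR = "http://localhost:8500"  # default HTTP API address

def parse_services(body: bytes) -> list:
    """Extract service names from a /v1/catalog/services response.

    The endpoint returns a JSON object mapping service name -> list of tags.
    """
    return sorted(json.loads(body))

def list_services() -> list:
    with urllib.request.urlopen(f"{CONSUL_ADDR}/v1/catalog/services") as resp:
        return parse_services(resp.read())

# Example response shape from the catalog endpoint:
sample = b'{"consul": [], "my-api": ["v2", "production"]}'
print(parse_services(sample))  # ['consul', 'my-api']
# list_services()  # requires a running local agent
```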

2. Service Registration

Register services via config file or API
# /etc/consul.d/my-service.hcl (agent auto-loads from config dir)
service {
  name = "my-api"
  id   = "my-api-1"          # unique per instance (for multiple on same node)
  port = 8080
  tags = ["v2", "production"]
  meta = {
    version = "2.1.0"
    env     = "production"
  }

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
    timeout  = "2s"
    # Status: passing / warning / critical
    # critical = removed from healthy service list
  }
}

# Reload without restart:
consul reload               # picks up new/changed config files

# Register via API (for dynamic registration):
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -H 'Content-Type: application/json' \
  -d '{"Name":"my-api","Port":8080,"Check":{"HTTP":"http://localhost:8080/health","Interval":"10s"}}'

# Deregister:
curl -X PUT http://localhost:8500/v1/agent/service/deregister/my-api-1
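For dynamic registration from application code, the same PUT can be issued from Python. A sketch using only the standard library (service name, ID, and port are illustrative; the actual call needs a running local agent):

```python
import json
import urllib.request

def registration_payload(name, service_id, port, health_path="/health"):
    """Build the JSON body for PUT /v1/agent/service/register."""
    return {
        "Name": name,
        "ID": service_id,  # unique per instance
        "Port": port,
        "Check": {
            "HTTP": f"http://localhost:{port}{health_path}",
            "Interval": "10s",
            "Timeout": "2s",
        },
    }

def register(payload, consul_addr="http://localhost:8500"):
    req = urllib.request.Request(
        f"{consul_addr}/v1/agent/service/register",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # 200 with empty body on success

payload = registration_payload("my-api", "my-api-1", 8080)
# register(payload)  # requires a running local agent
```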

3. DNS Service Discovery

Look up services by DNS — the primary interface
# Consul DNS runs on port 8600 by default
# Service lookup: [<tag>.]<name>.service[.<datacenter>].consul

# Lookup all healthy instances of my-api:
dig @127.0.0.1 -p 8600 my-api.service.consul
# Returns: A records for all passing instances only

# SRV record (includes port):
dig @127.0.0.1 -p 8600 my-api.service.consul SRV
# Returns: hostname + port for each instance

# Filter by tag:
dig @127.0.0.1 -p 8600 v2.my-api.service.consul

# Direct node lookup:
dig @127.0.0.1 -p 8600 my-node.node.consul

# Make app use Consul DNS by default (two approaches):
# 1. Forward .consul queries to Consul DNS in systemd-resolved:
# /etc/systemd/resolved.conf.d/consul.conf:
# [Resolve]
# DNS=127.0.0.1:8600        (port syntax requires systemd >= 246)
# Domains=~consul

# 2. Bind Consul to port 53 directly (requires CAP_NET_BIND_SERVICE or root)

# Test service discovery:
consul catalog services                      # list all services
curl http://localhost:8500/v1/health/service/my-api             # health status per instance
curl 'http://localhost:8500/v1/health/service/my-api?passing'   # only healthy instances
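The naming scheme above is mechanical enough to generate in code. A small helper sketch (service, tag, and datacenter names are illustrative):

```python
def consul_dns_name(service, tag=None, datacenter=None, domain="consul"):
    """Build a Consul DNS service query name:

    [<tag>.]<service>.service[.<datacenter>].<domain>
    """
    parts = []
    if tag:
        parts.append(tag)
    parts += [service, "service"]
    if datacenter:
        parts.append(datacenter)
    parts.append(domain)
    return ".".join(parts)

print(consul_dns_name("my-api"))                    # my-api.service.consul
print(consul_dns_name("my-api", tag="v2"))          # v2.my-api.service.consul
print(consul_dns_name("my-api", datacenter="dc1"))  # my-api.service.dc1.consul
```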

4. KV Store

Distributed config and feature flags — not for secrets
# Basic CRUD:
consul kv put config/myapp/log_level "info"
consul kv get config/myapp/log_level          # returns value
consul kv get -recurse config/myapp/           # get all keys under prefix
consul kv delete config/myapp/log_level
consul kv delete -recurse config/myapp/        # delete all under prefix

# Export/import (backup pattern):
consul kv export config/ > config-backup.json
consul kv import @config-backup.json

# Watch for changes (block until value changes):
consul watch -type=key -key=config/myapp/log_level cat
# Long-polls and executes the command whenever value changes

# Flags (metadata attached to key):
consul kv put -flags=42 config/myapp/feature_x "enabled"
consul kv get -detailed config/myapp/feature_x  # shows flags, session, modify index

# CAS (Check-and-Set — optimistic locking):
# Get current modify index first:
consul kv get -detailed config/lock | grep ModifyIndex
# Then CAS update (fails if index changed since read):
consul kv put -cas -modify-index=42 config/lock "my-lock-value"
Consul KV is for configuration and coordination, not secrets. Don’t store passwords or API keys here — use Vault. KV data is not encrypted at rest by default and is visible to all agents.
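Over the HTTP API, a KV read returns the value base64-encoded along with its ModifyIndex, which is exactly what a CAS write needs. A sketch of the decode step (stdlib only; the sample response body is illustrative):

```python
import base64
import json

def decode_kv_entry(body: bytes):
    """Parse a GET /v1/kv/<key> response: returns (value, modify_index).

    The API returns a JSON array of entries; Value is base64-encoded.
    """
    entry = json.loads(body)[0]
    value = base64.b64decode(entry["Value"]).decode()
    return value, entry["ModifyIndex"]

# Example response shape (Value is base64 for "info"):
sample = b'[{"Key": "config/myapp/log_level", "Value": "aW5mbw==", "ModifyIndex": 42}]'
value, index = decode_kv_entry(sample)
print(value, index)  # info 42
# A CAS write would then PUT /v1/kv/<key>?cas=<index> and check the
# response body: "true" means the write won, "false" means the index moved.
```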

5. Consul Template

Render config files from Consul KV + service catalog
# consul-template watches Consul and re-renders files when data changes
# Then optionally runs a reload command

# config.hcl:
consul { address = "127.0.0.1:8500" }

template {
  source      = "/etc/nginx/nginx.conf.tmpl"
  destination = "/etc/nginx/nginx.conf"
  command     = "systemctl reload nginx"  # runs after render
}

# nginx.conf.tmpl:
upstream backend {
  {{range service "my-api"}}
  server {{.Address}}:{{.Port}};
  {{else}}
  # No healthy instances — fail safe:
  server 127.0.0.1:8081;  # fallback/maintenance page
  {{end}}
}

# Render from KV:
{{ key "config/myapp/log_level" }}
{{ keyOrDefault "config/myapp/feature_x" "disabled" }}  # default if key missing

# Run consul-template:
consul-template -config=config.hcl
# Every time a new my-api instance registers or deregisters:
# → nginx.conf re-rendered with new upstream list
# → nginx reloaded automatically

6. Service Mesh (Connect)

mTLS between services, intentions-based firewall
# Enable Connect in service registration:
service {
  name = "my-api"
  port = 8080
  connect { sidecar_service {} }   # register an Envoy sidecar proxy alongside
}

# Intentions: control which services can talk to which
# (default-deny is the secure stance)
consul intention create -allow my-frontend my-api   # allow frontend → api
consul intention create -deny  '*'          my-api   # deny everything else

# List intentions:
consul intention list

# Check if a connection is allowed:
consul intention check my-frontend my-api   # returns "Allowed" or "Denied"
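The same check is available over the HTTP API, which is handy for health dashboards or pre-deploy validation. A sketch building the request URL (stdlib only; addresses and service names are illustrative):

```python
import urllib.parse

def intention_check_url(source, destination, consul_addr="http://localhost:8500"):
    """Build the URL for the intentions check endpoint.

    A GET here returns {"Allowed": true|false} for the source -> destination pair.
    """
    query = urllib.parse.urlencode({"source": source, "destination": destination})
    return f"{consul_addr}/v1/connect/intentions/check?{query}"

print(intention_check_url("my-frontend", "my-api"))
# http://localhost:8500/v1/connect/intentions/check?source=my-frontend&destination=my-api
```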

# Kubernetes + Consul Connect (Consul on K8s):
# Annotate pods — Consul injects the Envoy sidecar automatically:
spec:
  template:
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service": "my-api"
        # Transparent proxy: all traffic routed through Envoy (no app changes)
        "consul.hashicorp.com/transparent-proxy": "true"

7. ACLs & Bootstrap

Access control for production Consul clusters
# Bootstrap ACL system (one-time, generates management token):
consul acl bootstrap
# Save the token immediately — it cannot be recovered

export CONSUL_HTTP_TOKEN="<bootstrap-management-token>"

# Create a policy:
consul acl policy create -name "my-service-policy" -rules - <<'EOF'
service "my-api" {
  policy = "write"
}
EOF

# Create a token bound to the policy:
consul acl token create -description "my-api token" -policy-name "my-service-policy"
