
Linkerd Reference: Service Mesh, mTLS, Golden Metrics, ServiceProfile & Traffic Split

Linkerd is a CNCF-graduated service mesh focused on simplicity and low overhead. It uses lightweight micro-proxies (Rust-based, not Envoy) for mTLS and observability — often faster to install and operate than Istio.

1. Linkerd vs Istio

Choosing between Linkerd and Istio
| Feature            | Linkerd                                        | Istio                                              |
|--------------------|------------------------------------------------|----------------------------------------------------|
| Proxy              | Linkerd2-proxy (Rust, ~10MB)                   | Envoy (C++, ~50MB+)                                |
| Resource overhead  | Very low (~10MB RAM per proxy)                 | Higher (~50-150MB per proxy)                       |
| Installation       | ~5 min, one CLI command                        | More config options, longer setup                  |
| Traffic management | Basic (retries, timeouts, circuit breaking)    | Advanced (canary, header routing, fault injection) |
| Protocols          | HTTP/1.1, HTTP/2, gRPC (auto-detected)         | HTTP, gRPC, TCP, custom protocols                  |
| Observability      | Built-in golden metrics (P50/P95/P99) per route| Needs Prometheus/Grafana setup                     |
| Service profiles   | Yes (per-route retries, timeouts, traffic split)| VirtualService + DestinationRule                  |
| Best for           | Simplicity, low overhead, golden metrics       | Complex traffic management, multi-protocol         |
# Rule of thumb:
# Linkerd: you want mTLS + observability with minimal complexity
# Istio: you need canary releases, header-based routing, or multi-protocol meshes

2. Installation

Install Linkerd in 5 minutes
# Install Linkerd CLI:
brew install linkerd   # macOS
# Or:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$HOME/.linkerd2/bin:$PATH

# Pre-flight check:
linkerd check --pre               # verifies K8s cluster is ready for Linkerd

# Install Linkerd CRDs:
linkerd install --crds | kubectl apply -f -

# Install Linkerd control plane:
linkerd install | kubectl apply -f -

# Verify installation:
linkerd check                     # all checks should show ✓

# Install Linkerd Viz (metrics + dashboard):
linkerd viz install | kubectl apply -f -
linkerd viz check
linkerd viz dashboard &           # opens browser dashboard

# Inject sidecar into a namespace:
kubectl get namespace production -o yaml | linkerd inject - | kubectl apply -f -
# All existing pods need a rolling restart to get the proxy:
kubectl rollout restart deployment -n production

3. Injecting the Proxy

Enable Linkerd for namespaces, deployments, and workloads
# Annotate namespace (new pods auto-injected):
kubectl annotate namespace production linkerd.io/inject=enabled
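
Equivalently, the annotation can live in the Namespace manifest itself, so injection is declared alongside the rest of your config (a sketch; the annotation key is Linkerd's standard `linkerd.io/inject`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  annotations:
    linkerd.io/inject: enabled   # new pods in this namespace get the proxy sidecar
```

Existing pods still need a rolling restart to pick up the proxy.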

# Inject existing Deployment without reapplying:
kubectl get deploy my-app -n production -o yaml | linkerd inject - | kubectl apply -f -

# Inject all Deployments in a namespace:
kubectl get deployments -n production -o yaml | linkerd inject - | kubectl apply -f -

# Verify injection:
linkerd check --proxy -n production      # confirms proxies are healthy
kubectl describe pod my-pod -n production | grep linkerd-proxy

# Get real-time metrics for a deployment:
linkerd viz stat deployment -n production
# Shows: RPS, success rate, P50/P95/P99 latency — per deployment

# Watch live traffic:
linkerd viz tap deployment/my-app -n production
# Shows every request in real time with response code + latency

# Edge stats (per connection source-destination pair):
linkerd viz edges deployment -n production

4. ServiceProfile — Retries, Timeouts & Traffic Splits

Per-route configuration: retry on 5xx, timeouts, canary
# Generate a ServiceProfile from a running service:
linkerd viz profile -n production --tap deploy/my-app --tap-duration 10s my-service > sp.yaml
# Captures real traffic routes for 10s and generates the profile

# ServiceProfile example (per-route config):
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: my-service.production.svc.cluster.local   # must match FQDN
  namespace: production
spec:
  routes:
    - name: GET /api/users
      condition:
        method: GET
        pathRegex: /api/users(/.*)?
      responseClasses:
        - condition: {status: {min: 500, max: 599}}
          isFailure: true        # count 5xx as failure (for success rate metric)
      timeout: 5s               # per-route timeout
      isRetryable: true         # retry on failure
    - name: POST /api/users
      condition:
        method: POST
        pathRegex: /api/users
      isRetryable: false        # don't retry POST (not idempotent)
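
Unbounded retries can amplify an outage, so ServiceProfiles also support a mesh-enforced retry budget that caps how much extra load retries may add. A hedged sketch (fields per the ServiceProfile spec; the values here are illustrative, not recommendations):

```yaml
# Goes under the same ServiceProfile's spec, alongside routes:
spec:
  retryBudget:
    retryRatio: 0.2            # retries may add at most 20% extra requests
    minRetriesPerSecond: 10    # floor so low-traffic services can still retry
    ttl: 10s                   # window over which the ratio is measured
```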

# Traffic split (canary release via SMI TrafficSplit):
# Install SMI extension: linkerd smi install | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-service-split
  namespace: production
spec:
  service: my-service
  backends:
    - service: my-service-v1
      weight: 90                # 90% (v1alpha2 uses integer weights, not "900m")
    - service: my-service-v2
      weight: 10                # 10%
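
Promoting the canary is just re-applying the TrafficSplit with shifted weights, checking `linkerd viz stat` for the success rate between each step. A sketch of an intermediate step (integer weights per the SMI v1alpha2 spec; the step sizes are illustrative):

```yaml
# Step 2 of a gradual rollout: apply, observe success rate, then continue toward 0/100.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-service-split
  namespace: production
spec:
  service: my-service
  backends:
    - service: my-service-v1
      weight: 50
    - service: my-service-v2
      weight: 50
```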

5. mTLS & Security

Verify and configure zero-trust networking
# Linkerd enables mTLS by default for all meshed traffic
# No configuration needed — it just works

# Verify mTLS is active:
linkerd viz edges deployment -n production
# "SECURED" = mTLS active between those pods
linkerd viz tap deployment/my-app -n production
# tls=true in the output confirms mTLS for each request

# Check certificates:
linkerd identity -n production my-pod-xxx
# Shows the SPIFFE identity and certificate for the proxy

# Authorization Policy (Linkerd 2.10+ — control which services can communicate):
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: my-api-server
  namespace: production
spec:
  podSelector:
    matchLabels: {app: my-api}
  port: 8080
  proxyProtocol: HTTP/2

---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: allow-frontend
  namespace: production
spec:
  server:
    name: my-api-server
  client:
    meshTLS:
      serviceAccounts:
        - name: my-frontend
          namespace: production
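
Server/ServerAuthorization only take effect against a default policy that requires authorization. Per Linkerd's policy docs, the proxy's default inbound policy can be set via annotation, e.g. per namespace. A sketch (assumes the `deny` value; Linkerd also supports values like `all-authenticated` and `cluster-authenticated`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/default-inbound-policy: deny   # reject anything not explicitly authorized
```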

6. Debugging & Operations

Diagnose failures, upgrade Linkerd, and multicluster
# Get a quick health overview:
linkerd check                             # control plane health
linkerd check --proxy -n production       # proxy health per namespace
linkerd viz dashboard                     # visual topology + metrics

# Debug a specific request failure:
linkerd viz tap deploy/my-app -n production --to svc/my-service
# Shows each req/resp with timing — use to find which route is failing

# Top commands (live traffic sorted by metric):
linkerd viz top deploy/my-app -n production  # sorted by RPS
linkerd viz top svc/my-service -n production --to-namespace production

# Upgrade Linkerd:
linkerd upgrade --crds | kubectl apply -f -   # upgrade CRDs first
linkerd upgrade | kubectl apply -f -          # then the control plane
# Proxies update on pod restart — trigger rolling restart per namespace:
kubectl rollout restart deployment -n production

# Uninstall:
linkerd viz uninstall | kubectl delete -f -
linkerd uninstall | kubectl delete -f -  # removes control plane + CRDs; proxies disappear on next pod restart

# Multicluster (link two clusters — traffic can cross cluster boundaries):
linkerd multicluster install | kubectl apply -f -
linkerd multicluster link --context=cluster-2 | kubectl apply -f -
linkerd multicluster gateways                # verify gateway status
# After linking: label services in cluster-2 with mirror.linkerd.io/exported=true
# and the service-mirror controller copies them into cluster-1
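
Mirroring is driven by a label on the source service: the service-mirror controller in the linked cluster watches for it and creates a mirrored Service (named with the link's suffix). A sketch of exporting a service from cluster-2 (label key per Linkerd's multicluster docs; the service itself is illustrative):

```yaml
# Applied in cluster-2; cluster-1's service-mirror controller then creates the mirror.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: production
  labels:
    mirror.linkerd.io/exported: "true"
spec:
  selector:
    app: my-service
  ports:
    - port: 80
```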


Related: Istio Reference | Kubernetes Networking Reference | Prometheus Reference

