
Istio Reference: VirtualService, DestinationRule, mTLS, Gateway & Traffic Management

Istio is one of the most widely deployed service meshes — it adds mTLS, traffic management, observability, and policy enforcement to Kubernetes without changing application code. The sidecar proxy (Envoy) is injected automatically into each application pod.

1. Installation & Core Concepts

istioctl install, namespaces, and the data/control plane
# Install Istio with istioctl:
curl -L https://istio.io/downloadIstio | sh -
export PATH=$PWD/istio-x.x.x/bin:$PATH

# Install (default profile — the recommended starting point for production):
istioctl install --set profile=default -y
# Profiles: minimal (istiod only, no gateways), default, demo (all features, not for prod), empty

# Enable sidecar injection for a namespace:
kubectl label namespace production istio-injection=enabled
# All pods deployed in this namespace get an Envoy sidecar automatically

# Verify installation:
istioctl verify-install
kubectl get pods -n istio-system      # istiod + ingress gateway

# Sidecar injection status:
kubectl get namespace -L istio-injection
istioctl analyze -n production         # check for configuration issues
kubectl describe pod my-pod -n production   # look for "istio-proxy" container

# Control plane (istiod): manages certificates, pushes config to Envoy sidecars
# Data plane (Envoy): intercepts all pod network traffic, enforces policies

# Inspect Envoy's view of the mesh:
istioctl proxy-config cluster my-pod-xxx -n production   # upstream clusters Envoy knows
istioctl proxy-config listeners my-pod-xxx -n production # listeners (how traffic enters)
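Injection can also be toggled per workload, overriding the namespace label, via the sidecar.istio.io/inject annotation on the pod template — a minimal sketch (legacy-app and its image are placeholder names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                       # hypothetical workload
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels: {app: legacy-app}
  template:
    metadata:
      labels: {app: legacy-app}
      annotations:
        sidecar.istio.io/inject: "false"   # opt this one workload out of injection
    spec:
      containers:
        - name: app
          image: legacy-app:1.0            # hypothetical image
```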

2. Traffic Management

VirtualService, DestinationRule — canary releases and retries
# VirtualService: defines routing rules for traffic going to a service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
  namespace: production
spec:
  hosts:
    - my-service                    # matches K8s Service name (within cluster)
  http:
    # Note: rules are matched in order, and the examples below are independent
    # snippets — in one real spec, only the first catch-all route would match.
    # Canary release: send 10% to v2
    - route:
        - destination:
            host: my-service
            subset: v1             # defined in DestinationRule below
          weight: 90
        - destination:
            host: my-service
            subset: v2
          weight: 10

    # Retry on upstream errors:
    - route:
        - destination: {host: my-service}
      retries:
        attempts: 3
        perTryTimeout: 5s
        retryOn: "gateway-error,connect-failure,retriable-4xx"

    # Timeout:
    - route:
        - destination: {host: my-service}
      timeout: 10s

    # Circuit breaker (in DestinationRule, not VirtualService):
    # → see DestinationRule section below

    # Header-based routing (A/B testing by user group):
    - match:
        - headers:
            x-user-group:
              exact: beta
      route:
        - destination: {host: my-service, subset: v2}
    - route:
        - destination: {host: my-service, subset: v1}  # default

# DestinationRule: defines subsets (versions) and traffic policies
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  trafficPolicy:
    connectionPool:
      tcp: {maxConnections: 100}
      http: {http1MaxPendingRequests: 100, http2MaxRequests: 1000}
    outlierDetection:               # circuit breaker: eject unhealthy hosts
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
  subsets:
    - name: v1
      labels: {version: v1}        # matches pod labels
    - name: v2
      labels: {version: v2}
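Weighted routing, retries, and a timeout can also live on a single route rule; a consolidated sketch of the canary above (perTryTimeout is kept below the overall timeout so all attempts can complete):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
  namespace: production
spec:
  hosts: [my-service]
  http:
    - timeout: 10s                    # overall deadline for the request
      retries:
        attempts: 3
        perTryTimeout: 3s             # 3 attempts x 3s fits inside 10s
        retryOn: "gateway-error,connect-failure"
      route:
        - destination: {host: my-service, subset: v1}
          weight: 90
        - destination: {host: my-service, subset: v2}
          weight: 10
```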

3. Gateway & Ingress

Expose services to external traffic via Istio Gateway
# Istio Gateway (replaces nginx Ingress for Istio environments):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: production
spec:
  selector:
    istio: ingressgateway           # the deployed Istio gateway pod
  servers:
    - port: {number: 443, name: https, protocol: HTTPS}
      tls:
        mode: SIMPLE
        credentialName: my-tls-cert # K8s Secret with tls.crt + tls.key
      hosts: ["my-app.example.com"]
    - port: {number: 80, name: http, protocol: HTTP}
      hosts: ["my-app.example.com"]
      tls:
        httpsRedirect: true         # force HTTPS
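The credentialName above must refer to a TLS Secret in the same namespace as the gateway workload (istio-system for the default ingress gateway), not the application namespace. A sketch of that Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-cert
  namespace: istio-system     # must match the ingress gateway's namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```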

# VirtualService to route gateway traffic to service:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-gateway-route
spec:
  hosts: ["my-app.example.com"]
  gateways: ["production/my-gateway"]  # bind to gateway
  http:
    - route:
        - destination:
            host: my-service
            port: {number: 8080}

# Get gateway external IP:
kubectl get svc istio-ingressgateway -n istio-system

4. mTLS & Security Policies

Zero-trust networking — encrypt all service-to-service traffic
# Enable STRICT mTLS (all traffic must be encrypted — recommended for prod):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production             # applies to all pods in namespace
spec:
  mtls:
    mode: STRICT                    # reject non-mTLS traffic
    # PERMISSIVE = accept both plain + mTLS (migration mode)
    # STRICT = only mTLS (fully zero-trust)
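Namespace-wide STRICT can be combined with a per-workload exception — a sketch for a hypothetical legacy-app that is still being migrated into the mesh:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: legacy-exception
  namespace: production
spec:
  selector:
    matchLabels: {app: legacy-app}   # hypothetical workload
  mtls:
    mode: PERMISSIVE                 # workload-level policy overrides the namespace default
```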

# AuthorizationPolicy — who can talk to whom:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-api                   # applies to my-api pods
  action: ALLOW
  rules:
    - from:
        - source:
            principals:             # based on SPIFFE identity
              - "cluster.local/ns/production/sa/my-frontend"
      to:
        - operation:
            methods: ["GET", "POST"]
            paths: ["/api/*"]
# Note: once an ALLOW policy selects a workload, any request not matched by
# some ALLOW rule is rejected — there is an implicit deny for selected workloads.

# Check mTLS status:
istioctl x authz check my-pod -n production
kubectl get peerauthentication -A
Start with PERMISSIVE, then switch to STRICT once you’ve confirmed all service-to-service traffic is going through the mesh. Switching directly to STRICT on an existing cluster can break services that aren’t yet injected.

5. Observability

istioctl, Kiali, metrics, and distributed tracing
# Built-in observability addons (demo-grade manifests — use managed equivalents in production):
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/prometheus.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/grafana.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/kiali.yaml    # service graph UI
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/jaeger.yaml   # distributed tracing

# Access dashboards:
istioctl dashboard kiali            # service mesh topology + traffic
istioctl dashboard grafana          # metrics dashboards
istioctl dashboard jaeger           # distributed traces

# Key Istio metrics (auto-generated by Envoy — no instrumentation needed):
istio_requests_total                # request count by source, dest, method, status
istio_request_duration_milliseconds # latency histogram
istio_tcp_connections_opened_total  # TCP connection tracking
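These metrics can be queried directly in Prometheus — for example, a 5xx error-rate query for my-service (label names follow Istio's standard Prometheus metrics):

```promql
  sum(rate(istio_requests_total{destination_service_name="my-service",
                                response_code=~"5.."}[5m]))
/ sum(rate(istio_requests_total{destination_service_name="my-service"}[5m]))
```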

# Check proxy status (is Envoy in sync with istiod?):
istioctl proxy-status               # SYNCED / STALE / NOT SENT
istioctl proxy-status my-pod -n production   # specific pod

# Dump Envoy config (debug traffic routing issues):
istioctl proxy-config all my-pod -n production    # full config dump
istioctl proxy-config route my-pod -n production  # routing table
istioctl proxy-config endpoint my-pod -n production --cluster my-service.production.svc.cluster.local

6. Troubleshooting Common Issues

503 errors, mTLS failures, and sidecar injection problems
# 503 / connection errors right after sidecar injection:
# Likely cause: the app container starts before Envoy is ready, so its
#   startup network calls (DB connections, config fetches) fail
# Fix: set holdApplicationUntilProxyStarts: true in the IstioOperator meshConfig
#   OR add an initContainer delay
# Also check: istio-init took too long (DNS issues, resource starvation)
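The holdApplicationUntilProxyStarts fix mentioned above, as a minimal IstioOperator overlay:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true   # delay app start until Envoy is ready
```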

# Connection refused between services:
kubectl exec -it my-pod -n production -- curl http://my-service:8080/health
# If this fails with 503: VirtualService/DestinationRule misconfiguration
istioctl analyze -n production        # often catches the issue immediately

# mTLS authentication failure (503 with "upstream connect error"):
# Check if the destination service has STRICT mTLS but the source isn't in the mesh
kubectl get peerauthentication -A     # find STRICT policies
# Fix: inject sidecar in source namespace OR use PERMISSIVE mode

# EnvoyFilter not applying:
kubectl describe envoyfilter -n production
istioctl proxy-config listeners my-pod -n production  # check if filter shows up

# Sidecar not injecting:
kubectl get ns production -L istio-injection            # check label
kubectl describe pod my-pod | grep istio-proxy          # should show container
# Fix: add label OR add annotation per-pod:
#   sidecar.istio.io/inject: "true"
