Cilium Reference: eBPF CNI, NetworkPolicy, L7 HTTP & DNS Policy, Hubble Observability
Cilium is a CNCF-graduated CNI plugin that uses eBPF for Kubernetes networking, security, and observability. It replaces iptables with in-kernel packet processing, which scales better and is more observable than the iptables-based dataplanes of Flannel and Calico's default mode, and it ships with Hubble for built-in network visibility.
1. Cilium vs Calico vs Flannel
When to choose Cilium
| Feature | Cilium (eBPF) | Calico | Flannel |
|---|---|---|---|
| Dataplane | eBPF (in-kernel, no iptables) | iptables or eBPF (limited) | VXLAN/host-gw (iptables) |
| Performance | Highest: eBPF avoids netfilter conntrack overhead | Good (iptables bottleneck at scale) | Lowest: pure overlay |
| Network policy | K8s NetworkPolicy + CiliumNetworkPolicy (L7: HTTP, DNS, Kafka) | GlobalNetworkPolicy (strong L3/L4) | No policy support |
| Observability | Hubble (per-flow metrics, UI, PromQL) | Limited | None |
| Service mesh | Optional: Cilium Service Mesh (sidecar-less) | No | No |
| Complexity | Medium (requires kernel 4.9+, ideally 5.10+) | Low-Medium | Very low |
# Install Cilium with Helm.
# kubeProxyReplacement=true replaces kube-proxy entirely with eBPF;
# hubble.relay.enabled adds the Hubble relay (cross-node flow data), hubble.ui.enabled the web UI.
# (Comments cannot follow a trailing backslash, so they are kept above the command.)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
# Install Cilium CLI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz"
sudo tar xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin
cilium status # health check
cilium connectivity test # runs connectivity probes (may take ~5 min)
2. Kubernetes NetworkPolicy with Cilium
Standard L3/L4 NetworkPolicy enforced by eBPF
# Deny all ingress to a namespace (default deny):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}          # matches all pods in the namespace
  policyTypes: [Ingress]
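The usual companion is a default deny for egress. Note that once this is in place, pods also lose DNS resolution until an explicit allow rule for DNS is added:

```yaml
# Deny all egress from a namespace (default deny):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}          # matches all pods in the namespace
  policyTypes: [Egress]
```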
# Allow only frontend → backend on port 8080:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels: {role: backend}
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels: {role: frontend}
    ports:
    - protocol: TCP
      port: 8080
# Allow a specific namespace to access a service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-namespace
  namespace: production
spec:
  podSelector:
    matchLabels: {app: my-service}
  policyTypes: [Ingress]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels: {kubernetes.io/metadata.name: monitoring}
    ports:
    - port: 8080
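Egress can also be scoped to external IP ranges with ipBlock. The CIDRs below are placeholders:

```yaml
# Allow egress only to one external CIDR, excluding a subnet (addresses are placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cidr
  namespace: production
spec:
  podSelector:
    matchLabels: {app: my-service}
  policyTypes: [Egress]
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16
        except: [10.0.5.0/24]
    ports:
    - protocol: TCP
      port: 443
```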
3. CiliumNetworkPolicy — L7 HTTP, DNS & Kafka
Application-layer policies, enforced by eBPF plus Cilium's embedded proxy
# L7 HTTP policy: allow only GET /api/* and POST /api/v1/users; all other paths/methods are denied:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-api-only
  namespace: production
spec:
  endpointSelector:
    matchLabels: {app: my-api}
  ingress:
  - fromEndpoints:
    - matchLabels: {role: frontend}
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /api/.*        # regex match
        - method: POST
          path: /api/v1/users
# DNS egress policy: allow only specific external domains:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-dns
  namespace: production
spec:
  endpointSelector:
    matchLabels: {app: my-service}
  egress:
  - toFQDNs:
    - matchName: api.example.com
    - matchPattern: "*.internal.example.com"
    toPorts:
    - ports:
      - port: "443"
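toFQDNs rules only work when Cilium's DNS proxy sees the pod's lookups, so they are normally paired with an egress DNS rule like the fragment below (appended to the same egress list; the labels match a standard kube-dns/CoreDNS deployment):

```yaml
  # Companion rule: route pod DNS through Cilium's DNS proxy so toFQDNs can map names to IPs
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
```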
# Kafka topic-level policy (egress fragment; nests under spec: of a CiliumNetworkPolicy):
  egress:
  - toPorts:
    - ports: [{port: "9092"}]
      rules:
        kafka:
        - role: produce
          topic: orders
        - role: consume
          topic: orders
4. Hubble — Network Observability
Per-flow metrics, packet drops, and DNS visibility
# Install Hubble CLI:
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
sudo tar xzvf hubble-linux-amd64.tar.gz -C /usr/local/bin
# Port-forward Hubble relay:
cilium hubble port-forward &
hubble status # verify relay is reachable
# Observe live traffic:
hubble observe # all flows in real time
hubble observe --namespace production # filter by namespace
hubble observe --from-pod production/frontend --to-pod production/backend
hubble observe --verdict DROPPED # show dropped packets only (policy violations!)
hubble observe --protocol DNS # DNS traffic only
# Aggregate metrics per L4/L7 flow:
hubble observe --output json | jq '{src: .source.namespace, dst: .destination.namespace, verdict: .verdict}'
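The same jq filter can be dry-run offline against a captured flow record (requires jq; the sample JSON below only mirrors the relevant fields of Hubble's flow output and is illustrative):

```shell
# Sample flow record shaped like one line of `hubble observe --output json` (values illustrative)
cat > /tmp/flow.json <<'EOF'
{"source":{"namespace":"production","pod_name":"frontend-6d8f"},"destination":{"namespace":"production","pod_name":"backend-7c9b"},"verdict":"FORWARDED"}
EOF
# Extract only source/destination namespace and the verdict
jq '{src: .source.namespace, dst: .destination.namespace, verdict: .verdict}' /tmp/flow.json
```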
# Hubble UI (requires hubble.ui.enabled=true in Helm):
cilium hubble ui # opens browser at localhost:12000
# Shows: service topology graph, drop reasons, HTTP endpoints, DNS resolutions
# Hubble Prometheus metrics (if Prometheus is installed):
# cilium-agent exposes: hubble_flows_processed_total, hubble_drop_total, etc.
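These metrics are opt-in in the Helm chart; a values fragment along these lines enables them (the metric set shown is a reasonable starting point, not exhaustive):

```yaml
# values.yaml fragment: enable Hubble metrics for Prometheus to scrape
hubble:
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - http
```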
5. Debugging & Operations
Cilium CLI health, policy inspection, and eBPF troubleshooting
# Overall health:
cilium status --verbose                  # all components, BPF maps, Hubble
cilium connectivity test                 # end-to-end pod connectivity checks (takes a few minutes)

# Policy inspection:
kubectl exec -n kube-system ds/cilium -- cilium policy get     # effective policies
kubectl exec -n kube-system ds/cilium -- cilium endpoint list  # all managed endpoints + policy revision

# Identify dropped traffic:
hubble observe --verdict DROPPED --last 100
# Common drop reasons: POLICY_DENIED, CT_NEW, AUTH_REQUIRED

# Check eBPF maps:
kubectl exec -n kube-system ds/cilium -- cilium bpf nat list        # NAT table
kubectl exec -n kube-system ds/cilium -- cilium bpf ct list global  # connection tracking

# Node CIDR and routing:
kubectl exec -n kube-system ds/cilium -- cilium node list

# Upgrade Cilium (rolling update: agent pods restart one at a time; verify with cilium status after):
helm upgrade cilium cilium/cilium -n kube-system --reuse-values --version 1.15.0

# Restart Cilium on a specific node:
kubectl delete pod -n kube-system -l k8s-app=cilium --field-selector spec.nodeName=node1