Kubernetes Networking Reference
Services, Ingress, NetworkPolicy, DNS, load balancing, and the debugging commands you need when traffic stops flowing.
Services — ClusterIP, NodePort, LoadBalancer, ExternalName
# ClusterIP — default, cluster-internal only
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: production
spec:
  type: ClusterIP        # default
  selector:
    app: my-app          # matches pods with this label
  ports:
    - name: http
      port: 80           # service port (what clients use)
      targetPort: 8080   # pod port (what the container listens on)
    - name: metrics
      port: 9090
      targetPort: 9090
---
# NodePort — exposes on each node's IP at static port (30000-32767)
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # optional — K8s assigns one if omitted
---
# LoadBalancer — creates cloud LB (AWS ELB, GCP LB, Azure LB)
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  loadBalancerSourceRanges:   # allow-list source IPs
    - 10.0.0.0/8
    - 203.0.113.0/24
---
# ExternalName — CNAME to external DNS (no proxying)
spec:
  type: ExternalName
  externalName: database.prod.internal   # returns CNAME, no port mapping
---
# Headless service — no ClusterIP, direct pod DNS
spec:
  clusterIP: None    # headless
  selector:
    app: cassandra   # returns A records for each pod IP
# Access headless pods at <pod-name>.<service>.<namespace>.svc.cluster.local
| Type | Scope | Use when |
|---|---|---|
| ClusterIP | Cluster-internal | Service-to-service (99% of internal traffic) |
| NodePort | Node IP + port | Dev/test, on-prem without LB controller |
| LoadBalancer | External cloud LB | Exposing a single service to internet |
| ExternalName | DNS alias | Pointing to external database or legacy endpoint |
| Headless | Per-pod DNS | StatefulSets, Cassandra, direct pod addressing |
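The spec fragments above omit the surrounding boilerplate. As a reminder of how the pieces combine, a complete NodePort manifest looks like this (names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport   # placeholder name
  namespace: production
spec:
  type: NodePort
  selector:
    app: my-app           # must match the pod labels
  ports:
    - port: 80            # ClusterIP port (still created)
      targetPort: 8080    # container port
      nodePort: 30080     # reachable at <any-node-ip>:30080
```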
Ingress — HTTP/S routing with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    # nginx-ingress specific
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    # cert-manager auto-TLS
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx   # which ingress controller handles this
  tls:
    - hosts:
        - app.example.com
        - api.example.com
      secretName: app-tls   # cert-manager creates/manages this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service: { name: api-v1, port: { number: 80 } }
          - path: /v2
            pathType: Prefix
            backend:
              service: { name: api-v2, port: { number: 80 } }
---
# PathType options:
# Exact — match exactly "/foo" (not "/foo/" or "/foobar")
# Prefix — match "/foo", "/foo/", "/foo/bar"
# ImplementationSpecific — controller-defined
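Ingress also has a catch-all for requests that match no rule, via `spec.defaultBackend`. A minimal sketch (the fallback service name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catch-all
spec:
  ingressClassName: nginx
  defaultBackend:        # used when no host/path rule matches
    service:
      name: fallback     # placeholder service
      port:
        number: 80
```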
# IngressClass — select which controller handles this Ingress
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # default for unspecified
spec:
  controller: k8s.io/ingress-nginx
NetworkPolicy — firewall rules for pods
# Default deny all ingress + egress (apply first in namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # all pods in namespace
  policyTypes:
    - Ingress
    - Egress
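A softer baseline blocks only incoming traffic, so pods can still reach out and resolve DNS without explicit egress rules. The only change is dropping the Egress policy type:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}   # all pods in namespace
  policyTypes:
    - Ingress       # egress left unrestricted
```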
---
# Allow frontend → API, and API → database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api   # policy applies to api pods
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only from frontend pods
      ports:
        - port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - port: 5432
    - to:   # allow DNS (always needed)
        - namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
---
# Allow from specific namespace
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring   # Prometheus namespace
        podSelector:
          matchLabels:
            app: prometheus   # AND specific pod (namespace AND pod selector)
Common mistake: namespaceSelector and podSelector in the same list item means AND (same source must match both). Separate list items means OR (either source).
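For contrast, the OR form lists each source as its own item. The only syntactic difference is the extra `-` in front of `podSelector`:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring   # any pod in monitoring...
      - podSelector:
          matchLabels:
            app: prometheus   # ...OR prometheus pods in this namespace
```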
DNS — cluster DNS and resolution
# Kubernetes DNS (CoreDNS) service discovery
# Full DNS format:
#   <service>.<namespace>.svc.cluster.local
#   <pod-ip-with-dashes>.<namespace>.pod.cluster.local
# Within same namespace — just service name works
curl http://my-service
curl http://my-service:8080
# Cross-namespace — use FQDN
curl http://my-service.production.svc.cluster.local
curl http://my-service.monitoring.svc.cluster.local:9090
# StatefulSet pod DNS (headless service required)
# <pod-name>.<service>.<namespace>.svc.cluster.local
# cassandra-0.cassandra.production.svc.cluster.local
# cassandra-1.cassandra.production.svc.cluster.local
# Check DNS from inside a pod
kubectl exec -it debug-pod -- nslookup my-service
kubectl exec -it debug-pod -- nslookup my-service.production.svc.cluster.local
kubectl exec -it debug-pod -- cat /etc/resolv.conf
# resolv.conf in pods:
# search production.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10 (CoreDNS ClusterIP)
# options ndots:5
# ndots:5 means: if name has < 5 dots, try search domains first
# Impact: "api.external.com" → tries api.external.com.production.svc.cluster.local first
# Fix: use trailing dot for external DNS: "api.external.com."
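If trailing dots are impractical (many HTTP clients reject them), a pod can lower ndots for itself via `spec.dnsConfig`; values here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-heavy-client   # placeholder name
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"   # names with >= 2 dots skip the search list
  containers:
    - name: app
      image: my-app:latest      # placeholder image
```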
# CoreDNS ConfigMap — customise DNS behaviour
kubectl edit configmap coredns -n kube-system
# Custom DNS entries (stub zones)
# Add to corefile:
#   example.internal:53 {
#       forward . 10.0.0.10   # internal DNS server
#   }
kubectl — network debugging commands
# Inspect services and endpoints
kubectl get svc -A # all services across namespaces
kubectl get endpoints my-service # check pods behind the service
kubectl describe svc my-service # full service details
# Debug: if Endpoints is empty, the selector doesn't match any pods
kubectl get pods -l app=my-app # verify pods exist with selector labels
kubectl get pod my-pod -o yaml | grep -A5 labels # check pod labels
# Port-forward — access any service/pod locally
kubectl port-forward svc/my-service 8080:80 # service
kubectl port-forward pod/my-pod-abc123 8080:8080 # specific pod
kubectl port-forward deploy/my-deployment 8080:80 # deployment
# Run debug pod in cluster
kubectl run debug --image=nicolaka/netshoot --rm -it -- bash
# Or ephemeral container (K8s 1.23+)
kubectl debug -it my-pod --image=busybox --target=my-container
# Test connectivity from inside cluster
kubectl exec -it my-pod -- curl http://other-service
kubectl exec -it my-pod -- wget -qO- http://other-service:8080/health
kubectl exec -it my-pod -- nc -zv other-service 5432 # TCP port check
# Inspect Ingress
kubectl get ingress -A
kubectl describe ingress my-ingress
kubectl get ingressclass
# Check NetworkPolicy
kubectl get networkpolicy -n production
kubectl describe networkpolicy allow-api-access
# Inspect kube-proxy and CoreDNS
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
kubectl logs -n kube-system -l component=kube-proxy --tail=50
# iptables rules (on a node)
sudo iptables -t nat -L KUBE-SERVICES | head -30
sudo iptables -t nat -L KUBE-SVC-XXXXX # service chain
# Check node-to-pod connectivity
kubectl get nodes -o wide # node IPs
kubectl get pods -o wide # pod IPs
Gateway API — the future of Ingress
# Gateway API (Kubernetes SIG-Network) — more powerful than Ingress
# Install: kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/...
# GatewayClass — what controller handles the Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: k8s-gateway.nginx.org/nginx-gateway-controller
---
# Gateway — the actual load balancer / listener
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: production
spec:
  gatewayClassName: nginx
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      tls:
        certificateRefs:
          - name: prod-cert
    - name: http
      port: 80
      protocol: HTTP
---
# HTTPRoute — attach routing rules to the Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: production
spec:
  parentRefs:
    - name: prod-gateway   # attach to gateway
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service
          port: 80
          weight: 90   # weighted routing (canary)
        - name: api-service-v2
          port: 80
          weight: 10
    - matches:
        - headers:
            - name: X-User-Group
              value: beta-testers
      backendRefs:
        - name: api-service-v2
          port: 80
Gateway API is the designed successor to Ingress and the recommended choice for new clusters. It supports weighted routing, header-based routing, and cross-namespace references that Ingress cannot express without controller-specific annotations.
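Cross-namespace backendRefs must be explicitly permitted by a ReferenceGrant in the target namespace. A sketch assuming a route in production pointing at a Service in a hypothetical shared-services namespace:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-prod-routes
  namespace: shared-services    # namespace that owns the backend Service
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: production     # routes here may reference us
  to:
    - group: ""                 # core API group
      kind: Service
```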
Service mesh — Istio and Linkerd basics
# Istio — mTLS, traffic management, observability
# Inject sidecar into namespace
kubectl label namespace production istio-injection=enabled
# VirtualService — Istio traffic routing
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service
  http:
    - match:
        - headers:
            x-canary:
              exact: "true"
      route:
        - destination:
            host: my-service
            subset: v2
    - route:
        - destination:
            host: my-service
            subset: v1
          weight: 90
        - destination:
            host: my-service
            subset: v2
          weight: 10   # 10% canary
# DestinationRule — define subsets and load balancing
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        h2UpgradePolicy: UPGRADE
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
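The mTLS side mentioned above is configured separately from routing. A PeerAuthentication sketch that enforces strict mutual TLS for the whole namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT   # reject plaintext from pods without sidecars
```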
# Istio useful commands
istioctl analyze # check for config issues
istioctl proxy-status # sidecar sync status
istioctl proxy-config cluster my-pod # envoy cluster config
kubectl exec my-pod -c istio-proxy -- pilot-agent request GET stats | grep retry
Common networking issues and fixes
# Issue: Service has no Endpoints
# Fix: check pod label selector matches service selector
kubectl get svc my-svc -o yaml | grep selector -A5
kubectl get pods -l app=my-app # should return pods
# Issue: DNS not resolving
# Fix: check CoreDNS is running and query from pod
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl exec -it my-pod -- nslookup kubernetes.default
kubectl exec -it my-pod -- cat /etc/resolv.conf
# Issue: Connection timeout (NetworkPolicy blocking)
# Fix: check policies, add explicit DNS egress rule
kubectl get networkpolicy -n production
# Test with netshoot:
kubectl run test --image=nicolaka/netshoot --rm -it -- tcptraceroute my-service 80
# Issue: Ingress 502/503 (no healthy backends)
kubectl describe ingress my-ingress # check backend service names
kubectl get endpoints my-service # must have pod IPs listed
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=30
# Issue: Pod can't reach external internet
# Fix: check egress NetworkPolicy and cluster DNS config
kubectl exec -it my-pod -- curl -v https://api.github.com
kubectl exec -it my-pod -- nslookup api.github.com # DNS works?
# Issue: intermittent timeouts on large payloads
# Fix: increase proxy timeouts in Ingress annotations
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
# Issue: service ClusterIP changes after recreation
# Client apps may cache the old resolved IP (e.g. JVM DNS caching) — restart them:
kubectl rollout restart deployment/my-app
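Several of the issues above trace back to readiness: a pod failing its readinessProbe is removed from the Service Endpoints even while Running, producing empty Endpoints and Ingress 502/503s. A probe sketch for the pod template (path and port are assumptions):

```yaml
# In the Deployment's spec.template.spec:
containers:
  - name: api
    image: my-api:latest   # placeholder image
    readinessProbe:
      httpGet:
        path: /health      # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```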