
Kubernetes RBAC Reference

Role-Based Access Control in Kubernetes: Roles, ClusterRoles, bindings, ServiceAccounts, and the audit commands to figure out who can do what — including the patterns that catch everyone out.

Core objects — Role, ClusterRole, RoleBinding, ClusterRoleBinding
Object              Scope         Grants access to
Role                Namespace     Resources in ONE namespace
ClusterRole         Cluster-wide  Resources in ALL namespaces OR cluster-level resources (nodes, PVs, CRDs)
RoleBinding         Namespace     Attaches a Role OR ClusterRole to subjects, but only in that namespace
ClusterRoleBinding  Cluster-wide  Attaches a ClusterRole to subjects cluster-wide

RoleBinding can reference a ClusterRole — but the access is still restricted to the RoleBinding’s namespace. This is how you define a shared ClusterRole (e.g. “pod-reader”) and reuse it in multiple namespaces.
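As a sketch of that reuse pattern (assuming a shared ClusterRole named pod-reader already exists; the namespace and user here are illustrative), the RoleBinding's roleRef points at the ClusterRole, but the grant still stops at the binding's namespace:

```yaml
# RoleBinding in "staging" reusing a cluster-wide ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole        # references a ClusterRole...
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
# ...but alice can still only read pods in "staging"
```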

# Role — access within one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  - apiGroups: [""]          # "" means core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]  # subresource
    verbs: ["get"]

---
# ClusterRole — access cluster-wide or reusable across namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-manager
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

---
# RoleBinding — give pod-reader to a user in production namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
  - kind: ServiceAccount
    name: my-service-account
    namespace: production
roleRef:
  kind: Role          # or ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

---
# ClusterRoleBinding — give cluster-admin to a group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-admins
subjects:
  - kind: Group
    name: platform-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

API groups and resources
# Find the apiGroup for any resource
kubectl api-resources -o wide
kubectl api-resources --namespaced=true    # only namespaced resources
kubectl api-resources --namespaced=false   # cluster-level resources

# Common apiGroups and their resources
# ""  (core)           pods, services, endpoints, configmaps, secrets, 
#                      serviceaccounts, nodes, namespaces, persistentvolumes,
#                      persistentvolumeclaims, events, resourcequotas
# "apps"               deployments, replicasets, statefulsets, daemonsets
# "batch"              jobs, cronjobs
# "networking.k8s.io"  ingresses, networkpolicies, ingressclasses
# "rbac.authorization.k8s.io"  roles, rolebindings, clusterroles, clusterrolebindings
# "autoscaling"        horizontalpodautoscalers
# "policy"             poddisruptionbudgets
# "storage.k8s.io"     storageclasses, volumeattachments
# "apiextensions.k8s.io" customresourcedefinitions
# "cert-manager.io"    certificates, issuers, clusterissuers  (if installed)

# All verbs
# get, list, watch         — read operations
# create, update, patch    — write operations
# delete, deletecollection — delete operations
# use, bind, escalate      — special verbs (PodSecurityPolicy, RBAC escalation)

# Subresources (specify separately in rules)
# pods/log          — kubectl logs
# pods/exec         — kubectl exec
# pods/portforward  — kubectl port-forward
# pods/attach       — kubectl attach
# pods/status       — update pod status
# deployments/scale — kubectl scale
# nodes/proxy       — proxy to node kubelet API

# Example: allow logs + exec but not pod creation
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/log", "pods/exec"]
    verbs: ["get", "create"]   # exec requires create on the subresource

Forgetting subresources is one of the most common RBAC mistakes. Access to pods doesn’t automatically grant access to pods/log or pods/exec.

ServiceAccounts — identity for pods
# Default ServiceAccount — every pod gets the "default" SA if none specified
# The "default" SA in most clusters has no RBAC roles → minimal permissions
# BUT: it mounts an API token automatically — disable if not needed

# Create a dedicated ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
automountServiceAccountToken: false   # disable auto-mount if app doesn't call API

---
# Attach the SA to a pod
spec:
  serviceAccountName: my-app
  automountServiceAccountToken: true   # override per-pod if needed

---
# Full pattern: SA + Role + RoleBinding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: config-reader
  namespace: production

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: config-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: config-reader
    namespace: production
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io

---
# Cross-namespace: SA in ns-a accessing resources in ns-b
# RoleBinding must be in ns-b with subject pointing to ns-a:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-monitoring
  namespace: production       # resource namespace
subjects:
  - kind: ServiceAccount
    name: prometheus           # SA in DIFFERENT namespace
    namespace: monitoring      # SA's namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

kubectl auth can-i — audit and debug RBAC
# Check what YOU can do
kubectl auth can-i get pods
kubectl auth can-i create deployments -n production
kubectl auth can-i '*' '*'     # can I do everything? (returns yes/no)

# Impersonate another user/SA
kubectl auth can-i get pods --as alice
kubectl auth can-i get pods --as system:serviceaccount:production:my-app
kubectl auth can-i get pods --as system:serviceaccount:production:my-app -n production

# Check all permissions for a user (slow — iterates all resources)
kubectl auth can-i --list --as alice -n production

# Check all permissions for a ServiceAccount
kubectl auth can-i --list \
  --as system:serviceaccount:production:my-app \
  -n production

# Who can do what? (no built-in, use rakkess or kubectl-who-can)
# kubectl-who-can plugin
kubectl-who-can create secrets -n production
# exec is authorized as "create" on the pods/exec subresource
kubectl-who-can create pods --subresource=exec -n kube-system

# rakkess — show permissions matrix for all resources
# kubectl access-matrix -n production --as alice

# Describe roles + bindings
kubectl get roles,rolebindings -n production
kubectl get clusterroles,clusterrolebindings
kubectl describe role pod-reader -n production
kubectl describe rolebinding read-pods -n production

# Find all RoleBindings for a user/SA
kubectl get rolebindings -A -o json | \
  python3 -c "
import json,sys
d=json.load(sys.stdin)
for rb in d['items']:
  for s in rb.get('subjects') or []:   # subjects is a top-level field (RoleBinding has no spec)
    if s.get('name') in ['alice', 'my-app']:
      print(rb['metadata']['namespace'], rb['metadata']['name'], '→', rb['roleRef']['name'])
"
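The one-liner above only walks RoleBindings; ClusterRoleBindings grant cluster-wide access and deserve the same pass. Here is a sketch of the same lookup as a reusable function (field names match the kubectl JSON output; the function itself is illustrative, not a kubectl feature):

```python
def bindings_for_subjects(doc, names):
    """Yield (namespace, binding, role) for every binding whose subjects
    include one of `names`.  `doc` is the parsed JSON from either
    `kubectl get rolebindings -A -o json` or
    `kubectl get clusterrolebindings -o json` — in both shapes,
    `subjects` is a top-level field and may be absent entirely."""
    for b in doc.get("items", []):
        for s in b.get("subjects") or []:
            if s.get("name") in names:
                yield (b["metadata"].get("namespace", "<cluster>"),
                       b["metadata"]["name"],
                       b["roleRef"]["name"])
```

Pipe both listings through it to get a complete picture of a subject's grants.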

Built-in ClusterRoles — what they grant
# View all built-in ClusterRoles
kubectl get clusterroles | grep -v system:

# Most important built-ins:
# cluster-admin   — full access to everything. Don't grant unless necessary.
# admin           — full access within a namespace (can create roles/bindings in namespace)
# edit            — read+write most namespaced resources INCLUDING secrets (no roles/bindings)
# view            — read-only for most resources (no secrets)

# System ClusterRoles (prefixed with "system:")
# system:node                    — used by kubelets
# system:kube-proxy              — used by kube-proxy
# system:controller:*            — used by built-in controllers
# system:aggregate-to-view       — aggregated into view role
# system:aggregate-to-edit       — aggregated into edit role
# system:aggregate-to-admin      — aggregated into admin role

# Aggregated ClusterRoles — extend built-ins with labels
# Add rules to the "view" ClusterRole for a CRD:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myapp-viewer
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"   # auto-added to view
    rbac.authorization.k8s.io/aggregate-to-edit: "true"   # auto-added to edit
    rbac.authorization.k8s.io/aggregate-to-admin: "true"  # auto-added to admin
rules:
  - apiGroups: ["myapp.io"]
    resources: ["myresources"]
    verbs: ["get", "list", "watch"]

# Best practice: use edit for developers, view for read-only CI/CD
# Don't use cluster-admin for application ServiceAccounts — ever

Never use cluster-admin for application ServiceAccounts. If a pod is compromised, the attacker gets full cluster access. Use the minimum verbs on the minimum resources in the minimum namespace.

Least-privilege patterns
# Pattern 1: Read own config only
# App reads its own ConfigMap by name
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["my-app-config"]   # restrict to specific name
    verbs: ["get", "watch"]

# Pattern 2: Read own secret (avoid if possible — prefer envFrom)
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-app-db-creds"]
    verbs: ["get"]

# Pattern 3: Operator / controller pattern
# Controller needs to watch resources and update status
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]

# Pattern 4: CI/CD pipeline
# Deploy to specific namespace only
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  # NO: pods/exec, secrets, clusterroles, rolebindings

# Pattern 5: Monitoring (Prometheus)
# Needs to scrape metrics from pods + nodes
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/proxy", "nodes/metrics", "services",
                "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
    verbs: ["get"]   # nonResourceURLs are only valid in a ClusterRole

# resourceNames in rules
# resourceNames: ["name1", "name2"]   — exact names only
# resourceNames restricts only requests that target a single named object
# (get, update, patch, delete) — it does NOT restrict "list", "watch",
# "create", or "deletecollection"
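Because list can't be name-restricted, a common compromise (sketched here with a hypothetical ConfigMap name) is to pair a broad list rule with a name-restricted read rule, so the app can discover objects but only fetch its own:

```yaml
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["list"]                      # listing cannot be name-restricted
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["my-app-config"]
    verbs: ["get"]                       # reads limited to one named object
```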

Workload Identity — OIDC and pod identity
# Kubernetes ServiceAccount tokens → cloud IAM (no static credentials in pods)

# AWS IRSA (IAM Roles for Service Accounts)
# 1. Enable OIDC on EKS cluster
# 2. Create IAM role with trust policy for the SA
# 3. Annotate the ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader-role

# Trust policy (in AWS IAM)
# {
#   "Effect": "Allow",
#   "Principal": {
#     "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/XXXXX"
#   },
#   "Action": "sts:AssumeRoleWithWebIdentity",
#   "Condition": {
#     "StringEquals": {
#       "oidc.eks.us-east-1.amazonaws.com/id/XXXXX:sub": "system:serviceaccount:production:s3-reader"
#     }
#   }
# }

# GKE Workload Identity
kubectl annotate serviceaccount s3-reader \
  --namespace=production \
  iam.gke.io/gcp-service-account=gsa-name@project.iam.gserviceaccount.com

gcloud iam service-accounts add-iam-policy-binding \
  gsa-name@project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:project.svc.id.goog[production/s3-reader]"

# Azure Workload Identity (AKS)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa
  namespace: production
  annotations:
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
  labels:
    azure.workload.identity/use: "true"

Workload Identity (all three cloud providers) eliminates the need to store cloud credentials in Kubernetes Secrets. The pod gets a short-lived OIDC token that the cloud provider exchanges for cloud IAM credentials. This is the recommended pattern for all new deployments.
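On the pod side, opting in is just a matter of running under the annotated ServiceAccount; a sketch for the EKS/IRSA case (pod name, image, and command are illustrative):

```yaml
# Pod using the IRSA-annotated ServiceAccount from above
apiVersion: v1
kind: Pod
metadata:
  name: s3-copy
  namespace: production
spec:
  serviceAccountName: s3-reader   # annotated with eks.amazonaws.com/role-arn
  containers:
    - name: app
      image: amazon/aws-cli       # any image with an AWS SDK works
      command: ["aws", "s3", "ls"]
# EKS injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the
# container; the SDK exchanges the projected OIDC token for IAM
# credentials via sts:AssumeRoleWithWebIdentity — no static keys needed
```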

Common RBAC mistakes and how to fix them
# Mistake 1: Using cluster-admin for an app SA
# Fix: identify what the app actually needs and create a minimal Role
kubectl auth can-i --list --as system:serviceaccount:production:my-app -n production
# Remove wildcard ClusterRoleBinding, create targeted Role

# Mistake 2: Granting secrets access cluster-wide
# Fix: namespace-scoped Role with specific secret names
# WRONG:
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]   # lists ALL secrets in namespace

# RIGHT:
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-creds", "api-key"]
    verbs: ["get"]

# Mistake 3: Forgetting pods/exec grants terminal access
# If you give pods/exec to an SA, that SA can exec into any pod
# and potentially read secrets from environment variables

# Mistake 4: RoleBinding subject namespace mismatch
# The SA namespace in the binding MUST match the SA's actual namespace
subjects:
  - kind: ServiceAccount
    name: my-app
    # namespace: staging      # WRONG if the SA actually lives in production
    namespace: production     # CORRECT — must match the SA's real namespace

# Mistake 5: ClusterRoleBinding when RoleBinding was intended
# ClusterRoleBinding → ClusterRole grants access to ALL namespaces
# Use RoleBinding per namespace to restrict scope

# Mistake 6: Using User/Group subjects with static credentials
# Kubernetes doesn't manage users — subject name is whatever the client cert's CN is
# For humans: use OIDC with your identity provider (not static client certs)
# For automation: use ServiceAccounts (not User subjects)

# Mistake 7: Leaving default SA with mounted token
# Every pod gets the default SA token unless explicitly disabled
# Default SA has no roles in a fresh cluster, but third-party tools sometimes bind to it
# On the ServiceAccount (a top-level field — ServiceAccount has no spec):
automountServiceAccountToken: false
# Or per-pod, in the pod spec:
spec:
  automountServiceAccountToken: false

