
CI/CD Pipelines with Kubernetes: Best Practices and Tools


Jamieson Smith · February 21, 2026 · 6 min read

If you’re studying DevOps or software engineering, you’ve probably heard that Kubernetes is “the standard” for running applications in production. That’s true. But what doesn’t get talked about enough is how code actually gets from a developer’s laptop into a running Kubernetes cluster. That’s where CI/CD pipelines come in, and getting them right with Kubernetes is genuinely one of the most practical skills you can build.

This guide covers the tools, patterns, and best practices that real teams use to ship code to Kubernetes clusters. Everything here is based on current adoption data, official documentation, and how these tools are actually used in production environments today.

What CI/CD Actually Means in a Kubernetes Context

CI/CD stands for Continuous Integration and Continuous Delivery (or Continuous Deployment, depending on who you ask). In simple terms:

  • Continuous Integration (CI) means every code change gets automatically built and tested. You push code, and within minutes you know whether it works.
  • Continuous Delivery (CD) means those tested changes are automatically packaged and ready to deploy. One click (or zero clicks) to production.

With Kubernetes, the “delivery” part has a specific shape. Your application gets packaged into a container image, that image gets pushed to a registry, and then Kubernetes pulls that image and runs it as pods in your cluster. A CI/CD pipeline automates every step of that process.

Here’s what a typical pipeline looks like:

Code Push → Build → Test → Container Image → Push to Registry → Deploy to Kubernetes

Each stage runs automatically. If any stage fails, the pipeline stops and you get notified. No broken code reaches production.

The Tools: What Teams Actually Use

According to the CNCF Annual Survey (2024), the most widely used CI/CD platforms for Kubernetes workloads are:

  • GitHub Actions: 51% adoption
  • Argo CD: 45% adoption
  • Jenkins: 44% adoption
  • GitLab CI/CD: 34% adoption
  • Azure Pipelines: 24% adoption

These numbers reflect real-world usage across thousands of organizations. Let’s break down each one.

GitHub Actions

GitHub Actions is the most popular CI/CD platform right now, largely because it’s built directly into GitHub where most code already lives. For students, it’s also the most accessible option: public repositories get unlimited free minutes, and the GitHub Student Developer Pack provides additional benefits.

A basic GitHub Actions workflow that builds a container image and deploys to Kubernetes looks like this:

name: Build and Deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build container image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Push to container registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker tag myapp:${{ github.sha }} ghcr.io/${{ github.repository }}/myapp:${{ github.sha }}
          docker push ghcr.io/${{ github.repository }}/myapp:${{ github.sha }}

      - name: Deploy to Kubernetes
        uses: azure/k8s-deploy@v5
        with:
          manifests: k8s/deployment.yaml
          images: ghcr.io/${{ github.repository }}/myapp:${{ github.sha }}

Note: as of January 2026, GitHub reduced hosted runner prices by up to 39%, though they also introduced a $0.002/minute platform charge for self-hosted runners. For students working on public repos, none of this matters because public repository usage remains free.

Argo CD

Argo CD is a dedicated Kubernetes deployment tool, and it’s been growing fast. A 2025 CNCF end-user survey found that Argo CD runs in nearly 60% of surveyed Kubernetes clusters, with 97% of respondents using it in production (up from 93% in 2023). It achieved a Net Promoter Score of 79, which is remarkably high for infrastructure tooling.

The latest stable release is Argo CD v3.3.1 (February 18, 2026). The three currently supported versions are 3.3, 3.2, and 3.1.

What makes Argo CD different from GitHub Actions or Jenkins is that it follows the GitOps model. Instead of your CI pipeline pushing changes to Kubernetes, Argo CD watches a Git repository and pulls changes into the cluster. The Git repository becomes the single source of truth for what should be running.

Here’s a minimal Argo CD Application manifest:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-app-manifests.git
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

With selfHeal: true, if someone manually changes something in the cluster, Argo CD automatically reverts it to match what’s in Git. This is one of the strongest arguments for GitOps: your cluster state is always predictable and auditable.
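
You can also inspect and drive that sync behavior from the command line. A quick sketch with the argocd CLI, assuming you have already logged in with argocd login and that the Application is named my-app as above:

# Show sync status, health, and the last applied revision
argocd app get my-app

# Trigger a sync manually (only needed if automated sync is off)
argocd app sync my-app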

Jenkins

Jenkins is the oldest CI/CD tool still in wide use. It’s open source, self-hosted, and has over 1,800 plugins. The Kubernetes plugin lets Jenkins spin up build agents as pods dynamically, so you’re not paying for idle build servers.

A Jenkinsfile with Kubernetes pod templates looks like this:

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: docker
            image: docker:27-dind
            securityContext:
              privileged: true
          - name: kubectl
            image: bitnami/kubectl:1.31
            command: ['sleep', 'infinity']
      '''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('docker') {
          sh 'docker build -t myapp:${BUILD_NUMBER} .'
        }
      }
    }
    stage('Deploy') {
      steps {
        container('kubectl') {
          sh 'kubectl apply -f k8s/deployment.yaml'
        }
      }
    }
  }
}

Jenkins requires more setup and maintenance than GitHub Actions or GitLab CI, but it gives you complete control. For students learning how CI/CD works under the hood, Jenkins is actually a solid educational tool because nothing is abstracted away from you.

GitLab CI/CD

GitLab’s CI/CD is tightly integrated with its Git hosting, similar to how GitHub Actions integrates with GitHub. The GitLab Agent for Kubernetes provides a secure, pull-based connection between GitLab and your cluster, so you don’t need to expose cluster credentials in CI variables.
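
The agent is configured through a file in the repository where you register it. A minimal sketch, assuming an agent named my-agent whose config lives at .gitlab/agents/my-agent/config.yaml (the agent and project names here are placeholders):

# .gitlab/agents/my-agent/config.yaml
ci_access:
  projects:
    - id: my-group/my-project   # allow this project's CI jobs to use the agent's context

That authorization is what makes the kubectl config use-context line in the deploy job below work.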

A basic .gitlab-ci.yml for Kubernetes deployment:

stages:
  - build
  - deploy

build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    # Authenticate to the GitLab container registry using built-in CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:1.31
  script:
    - kubectl config use-context my-group/my-project:my-agent
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  environment:
    name: production

GitLab offers 400 free CI/CD minutes per month on its free tier. For students, that’s usually enough for personal projects, though it can run out quickly on larger builds.

Flux CD

Flux is the other major GitOps tool alongside Argo CD. It’s a CNCF Graduated project (since November 2022) and takes a more composable, controller-based approach. Where Argo CD gives you a web UI and a single Application resource, Flux breaks things down into separate controllers for Git sources, Helm releases, Kustomize overlays, and image automation.

The latest version is Flux v2.7.5 (November 2025). Flux completed its second CNCF security audit with zero CVEs, which is worth noting if you’re working in security-sensitive environments.

A Flux setup uses multiple small resources:

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/your-org/your-app-manifests
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m
  path: ./k8s
  sourceRef:
    kind: GitRepository
    name: my-app
  prune: true

Flux is generally considered easier to operate at scale because each controller handles one concern. The tradeoff is that it lacks a built-in web dashboard (though Weave GitOps provides one as a separate project).

Dagger

Dagger is the newest tool on this list and worth knowing about. Created by the founder of Docker, Dagger lets you define CI/CD pipelines in real programming languages (Go, Python, TypeScript) instead of YAML. Each pipeline step runs in a container, so your pipeline behaves identically on your laptop and in CI.

import anyio
import dagger

async def build_and_push():
    # Connect to the Dagger engine (starts one locally if needed)
    async with dagger.Connection() as client:
        # Load the current directory from the host
        src = client.host().directory(".")

        # Build the container
        app = (
            client.container()
            .from_("python:3.12-slim")
            .with_directory("/app", src)
            .with_workdir("/app")
            .with_exec(["pip", "install", "-r", "requirements.txt"])
        )

        # Push to registry
        await app.publish("ghcr.io/your-org/myapp:latest")

anyio.run(build_and_push)

Dagger can run on Kubernetes as a pod, job, or DaemonSet using their official Helm chart. It’s still relatively new, but the “pipelines as real code” approach is gaining traction because you get proper IDE support, type checking, and unit testing for your CI/CD logic.

Comparison Table


Tool           | Best For                         | Free Tier                  | GitOps               | Learning Curve | Key Strength
GitHub Actions | Teams already on GitHub          | Unlimited for public repos | No (push-based)      | Low            | Marketplace with 20,000+ actions
Argo CD        | Production Kubernetes CD         | Free (open source)         | Yes (pull-based)     | Medium         | Web UI, auto-sync, RBAC
Jenkins        | Full control, complex pipelines  | Free (open source)         | No (push-based)      | High           | 1,800+ plugins, total flexibility
GitLab CI/CD   | GitLab-native teams              | 400 mins/month             | Partial (with agent) | Low            | All-in-one DevOps platform
Flux CD        | Composable GitOps at scale       | Free (open source)         | Yes (pull-based)     | Medium         | Modular controllers, CNCF graduated
Dagger         | Pipeline portability             | Free (open source)         | No (push-based)      | Medium         | Pipelines in Go/Python/TS, not YAML

Deployment Strategies You Should Know

Once your pipeline builds and pushes a container image, Kubernetes offers several strategies for rolling out changes. Understanding these is critical.

Rolling Update (Default)

Kubernetes replaces old pods with new ones gradually. If you have 5 replicas, it brings up one new pod, waits for it to pass health checks, then terminates one old pod, and repeats.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # At most 1 extra pod during rollout
      maxUnavailable: 0   # Never reduce below 5 healthy pods
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

Rolling updates are the default because they require zero extra configuration and provide zero-downtime deployments. For most applications, this is all you need.
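
Two kubectl commands are worth keeping next to any rolling update; they work on every Deployment:

# Watch a rollout until it completes (or fails its progress deadline)
kubectl rollout status deployment/myapp -n production

# Roll back to the previous ReplicaSet if the new version misbehaves
kubectl rollout undo deployment/myapp -n production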

Blue/Green Deployment

You maintain two identical environments. “Blue” runs the current version, “green” runs the new version. Once green passes all tests, you switch traffic from blue to green by updating the Service selector.

# Service pointing to blue (current)
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue    # Change to "green" to switch
  ports:
  - port: 80
    targetPort: 8080

The upside: instant rollback (just flip the selector back). The downside: you need double the resources during the transition. For student projects, this is great to understand conceptually, but rolling updates are more practical for real workloads.
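
The switch itself can be a single kubectl patch step in your pipeline. A sketch, assuming the Service above:

# Shift all traffic from blue to green
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Instant rollback: point the selector back at blue
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'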

Canary Deployment

You deploy the new version to a small percentage of traffic first (say, 5%), monitor for errors, then gradually increase. Argo Rollouts and Flagger are the two main tools for automating this on Kubernetes.

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  strategy:
    canary:
      steps:
      - setWeight: 5
      - pause: { duration: 5m }
      - setWeight: 25
      - pause: { duration: 5m }
      - setWeight: 50
      - pause: { duration: 5m }
      - setWeight: 100
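      # Like a Deployment, a Rollout also needs replicas, a selector,
      # and a pod template; they are omitted here for brevity.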

Canary deployments are standard practice at companies like Google, Netflix, and Spotify. They’re the safest way to ship changes because you catch problems when only a small fraction of users are affected.

Best Practices for Students

These aren’t abstract principles. They’re the specific things that will save you hours of debugging and make your projects look professional.

1. Tag Images with Git SHAs, Not "latest"

Never use image: myapp:latest in production manifests. Use the Git commit SHA instead (myapp:a1b2c3d). The “latest” tag is mutable, so you can never be sure which version is actually running, and re-applying a manifest whose image tag hasn’t changed won’t trigger a new rollout.

# Bad
image: myapp:latest

# Good
image: myapp:a1b2c3d4e5f6
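
How the SHA gets into your manifests depends on your tooling. One common sketch, assuming a Kustomize overlay and the GITHUB_SHA variable that GitHub Actions provides (paths and image names are placeholders):

# Run inside the manifests repo during CI
cd overlays/production
kustomize edit set image myapp=ghcr.io/your-org/myapp:${GITHUB_SHA}
# Commit the change so Argo CD or Flux picks it up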

2. Store Kubernetes Manifests in a Separate Repository

Keep your application code in one repo and your Kubernetes deployment manifests in another. This separation means your CI pipeline (which builds and tests code) stays independent from your CD pipeline (which deploys to Kubernetes). Argo CD and Flux both work best with this pattern.
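
A typical layout looks something like this (repository names are just examples):

your-app/                  # application repo: CI builds and tests here
  src/
  Dockerfile
  .github/workflows/ci.yaml

your-app-manifests/        # manifest repo: Argo CD or Flux watches this
  k8s/
    deployment.yaml
    service.yaml
    kustomization.yaml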

3. Use Namespaces to Isolate Environments

Create separate namespaces for staging and production. Your CI/CD pipeline should deploy to staging first, run integration tests, and only promote to production if everything passes.

kubectl create namespace staging
kubectl create namespace production

4. Never Store Secrets in Git

Use Kubernetes Secrets or external secret managers (like HashiCorp Vault, AWS Secrets Manager, or Sealed Secrets) for credentials. In your CI/CD pipeline, store secrets as encrypted environment variables (GitHub Secrets, GitLab CI Variables), never in plain text.
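
For example, a deploy job can create or update a Secret from a CI variable at deploy time, so the value never touches the repository (db-credentials and DB_PASSWORD are placeholder names):

# Idempotent: creates the Secret the first time, updates it afterwards
kubectl create secret generic db-credentials \
  --from-literal=password="$DB_PASSWORD" \
  -n production \
  --dry-run=client -o yaml | kubectl apply -f -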

5. Add Health Checks to Every Deployment

Without readiness and liveness probes, Kubernetes can’t tell whether your application is actually working. Rolling updates rely on readiness probes to decide when a new pod is ready to receive traffic.

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20

6. Use Helm or Kustomize for Configuration Management

Raw YAML files become unmanageable quickly. Helm uses templates with values files for different environments. Kustomize uses overlays to patch base manifests without templating. Both are valid approaches; Helm is more popular (it’s also a CNCF Graduated project), while Kustomize is built into kubectl.

# Helm: install a chart with custom values
helm install myapp ./charts/myapp -f values-production.yaml

# Kustomize: apply an overlay
kubectl apply -k overlays/production/
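
A minimal Kustomize overlay for the second command above might look like this (the directory layout and image name are illustrative):

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
images:
  - name: myapp
    newTag: a1b2c3d4e5f6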

7. Implement RBAC for CI/CD Service Accounts

Don’t give your CI/CD pipeline cluster-admin access. Create a dedicated ServiceAccount with only the permissions it needs.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]

This is a security best practice that also happens to be a common interview question for DevOps roles.

Setting Up a Practice Environment

If you want to try all of this without spending money, here’s what you need:

  1. Local Kubernetes cluster: Install minikube, kind, or enable Kubernetes in Docker Desktop (a kind one-liner follows after this list)
  2. Container registry: GitHub Container Registry (ghcr.io) is free for public packages
  3. CI/CD: GitHub Actions is free for public repos; Argo CD and Flux are free to install on any cluster
  4. Sample app: Any web application with a Dockerfile works. A simple Flask or Express.js app is enough to start with.
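
For the local cluster in step 1, kind is the quickest path; creating one is a single command:

# Creates a single-node cluster named ci-cd-lab running inside Docker
kind create cluster --name ci-cd-lab
kubectl cluster-info --context kind-ci-cd-lab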

Install Argo CD on your local cluster in under two minutes:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443

Then open https://localhost:8080, grab the initial admin password with kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d, and you have a fully working GitOps platform running locally.

Where to Go From Here

Start simple. Get a GitHub Actions pipeline that builds a Docker image and deploys it to a local kind cluster. Once that works, add Argo CD for the deployment side and experience the GitOps workflow firsthand. Then layer in Helm for configuration management and experiment with canary deployments using Argo Rollouts.

The tools will keep evolving (Argo CD shipped 3.0, 3.1, 3.2, and 3.3 in under a year), but the fundamentals stay the same: automate everything, keep your cluster state in Git, tag your images immutably, and never skip health checks. Those principles will serve you regardless of which specific tool you end up using professionally.

For more Kubernetes tooling, check your manifests for deprecated APIs with the K8s Deprecation Checker. Lint your Dockerfiles before pushing to CI with the Dockerfile Linter. Audit your GitHub Actions workflows with the Actions Auditor. Check your Helm charts for compatibility with the Helm Checker. Full K8s version timeline at our Kubernetes Release Tracker. Docker release history at our Docker Release Tracker.
