Container-Native CI/CD Pipelines: Building, Testing, and Deploying with Docker and Kubernetes


Hannah Brooks March 7, 2026 6 min read

If you’ve ever debugged a pipeline failure at 2am only to discover the issue was “it works differently in CI than locally,” you already understand the problem this category of tooling solves. Container-native CI/CD systems make your build, test, and deploy environment reproducible by running every pipeline step inside containers. The runner that executes your tests matches, as closely as possible, the environment where your code will eventually run. That alignment is the entire point.

This article is for engineering teams running workloads on Docker or Kubernetes who want their delivery pipeline to be as reliable as their production infrastructure. We’ll cover the major platforms, compare them honestly, and get into the configuration details you need to make an informed decision.


What "Container-Native" Actually Means

Not all CI systems that support containers are container-native. Many older systems were bolted onto virtual-machine runners, with Docker installed as an afterthought. Container-native means the pipeline is architected around containers from the start:

  • Each step runs in its own isolated container, with no shared state leaking between steps unless you explicitly pass artifacts.
  • The pipeline definition references container images directly, not abstract environments with package lists.
  • The system itself may run on Kubernetes, meaning the CI infrastructure scales and fails the same way your applications do.

This distinction matters for failure modes. When your steps share host state, a flaky dependency install in step 3 can corrupt the workspace for step 7 in ways that are nearly impossible to reproduce. Container isolation makes failures local and deterministic.
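
In practice, "explicitly pass artifacts" means uploading outputs from one job and downloading them in the next. A minimal sketch in GitHub Actions syntax (the make dist build step and run-tests.sh script are placeholders):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make dist                  # placeholder build step
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
  test:
    needs: build                        # jobs share nothing unless you do this
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ./run-tests.sh dist/       # placeholder test script
```

Everything the test job sees from the build job arrives through the named artifact; there is no shared workspace to corrupt.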


The Major Platforms

GitHub Actions

GitHub Actions is currently the most widely adopted CI/CD platform, according to CNCF end-user survey data from 2025. Its container support is deep: you can specify a container image for your entire job, or use service containers to spin up dependencies like Postgres or Redis alongside your tests.

# .github/workflows/ci.yml
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: python:3.12-slim
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: testpass
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: pytest

The pricing changed significantly in early 2026. GitHub-hosted runner prices dropped by up to 39% in January 2026. Starting March 2026, self-hosted runners are subject to a $0.002-per-minute platform charge that counts against your plan’s included minutes. The Free plan includes 2,000 minutes per month; Enterprise plans include 50,000. According to GitHub, 96% of customers saw no change to their bill from these updates.
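
As a back-of-envelope check on that math, here is a small sketch (assuming, as a simplification, that the per-minute platform charge applies only to self-hosted minutes beyond the plan's included allotment; the 30,000-minute usage figure is invented):

```python
PLATFORM_RATE_MILLIDOLLARS = 2  # $0.002 per self-hosted runner minute
INCLUDED_FREE = 2_000           # minutes included on the Free plan

def platform_charge(minutes_used: int, included: int = INCLUDED_FREE) -> float:
    """Estimated charge once included minutes are exhausted."""
    billable = max(0, minutes_used - included)
    # integer math in thousandths of a dollar avoids float rounding surprises
    return billable * PLATFORM_RATE_MILLIDOLLARS / 1000

print(platform_charge(30_000))  # 56.0 dollars for 30k self-hosted minutes
```

Check your own plan's terms before relying on an estimate like this; the pricing details above are as described in this article.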

Where it gets painful: GitHub Actions is tightly coupled to GitHub. If your code lives elsewhere, or you need deep Kubernetes integration, the workflow DSL can feel limiting. The marketplace is enormous but quality varies widely. Debugging failures requires pushing commits, which gets old fast.


GitLab CI/CD

GitLab’s CI/CD is built directly into the platform and is one of the most mature options for teams that also use GitLab for source control and security scanning. Every pipeline step is a job that runs in a container, defined by an image: key.

# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:22-alpine
  services:
    - name: redis:7
  script:
    - npm ci
    - npm test
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/

build-image:
  stage: build
  image: docker:26
  services:
    - docker:26-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"  # required for TLS-enabled Docker-in-Docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

Free-tier compute is limited to 400 minutes per month on shared runners, and GitLab requires a credit card to use shared runners at all (a friction point for open-source projects). Bring your own runner and there’s no limit. Premium jumps to 10,000 minutes and Ultimate to 50,000.

Where it gets painful: The .gitlab-ci.yml schema has grown complex over the years. Interactions between extends, include, and rules produce surprising behavior that is hard to trace. If you’re not already a GitLab shop, adopting it purely for CI is a heavyweight commitment.
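
One common source of confusion: rules entries are evaluated top to bottom and the first match wins, so reordering them silently changes when a job runs. A minimal sketch using GitLab's predefined variables (the lint job itself is hypothetical):

```yaml
lint:
  image: node:22-alpine
  script:
    - npm run lint
  rules:
    # First match wins: run on merge request pipelines...
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # ...and on pushes to the default branch...
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    # ...otherwise the job never runs.
    - when: never
```

Swap the first two entries and the visible behavior is the same; drop the final catch-all and the job starts running on every branch push, which is exactly the kind of change that is hard to spot in review.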


Tekton Pipelines

Tekton is the Continuous Delivery Foundation’s (CDF) standard for Kubernetes-native CI/CD. Pipelines are defined as Kubernetes Custom Resources: Task, Pipeline, TaskRun, PipelineRun. Your CI infrastructure is just more Kubernetes objects, versioned in git and managed with the same tools as everything else.

Tekton reached v1.0 in May 2025, which was a significant stability milestone. The current release is v1.9.0 LTS (February 2026). Key capabilities now at GA status include StepActions (reusable step definitions), policy enforcement via OPA and Kyverno, and OpenTelemetry-based observability.

# tekton-task.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  params:
    - name: image
      type: string
  steps:
    - name: test
      image: $(params.image)
      script: |
        #!/bin/sh
        cd /workspace/source
        npm ci && npm test
  workspaces:
    - name: source
# tekton-pipeline.yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  workspaces:
    - name: shared-data
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
    - name: test
      runAfter: [clone]
      taskRef:
        name: run-tests
      params:
        - name: image
          value: node:22-alpine
      workspaces:
        - name: source
          workspace: shared-data
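
A Pipeline is only a definition; executing it means creating a PipelineRun that binds the shared-data workspace to actual storage. A minimal sketch (the 1Gi claim size is an arbitrary assumption):

```yaml
# tekton-pipelinerun.yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: ci-pipeline-run-
spec:
  pipelineRef:
    name: ci-pipeline
  workspaces:
    - name: shared-data
      volumeClaimTemplate:        # a fresh PVC per run
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```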

Where it gets painful: The learning curve is steep. You’re writing Kubernetes YAML for things that other platforms handle with three lines. There’s no hosted offering; you operate Tekton yourself on your cluster. For small teams without dedicated platform engineers, this can be more overhead than it’s worth.


Argo CD and Argo Workflows

The Argo project covers two distinct but complementary needs. Argo Workflows is a Kubernetes-native container workflow engine, suitable for CI pipelines. Argo CD handles continuous delivery using the GitOps model: your cluster state is always derived from git, and Argo CD reconciles drift. According to the CNCF 2025 end-user survey, Argo CD is used to manage application delivery in the majority of surveyed Kubernetes clusters.

A common architecture: Tekton or Argo Workflows handles building and testing; Argo CD handles deploying to Kubernetes by watching a git repository for changes to Helm charts or Kustomize manifests.
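
The Argo CD half of that architecture is itself declared as a Kubernetes resource. A sketch of an Application watching a manifest repository (the repository URL, path, and namespace are hypothetical):

```yaml
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/deploy-config.git  # hypothetical repo
    targetRevision: main
    path: overlays/production                           # Kustomize overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true     # delete resources removed from git
      selfHeal: true  # revert manual drift in the cluster
```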

# argo-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ci-pipeline-
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      steps:
        - - name: test
            template: run-tests
        - - name: build
            template: build-image
    - name: run-tests
      container:
        image: golang:1.23
        command: [go, test, ./...]
        workingDir: /workspace
    - name: build-image
      container:
        image: gcr.io/kaniko-project/executor:latest
        args:
          - --dockerfile=/workspace/Dockerfile
          - --destination=registry.example.com/myapp:latest

Where it gets painful: The Argo ecosystem is broad enough that teams often misconfigure the boundary between Workflows (CI) and CD (Argo CD). The UI for Argo Workflows requires its own deployment and maintenance. Permissions and RBAC across the Argo suite need careful planning.


Dagger

Dagger is a different type of tool. Instead of writing pipeline configuration in a YAML DSL, you write pipeline logic in a real programming language: Go, Python, TypeScript, Java, .NET, Elixir, or Rust. Dagger compiles this into a DAG of container operations backed by BuildKit, so the pipeline runs identically locally and in CI.

# pipeline.py (Dagger SDK for Python)
import asyncio
import sys

import dagger

async def build_and_test():
    # Connect to the Dagger engine; build logs stream to stderr
    async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
        src = client.host().directory(".")

        test_result = await (
            client.container()
            .from_("python:3.12-slim")
            .with_mounted_directory("/app", src)
            .with_workdir("/app")
            .with_exec(["pip", "install", "-r", "requirements.txt"])
            .with_exec(["pytest", "--tb=short"])
            .stdout()
        )
        print(test_result)

if __name__ == "__main__":
    asyncio.run(build_and_test())

Published case studies show teams reducing build times by 5 to 6x through Dagger’s intelligent caching (BuildKit’s layer cache, applied at the function level). Dagger Cloud provides a visualization layer. The project is SOC 2 Type II certified.

Where it gets painful: Dagger requires developers to write and maintain actual code for their pipelines, not just YAML. For organizations where infrastructure teams own CI, this can create ownership questions. The ecosystem is newer and community patterns are still forming.


Harness / Drone CI

Drone CI was a container-native pioneer: each pipeline step runs in a separate Docker container, pipelines are defined in .drone.yml, and the model is simple to reason about. Drone was acquired by Harness and is now maintained as part of the Harness Open Source platform (formerly known as Gitness). Drone 2.x continues to receive maintenance. Harness Open Source, which represents the next evolution of the project, is in active development on the main branch.

# .drone.yml
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: golang:1.23
    commands:
      - go test ./...
  
  - name: build
    image: plugins/docker
    settings:
      repo: registry.example.com/myapp
      tags: ${DRONE_COMMIT_SHA}

Drone’s plugin ecosystem (standardized container images that handle specific tasks like Docker builds, Slack notifications, S3 uploads) remains one of its strongest features. Harness Enterprise adds secrets management, governance, and a hosted control plane.

Where it gets painful: The transition from Drone to Harness Open Source is ongoing and the migration path is not fully documented. Teams evaluating Drone today should factor in the likelihood of needing to migrate to Harness’s new pipeline model eventually.


Comparison Table

| Tool | Best For | Pricing | Open Source? | Key Strength |
|------|----------|---------|--------------|--------------|
| GitHub Actions | GitHub-hosted teams, broad ecosystem | Free (2,000 min/mo), paid from $4/user/mo | Runner only | Native GitHub integration, massive marketplace |
| GitLab CI/CD | Teams on GitLab, DevSecOps | Free (400 min/mo), Premium from $29/user/mo | Yes (CE) | Full platform integration, built-in security scanning |
| Tekton | Kubernetes-native orgs with platform teams | Free (self-hosted) | Yes (CDF) | Kubernetes-native resources, v1.0 stability |
| Argo Workflows + CD | GitOps-first Kubernetes delivery | Free (self-hosted) | Yes (CNCF) | GitOps model, strong Kubernetes delivery patterns |
| Dagger | Teams wanting portable, code-first pipelines | Free tier + Dagger Cloud paid | Yes | Identical local and CI execution, multi-SDK |
| Harness / Drone | Container-native simplicity, plugin reuse | Community Edition free, Enterprise paid | Yes (CE) | Simple container-step model, large plugin library |

Practical Recommendations

For startups and small teams on GitHub: Start with GitHub Actions. The free tier is sufficient for most small projects, the YAML syntax is approachable, and you’ll find a community solution for almost anything. Add self-hosted runners if you need GPU access or faster networking to your private registry.

For teams already on GitLab: Use GitLab CI/CD. The integration between source, CI, security scanning, container registry, and deployment is tight enough that the added complexity of the .gitlab-ci.yml schema is usually worth it.

For Kubernetes-native platform teams: Tekton is the right investment if you have the capacity to run it. It reached v1.0 in May 2025, is backed by the Continuous Delivery Foundation, and gives you full control over the pipeline infrastructure as Kubernetes-native resources. Pair it with Argo CD for a complete GitOps pipeline.

For teams frustrated by “works in CI but not locally”: Evaluate Dagger seriously. Writing pipelines in Python or Go instead of YAML has a real cost (more code to maintain), but the parity between local and CI execution eliminates a whole class of debugging problems. The 5-6x build time improvements cited in published case studies are worth investigating for slow pipelines.

For teams evaluating Drone: Drone CI Community Edition still works, but understand that the project’s future is the Harness Open Source platform. New projects should consider starting with Harness Open Source directly rather than adopting the older Drone pipeline format, unless you have specific compatibility requirements.


Building for Failure

The best container-native CI pipeline is not the one with the most features. It’s the one your team can debug at 11pm without needing to read three layers of abstraction to understand what went wrong. Before choosing a platform, run a failure scenario: break a test, watch the pipeline fail, then trace the failure back to root cause using only the tooling the platform provides. The platforms that surface clear failure information quickly are worth more than any feature checklist.

Container isolation guarantees that what broke is your code or your configuration, not a shared host state problem. That guarantee is the foundation. Build everything else on top of it.
