Kubernetes Adoption in 2026: The Numbers
Kubernetes has moved well past the early-adopter phase. According to the CNCF Annual Survey 2024, 84% of organizations are either using or evaluating containers in production, with Kubernetes as the dominant orchestrator. The Datadog 2024 Container Report found that over 65% of organizations running containers have adopted Kubernetes, up from roughly 50% just two years prior.
What was once a technology associated primarily with Silicon Valley hyperscalers is now standard infrastructure across industries — from banking and healthcare to government agencies and particle physics labs. For a broader look at adoption trends and data, see our detailed Kubernetes statistics and adoption report for 2026.
This article profiles nine organizations that run Kubernetes at significant scale, covering what they run, how big their deployments are, and what lessons other teams can draw from their experience.
Tech and Media Companies
Spotify: 4,000+ Microservices Across 200 Clusters
Spotify is one of the most frequently cited large-scale Kubernetes adopters, and for good reason. The music streaming platform serves over 600 million monthly active users and runs more than 4,000 microservices across approximately 200 Kubernetes clusters.
Spotify migrated from Helios, its in-house container orchestration system, to Kubernetes beginning around 2019. The migration was driven by the desire to reduce the operational burden of maintaining a custom orchestrator and to benefit from the Kubernetes ecosystem’s tooling and community.
Key details of Spotify’s Kubernetes setup:
- Runs on Google Kubernetes Engine (GKE) as the primary platform.
- Uses Backstage — Spotify’s open-source developer portal, now a CNCF incubating project — as the interface for developers to deploy and manage services on Kubernetes without needing deep K8s knowledge.
- Operates a multi-cluster architecture with separate clusters for different teams and environments.
- Handles over 10 million requests per second across its microservices mesh.
Lesson: Spotify’s experience shows that a strong developer platform layer on top of Kubernetes (like Backstage) is critical for adoption at scale. Most developers at Spotify do not write Kubernetes YAML directly — the platform abstracts it away.
Reddit: From Bare Metal to Kubernetes
Reddit’s migration story is notable because the company moved from a traditional bare-metal infrastructure to Kubernetes. For years, Reddit ran its services on physical servers managed with configuration management tools. The limitations of this approach — slow deployments, manual scaling, and hardware procurement lead times — drove the shift to Kubernetes on AWS.
Reddit now runs its core platform on Amazon EKS, including the services that power the front page, comment threads, voting, and real-time features. At peak traffic, Reddit serves hundreds of millions of page views per day, with traffic spikes that can be sudden and massive (viral posts, breaking news events, AMA sessions). The migration was gradual, taking several years to move all production workloads.
The engineering team invested heavily in building a custom Kubernetes deployment platform that integrated with their existing tooling. They adopted a “paved road” approach, providing standardized Helm charts and CI/CD pipelines that made it easy for service teams to migrate without becoming Kubernetes experts.
Lesson: Large-scale bare-metal-to-Kubernetes migrations are possible but require patience. Reddit’s team emphasized the importance of running old and new infrastructure in parallel during the transition, and investing heavily in CI/CD pipelines to support the new deployment model. They also found that the cost savings from moving away from owned hardware to cloud-based Kubernetes were significant, even accounting for the cloud provider costs.
The New York Times: News on GKE
The New York Times moved its digital infrastructure to Google Kubernetes Engine (GKE) to support the rapid iteration required by a modern digital newsroom. The migration consolidated a patchwork of deployment systems into a unified Kubernetes-based platform.
The NYT runs content delivery, search, personalization, and subscription services on GKE. Their engineering team built an internal delivery platform that lets developers deploy services through a simplified interface, abstracting away Kubernetes complexity for reporters and editors who work on interactive projects.
The NYT engineering team has spoken publicly about the benefits of Kubernetes for their newsroom’s technical projects. During major news events — elections, breaking stories, live events — traffic can spike by 5-10x within minutes. Kubernetes’ horizontal pod autoscaling lets them handle these spikes automatically, which was difficult to achieve on their previous infrastructure.
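The autoscaling behavior described here can be expressed with a HorizontalPodAutoscaler. The sketch below is illustrative only — the Deployment name, replica bounds, and CPU threshold are hypothetical, not the NYT's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend            # hypothetical Deployment serving page traffic
  minReplicas: 10             # steady-state floor
  maxReplicas: 200            # headroom for sudden 5-10x spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out before pods saturate
```

In practice, teams tune `minReplicas` high enough that a spike can be absorbed while new pods start, since autoscaling reacts to load rather than anticipating it.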
Lesson: Kubernetes adoption is not just for tech companies. Media organizations with demanding content delivery requirements benefit from the scalability and rapid deployment cycles that Kubernetes provides. The NYT also demonstrates the value of having a platform engineering team that shields content-focused developers from infrastructure complexity.
Pinterest: 30,000+ Pods at Scale
Pinterest runs one of the larger Kubernetes deployments in the consumer technology space. The visual discovery platform operates over 30,000 pods across multiple clusters, supporting a user base of more than 450 million monthly active users.
Pinterest’s infrastructure handles computationally intensive workloads including image processing, recommendation algorithms, and search indexing. The company has been public about the challenges of running machine learning training and inference workloads on Kubernetes, contributing to upstream projects around GPU scheduling and resource management.
Key aspects of Pinterest’s setup:
- Multi-cluster architecture running on AWS EKS.
- Custom autoscaling policies tuned for workloads with bursty traffic patterns (e.g., holiday shopping seasons).
- Heavy use of Kubernetes for batch processing and ML training alongside serving workloads.
Lesson: Running both serving and batch/ML workloads on Kubernetes is feasible but requires careful attention to scheduling, resource isolation, and autoscaling. Pinterest’s multi-cluster strategy helps isolate failures and manage upgrades safely.
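One standard building block for this kind of isolation is PriorityClass, which tells the scheduler which pods to preempt when capacity runs short. A minimal sketch — the class names and values are illustrative, not Pinterest's actual configuration:

```yaml
# Serving workloads get a high priority so the scheduler can evict
# batch pods when latency-sensitive services need capacity.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: serving-critical      # hypothetical class name
value: 1000000
globalDefault: false
description: "Latency-sensitive serving workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-preemptible     # hypothetical class name
value: 1000
preemptionPolicy: Never       # batch jobs never preempt other pods
description: "ML training and batch processing"
```

Pods then reference a class via `priorityClassName` in their spec, giving batch and serving workloads distinct treatment on shared clusters.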
E-Commerce and Consumer Brands
Airbnb: EKS After the Monolith
Airbnb’s Kubernetes journey began as part of a broader effort to decompose its Ruby on Rails monolith into microservices. The company migrated to Amazon EKS and built a service-oriented architecture where hundreds of services run independently on Kubernetes.
Airbnb’s engineering team developed a significant amount of internal tooling around Kubernetes, including:
- A service configuration system that generates Kubernetes manifests from a higher-level service definition.
- Custom admission controllers for enforcing organizational policies (resource limits, security contexts, labeling requirements).
- Integration with their experimentation platform, allowing A/B tests to be deployed as separate Kubernetes rollouts.
Airbnb processes millions of searches and bookings daily, with each request touching dozens of downstream services. Their Kubernetes deployment handles significant computational workloads including search ranking, pricing algorithms, and real-time availability checks. The company has shared that their migration to Kubernetes reduced deployment times from hours to minutes and significantly improved developer velocity.
Lesson: Breaking up a monolith and moving to Kubernetes are often done together, but they are separate concerns. Airbnb found that the microservices decomposition was the harder problem — Kubernetes provided the runtime, but the architectural decisions around service boundaries were what determined success. Their custom admission controllers are worth noting as well — enforcing organizational standards at the cluster level prevents configuration drift and security gaps as the number of services grows.
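Airbnb's admission controllers are custom, but the same class of rule can be sketched with Kubernetes' built-in ValidatingAdmissionPolicy (stable since v1.30). This example is illustrative, not Airbnb's actual policy:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-resource-limits   # hypothetical policy name
spec:
  failurePolicy: Fail             # reject non-compliant Deployments outright
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: >-
        object.spec.template.spec.containers.all(c,
          has(c.resources) && has(c.resources.limits))
      message: "Every container must declare resource limits."
```

Enforcing standards at admission time, rather than in CI alone, catches non-compliant manifests regardless of which pipeline or tool submitted them.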
Adidas: On-Prem to Cloud-Native
Adidas migrated its e-commerce platform from traditional on-premises infrastructure to Kubernetes on AWS. The sports brand was one of the earlier enterprise adopters in the retail space, driven by the need to handle massive traffic spikes during product launches (particularly limited-edition sneaker drops, which generate extreme burst traffic).
After the migration, Adidas reported a significant reduction in deployment lead time — from weeks to minutes — and improved ability to scale for peak traffic events. The platform team standardized on Kubernetes across development, staging, and production environments, creating consistency across the software delivery lifecycle.
Lesson: Retail companies with extreme traffic variability benefit enormously from Kubernetes’ horizontal pod autoscaling and cluster autoscaling. The ability to scale up for a product launch and scale down afterward translates directly into cost savings compared to provisioning for peak capacity.
Financial Services
Capital One: Kubernetes in Banking
Capital One has been one of the most visible proponents of Kubernetes adoption in the financial services industry. The bank runs a large-scale Kubernetes platform on AWS and has contributed to several open-source projects in the Kubernetes ecosystem, including Critical Stack (a Kubernetes management platform they later open-sourced).
Running Kubernetes in financial services comes with additional constraints that do not apply to most technology companies:
- Regulatory compliance: Financial regulators require strict controls around data access, encryption, and audit logging. Capital One’s Kubernetes platform integrates with their compliance and governance systems.
- Security requirements: Multi-tenancy is enforced through namespace isolation, network policies, and OPA/Gatekeeper admission policies.
- Change management: Deployments follow formal change management processes, with Kubernetes rollouts integrated into the bank’s change advisory board workflows.
Lesson: Kubernetes adoption in regulated industries is absolutely possible but requires upfront investment in policy enforcement, audit logging, and integration with existing compliance frameworks. Tools like OPA Gatekeeper and Kubernetes RBAC are essential building blocks.
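The namespace isolation mentioned above typically starts from a default-deny NetworkPolicy, with traffic re-enabled through explicit allow rules per service. A standard sketch (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # deny all inbound traffic by default
    - Egress                 # deny all outbound traffic by default
```

This pattern makes every network path an explicit, auditable decision — exactly the posture regulators expect.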
Government and Research
US Department of Defense: Platform One
The US Department of Defense (DoD) operates Platform One, a Kubernetes-based DevSecOps platform that provides a standardized, security-hardened software delivery environment for defense applications. Platform One is built on top of a DoD-hardened Kubernetes distribution and includes a curated set of tools for CI/CD, monitoring, logging, and security scanning.
Platform One serves as the foundation for Big Bang, a Helm-based deployment package that installs a complete DevSecOps stack on any Kubernetes cluster. Components include Istio for service mesh, Prometheus and Grafana for monitoring, Elasticsearch and Kibana for logging, and various security scanning tools that meet DoD security requirements (STIG compliance).
Key aspects of Platform One:
- Designed to run on any infrastructure: cloud, on-premises, or air-gapped environments.
- All container images are scanned and signed through the DoD’s Iron Bank registry.
- Supports multiple classification levels with appropriate network isolation.
- Used by multiple branches of the military and defense agencies.
Lesson: If the US Department of Defense can run Kubernetes with its extreme security requirements, most organizations can too. The key is a standardized platform approach (Platform One/Big Bang) rather than letting every team build their own Kubernetes setup. For context on how Kubernetes compares to simpler container runtimes in different scenarios, see our Docker vs Kubernetes production decision rubric.
CERN: Kubernetes for Particle Physics
CERN, the European Organization for Nuclear Research, uses Kubernetes to manage the massive data processing pipelines required to analyze data from the Large Hadron Collider (LHC). CERN’s computing infrastructure processes petabytes of physics data, and Kubernetes helps orchestrate the batch processing jobs and analysis workflows.
CERN’s Kubernetes deployment is notable for several reasons:
- Runs on on-premises infrastructure in CERN’s data centers, not on public cloud.
- Manages workloads that are heavily batch-oriented, using Kubernetes alongside HTCondor and other HPC schedulers.
- Uses OpenStack Magnum for provisioning Kubernetes clusters on their private cloud infrastructure.
- Contributes to upstream Kubernetes development, particularly around batch scheduling and multi-cluster federation.
Lesson: Kubernetes is not just for web services. Batch processing, scientific computing, and data pipelines are legitimate Kubernetes workloads, especially when combined with tools like Kubernetes Jobs, CronJobs, and the emerging Kubernetes Batch/HPC features. For more on how different Kubernetes distributions serve these varied use cases, see our Kubernetes distributions comparison.
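Batch workloads of the kind described here map onto the Kubernetes Job API. A minimal sketch of a parallel data-processing job — the name, image, and sizing are hypothetical placeholders, not CERN's actual configuration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: event-analysis       # hypothetical analysis job
spec:
  completions: 100           # process 100 data chunks in total
  parallelism: 10            # run up to 10 worker pods at a time
  backoffLimit: 3            # retry failed pods up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example.com/physics/analysis:latest  # placeholder image
          resources:
            requests:
              cpu: "4"
              memory: 8Gi
```

For recurring pipelines, the same pod template can be wrapped in a CronJob, and `completionMode: Indexed` lets each pod pick its data chunk from its completion index.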
Industry Breakdown: Where Kubernetes Runs
Looking across the companies profiled above and the broader ecosystem, Kubernetes adoption follows clear patterns by industry.
Technology and Media
Technology companies were the earliest adopters and run the largest deployments. Adoption is near-universal among companies with more than 500 engineers. Kubernetes is typically managed by a dedicated platform engineering team that provides an internal developer platform. The tech sector also leads in multi-cluster adoption, with companies routinely running dozens or hundreds of clusters segmented by team, region, or workload type.
Financial Services
Banks, insurance companies, and fintech firms have adopted Kubernetes aggressively over the past five years. The main drivers are faster time-to-market for financial products and the ability to scale trading and payment processing systems dynamically. Compliance and security overhead is significant but manageable with the right tooling.
E-Commerce and Retail
Retail companies with seasonal traffic patterns (Black Friday, product launches, holiday shopping) benefit from Kubernetes’ autoscaling capabilities. Companies like Adidas, Target, and Zalando have all migrated to Kubernetes-based platforms.
Healthcare and Life Sciences
Healthcare organizations are increasingly adopting Kubernetes for electronic health record (EHR) systems, genomics processing, and medical imaging workloads. HIPAA compliance requirements add complexity, similar to financial services, but Kubernetes’ namespace isolation and network policies provide the necessary building blocks. Companies like Philips and Kaiser Permanente have invested significantly in Kubernetes platforms for both clinical and research workloads.
Government and Defense
Government adoption has accelerated significantly, led by the US DoD’s Platform One initiative. Other agencies, including the IRS and VA, have Kubernetes initiatives. Government adoption emphasizes security hardening, air-gapped deployment capabilities, and FedRAMP compliance. The US Census Bureau and NHS Digital (UK) have also adopted Kubernetes for citizen-facing services, showing that government use extends beyond defense into civilian applications.
Adoption by Company Size
Startups (1-50 Engineers)
For early-stage startups, managed Kubernetes services (EKS, GKE, AKS) have reduced the barrier to entry significantly. However, the operational overhead of Kubernetes can be substantial for small teams. Many startups start with simpler alternatives (AWS ECS, Google Cloud Run, Railway) and migrate to Kubernetes as they grow. The decision depends on team expertise and workload complexity. That said, startups building infrastructure-heavy products (developer tools, data platforms, security tools) often adopt Kubernetes early because their customers expect Kubernetes-native deployment options.
Mid-Market (50-500 Engineers)
This is the fastest-growing adoption segment. Companies in this range typically have enough engineering capacity to justify a small platform team (2-5 engineers) dedicated to running Kubernetes. Managed services and platform-as-a-service layers like Humanitec, Upbound, or internal Backstage portals help make Kubernetes accessible to the broader engineering organization.
Enterprise (500+ Engineers)
Large enterprises overwhelmingly run Kubernetes, often across multiple cloud providers and on-premises data centers. Multi-cluster management, federation, and governance at scale are the primary challenges. These organizations typically run dedicated platform engineering organizations (not just teams) with 10-50+ engineers focused on Kubernetes infrastructure. At this scale, the focus shifts from “how do we run Kubernetes” to “how do we govern, secure, and provide self-service access to Kubernetes across hundreds of teams.” Tools like Rancher, Tanzu, and OpenShift are common in this segment because they provide the multi-cluster management and enterprise governance features that large organizations require.
Common Patterns and Lessons Learned
Across all the companies profiled here, several patterns emerge consistently:
- Platform engineering is non-negotiable. Every successful large-scale Kubernetes deployment has a dedicated platform team that abstracts Kubernetes complexity from application developers. Without this, adoption stalls because developers spend too much time fighting with YAML and cluster configuration.
- Managed Kubernetes is the default. Even companies with deep infrastructure expertise (Spotify, Reddit, Airbnb) run on managed services like GKE, EKS, or AKS. The operational overhead of running your own control plane is rarely justified.
- Multi-cluster is the norm at scale. No company running thousands of services uses a single Kubernetes cluster. Multi-cluster strategies provide blast radius isolation, allow independent upgrade schedules, and enable different security boundaries for different workloads.
- Migrations are gradual. Every company that moved to Kubernetes did so incrementally, running old and new infrastructure in parallel for months or years. Big-bang migrations are rarely successful.
- Developer experience determines adoption speed. Companies that invested in internal developer platforms, service templates, and self-service tooling saw faster adoption. Companies that asked developers to learn raw Kubernetes saw resistance and slow rollouts.
- Security and compliance are solvable. Financial services, healthcare, and defense organizations have all proven that Kubernetes can meet strict regulatory requirements. The tools (OPA, network policies, RBAC, image signing) exist — the work is in integrating them into your specific compliance framework.
Summary
Kubernetes adoption in 2026 spans virtually every industry and company size. From Spotify’s 200 clusters powering music streaming to CERN’s on-premises deployment analyzing particle physics data to the US DoD’s security-hardened Platform One, Kubernetes has proven adaptable to radically different requirements.
The common thread across all successful adopters is not the technology itself but the organizational investment around it: platform engineering teams, developer experience tooling, and gradual migration strategies. Kubernetes provides the runtime foundation, but it is the platform built on top of it — and the team operating it — that determines success.
Check your Kubernetes health
If you are running Kubernetes, here are commands every operator should know for a quick health check. These work across all the environments profiled in this article, whether you are running EKS, GKE, AKS, or bare-metal.
# Check cluster version and compatibility
# (the --short flag was removed in kubectl v1.28; short output is now the default)
kubectl version
# Client Version: v1.35.1
# Server Version: v1.35.0
# Quick cluster health overview
kubectl get nodes -o wide
# componentstatuses has been deprecated since v1.19; fall back to the raw health endpoint
kubectl get componentstatuses 2>/dev/null || kubectl get --raw /healthz
# List the API versions the cluster serves (compare against the
# Kubernetes deprecation guide before upgrading)
kubectl api-versions | sort
For teams managing multiple clusters, tracking which version each cluster runs becomes critical. The companies in this article running 50+ clusters use automated version tracking to prevent drift:
# Check all contexts for version drift
# (--short is gone in kubectl v1.28+, so plain `kubectl version` is used)
for ctx in $(kubectl config get-contexts -o name); do
  echo -n "$ctx: "
  kubectl --context="$ctx" version 2>/dev/null | grep Server || echo "unreachable"
done
# Example output for a multi-cluster setup:
# prod-us-east: Server Version: v1.35.0
# prod-eu-west: Server Version: v1.35.0
# staging: Server Version: v1.35.1
# dev: Server Version: v1.34.2 <-- 1 minor behind
If you find version drift, our free Kubernetes deprecation checker can scan your manifests before upgrading. Paste your YAML and it flags deprecated APIs for your target version.
Kubernetes resources and tools
Whether you are a startup evaluating Kubernetes for the first time or an enterprise managing hundreds of clusters, tracking the release lifecycle is essential. Here are tools and references from ReleaseRun that help:
- Kubernetes release tracker with version timelines and support windows
- Kubernetes EOL dates and support lifecycle for upgrade planning
- Cloud K8s version tracker showing EKS vs GKE vs AKS version support
- Upgrade planner for mapping your path from current to target version
- Stack health scorecard to check the overall health of your tech stack
For the cloud-managed clusters that dominate this list (EKS, GKE, AKS), our cloud matrix tool tracks exactly which Kubernetes versions each provider supports right now, so you can plan upgrades with real data instead of guessing.