
etcd Releases

Track every etcd release. Version timelines, Kubernetes dependency context, cluster upgrade guidance, and performance tuning.


Version Timeline

All tracked releases with lifecycle status and EOL dates.


Lifecycle Timeline

Visual overview of active support and maintenance windows.

[Lifecycle chart: active and maintenance windows for etcd 3.4 and 3.5 across 2022-2028, with today marked.]

Upgrade Paths

Migration guidance between major versions — breaking changes, effort estimates, and tips.

3.4 → 3.5: Medium difficulty (est. 1-2 hours per cluster)

Breaking Changes

  • Backend database changed (bolt → bbolt)
  • Experimental distributed tracing added
  • Client balancer completely rewritten
  • Lease revoke during leader election fixed
  • Some deprecated flags removed

Migration Notes

Take a snapshot backup first, then upgrade one cluster member at a time. Early 3.5 patches had several critical bugs (3.5.0-3.5.4 had known issues), so use 3.5.10+ for production. On kubeadm-managed Kubernetes clusters, upgrading the control plane upgrades etcd automatically.
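The snapshot-then-rolling-upgrade procedure above can be sketched for one member as follows. This is a minimal sketch assuming a systemd-managed binary install; the endpoint, backup path, and binary location are placeholders for your environment.

```shell
#!/usr/bin/env bash
set -euo pipefail

ENDPOINT="https://10.0.0.1:2379"   # the member being upgraded (placeholder)

# 1. Take a snapshot before touching anything.
etcdctl --endpoints="$ENDPOINT" snapshot save /var/backups/etcd-pre-upgrade.db

# 2. Confirm the cluster is healthy before stopping the member.
etcdctl --endpoints="$ENDPOINT" endpoint health

# 3. Stop the member, swap in the 3.5 binary, restart.
systemctl stop etcd
install -m 0755 /tmp/etcd-v3.5.10/etcd /usr/local/bin/etcd
systemctl start etcd

# 4. Verify the member rejoined and reports the new version,
#    then repeat on the next member.
etcdctl --endpoints="$ENDPOINT" endpoint status -w table
```

Repeat per member, waiting for the cluster to report healthy between steps; the cluster only announces the new cluster version once every member runs 3.5.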

Version Risk Assessment

Evaluate risk factors before choosing a version for production.

| Version | EOL Risk | CVE Risk | Ecosystem | Cloud Support | Overall | Recommended Action |
| --- | --- | --- | --- | --- | --- | --- |
| etcd 3.3 and below | Critical | High | EOL | Dropped | Critical | Way past EOL — blocks K8s upgrades |
| etcd 3.4 | High | Low | Maintenance | Legacy | High | Approaching EOL — upgrade to 3.5 (backup first!) |
| etcd 3.5 (< 3.5.10) | Low | Medium | Active | Full | Medium | Early 3.5 had bugs — patch to 3.5.10+ |
| etcd 3.5.10+ | None | Low | Active | Full | Low | Current — recommended |

Risk combines Kubernetes version alignment, known data safety issues, and patch currency. Assessed as of March 2026.

etcd Version Comparison

Side-by-side feature differences across major versions.

| Feature | 3.4 | 3.5 |
| --- | --- | --- |
| Backend | bolt | bbolt (maintained fork) |
| K8s compatibility | 1.13-1.28 | 1.22+ |
| Distributed tracing | No | Experimental |
| Downgrade support | No | Experimental |
| Client balancer | v2 | v3 (rewritten) |
| Lease performance | Known issues | Fixed |
| Log format | capnslog | zap |
| gRPC gateway | v1 | v2 |

Embed Badges

Add live etcd status badges to your README, docs, or dashboard.

Health Status

Overall support health

etcd Health Status
![etcd Health Status](https://img.releaserun.com/badge/health/etcd.svg)

EOL Countdown

Next end-of-life date

etcd EOL Countdown
![etcd EOL Countdown](https://img.releaserun.com/badge/eol/etcd.svg)

Latest Version

Current stable release

etcd Latest Version
![etcd Latest Version](https://img.releaserun.com/badge/v/etcd.svg)

CVE Status

Known vulnerabilities

etcd CVE Status
![etcd CVE Status](https://img.releaserun.com/badge/cve/etcd.svg)

Frequently Asked Questions

Common questions about etcd releases and lifecycle.

What is etcd used for?
etcd is a distributed key-value store used as the backing store for Kubernetes cluster state (all API objects, secrets, configs). It is also used independently for service discovery, distributed locking, and configuration management. If etcd goes down, your Kubernetes cluster cannot function.
Which etcd version does my Kubernetes version need?
Kubernetes 1.29+ uses etcd 3.5.x. Kubernetes 1.22-1.28 recommends etcd 3.5.x (3.4.x also supported). Each Kubernetes release documents its tested etcd version. Using a mismatched version can cause data corruption or cluster instability.
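To confirm which etcd version a kubeadm cluster is actually running, the etcd static pod's image tag can be inspected. The `component=etcd` label and `kube-system` namespace are kubeadm defaults and may differ on other distributions:

```shell
# List the etcd container image(s) across control-plane nodes; the
# image tag carries the etcd version (e.g. registry.k8s.io/etcd:3.5.10-0).
kubectl -n kube-system get pods -l component=etcd \
  -o jsonpath='{.items[*].spec.containers[0].image}'
```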
How hard is it to upgrade etcd?
etcd supports rolling upgrades within a cluster. Upgrade one member at a time. Always back up before upgrading (etcdctl snapshot save). You can only upgrade one minor version at a time (3.4→3.5, not 3.3→3.5). For Kubernetes clusters, kubeadm handles etcd upgrades automatically.
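The one-minor-version-at-a-time rule can be sketched as a small shell check; `valid_upgrade` is an illustrative helper for planning, not an etcd tool:

```shell
# Returns success only when the target is exactly one minor version
# ahead of the source (e.g. 3.4 -> 3.5, but not 3.3 -> 3.5).
valid_upgrade() {
  local from_minor to_minor
  from_minor=$(echo "$1" | cut -d. -f2)
  to_minor=$(echo "$2" | cut -d. -f2)
  [ $((to_minor - from_minor)) -eq 1 ]
}

valid_upgrade 3.4 3.5 && echo "3.4 -> 3.5: ok"
valid_upgrade 3.3 3.5 || echo "3.3 -> 3.5: not supported, go through 3.4 first"
```

A 3.3 cluster therefore needs two full rolling upgrades (3.3 → 3.4, then 3.4 → 3.5), with a backup before each.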
How do I back up etcd?
Use etcdctl snapshot save /path/to/backup.db for point-in-time backups. For Kubernetes, this backs up your entire cluster state. Automate daily backups. Test restores regularly. A backup/restore can recover a completely dead Kubernetes cluster.
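A minimal backup-and-verify sketch, assuming a kubeadm-style certificate layout (the endpoint and paths are placeholders; on 3.5+, `etcdutl snapshot status` replaces the deprecated `etcdctl` subcommand for inspecting snapshot files):

```shell
#!/usr/bin/env bash
set -euo pipefail

BACKUP="/var/backups/etcd-$(date +%Y%m%d-%H%M%S).db"

# Take a point-in-time snapshot over TLS (kubeadm default cert paths).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save "$BACKUP"

# Verify the snapshot file is readable (hash, revision, keys, size).
etcdutl snapshot status "$BACKUP" -w table
```

Run this from cron or a Kubernetes CronJob, ship the file off-host, and periodically restore it into a scratch cluster to prove the backup is usable.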
What is the etcd database size limit?
Default: 2 GB (--quota-backend-bytes). Maximum recommended: 8 GB. Kubernetes clusters rarely exceed 4 GB. If you hit the limit, etcd goes into alarm mode and rejects writes. Monitor database size, defragment regularly, and increase the quota if needed.
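The quota, compaction, and defragmentation steps above can be sketched as follows; the 4 GB quota value, the endpoint defaults, and the revision extraction are illustrative:

```shell
# In the etcd systemd unit or static pod manifest, raise the quota:
#   etcd --quota-backend-bytes=4294967296   # 4 GB (illustrative value)

# Watch database size and revision across members.
etcdctl endpoint status -w table

# Reclaim space: compact old revisions, then defragment each member.
rev=$(etcdctl endpoint status -w json \
  | grep -o '"revision":[0-9]*' | head -1 | cut -d: -f2)
etcdctl compaction "$rev"
etcdctl defrag

# If the NOSPACE alarm fired, clear it once space is reclaimed,
# otherwise etcd keeps rejecting writes.
etcdctl alarm disarm
```

Defragment members one at a time: it blocks reads and writes on that member while it runs.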
