Best Practices

Kubernetes End of Life and Extended Support: What Happens When Your Version Expires [2026]

Kubernetes versions lose support faster than most teams realize. Learn the release cycle, what extended support means on EKS, GKE, and AKS, and how to plan upgrades before your cluster becomes a liability.

KorPro Team
March 23, 2026
11 min read
Kubernetes, Extended Support, End of Life, EKS, GKE, AKS, Upgrades, DevOps

Every Kubernetes minor version has a shelf life. When it expires, your cluster stops getting security patches — and most teams find out too late.

Kubernetes moves fast. The project ships three minor releases per year, and each release is supported for approximately 14 months. That sounds like plenty of time, but in practice the upgrade window is shorter than it appears. By the time a version reaches end of life, teams that delayed upgrading face a choice between a rushed migration and running unpatched infrastructure.

This is not a theoretical problem. A recent r/kubernetes discussion about extended support costs drew over 55,000 views and dozens of responses from engineers dealing with exactly this issue. The most common themes: teams are blindsided by the cost jump when extended support kicks in, most still rely on manual tracking for version status, and the upgrade cycle feels like a full-time job once you fall behind. The tools mentioned most often for staying ahead were Pluto, kubent, and KorPro, used to detect deprecated APIs and version gaps.

This guide covers the Kubernetes release lifecycle, what extended support actually means across EKS, GKE, and AKS, what breaks when you run an expired version, and how to plan upgrades so you never end up in that position.

The Kubernetes Release Cycle

The Kubernetes project follows a predictable release cadence. A new minor version (1.x) ships roughly every four months. Each minor version enters a support window during which it receives patch releases for bug fixes and security vulnerabilities.

As of 2026, the support timeline works like this:

  • Active patch support: ~12 months from release. The version receives regular patch releases for bugs and CVEs.
  • End of life: ~14 months from release. After this, no further patches are issued by the upstream Kubernetes project. The version is considered unsupported.
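As a quick sanity check, you can estimate a version's end-of-life month by adding roughly 14 months to its release date. A sketch using GNU `date` (the release team sets the exact EOL day, so treat this as an approximation):

```bash
# Estimate the upstream EOL month for Kubernetes 1.32 (released December 2024):
# release month + ~14 months of patch support
date -d "2024-12-01 +14 months" +%Y-%m
# 2026-02
```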

Here is the current status of recent Kubernetes versions:

| Version | Release Date | End of Patch Support | Status |
|---------|---------------|----------------------|--------|
| 1.28 | August 2023 | October 2024 | End of life |
| 1.29 | December 2023 | February 2025 | End of life |
| 1.30 | April 2024 | June 2025 | End of life |
| 1.31 | August 2024 | October 2025 | End of life |
| 1.32 | December 2024 | February 2026 | End of life |
| 1.33 | April 2025 | June 2026 | Active |
| 1.34 | August 2025 | October 2026 | Active |

If your cluster is running 1.32 or earlier, you are already past upstream end of life. That does not mean your cluster will stop working; it means the community will not patch it if a critical vulnerability is discovered tomorrow.

What "Extended Support" Means on Managed Kubernetes

Cloud providers recognized that most organizations cannot upgrade every 14 months, so each major provider now offers some form of extended support. But the details, costs, and limitations vary significantly.

Amazon EKS Extended Support

EKS automatically enrolls clusters in extended support once the standard support window ends. Extended support continues for an additional 12 months beyond the standard end-of-support date, giving you up to 26 months of total support per version.

The catch: extended support costs $0.60 per cluster per hour, compared to $0.10 per cluster per hour during standard support. That is a 6x price increase, or roughly $432/month per cluster versus $72/month.

For a company running 10 clusters, the jump from standard to extended support adds $3,600/month in control plane costs alone. That number surprises most teams when they see it on the bill for the first time.
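The arithmetic behind those numbers is straightforward (using a 30-day month here; actual bills vary slightly with month length):

```bash
# EKS control plane cost per cluster for a 30-day month,
# at standard ($0.10/hr) vs extended ($0.60/hr) support rates
awk 'BEGIN {
  hours = 24 * 30                       # 720 hours in a 30-day month
  printf "standard: $%.0f/month\n", 0.10 * hours
  printf "extended: $%.0f/month\n", 0.60 * hours
  printf "delta for 10 clusters: $%.0f/month\n", (0.60 - 0.10) * hours * 10
}'
# standard: $72/month
# extended: $432/month
# delta for 10 clusters: $3600/month
```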

To check your EKS cluster version and support status:

bash
aws eks describe-cluster --name my-cluster --query 'cluster.{Version:version,PlatformVersion:platformVersion}' --output table

To list the Kubernetes versions EKS currently supports (a common workaround that reads the add-on compatibility matrix):

bash
aws eks describe-addon-versions --query 'addons[].addonVersions[].compatibilities[].clusterVersion' --output text | tr '\t' '\n' | sort -uV

Google GKE Extended Support

GKE calls its program "Extended channel" support. When a minor version reaches end of standard support, GKE continues to provide patches in the Extended channel for an additional period. The exact duration depends on the version, but it is typically 6 to 12 additional months.

GKE charges a premium control plane fee for clusters in the extended support period: $0.50 per cluster per hour, compared to the standard $0.10 per cluster per hour management fee.

To check your GKE cluster version:

bash
gcloud container clusters describe my-cluster --zone us-central1-a --format="table(name,currentMasterVersion,currentNodeVersion)"

To see available upgrades:

bash
gcloud container get-server-config --zone us-central1-a --format="table(channels)"

Azure AKS Long Term Support

AKS takes a different approach. Instead of extending support on every version, Microsoft designates specific versions as Long Term Support (LTS) releases. LTS versions receive 2 years of support compared to the standard 12 months.

Currently designated LTS versions include 1.30 and 1.32. If you are running a non-LTS version on AKS, you get standard support only, and AKS will eventually auto-upgrade your cluster when it falls too far behind.

To check your AKS cluster version:

bash
az aks show --resource-group my-rg --name my-cluster --query '{Name:name,Version:kubernetesVersion,ProvisioningState:provisioningState}' --output table

To see available versions:

bash
az aks get-versions --location eastus --output table

Cloud Provider Extended Support Comparison

| | EKS | GKE | AKS |
|---|-----|-----|-----|
| Model | Auto-enrolls all versions | Extended channel enrollment | LTS on select versions |
| Extra duration | +12 months | +6–12 months (varies) | +12 months (LTS only) |
| Total support | ~26 months | ~20–26 months | ~24 months (LTS) |
| Pricing | $0.60/cluster/hour (6x standard) | $0.50/cluster/hour (5x standard) | Premium tier pricing for LTS |
| Auto-upgrade | No forced upgrade during extended support | Can force-upgrade after extended support | Auto-upgrades when unsupported |
| Opt-out | Can opt out (cluster becomes unsupported) | Channel-based selection | N/A for non-LTS |

What Breaks When You Run an Unsupported Version

Running an expired Kubernetes version does not cause an immediate outage. Your workloads keep running. But the risks compound over time.

No Security Patches

This is the most critical issue. When a CVE is discovered in the Kubernetes API server, kubelet, or any core component, patches are only backported to supported versions. If you are on an end-of-life version, you are exposed to every vulnerability discovered after your last patch, with no fix available.

In 2024, CVE-2024-3177 allowed containers with the envFrom field populated to bypass the mountable secrets policy enforced by the ServiceAccount admission plugin. Patches were released for supported versions within days. Clusters on end-of-life versions had no official remediation path.

API Deprecations and Removals

Kubernetes regularly deprecates and removes APIs. When you eventually upgrade — either voluntarily or because your provider forces it — any manifests, Helm charts, or operators using removed APIs will break.

For example, the jump from 1.25 to 1.29 removed several beta APIs that many popular tools depended on. Teams that skipped multiple versions had to fix dozens of manifests simultaneously instead of handling deprecations incrementally.

Check for deprecated API usage in your cluster:

bash
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

Or use a dedicated tool like pluto to scan your manifests:

bash
# Install pluto
brew install FairwindsOps/tap/pluto

# Scan live cluster
pluto detect-helm -o wide

# Scan manifest files
pluto detect-files -d ./manifests/

Forced Upgrades With Limited Notice

Cloud providers do not let clusters stay on unsupported versions indefinitely. After the extended support window closes:

  • EKS will eventually auto-upgrade your cluster to the oldest supported version. AWS provides notice, but the upgrade happens whether you are ready or not.
  • GKE will auto-upgrade clusters that fall outside the supported version window, typically with 30 days notice.
  • AKS will auto-upgrade non-LTS clusters once they are 2 minor versions behind the latest stable release.

A forced upgrade on a cluster you have not prepared is significantly riskier than a planned upgrade. Deprecated APIs, incompatible operators, and untested workload behavior can all surface at once.
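A quick way to gauge how close a cluster is to forced-upgrade territory is to count how many minor versions it trails the newest release your provider supports. A minimal sketch (the version numbers are illustrative; in practice `current` would come from your provider's CLI):

```bash
# How many minor versions behind is this cluster?
current="1.30"   # e.g. from: aws eks describe-cluster ... --query cluster.version
latest="1.33"    # newest version your provider supports

gap=$(( ${latest#*.} - ${current#*.} ))   # strip the "1." prefix, subtract minors
echo "cluster is $gap minor version(s) behind"
if [ "$gap" -ge 2 ]; then
  echo "warning: multiple sequential upgrades required"
fi
```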

Compliance Exposure

For organizations subject to SOC 2, HIPAA, PCI-DSS, or similar frameworks, running unsupported software is an audit finding. Auditors check whether your infrastructure receives security patches within defined SLAs. An end-of-life Kubernetes version with known unpatched CVEs is a documented control failure.

How to Plan Upgrades

Step 1: Know Your Current State

Before planning an upgrade, get a clear picture of what you are running and how far behind you are.

bash
# Check server version (the --short flag was removed in kubectl 1.28)
kubectl version

# On EKS
aws eks describe-cluster --name my-cluster --query 'cluster.version' --output text

# On GKE
gcloud container clusters list --format="table(name,currentMasterVersion)"

# On AKS
az aks list --query '[].{Name:name,Version:kubernetesVersion}' --output table

If you manage multiple clusters, automate this check. A monthly report that lists every cluster and its version status prevents surprises.
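A minimal sketch of such a report, assuming you have already collected name:version pairs from the provider commands above (the cluster names and threshold below are hypothetical):

```bash
# Inventory would normally come from `aws eks list-clusters`,
# `gcloud container clusters list`, `az aks list`, etc.
inventory="prod-east:1.30 prod-west:1.32 staging:1.33"
oldest_supported=32   # oldest minor version still in upstream support

for entry in $inventory; do
  name=${entry%%:*}    # text before the first ":"
  minor=${entry##*.}   # text after the last "."
  if [ "$minor" -lt "$oldest_supported" ]; then
    echo "$name: 1.$minor PAST END OF LIFE"
  else
    echo "$name: 1.$minor supported"
  fi
done
```

Run monthly from CI, this turns version drift into a visible report instead of a billing surprise.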

Step 2: Review the Changelog and Deprecations

Every Kubernetes release includes a changelog with deprecations, removals, and breaking changes. Before upgrading, review:

  • Kubernetes changelog for each version you are skipping
  • Your cloud provider's upgrade guide for provider-specific notes
  • pluto detect-helm output to catch deprecated APIs in your Helm releases

Step 3: Upgrade One Minor Version at a Time

Kubernetes supports upgrading one minor version at a time (1.29 → 1.30, not 1.29 → 1.32). Skipping versions is not supported and can cause unpredictable behavior.
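For example, a cluster on 1.29 targeting 1.33 has four sequential hops, which you can enumerate mechanically:

```bash
# Enumerate the control plane hops required from 1.29 to 1.33
current=29
target=33
for (( v = current; v < target; v++ )); do
  echo "upgrade 1.$v -> 1.$(( v + 1 ))"
done
# upgrade 1.29 -> 1.30
# upgrade 1.30 -> 1.31
# upgrade 1.31 -> 1.32
# upgrade 1.32 -> 1.33
```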

For each version hop:

  1. Upgrade the control plane first
  2. Upgrade node pools one at a time
  3. Run your test suite and smoke tests after each step
  4. Verify workload health before proceeding to the next version

Step 4: Upgrade Non-Production First

Always upgrade development and staging clusters before production. This catches issues with deprecated APIs, changed defaults, and incompatible operators in a safe environment.

A typical sequence:

  1. Dev cluster → validate for 1–2 days
  2. Staging cluster → validate with integration tests
  3. Production cluster → upgrade during a maintenance window

Step 5: Set a Recurring Upgrade Schedule

The best way to avoid end-of-life emergencies is to never fall more than one version behind. Set a quarterly upgrade cadence. Every quarter, check whether a new Kubernetes version is available and plan the upgrade.

Teams that upgrade regularly find the process routine and low-risk. Teams that delay for a year find it painful and high-risk because they have to jump multiple versions.

The Hidden Cost of Deferred Upgrades

Beyond the direct risks, deferred upgrades carry an indirect cost that is easy to overlook: cluster hygiene degrades on older versions.

Clusters that have been running for 12+ months without a major maintenance cycle tend to accumulate orphaned resources, stale configurations, and unused workloads. Teams avoid cleaning up because they know an upgrade is coming and they do not want to make changes to a cluster they plan to replace. But the upgrade keeps getting deferred, and the waste keeps growing.

This is where operational debt compounds. An old Kubernetes version running old workloads with orphaned resources is simultaneously a security risk, a cost sink, and an upgrade headache.

Running regular cluster audits — checking for orphaned Services, PVCs, ConfigMaps, Secrets, and idle Deployments — should be part of your upgrade preparation. A clean cluster is easier and safer to upgrade than a cluttered one.
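One concrete check from that list: a Service whose Endpoints object has no subsets is selecting zero pods. A sketch using `jq` on sample data (in practice you would pipe in `kubectl get endpoints --all-namespaces -o json`; the names below are hypothetical):

```bash
# Sample Endpoints JSON standing in for `kubectl get endpoints -A -o json`
json='{"items":[
  {"metadata":{"namespace":"default","name":"orphan-svc"},"subsets":[]},
  {"metadata":{"namespace":"app","name":"web"},
   "subsets":[{"addresses":[{"ip":"10.0.0.1"}]}]}
]}'

# Print Services backed by zero pods (empty or missing subsets)
echo "$json" | jq -r '.items[]
  | select((.subsets // []) | length == 0)
  | "\(.metadata.namespace)/\(.metadata.name)"'
# default/orphan-svc
```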

Checking Cluster Health Before an Upgrade

Before starting an upgrade, assess your cluster health:

bash
# Check for pods not in Running state
kubectl get pods --all-namespaces --field-selector status.phase!=Running

# Check for Deployments with 0 ready replicas
kubectl get deployments --all-namespaces -o json | \
  jq '.items[] | select(.status.readyReplicas == 0 or .status.readyReplicas == null) | "\(.metadata.namespace)/\(.metadata.name)"'

# Check for pending PVCs
kubectl get pvc --all-namespaces --field-selector status.phase!=Bound

If your cluster has dozens of broken or orphaned resources, clean those up before upgrading. Carrying dead weight into a new version just means you carry the same problems forward.

Conclusion

Kubernetes extended support buys you time, but it does not eliminate the need to upgrade. Every version eventually reaches a hard end of life, and the longer you wait, the more expensive and risky the upgrade becomes — both in direct extended support costs and in accumulated operational debt.

The teams that handle Kubernetes versioning well share two habits: they upgrade regularly on a predictable schedule, and they keep their clusters clean between upgrades. Both practices reduce risk, reduce cost, and make the next upgrade easier than the last one.


Keep Your Clusters Clean and Upgrade-Ready

Orphaned resources and stale workloads make Kubernetes upgrades riskier and slower. Get started with KorPro to automatically detect unused resources across all your clusters — so when upgrade day comes, your clusters are clean and ready. Managing multiple clusters across providers? Contact our team for a guided assessment.

