How to Find Orphaned Kubernetes Resources That Are Still Costing You Money
Orphaned Services, PVCs, ConfigMaps, and Secrets silently inflate your cloud bill. This step-by-step audit guide shows you how to find them, calculate the cost, and safely remove them.
Every Kubernetes cluster older than a few months has resources that no longer serve any purpose but still show up on your cloud bill.
They are not obvious. They do not trigger alerts. They sit quietly in namespaces that nobody has reviewed since the last team offsite. A LoadBalancer Service from a decommissioned API. A PersistentVolumeClaim from a database that was migrated six months ago. Secrets holding credentials for services that no longer exist. Each one costs a few dollars a month. Together, across multiple namespaces and clusters, they add up to hundreds or thousands of dollars per month in pure waste.
This guide is a hands-on audit. It walks through each type of commonly orphaned resource, shows you how to find them with kubectl, explains what each one costs, and provides a safe process for cleaning them up.
Why Resources Become Orphaned
Kubernetes makes it easy to create resources and hard to track whether they are still needed. The most common paths to orphaned resources are:
- Partial deletions. A team deletes a Deployment but forgets the Service, ConfigMap, and PVC that were created alongside it. Kubernetes does not automatically delete related resources unless owner references are explicitly set.
- Helm upgrades that change names. When a Helm chart generates resource names with a hash or version suffix, upgrading the chart creates new resources but does not always remove the old ones.
- Abandoned experiments. A developer creates a Service and Deployment for testing, finishes the test, and forgets to clean up. Multiply this by every developer on the team, every sprint, for a year.
- Scaled-to-zero Deployments. A workload is scaled to zero replicas instead of being deleted. The Deployment, its ConfigMaps, Secrets, and Service all remain, consuming billing resources even though no pods are running.
- Namespace sprawl. Feature branch namespaces, staging environments, and one-off debugging namespaces that were never torn down.
None of these are mistakes born from carelessness. They are natural consequences of how Kubernetes works. The problem is that nothing in Kubernetes warns you when a resource becomes orphaned. You have to go looking.
The Real Cost of Orphaned Resources
Not all orphaned resources cost the same. Some are free to keep but create operational noise. Others directly inflate your cloud bill every month.
LoadBalancer Services: $16–$18/month each
A Kubernetes Service of type LoadBalancer provisions a cloud load balancer. On AWS with a Network Load Balancer, that costs roughly $16/month before data transfer. On GKE, roughly $18/month. On AKS, roughly $18/month.
The load balancer exists and bills you whether or not any pods are behind it. A Service that used to front a Deployment that has since been deleted still maintains its cloud load balancer, its public IP, and its monthly charge.
On clusters with active development, it is common to find 5 to 15 orphaned LoadBalancer Services. At $18 each, that is $90 to $270/month from load balancers alone.
PersistentVolumeClaims: $0.04–$0.17/GB/month
PVCs are the most expensive category of orphaned resources by total volume. When a StatefulSet or Deployment that used persistent storage is deleted, the PVC usually survives. This is by design — deleting a workload does not delete the PVCs it used, and StatefulSets retain the PVCs created from their volumeClaimTemplates by default, precisely to prevent accidental data loss.
The problem is that "prevent accidental data loss" becomes "pay indefinitely for data nobody needs" when the workload is gone and nobody remembers the PVC exists.
A 100 GB gp3 EBS volume on AWS costs roughly $8/month. A 100 GB SSD persistent disk on GKE costs roughly $17/month. A cluster with 20 orphaned PVCs averaging 50 GB each adds $80 to $170/month in storage costs for volumes that serve no application.
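The arithmetic above can be reproduced in a few lines; the per-GB prices are assumptions based on list pricing and vary by region and disk type:

```python
# Rough monthly cost of orphaned PVCs. Per-GB prices are assumptions
# (AWS gp3 ~$0.08/GB, GKE SSD persistent disk ~$0.17/GB); check your provider's pricing.
def pvc_monthly_cost(count, avg_gb, price_per_gb):
    return count * avg_gb * price_per_gb

low = pvc_monthly_cost(20, 50, 0.08)   # 20 PVCs averaging 50 GB on AWS gp3
high = pvc_monthly_cost(20, 50, 0.17)  # same fleet on GKE SSD persistent disk
print(f"${low:.0f}-${high:.0f}/month")  # -> $80-$170/month
```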
ConfigMaps and Secrets: $0/month (but high indirect cost)
ConfigMaps and Secrets are stored in etcd and do not have a direct cloud billing line item. However, orphaned Secrets carry a significant indirect cost.
An orphaned Secret containing a database password, an API key, or a TLS certificate for a decommissioned service is an unnecessary attack surface. In environments subject to SOC 2, HIPAA, or PCI-DSS compliance, undocumented credentials are an audit finding. The remediation cost — incident response, policy updates, audit fees — can far exceed any cloud bill savings.
Beyond security, hundreds of orphaned ConfigMaps and Secrets create operational noise. They clutter namespace views, make it harder to find what is actually in use, and increase the surface area for human error during maintenance.
Idle Deployments and ReplicaSets: $10–$50/month each
A Deployment with zero traffic but non-zero replicas still reserves CPU and memory on its nodes. A Deployment with requests of 200m CPU and 256Mi memory running 3 replicas reserves 600m CPU and 768Mi RAM. That capacity costs roughly $35/month on typical node pricing, even if the application handles zero requests.
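The reserved-capacity math generalizes to any Deployment; this sketch uses assumed per-unit node prices (roughly $50/vCPU-month and $6.67/GiB-month, ballpark figures chosen to match typical on-demand pricing):

```python
# Monthly cost of capacity reserved by an idle Deployment: requests * replicas.
# Unit prices are assumptions; derive yours from your actual node rates.
CPU_PRICE = 50.0   # assumed $/vCPU/month
MEM_PRICE = 6.67   # assumed $/GiB/month

def reserved_cost(cpu_millicores, mem_mib, replicas):
    cpus = cpu_millicores / 1000 * replicas   # vCPUs reserved
    gib = mem_mib / 1024 * replicas           # GiB reserved
    return cpus * CPU_PRICE + gib * MEM_PRICE

# 3 replicas x 200m CPU / 256Mi => 600m CPU and 768Mi RAM reserved
print(f"${reserved_cost(200, 256, 3):.2f}/month")  # -> $35.00/month
```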
Old ReplicaSets left behind after rolling updates are less expensive individually but can accumulate to dozens per namespace.
Step-by-Step Audit: Finding Orphaned Resources
Finding Orphaned LoadBalancer Services
A LoadBalancer Service is orphaned if no pods match its selector. Here is how to find them.
List all LoadBalancer Services:
```bash
kubectl get services --all-namespaces -o json | \
  jq -r '.items[] | select(.spec.type == "LoadBalancer") |
    "\(.metadata.namespace)/\(.metadata.name) selector=\(.spec.selector)"'
```
For each Service, check whether any running pods match its selector:
```bash
# Replace <namespace>, <key>, and <value> with the selector from the output above
kubectl get pods -n <namespace> -l <key>=<value> --no-headers
```
If the command returns no pods, the Service is orphaned. Its cloud load balancer is running and billing you with nothing behind it.
To automate this across all LoadBalancer Services:
```bash
kubectl get services --all-namespaces -o json | \
  jq -r '.items[] | select(.spec.type == "LoadBalancer") |
    "\(.metadata.namespace) \(.metadata.name) \(.spec.selector // {} | to_entries | map("\(.key)=\(.value)") | join(","))"' | \
  while read -r ns name selector; do
    if [ -z "$selector" ]; then
      echo "ORPHANED (no selector): $ns/$name"
    else
      count=$(kubectl get pods -n "$ns" -l "$selector" --no-headers 2>/dev/null | wc -l)
      if [ "$count" -eq 0 ]; then
        echo "ORPHANED (no matching pods): $ns/$name"
      fi
    fi
  done
```
Finding Orphaned PVCs
A PVC is orphaned if no pod is currently mounting it. List all PVCs and check their mount status:
```bash
# List all PVCs with their phase and requested size
kubectl get pvc --all-namespaces -o json | \
  jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name) \(.status.phase) \(.spec.resources.requests.storage)"'
```
Check if any pod mounts a specific PVC:
```bash
kubectl get pods --all-namespaces -o json | \
  jq -r --arg pvc "my-pvc-name" \
    '.items[] | select(.spec.volumes[]? | .persistentVolumeClaim.claimName == $pvc) |
     "\(.metadata.namespace)/\(.metadata.name)"'
```
To find all PVCs that no running pod is mounting:
```bash
# Get all mounted PVCs
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | . as $pod | .spec.volumes[]? | select(.persistentVolumeClaim) |
    "\($pod.metadata.namespace)/\(.persistentVolumeClaim.claimName)"' | \
  sort -u > /tmp/mounted-pvcs.txt

# Get all PVCs
kubectl get pvc --all-namespaces -o json | \
  jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"' | \
  sort > /tmp/all-pvcs.txt

# Find unmounted PVCs
comm -23 /tmp/all-pvcs.txt /tmp/mounted-pvcs.txt
```
Important caveat: A PVC with no current pod mount might still be needed. Check whether any CronJob, Job, or suspended StatefulSet references it before deleting:
```bash
# Check CronJobs for PVC references
kubectl get cronjobs --all-namespaces -o json | \
  jq -r --arg pvc "my-pvc-name" \
    '.items[] | select(.spec.jobTemplate.spec.template.spec.volumes[]? | .persistentVolumeClaim.claimName == $pvc) |
     "\(.metadata.namespace)/\(.metadata.name)"'
```
Finding Orphaned Secrets
Secrets can be referenced from volume mounts, environment variables, image pull credentials, service accounts, and TLS configurations. A thorough check covers all of these.
Find Secrets not referenced by any running pod:
```bash
# Get all Secret names (excluding default service account tokens and system secrets)
kubectl get secrets --all-namespaces -o json | \
  jq -r '.items[] | select(.type != "kubernetes.io/service-account-token") |
    select(.metadata.namespace != "kube-system") |
    "\(.metadata.namespace)/\(.metadata.name)"' | \
  sort > /tmp/all-secrets.txt

# Get Secrets referenced by running pods (volumes)
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | . as $pod | .spec.volumes[]? | select(.secret) |
    "\($pod.metadata.namespace)/\(.secret.secretName)"' | \
  sort -u > /tmp/referenced-secrets.txt

# Get Secrets referenced by env vars
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | . as $pod | .spec.containers[].env[]? | select(.valueFrom.secretKeyRef) |
    "\($pod.metadata.namespace)/\(.valueFrom.secretKeyRef.name)"' | \
  sort -u >> /tmp/referenced-secrets.txt

# Get Secrets referenced by envFrom
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | . as $pod | .spec.containers[].envFrom[]? | select(.secretRef) |
    "\($pod.metadata.namespace)/\(.secretRef.name)"' | \
  sort -u >> /tmp/referenced-secrets.txt

# Get TLS Secrets referenced by Ingresses
kubectl get ingress --all-namespaces -o json | \
  jq -r '.items[] | . as $ing | .spec.tls[]? | "\($ing.metadata.namespace)/\(.secretName)"' | \
  sort -u >> /tmp/referenced-secrets.txt

# Deduplicate and compare
sort -u /tmp/referenced-secrets.txt -o /tmp/referenced-secrets.txt
comm -23 /tmp/all-secrets.txt /tmp/referenced-secrets.txt
```
This approach still misses references from Deployments, StatefulSets, and CronJobs that are not currently running pods. For a complete picture, you need to check workload specs directly — or use a tool that builds a full dependency graph.
Finding Idle Deployments
Deployments that are scaled to zero or have zero ready pods for an extended period are candidates for cleanup:
```bash
# Deployments with 0 replicas
kubectl get deployments --all-namespaces -o json | \
  jq -r '.items[] | select(.spec.replicas == 0) | "\(.metadata.namespace)/\(.metadata.name)"'

# Deployments with replicas > 0 but 0 ready pods (potentially broken)
kubectl get deployments --all-namespaces -o json | \
  jq -r '.items[] | select(.spec.replicas > 0) |
    select(.status.readyReplicas == 0 or .status.readyReplicas == null) |
    "\(.metadata.namespace)/\(.metadata.name)"'
```
A Deployment with zero replicas is not necessarily orphaned — it might be intentionally scaled down for off-hours or maintenance. Check with the owning team before deleting.
Calculating the Cost
Once you have identified orphaned resources, calculate the monthly cost to build a case for cleanup:
```
Orphaned LoadBalancer Services:  8 × $18/month            = $144/month
Orphaned PVCs:                  15 × avg 40 GB × $0.10/GB =  $60/month
Idle Deployments:                3 × avg $30/month        =  $90/month
─────────────────────────────────────────────────────────────────────
Total recoverable:                                          $294/month
Annual:                                                   $3,528/year
```
For a single cluster, $3,500/year is significant. For organizations running 5 to 20 clusters, multiply that figure accordingly. Teams that have never run a cleanup audit often find $10,000 to $50,000 in annual recoverable waste on their first pass.
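The tally above can be kept as a short script so the numbers stay easy to update as findings change; all unit prices are assumptions carried over from the example:

```python
# Recoverable-waste tally matching the breakdown above. Unit prices are assumptions.
findings = [
    ("LoadBalancer Services", 8, 18.0),       # count x $/month each
    ("Orphaned PVCs",        15, 40 * 0.10),  # avg 40 GB x $0.10/GB each
    ("Idle Deployments",      3, 30.0),
]

monthly = sum(count * unit for _, count, unit in findings)
print(f"Total recoverable: ${monthly:.0f}/month (${monthly * 12:,.0f}/year)")
# -> Total recoverable: $294/month ($3,528/year)
```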
Safe Deletion Process
Finding orphaned resources is half the job. Deleting them safely is the other half.
Step 1: Label Before Deleting
Do not delete resources immediately. Label them as cleanup candidates first. This gives other team members visibility and a chance to object.
```bash
kubectl label service orphaned-api-service -n production cleanup-candidate=true
kubectl label pvc old-database-pvc -n staging cleanup-candidate=true
```
Wait 3 to 5 business days. If nobody raises a concern, proceed with deletion.
Step 2: Delete in Dependency Order
If multiple orphaned resources are related (a Deployment, its ConfigMap, its Secret, and its Service), delete the parent workload first, then the supporting resources:
- Deployment or StatefulSet
- Service
- ConfigMaps and Secrets
- PVCs (last, because they hold data)
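One way to make the ordering hard to get wrong is to encode it once and generate the delete commands from it. This is a sketch, not a KorPro feature; the resource names are illustrative:

```python
# Emit kubectl delete commands in safe dependency order:
# workloads first, PVCs last (they hold data). Names are illustrative.
SAFE_ORDER = ["deployment", "statefulset", "service", "configmap", "secret", "pvc"]

def deletion_commands(namespace, resources):
    """resources: list of (kind, name) tuples in any order."""
    ordered = sorted(resources, key=lambda r: SAFE_ORDER.index(r[0]))
    return [f"kubectl delete {kind} {name} -n {namespace}" for kind, name in ordered]

for cmd in deletion_commands("staging", [
    ("pvc", "old-db-data"),
    ("service", "old-api"),
    ("deployment", "old-api"),
    ("secret", "old-api-creds"),
]):
    print(cmd)
```

Review the printed commands before running them; piping generated deletes straight into a shell defeats the point of the labeling step.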
Step 3: Verify Cloud Resource Deprovisioning
After deleting a LoadBalancer Service, verify that the cloud load balancer was actually removed:
```bash
# AWS — check for orphaned ELBs
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[*].{Name:LoadBalancerName,DNSName:DNSName,State:State.Code}' \
  --output table

# GKE — check forwarding rules
gcloud compute forwarding-rules list --format="table(name,IPAddress,target)"

# AKS — check public IPs
az network public-ip list --resource-group my-rg \
  --query '[].{Name:name,Address:ipAddress,Associated:ipConfiguration.id}' \
  --output table
```
Occasionally, cloud load balancers or public IPs persist even after the Kubernetes Service is deleted. This happens when the cloud controller loses track of the resource. Verify and clean up at the cloud provider level as well.
Step 4: Verify PVC and PV Cleanup
After deleting a PVC, check that the underlying PersistentVolume transitions correctly based on the reclaim policy:
```bash
kubectl get pv | grep Released
```
PVs with status Released and a Retain policy will persist until manually deleted. If you are certain the data is no longer needed:
```bash
kubectl delete pv <pv-name>
```
Then verify the cloud disk was removed:
```bash
# AWS
aws ec2 describe-volumes --filters "Name=status,Values=available" \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Created:CreateTime}' --output table

# GKE
gcloud compute disks list --filter="NOT users:*" --format="table(name,sizeGb,zone,status)"
```
Preventing Orphaned Resources
Cleaning up is necessary, but preventing orphans from accumulating in the first place is better.
Use Owner References
When creating supporting resources (ConfigMaps, Secrets, Services) for a Deployment, set owner references so Kubernetes garbage collection automatically deletes them when the parent is deleted:
```yaml
metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app
      uid: <deployment-uid>
```
Helm achieves the same outcome differently: helm uninstall deletes every resource tracked in the release, even without owner references.
Label Everything
Every resource should have at minimum:
```yaml
metadata:
  labels:
    app: my-app
    team: platform
    managed-by: helm
```
Labels make it possible to trace orphaned resources back to their origin and determine whether they are still needed.
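Enforcing this is straightforward to automate. The sketch below parses the JSON that `kubectl get ... -o json` produces and flags resources missing required labels; the required label set is an assumption to adapt to your conventions:

```python
import json

REQUIRED = {"app", "team"}  # assumed label policy; adjust to your conventions

def unlabeled(items):
    """Return namespace/name of resources missing any required label."""
    flagged = []
    for item in items:
        labels = item.get("metadata", {}).get("labels") or {}
        if not REQUIRED <= labels.keys():
            meta = item["metadata"]
            flagged.append(f"{meta.get('namespace', 'default')}/{meta['name']}")
    return flagged

# Usage: load `kubectl get services --all-namespaces -o json` with json.load
# and pass data["items"] to unlabeled().
demo = [{"metadata": {"name": "old-api", "namespace": "staging",
                      "labels": {"app": "old-api"}}}]
print(unlabeled(demo))  # -> ['staging/old-api']
```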
Automate Namespace Cleanup
For feature branch or ephemeral environments, automate namespace deletion. If your CI/CD pipeline creates a namespace for each pull request, it should delete that namespace when the PR is merged or closed.
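As a sketch of what that automation can look like, assuming GitHub Actions, namespaces named `pr-<number>`, and a runner that already has cluster credentials:

```yaml
# Hypothetical GitHub Actions job: delete the preview namespace when a PR closes.
# Assumes namespaces are named pr-<number>; adapt to your CI system and naming.
name: cleanup-pr-namespace
on:
  pull_request:
    types: [closed]
jobs:
  delete-namespace:
    runs-on: ubuntu-latest
    steps:
      - run: kubectl delete namespace "pr-${{ github.event.pull_request.number }}" --ignore-not-found
```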
Run Regular Audits
A one-time cleanup does not stay clean. New resources get created, new workloads get decommissioned, and within three to six months you are back where you started. Schedule regular audits — monthly at minimum — to catch orphans before they accumulate.
Scaling Beyond Manual Audits
The manual process described in this guide works for a single cluster with a few namespaces. It does not scale. The kubectl scripts above only check running pods, missing references from workload specs that are not currently running. They do not detect cascading orphans — resources that appear "in use" because they are referenced by a Deployment that is itself orphaned. And running these commands across 5 or 10 clusters, checking every resource type, and cross-referencing every dependency is a full-time job.
KorPro automates this entire process. The Inspector agent runs inside each cluster as a CronJob, builds a full dependency graph across all resource types, evaluates workload liveness, and identifies both direct and transitive orphans. Every finding includes a cost estimate, a risk classification, and a safe deletion order. The dashboard provides a single view across all clusters, all providers, with scan history so you can track whether waste is growing or shrinking over time.
Conclusion
Orphaned resources are the most common and most preventable source of waste in Kubernetes. They do not require complex optimization strategies or workload changes to fix. They just need to be found and removed. The audit process in this guide gives you the commands to do that manually for any cluster. For teams managing multiple clusters across providers, automating the process ensures orphans are caught continuously rather than discovered during an annual cost review when the damage is already done.
The best time to run your first orphan audit was six months ago. The second best time is today.
Scan All Your Clusters in Minutes
How much are orphaned resources costing you right now? Create your free KorPro account to run an automated scan across all your Kubernetes clusters — including cascading orphan detection, cost estimates, and a prioritized cleanup list. Want a guided walkthrough? Contact our team for a personalized assessment.
Written by
KorPro Team