How to Find Orphaned PVCs and PVs Before They Inflate Your Cloud Bill
Orphaned PersistentVolumeClaims and PersistentVolumes are one of the most common sources of hidden Kubernetes storage cost. Here is how to find them, validate they are safe to delete, and estimate your savings.
Kubernetes makes it easy to provision storage. It does not make it easy to clean it up. When a Deployment or StatefulSet is deleted, the PersistentVolumeClaim it used is not automatically removed. The underlying cloud disk — EBS on AWS, Persistent Disk on GCP, Managed Disk on Azure — continues to bill at the full provisioned size, whether or not anything is reading or writing to it.
In clusters with active development, frequent environment teardowns, or long-lived CI/CD pipelines, orphaned PVCs and PVs accumulate quickly. A thorough audit of just the storage layer often surfaces meaningful recoverable spend — especially in clusters that have been running for more than six months.
This post covers the mechanics of how PVCs and PVs become orphaned, how to detect them using kubectl, how to validate findings before deleting, and how to estimate what cleanup is worth.
How PVCs and PVs Become Orphaned
Understanding the lifecycle helps explain why orphaned storage is so common.
The normal lifecycle:
- A pod or StatefulSet declares a `volumeClaimTemplate` or references a `PersistentVolumeClaim` by name.
- Kubernetes binds the PVC to a `PersistentVolume` — either an existing one or a dynamically provisioned one from a `StorageClass`.
- The cloud provider provisions a disk and the PV enters the `Bound` state.
- When the pod is deleted, the PVC remains unless explicitly deleted.
- When the PVC is deleted, the PV transitions to `Released` if the reclaim policy is `Retain`, or is automatically deleted if the policy is `Delete`.
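The last step of the lifecycle can be sketched as a simple decision on the PV's reclaim policy. The value below is hard-coded for illustration; in a real cluster it would come from `kubectl get pv <name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'`.

```shell
# Sketch of what happens to the disk when a PVC is deleted, keyed on the
# PV's reclaim policy.
RECLAIM_POLICY="Retain"   # hypothetical value for illustration

case "$RECLAIM_POLICY" in
  Delete) OUTCOME="PV and cloud disk are deleted with the PVC" ;;
  Retain) OUTCOME="PV moves to Released; the disk bills until the PV is deleted" ;;
  *)      OUTCOME="unknown policy: $RECLAIM_POLICY" ;;
esac
echo "$OUTCOME"
```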
Where orphans appear:
- **StatefulSet teardown without PVC cleanup:** When you delete a StatefulSet, the PVCs created by its `volumeClaimTemplates` are intentionally not deleted — Kubernetes does this to protect data. Those PVCs become orphaned if the StatefulSet is not recreated.
- **Failed migrations:** Moving a workload from one namespace or cluster to another often means the original PVC is left behind.
- **Test and staging environment teardowns:** Dev environments frequently use PVC-backed databases or object stores. When the environment is torn down but the namespace is not fully deleted, PVCs remain.
- **Retained reclaim policies:** A `StorageClass` with `reclaimPolicy: Retain` means deleted PVCs leave behind `Released` PVs that bill until manually removed.
- **Helm release deletions:** Helm deletes the resources in its release chart, but if PVCs were created by the application after deploy time (not templated in the chart), they may not be tracked by the release and so are not deleted on `helm uninstall`.
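One way to spot the Helm case is to check labels: charts that follow the standard Kubernetes label conventions stamp their resources with `app.kubernetes.io/managed-by=Helm`, while PVCs created by the application at runtime usually lack it. A minimal sketch, using an inline sample in place of `kubectl get pvc --all-namespaces -o json` (the PVC names are hypothetical):

```shell
# PVCs without the Helm managed-by label are candidates for app-created
# storage that `helm uninstall` will not clean up.
ORPHAN_CANDIDATES=$(jq -r '
  .items[]
  | select(.metadata.labels["app.kubernetes.io/managed-by"] != "Helm")
  | "\(.metadata.namespace)/\(.metadata.name)"' <<'EOF'
{"items":[
  {"metadata":{"name":"data-postgres-0","namespace":"staging",
               "labels":{"app.kubernetes.io/managed-by":"Helm"}}},
  {"metadata":{"name":"scratch-cache","namespace":"staging","labels":{}}}
]}
EOF
)
echo "$ORPHAN_CANDIDATES"
```

Treat the label as a heuristic, not proof of ownership: not every chart applies it.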
Storage Pricing by Cloud Provider
Understanding the cost per GB helps prioritize which orphaned volumes are worth finding first.
| Cloud Provider | Storage Class | Approximate Cost Per GB/Month |
|---|---|---|
| AWS (EKS) | gp3 (SSD) | ~$0.08 |
| AWS (EKS) | gp2 (SSD, legacy) | ~$0.10 |
| AWS (EKS) | st1 (throughput HDD) | ~$0.045 |
| GCP (GKE) | pd-ssd | ~$0.17 |
| GCP (GKE) | pd-balanced | ~$0.10 |
| GCP (GKE) | pd-standard (HDD) | ~$0.04 |
| Azure (AKS) | Premium SSD (P-series) | ~$0.135 |
| Azure (AKS) | Standard SSD | ~$0.075 |
| Azure (AKS) | Standard HDD | ~$0.04 |
These are approximate list prices. Actual costs vary by region and any committed-use discounts.
A single forgotten 500 GB SSD volume on EKS costs roughly $40/month on gp3 (or $50 on legacy gp2). Across a cluster with dozens of such volumes, storage waste compounds fast.
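The arithmetic generalizes to any volume. A small sketch that converts a PVC-style size string into an estimated monthly cost — the Gi suffix, the GiB≈GB rounding, and the gp3 list price are all assumptions to adjust for your storage classes and region:

```shell
# Rough monthly-cost estimate for one provisioned volume.
SIZE="500Gi"        # mimics a PVC storage request string (assumes Gi suffix)
RATE_PER_GB="0.08"  # hypothetical gp3 list price per GB-month
GB="${SIZE%Gi}"     # strip the Gi suffix; treat GiB as GB for a rough estimate
COST=$(awk -v gb="$GB" -v rate="$RATE_PER_GB" 'BEGIN { printf "$%.2f/month", gb * rate }')
echo "$COST"
```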
Step-by-Step Detection
Step 1: Find PVCs Not in Bound State
A PVC in Pending state either failed to bind or is waiting for a pod that no longer exists. A PVC in Lost state had its PV deleted out of band.
```bash
kubectl get pvc --all-namespaces -o json | \
  jq '.items[] | select(.status.phase != "Bound") |
      {name: .metadata.name, namespace: .metadata.namespace,
       phase: .status.phase, size: .spec.resources.requests.storage,
       storageClass: .spec.storageClassName}'
```
Step 2: Find PVCs Bound But Not Mounted by Any Running Pod
A PVC can be Bound but still orphaned — the PV is allocated and the disk is billing, but no pod is using it.
```bash
# Step 2a: Collect all PVC names currently referenced by pods
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | .metadata.namespace as $ns | .spec.volumes[]? |
         select(.persistentVolumeClaim != null) |
         "\($ns)/\(.persistentVolumeClaim.claimName)"' | sort -u > /tmp/mounted_pvcs.txt

# Step 2b: List all Bound PVCs and their namespaces
kubectl get pvc --all-namespaces -o json | \
  jq -r '.items[] | select(.status.phase == "Bound") |
         "\(.metadata.namespace)/\(.metadata.name)"' | sort > /tmp/all_pvcs.txt

# Step 2c: Find PVCs that exist but are not mounted
comm -23 /tmp/all_pvcs.txt /tmp/mounted_pvcs.txt
```
Step 3: Find Released PVs
Released PVs are the most clear-cut orphaned storage — the claim is gone, the data may or may not still be needed, but the disk is billing.
```bash
kubectl get pv -o json | \
  jq '.items[] | select(.status.phase == "Released") |
      {name: .metadata.name, capacity: .spec.capacity.storage,
       storageClass: .spec.storageClassName,
       reclaimPolicy: .spec.persistentVolumeReclaimPolicy,
       claimRef: .spec.claimRef}'
```
The claimRef field shows the namespace and name of the claim that originally used this volume — useful for tracing who created it.
Step 4: Estimate Storage Waste
Once you have the list of orphaned PVCs and Released PVs, total the provisioned storage by storage class and apply the pricing table above.
```bash
# Total orphaned PVC storage in GiB by storage class
# (assumes sizes are requested with a Gi suffix)
kubectl get pvc --all-namespaces -o json | \
  jq '[.items[] | select(.status.phase != "Bound") |
       {storageClass: .spec.storageClassName,
        gib: (.spec.resources.requests.storage | rtrimstr("Gi") | tonumber)}] |
      group_by(.storageClass) | .[] |
      {storageClass: .[0].storageClass, count: length,
       totalGiB: (map(.gib) | add)}'
```
How to Validate Before Deleting
Finding a PVC is not the same as confirming it is safe to delete. Before acting on any orphaned storage finding, validate:
1. Check for recent pod events
A PVC that appears unmounted may belong to a pod that is crash-looping and cannot start because its storage was accidentally removed in a previous cleanup pass. Check for recent pod events in the same namespace.
```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp' | grep <pvc-name>
```
2. Check for StatefulSet association
If the PVC name matches the pattern <volume-name>-<statefulset-name>-<ordinal>, it was created by a StatefulSet. Check if the StatefulSet still exists or if it was intentionally deleted.
```bash
kubectl get statefulsets -n <namespace>
```
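The naming pattern itself can be checked mechanically. A minimal sketch that splits a PVC name on its trailing ordinal — the name below is hypothetical, and since both the volume and StatefulSet parts may contain dashes, only the ordinal split is reliable; treat this as a heuristic:

```shell
# Heuristic parse of a StatefulSet-created PVC name:
# <volume-name>-<statefulset-name>-<ordinal>
PVC_NAME="data-postgres-0"
PREFIX=$(printf '%s\n' "$PVC_NAME" | sed -E 's/-[0-9]+$//')      # drop trailing ordinal
ORDINAL=$(printf '%s\n' "$PVC_NAME" | sed -E 's/^.*-([0-9]+)$/\1/')
echo "prefix=$PREFIX ordinal=$ORDINAL"
```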
3. Check for VolumeSnapshot references
If VolumeSnapshots exist for the PVC, the snapshot may reference the volume. Deleting the PV before the snapshot is confirmed may affect recovery options.
```bash
kubectl get volumesnapshots --all-namespaces -o json | \
  jq '.items[] | select(.spec.source.persistentVolumeClaimName == "<pvc-name>") |
      .metadata.name'
```
4. Check with the owning team
For any PVC without a clear owner label, check Git history or deployment records for who created the namespace or workload. A quick async confirmation prevents incidents.
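Ownership metadata on the object itself is the fastest starting point. A sketch that checks a `team` label (a hypothetical in-house convention) and falls back to the Helm release annotation, using an inline sample in place of `kubectl get pvc <name> -n <namespace> -o json`:

```shell
# Pull the most likely owner marker from a single PVC's metadata.
# "team" is a hypothetical convention; meta.helm.sh/release-name is the
# standard Helm 3 release annotation.
OWNER=$(jq -r '
  .metadata.labels.team
  // .metadata.annotations["meta.helm.sh/release-name"]
  // "no owner metadata"' <<'EOF'
{"metadata":{"name":"scratch-cache","namespace":"staging",
             "labels":{"team":"payments"},"annotations":{}}}
EOF
)
echo "$OWNER"
```

When both come back empty, fall through to Git history and deployment records as described above.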
Validating Reclaim Policy Before Cleanup
The StorageClass reclaim policy determines what happens to the underlying disk when the PVC is deleted:
```bash
kubectl get storageclasses -o json | \
  jq '.items[] | {name: .metadata.name, reclaimPolicy: .reclaimPolicy,
      volumeBindingMode: .volumeBindingMode}'
```
- `Delete`: Deleting the PVC deletes the PV and the underlying cloud disk. Fast cleanup, but irreversible.
- `Retain`: Deleting the PVC leaves the PV in the `Released` state — the disk stays until the PV is also manually deleted. Safer, but requires a second step.
If you are operating in a cluster where the reclaim policy is Retain, cleaning up storage means deleting the PVC and then deleting the Released PV afterward.
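The reverse direction is also a useful safety net: before deleting a PVC backed by a `Delete`-policy PV, you can flip that PV to `Retain` so the disk survives a mistaken cleanup. The sketch below builds and prints the `kubectl patch` command rather than running it, so it can be reviewed first; the PV name is a hypothetical placeholder.

```shell
# Print the command that flips a PV's reclaim policy to Retain before
# deleting its PVC, so the underlying disk is preserved.
PV_NAME="pvc-1234-abcd"   # hypothetical PV name; substitute your own
PATCH='{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
CMD="kubectl patch pv $PV_NAME -p '$PATCH'"
echo "$CMD"
```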
Scaling This Across Multiple Clusters
Running these commands manually is practical for a one-time audit of a single cluster. For teams managing multiple clusters — or wanting this as a recurring hygiene check — manual scripting becomes a maintenance burden.
KorPro detects orphaned PVCs, Released PVs, and unmounted-but-bound claims across all clusters in a single scan, without requiring cloud credentials or write access. Findings are grouped by namespace, storage class, and estimated monthly cost. See the orphaned PVC and PV use case for details on how the detection and reporting works.
For broader context on how orphaned resources cascade across resource types in Kubernetes, the cascading orphans post covers the dependency patterns that make storage waste harder to catch than it looks.
See Storage Waste in Your Cluster
KorPro surfaces orphaned PVCs, Released PVs, and unmounted storage across your clusters — read-only, no agents required.
Ready to Clean Up Your Clusters?
KorPro automatically detects unused resources, orphaned secrets, and wasted spend across all your Kubernetes clusters. Start optimizing in minutes.
Related Articles
Read-Only Kubernetes Cost Optimization: How to Find Waste Without Installing Agents
Security-conscious platform teams can discover significant Kubernetes waste using only read-only cluster access — no agents, no cloud credentials, no write permissions required. Here is how the audit-first model works.
How MSPs Recover Margin from Unused Kubernetes Resources Across Customer Clusters
MSPs and cloud service providers managing Kubernetes for customers absorb infrastructure waste that erodes margin and complicates billing. Here is how to identify and recover that waste across customer clusters without creating operational risk.
Kubernetes Cost Audit Checklist for EKS, GKE, and AKS
A practical Kubernetes cost audit checklist covering idle workloads, orphaned storage, stale namespaces, and ownership gaps across EKS, GKE, and AKS. Built for platform teams who need to recover real spend.
Written by
KorPro Team