Cost Optimization

How to Find Orphaned PVCs and PVs Before They Inflate Your Cloud Bill

Orphaned PersistentVolumeClaims and PersistentVolumes are one of the most common sources of hidden Kubernetes storage cost. Here is how to find them, validate they are safe to delete, and estimate your savings.

KorPro Team
May 6, 2026
7 min read
Kubernetes · PVC · PersistentVolumeClaim · Orphaned Resources · Storage · Cost Optimization · EKS · GKE · AKS · Cloud Spend

Kubernetes makes it easy to provision storage. It does not make it easy to clean it up. When a Deployment or StatefulSet is deleted, the PersistentVolumeClaim it used is not automatically removed. The underlying cloud disk — EBS on AWS, Persistent Disk on GCP, Managed Disk on Azure — continues to bill at the full provisioned size, whether or not anything is reading or writing to it.

In clusters with active development, frequent environment teardowns, or long-lived CI/CD pipelines, orphaned PVCs and PVs accumulate quickly. A thorough audit of just the storage layer often surfaces meaningful recoverable spend — especially in clusters that have been running for more than six months.

This post covers the mechanics of how PVCs and PVs become orphaned, how to detect them using kubectl, how to validate findings before deleting, and how to estimate what cleanup is worth.

How PVCs and PVs Become Orphaned

Understanding the lifecycle helps explain why orphaned storage is so common.

The normal lifecycle:

  1. A StatefulSet declares a volumeClaimTemplate, or a pod references a PersistentVolumeClaim by name.
  2. Kubernetes binds the PVC to a PersistentVolume — either an existing one or a dynamically provisioned one from a StorageClass.
  3. The cloud provider provisions a disk and the PV enters Bound state.
  4. When the pod is deleted, the PVC remains unless explicitly deleted.
  5. When the PVC is deleted, the PV transitions to Released if the reclaim policy is Retain, or is automatically deleted if the policy is Delete.
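Concretely, the claim in step 1 can be as small as the following manifest (the name, namespace, and storage class here are hypothetical). Once bound, nothing in steps 4–5 removes it automatically when its pod goes away:

```yaml
# Hypothetical PVC; with a dynamic StorageClass, creating this provisions
# a real 100 GiB cloud disk that bills until the PVC *and* PV are gone.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
  namespace: staging
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3
  resources:
    requests:
      storage: 100Gi
```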

Where orphans appear:

  • StatefulSet teardown without PVC cleanup: When you delete a StatefulSet, the PVCs created by its volumeClaimTemplates are intentionally not deleted — Kubernetes does this to protect data. Those PVCs become orphaned if the StatefulSet is not recreated.
  • Failed migrations: Moving a workload from one namespace or cluster to another often means the original PVC is left behind.
  • Test and staging environment teardowns: Dev environments frequently use PVC-backed databases or object stores. When the environment is torn down but the namespace is not fully deleted, PVCs remain.
  • Retained reclaim policies: A StorageClass with reclaimPolicy: Retain means deleted PVCs leave behind Released PVs that bill until manually removed.
  • Helm release deletions: Helm deletes the resources it templated in the release, but PVCs created by the application at runtime (not rendered from the chart) are not tracked by Helm and survive helm uninstall.
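For the StatefulSet case specifically, newer Kubernetes releases (the StatefulSetAutoDeletePVC feature, beta since 1.27) let you opt into automatic PVC cleanup via persistentVolumeClaimRetentionPolicy. A sketch with hypothetical names — the default for both fields is Retain, which is exactly how orphans accumulate:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  # Opt in to PVC cleanup: delete template-created PVCs when the
  # StatefulSet is deleted or scaled down (default is Retain for both).
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Delete
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```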

Storage Pricing by Cloud Provider

Understanding the cost per GB helps prioritize which orphaned volumes are worth finding first.

| Cloud Provider | Storage Class | Approximate Cost per GB/Month |
|---|---|---|
| AWS (EKS) | gp3 (SSD) | ~$0.08 |
| AWS (EKS) | gp2 (SSD, legacy) | ~$0.10 |
| AWS (EKS) | st1 (throughput HDD) | ~$0.045 |
| GCP (GKE) | pd-ssd | ~$0.17 |
| GCP (GKE) | pd-balanced | ~$0.10 |
| GCP (GKE) | pd-standard (HDD) | ~$0.04 |
| Azure (AKS) | Premium SSD (P-series) | ~$0.135 |
| Azure (AKS) | Standard SSD | ~$0.075 |
| Azure (AKS) | Standard HDD | ~$0.04 |

These are approximate list prices. Actual costs vary by region and any committed-use discounts.

A single forgotten 500 GB gp2 volume on EKS costs roughly $50/month (about $40/month on gp3). Across a cluster with dozens of such volumes, storage waste compounds fast.

Step-by-Step Detection

Step 1: Find PVCs Not in Bound State

A PVC in Pending state either failed to bind, or — under WaitForFirstConsumer volume binding — is waiting for a consuming pod that may never arrive. A PVC in Lost state had its PV deleted out of band.

```bash
kubectl get pvc --all-namespaces -o json | \
  jq '.items[] | select(.status.phase != "Bound") | {name: .metadata.name, namespace: .metadata.namespace, phase: .status.phase, size: .spec.resources.requests.storage, storageClass: .spec.storageClassName}'
```

Step 2: Find PVCs Bound But Not Mounted by Any Running Pod

A PVC can be Bound but still orphaned — the PV is allocated and the disk is billing, but no pod is using it.

```bash
# Step 2a: Collect all PVC names currently referenced by pods
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | .metadata.namespace as $ns | .spec.volumes[]? | select(.persistentVolumeClaim != null) | "\($ns)/\(.persistentVolumeClaim.claimName)"' | \
  sort -u > /tmp/mounted_pvcs.txt

# Step 2b: List all Bound PVCs and their namespaces
kubectl get pvc --all-namespaces -o json | \
  jq -r '.items[] | select(.status.phase == "Bound") | "\(.metadata.namespace)/\(.metadata.name)"' | \
  sort > /tmp/all_pvcs.txt

# Step 2c: Find PVCs that exist but are not mounted
comm -23 /tmp/all_pvcs.txt /tmp/mounted_pvcs.txt
```

Step 3: Find Released PVs

Released PVs are the most clear-cut orphaned storage — the claim is gone, the data may or may not still be needed, but the disk is billing.

```bash
kubectl get pv -o json | \
  jq '.items[] | select(.status.phase == "Released") | {name: .metadata.name, capacity: .spec.capacity.storage, storageClass: .spec.storageClassName, reclaimPolicy: .spec.persistentVolumeReclaimPolicy, claimRef: .spec.claimRef}'
```

The claimRef field shows the namespace and name of the claim that originally used this volume — useful for tracing who created it.

Step 4: Estimate Storage Waste

Once you have the list of orphaned PVCs and Released PVs, total the provisioned storage by storage class and apply the pricing table above.

```bash
# Group non-Bound PVCs by storage class, with count and requested sizes
kubectl get pvc --all-namespaces -o json | \
  jq '.items[] | select(.status.phase != "Bound") | {storageClass: .spec.storageClassName, size: .spec.resources.requests.storage}' | \
  jq -s 'group_by(.storageClass) | .[] | {storageClass: .[0].storageClass, count: length, sizes: map(.size)}'
```
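To turn sizes into dollars, a minimal follow-on sketch: strip the Gi suffix from the reported sizes (a hypothetical sample list stands in for real output here), sum, and multiply by a per-GB price from the table above — gp2's ~$0.10 in this example, treating GiB ≈ GB for estimation purposes:

```shell
# Hypothetical orphaned-PVC sizes; in practice, feed in the `size` values
# extracted by the jq query above.
total_gib=$(printf '%s\n' 100Gi 500Gi 20Gi | sed 's/Gi$//' | awk '{s += $1} END {print s}')
# Price at ~$0.10 per GB-month (gp2 list price from the table above).
awk -v g="$total_gib" -v p=0.10 'BEGIN {printf "~$%.2f/month for %d GiB\n", g * p, g}'
```

For the sample sizes this prints ~$62.00/month for 620 GiB — enough precision to prioritize cleanup, even if actual billing differs by region.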

How to Validate Before Deleting

Finding a PVC is not the same as confirming it is safe to delete. Before acting on any orphaned storage finding, validate:

1. Check for recent pod events

A PVC that appears unmounted may belong to a pod that is crash-looping and cannot start because its storage was accidentally removed in a previous cleanup pass. Check for recent pod events in the same namespace.

```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp' | grep <pvc-name>
```

2. Check for StatefulSet association

If the PVC name matches the pattern <volume-name>-<statefulset-name>-<ordinal>, it was created by a StatefulSet. Check if the StatefulSet still exists or if it was intentionally deleted.

```bash
kubectl get statefulsets -n <namespace>
```
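The ordinal-suffix part of that pattern can be checked mechanically. A minimal sketch with a hypothetical PVC name — a real check would also confirm the middle segment matches an existing StatefulSet:

```shell
# StatefulSet-created PVCs end in an ordinal: <template>-<statefulset>-<N>.
pvc="data-postgres-0"   # hypothetical name
if printf '%s' "$pvc" | grep -Eq -- '-[0-9]+$'; then
  echo "ordinal suffix: likely StatefulSet-created: $pvc"
else
  echo "no ordinal suffix: likely manually created: $pvc"
fi
```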

3. Check for VolumeSnapshot references

If VolumeSnapshots were taken from the PVC, they may still reference the volume's data. Deleting the PV before the snapshot's status is confirmed may affect recovery options.

```bash
kubectl get volumesnapshots --all-namespaces -o json | \
  jq '.items[] | select(.spec.source.persistentVolumeClaimName == "<pvc-name>") | .metadata.name'
```

4. Check with the owning team

For any PVC without a clear owner label, check Git history or deployment records for who created the namespace or workload. A quick async confirmation prevents incidents.
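A label check helps triage that list up front. A sketch that flags PVCs with no team label, run here against a tiny inline sample instead of real `kubectl get pvc -o json` output (the team label key is a hypothetical convention — substitute whatever ownership label your org uses):

```shell
# Flag PVCs missing an owner label (sample JSON stands in for kubectl output).
cat <<'EOF' | jq -r '.items[] | select(.metadata.labels.team == null) | "no owner label: \(.metadata.name)"'
{"items": [
  {"metadata": {"name": "data-api-0", "labels": {"team": "payments"}}},
  {"metadata": {"name": "scratch-cache"}}
]}
EOF
```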

Validating Reclaim Policy Before Cleanup

The StorageClass reclaim policy determines what happens to the underlying disk when the PVC is deleted:

```bash
kubectl get storageclasses -o json | \
  jq '.items[] | {name: .metadata.name, reclaimPolicy: .reclaimPolicy, volumeBindingMode: .volumeBindingMode}'
```

  • Delete: Deleting the PVC deletes the PV and the underlying cloud disk. Fast cleanup, but irreversible.
  • Retain: Deleting the PVC leaves the PV in Released state — the disk stays until the PV is also manually deleted. Safer, but requires a second step.

If you are operating in a cluster where the reclaim policy is Retain, cleaning up storage means deleting the PVC and then deleting the Released PV afterward.
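That two-step sequence can be scripted. A dry-run sketch with hypothetical PV/PVC names — it echoes the commands by default, and setting KUBECTL=kubectl would run them for real:

```shell
# Dry-run by default: prints each kubectl command instead of executing it.
KUBECTL="${KUBECTL:-echo kubectl}"
# 1. Flip the PV to Retain first, so deleting the PVC cannot destroy the disk.
$KUBECTL patch pv pv-example -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# 2. Delete the claim; the PV moves to Released and the disk survives.
$KUBECTL delete pvc data-example -n staging
# 3. Once confirmed safe, delete the Released PV. With Retain, the cloud disk
#    itself still remains and must be removed via the provider console/CLI.
$KUBECTL delete pv pv-example
```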

Scaling This Across Multiple Clusters

Running these commands manually is practical for a one-time audit of a single cluster. For teams managing multiple clusters — or wanting this as a recurring hygiene check — manual scripting becomes a maintenance burden.

KorPro detects orphaned PVCs, Released PVs, and unmounted-but-bound claims across all clusters in a single scan, without requiring cloud credentials or write access. Findings are grouped by namespace, storage class, and estimated monthly cost. See the orphaned PVC and PV use case for details on how the detection and reporting works.

For broader context on how orphaned resources cascade across resource types in Kubernetes, the cascading orphans post covers the dependency patterns that make storage waste harder to catch than it looks.


See Storage Waste in Your Cluster

KorPro surfaces orphaned PVCs, Released PVs, and unmounted storage across your clusters — read-only, no agents required.

Create your free KorPro account | Contact our team
