How to Audit Kubernetes Costs by Namespace (Step-by-Step Guide)
Most teams track total cluster spend but never see which namespaces are driving waste. This step-by-step guide shows platform engineers how to audit Kubernetes costs by namespace using kubectl — from idle workloads to orphaned PVCs and stale secrets.
Your cloud bill shows a cluster spending $8,000 a month. But which namespaces are responsible? Is it the production namespace running well-tuned workloads, or the staging namespace nobody has touched in three months? The team namespace from a contractor engagement that ended in Q3? The feature-branch namespace a developer created for a one-day experiment and never deleted?
Most Kubernetes teams track total cluster spend but have no idea how that cost breaks down by namespace. That is where waste hides — in namespaces with no active owners, running pods nobody monitors, holding PVCs from workloads that were decommissioned long ago.
A namespace-level cost audit is the starting point for bringing order to K8s spend. This guide walks through the exact kubectl commands to run, what to look for at each step, and how to calculate the cost impact of what you find.
If you are new to the namespace concept itself, see our primer on what a namespace is in Kubernetes before continuing.
Step 1: Get a Full Namespace Inventory
Start with a complete list of all namespaces in the cluster.
```bash
kubectl get namespaces
```
Example output:
```
NAME               STATUS   AGE
default            Active   423d
kube-system        Active   423d
kube-public        Active   423d
kube-node-lease    Active   423d
production         Active   423d
staging            Active   387d
feature-payments   Active   61d
contractor-work    Active   212d
monitoring         Active   410d
dev-sandbox        Active   8d
```
What to flag immediately:
- System namespaces (`kube-system`, `kube-public`, `kube-node-lease`, `default`): do not touch these during a cost audit. They run control plane components.
- Age vs. activity mismatch: `contractor-work` at 212 days old is worth investigating. `feature-payments` at 61 days might be an active feature branch or might be abandoned.
- How many namespaces do you actually use? If your team manages 5 applications and you have 20 namespaces, at least some of those are candidates for cleanup.
Count your non-system namespaces:
```bash
kubectl get namespaces --no-headers | grep -v -E '^(kube-system|kube-public|kube-node-lease|default)\s' | wc -l
```
If the number surprises you, that is a signal to keep going.
Step 2: Resource Inventory Per Namespace
For each namespace you want to audit, pull a full inventory of what is running.
```bash
kubectl get all -n <namespace>
```
This returns Deployments, ReplicaSets, StatefulSets, DaemonSets, Jobs, CronJobs, Services, and Pods in one output. Look for:
- ReplicaSets with 0 pods — these are left over from Deployment rollouts and are usually safe to delete, but confirm no rollbacks are in progress.
- Services with no associated pods — a Service that selects pods matching a label that no longer exists routes traffic to nothing and, if it is type LoadBalancer, is billing you for a cloud load balancer.
- Jobs in Failed or Completed state older than 7 days — these are not costing compute, but they clutter the namespace and can indicate a broken automation pipeline.
For a more targeted view that includes storage and configuration:
```bash
kubectl get pods,svc,configmaps,secrets,pvc -n <namespace>
```
Pay attention to the AGE column. A PVC created 180 days ago in a namespace where all pods are from the last sprint is a strong signal that the PVC is orphaned.
Also check for resources with no owner references — these are objects that were created manually or by a Helm release that no longer exists:
```bash
kubectl get pods -n <namespace> -o json | jq '.items[] | select(.metadata.ownerReferences == null) | .metadata.name'
```
Any pod with no owner reference is not managed by a controller and will not be restarted if it crashes. These are almost always leftovers from kubectl run commands or manual deployments that were never properly decommissioned.
Step 3: Find Idle and Unused Namespaces
Get a pod count per namespace across the entire cluster:
```bash
kubectl get pods -A --no-headers | awk '{print $1}' | sort | uniq -c | sort -rn
```
Example output:
```
     47 production
     18 monitoring
     12 staging
      3 feature-payments
      1 dev-sandbox
```
Note that `contractor-work` is missing: `uniq -c` only counts namespaces that have at least one pod, so any namespace from your Step 1 inventory that does not appear in this output has zero running pods. Those namespaces are strong candidates for full teardown. But "zero running pods" does not tell the whole story — the namespace might still have active PVCs, LoadBalancer Services, or other billable resources.
For namespaces that do show pods, check what state those pods are actually in:
```bash
kubectl get pods -n <namespace> --no-headers | awk '{print $3}' | sort | uniq -c   # $3 is the STATUS column
```
A namespace where every pod is in Completed or Error state has no active workload. The namespace is effectively abandoned. The team who owned it may have moved the application elsewhere and never torn down the old environment.
For each candidate namespace, check whether any LoadBalancer Services are still provisioned:
```bash
kubectl get svc -n <namespace> --field-selector spec.type=LoadBalancer
```
A LoadBalancer Service in an otherwise empty namespace is billing you $16–$18/month for a cloud load balancer that routes traffic to nothing.
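The dollar impact scales with the count. A rough sketch of the math, assuming a mid-range $17/month per load balancer (the rate is illustrative; check your provider's actual pricing):

```bash
# Illustrative monthly estimate for idle LoadBalancer Services.
# Get a cluster-wide count with:
#   kubectl get svc -A --field-selector spec.type=LoadBalancer --no-headers | wc -l
LB_COUNT=3         # example count; substitute your own
RATE_PER_LB=17     # assumed $/month per load balancer; varies by provider
MONTHLY=$((LB_COUNT * RATE_PER_LB))
echo "Estimated LoadBalancer spend: \$${MONTHLY}/month"
```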
Step 4: CPU and Memory Usage by Namespace
kubectl top requires the Metrics Server to be running in your cluster. Verify it first:
```bash
kubectl get deployment metrics-server -n kube-system
```
If it is running, pull current CPU and memory usage per pod:
```bash
kubectl top pods -n <namespace> --sort-by=memory
```
Then compare usage to what the pods have requested:
```bash
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $name
  | .spec.containers[]
  | [$name, .name,
     (.resources.requests.cpu // "none"),
     (.resources.requests.memory // "none")]
  | @tsv
' | column -t
```
What to look for:
- Pods requesting `2000m` CPU and using `50m`: 97% over-provisioned.
- Pods requesting `4Gi` memory and using `200Mi`: this pod is reserving node capacity that nothing else can use.
Over-provisioned pods are one of the most common sources of K8s waste. The pod counts against your resource quota, it holds a slot on a node, it may be triggering a node scale-out — but it is doing a fraction of the work it was sized for. Our guide to Kubernetes resource waste covers right-sizing in more depth.
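The gap is easy to quantify as a waste percentage: (requested - used) / requested. A quick sketch using the example figures above:

```bash
# Waste percentage for one container, using the example numbers above
# (2000m CPU requested, 50m actually used). Integer math is fine here.
REQUEST_MCPU=2000
USAGE_MCPU=50
WASTE_PCT=$(( (REQUEST_MCPU - USAGE_MCPU) * 100 / REQUEST_MCPU ))
echo "CPU over-provisioning: ${WASTE_PCT}%"   # 97%
```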
If you do not have Metrics Server, you can still get a rough picture by reviewing resource requests alone:
```bash
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $name
  | .spec.containers[]
  | [$name, .name,
     (.resources.requests.cpu // "none"),
     (.resources.requests.memory // "none"),
     (.resources.limits.cpu // "none"),
     (.resources.limits.memory // "none")]
  | @tsv
' | column -t
```
Pods with no resource requests or limits at all (`none` in every column) fall into the BestEffort QoS class: they are first in line for eviction under node pressure and make capacity planning impossible.
Step 5: Storage Waste Per Namespace
PersistentVolumeClaims are the most expensive orphaned resource by total dollar impact. A 100 GB SSD-backed PVC costs $8–$17/month depending on cloud provider, and it bills you whether or not any pod is using it.
List all PVCs in a namespace:
```bash
kubectl get pvc -n <namespace>
```
Example output:
```
NAME           STATUS     VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
db-data        Bound      pvc-abc123   100Gi      RWO            gp3            387d
redis-cache    Bound      pvc-def456   20Gi       RWO            gp3            61d
old-uploads    Released   pvc-ghi789   500Gi      RWO            gp3            212d
scratch-data   Pending    <unset>      50Gi       RWO            gp3            3d
```
Status meanings for cost purposes:
- `Released`: the pod that used this PVC is gone, but the PVC still exists. You are paying full price for a volume that serves no application. This is a guaranteed waste finding.
- `Pending`: the PVC was created but no storage was ever successfully provisioned. Something is misconfigured. No cost yet, but an operational issue to fix.
- `Bound`: the PVC is claimed. Check whether the pod mounting it is still running and doing meaningful work.
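To put a dollar figure on a finding like the Released `old-uploads` volume above, multiply capacity by your provider's per-GiB rate. A sketch assuming $0.10/GiB-month, a rough gp3-class SSD figure; substitute your actual rate:

```bash
# PVC monthly cost estimate: capacity (GiB) x per-GiB rate.
# Work in cents to keep the arithmetic integer-only.
CAPACITY_GIB=500
RATE_CENTS_PER_GIB=10   # assumed $0.10/GiB-month; check your provider's pricing
COST_CENTS=$((CAPACITY_GIB * RATE_CENTS_PER_GIB))
echo "Estimated PVC cost: \$$((COST_CENTS / 100))/month"
```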
For Bound PVCs, verify a running pod is actually using them:
```bash
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $pod
  | .spec.volumes[]?
  | select(.persistentVolumeClaim != null)
  | [$pod, .persistentVolumeClaim.claimName]
  | @tsv
'
```
Cross-reference the claim names in this output against your kubectl get pvc output. Any PVC in Bound status that does not appear in this list is not mounted by any running pod, which means it is effectively orphaned even though Kubernetes reports it as Bound.
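The cross-reference itself can be scripted as a set difference. A minimal sketch, with sample names standing in for the two kubectl outputs (`grep -vxF -f` excludes exact full-line matches):

```bash
# Set difference: all PVCs minus PVCs mounted by a running pod.
# The sample names below stand in for real kubectl output.
printf 'db-data\n' > /tmp/mounted-pvcs.txt    # claim names from the jq command above
ORPHANED=$(printf 'db-data\nredis-cache\nold-uploads\n' | sort \
  | grep -vxF -f /tmp/mounted-pvcs.txt)       # -x: whole line, -F: literal match
echo "$ORPHANED"
```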
See our full guide on orphaned PV and PVC cost for remediation steps including safe deletion with data verification.
Step 6: Orphaned ConfigMaps and Secrets
ConfigMaps and Secrets do not have a direct billing cost, but orphaned Secrets create real security and compliance exposure — and discovering which ones are orphaned takes the same investigative work as finding orphaned PVCs. Our guide to finding and removing orphaned ConfigMaps and Kubernetes Secrets detection go deep on both.
For the purposes of a namespace audit, here is how to flag candidates quickly.
ConfigMaps not referenced by any pod:
List all ConfigMaps in a namespace:
```bash
kubectl get configmaps -n <namespace> --no-headers | awk '{print $1}' | grep -v kube-root-ca.crt
```
Then check which ConfigMaps are actually mounted in running pods:
```bash
# ConfigMaps mounted as volumes:
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $pod
  | .spec.volumes[]?
  | select(.configMap != null)
  | [$pod, .configMap.name]
  | @tsv
' 2>/dev/null

# Also check env references:
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $pod
  | .spec.containers[].env[]?
  | select(.valueFrom.configMapKeyRef != null)
  | [$pod, .valueFrom.configMapKeyRef.name]
  | @tsv
' 2>/dev/null
```
Any ConfigMap name that does not appear in either output is a candidate for deletion.
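That comparison can also be scripted. A sketch with sample names standing in for the kubectl outputs above (`uniq -u` keeps only lines that appear exactly once):

```bash
# ConfigMaps defined but never referenced, via a sort/uniq set difference.
# Sample names stand in for the two kubectl outputs above.
printf 'app-config\nfeature-flags\nlegacy-config\n' > /tmp/all-cms.txt   # all ConfigMaps
printf 'app-config\nfeature-flags\n' > /tmp/used-cms.txt                 # referenced by pods
# Listing the "used" file twice means every referenced name appears 2-3 times;
# only never-referenced names survive uniq -u.
UNREFERENCED=$(sort /tmp/all-cms.txt /tmp/used-cms.txt /tmp/used-cms.txt | uniq -u)
echo "$UNREFERENCED"
```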
Secrets not referenced by any pod or ServiceAccount:
```bash
# List Secrets referenced by running pods
kubectl get pods -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $pod
  | (.spec.volumes[]? | select(.secret != null) | [$pod, .secret.secretName]),
    (.spec.containers[].env[]? | select(.valueFrom.secretKeyRef != null)
     | [$pod, .valueFrom.secretKeyRef.name])
  | @tsv
' 2>/dev/null

# List Secrets referenced by ServiceAccounts
kubectl get serviceaccounts -n <namespace> -o json | jq -r '
  .items[]
  | .metadata.name as $sa
  | .secrets[]?
  | [$sa, .name]
  | @tsv
' 2>/dev/null
```
Cross-reference both outputs against kubectl get secrets -n <namespace>. Any Secret of type Opaque or kubernetes.io/tls that is not referenced is worth reviewing for deletion.
For clusters where Secrets security matters — and it should — see our post on orphaned resources and Kubernetes security.
Step 7: Automate the Audit
Running this audit manually across 5 namespaces on a single cluster is a reasonable afternoon's work. Running it across 15 namespaces on 4 clusters is a multi-day project. Running it continuously, every week, across every cluster in your organization — that is not something you do manually.
There are a few approaches to automating namespace-level cost audits:
Option 1: Cron job with kubectl outputs to Slack or a file. Script the commands from Steps 1–6 into a shell script, run it on a schedule in-cluster, and pipe the results somewhere your team will actually read them. This works for small teams with one or two clusters and tolerates a certain amount of false positives.
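A minimal sketch of Option 1. The function name and report format are illustrative, and it assumes kubectl is already configured for the target cluster; Slack delivery (not shown) would follow the report write:

```bash
#!/usr/bin/env bash
# Sketch of a scheduled audit script: collect per-namespace findings
# into a report that can be written to a file or posted to Slack.
audit_report() {
  for ns in "$@"; do
    echo "== $ns =="
    # Released PVCs: paying for storage no pod uses ($2 is the STATUS column)
    kubectl get pvc -n "$ns" --no-headers 2>/dev/null \
      | awk '$2 == "Released" {print "orphaned PVC: " $1}'
    # LoadBalancer Services: a cloud LB bills whether or not it has backends
    kubectl get svc -n "$ns" --field-selector spec.type=LoadBalancer --no-headers 2>/dev/null \
      | awk '{print "LoadBalancer: " $1}'
  done
}
# Example invocation (uncomment in a real cron job):
# audit_report production staging > "/tmp/audit-$(date +%F).txt"
```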
Option 2: Prometheus + Grafana with kube-state-metrics. kube-state-metrics exposes resource metadata as Prometheus metrics, including PVC status, pod owner references, and container resource requests. You can build dashboards that show per-namespace resource requests vs. usage. This requires a monitoring stack and someone to build and maintain the dashboards.
Option 3: KorPro. KorPro runs the equivalent of this audit automatically across all your namespaces in all your clusters, on a continuous schedule. It surfaces orphaned PVCs, idle workloads, over-provisioned pods, stale Secrets, and unused ConfigMaps — with cost estimates attached to each finding. It does not require cloud provider credentials because it inspects the cluster state directly through the Kubernetes API, the same way kubectl does. You can learn more about why that model is more secure in our post on the no-cloud-credentials inspector approach.
Running the Full Audit: Quick Reference
Here are all the essential commands from this guide in one place:
```bash
# 1. Namespace inventory
kubectl get namespaces

# 2. Full resource inventory in a namespace
kubectl get all -n <namespace>
kubectl get pods,svc,configmaps,secrets,pvc -n <namespace>

# 3. Pod count per namespace (find idle/unused)
kubectl get pods -A --no-headers | awk '{print $1}' | sort | uniq -c | sort -rn

# 3b. Pod states in a specific namespace ($3 is the STATUS column)
kubectl get pods -n <namespace> --no-headers | awk '{print $3}' | sort | uniq -c

# 3c. LoadBalancer services in a namespace
kubectl get svc -n <namespace> --field-selector spec.type=LoadBalancer

# 4. CPU and memory usage
kubectl top pods -n <namespace> --sort-by=memory

# 5. PVC status
kubectl get pvc -n <namespace>

# 6. Secrets and ConfigMaps (then cross-reference with pod mounts)
kubectl get configmaps -n <namespace>
kubectl get secrets -n <namespace>
```
What to Do With Your Findings
After running this audit, you will typically find:
- 1–3 namespaces that are fully idle and can be deleted entirely (after confirming no active dependencies).
- 5–20 orphaned PVCs across all namespaces, representing $50–$500/month in unnecessary storage spend.
- 2–10 over-provisioned workloads where CPU or memory requests are 3–10x actual usage.
- Dozens of ConfigMaps and Secrets that are no longer referenced by any workload.
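To size the opportunity before you start remediating, total the per-category estimates. A back-of-envelope sketch with purely illustrative figures:

```bash
# Illustrative monthly waste total from typical audit findings.
# Every figure here is an assumption; substitute your own audit numbers.
ORPHANED_PVC_COST=120    # e.g. 12 orphaned PVCs averaging $10/month each
IDLE_LB_COST=34          # e.g. 2 idle LoadBalancers at ~$17/month each
IDLE_WORKLOAD_COST=90    # e.g. node capacity held by abandoned workloads
TOTAL=$((ORPHANED_PVC_COST + IDLE_LB_COST + IDLE_WORKLOAD_COST))
echo "Estimated recoverable spend: \$${TOTAL}/month"
```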
The remediation order that minimizes risk: start with clearly unused namespaces (zero pods, zero PVCs), then orphaned PVCs in active namespaces, then right-sizing over-provisioned workloads, then stale configs and secrets.
For the broader picture of what orphaned resources cost and how to remove them safely, see our guide to finding orphaned Kubernetes resources that are costing you money.
Skip the Manual Work
The audit in this guide takes 30–60 minutes per cluster when you run it manually. It surfaces real waste the first time. But namespaces are created and abandoned continuously; a one-time audit goes stale within weeks.
KorPro runs this entire audit automatically, across all namespaces in all your clusters, on a continuous schedule. It identifies orphaned PVCs, idle namespaces, over-provisioned workloads, stale Secrets, and unused ConfigMaps — with dollar estimates on every finding.
It works through the Kubernetes API. No cloud provider credentials required.
Run a free KorPro scan at app.korpro.io — results in minutes, not an afternoon.
Ready to Clean Up Your Clusters?
KorPro automatically detects unused resources, orphaned secrets, and wasted spend across all your Kubernetes clusters. Start optimizing in minutes.
Related Articles
Kubernetes in Production: Real Use Cases and Their Hidden Cost Implications
Every Kubernetes use case generates waste differently. This post maps the specific orphan patterns — GPUs left running, per-tenant namespace debt, ephemeral CI namespaces that never get cleaned up — so you know what to look for in your own clusters.
Kubernetes Glossary: 28 Essential Terms Explained [2026]
Complete Kubernetes glossary — 28 essential terms from Pod to PersistentVolume, defined clearly for developers and platform engineers.
Kubernetes Cost Recovery: How to Find and Reclaim Wasted Cloud Spend
Most Kubernetes teams are overspending by 20–40% and don't know it. This guide shows you exactly where the waste hides, how to quantify it, and how to recover it — with real kubectl commands and real cost estimates.
Written by
KorPro Team