How to Find and Remove Orphaned ConfigMaps in Kubernetes (2025 Guide)
A step-by-step guide to identifying unused ConfigMaps in your Kubernetes clusters — manually with kubectl, with open-source Kor, and at scale with KorPro.
ConfigMaps are one of the most commonly orphaned resources in Kubernetes. Every Helm install, every deployment pipeline, and every developer experiment leaves ConfigMaps behind. Over time, they pile up — cluttering your cluster, obscuring what's actually in use, and in some cases referencing outdated or sensitive configuration you'd rather not have lying around.
This guide walks through three approaches to finding and removing orphaned ConfigMaps: manual kubectl commands, the open-source Kor CLI, and automated scanning with KorPro.
Why ConfigMaps Become Orphaned
ConfigMaps become orphaned through several common patterns:
- Helm upgrades that change ConfigMap names (e.g., appending a hash) without cleaning up old versions
- Deleted Deployments where the ConfigMap was created separately and not removed with the workload
- Failed or abandoned CI/CD runs that created ConfigMaps for test environments
- Manual `kubectl create configmap` during debugging that was never cleaned up
- Renamed or refactored apps that started using a new ConfigMap but left the old one
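The Helm pattern in particular leaves a recognizable trail: several ConfigMaps that differ only by a trailing hash. One quick way to surface such trails is to strip hash-like suffixes and count duplicates. The sketch below runs against made-up sample names so it is self-contained; on a real cluster the list would come from `kubectl get configmaps -o name`:

```bash
# Sample ConfigMap names (stand-in for: kubectl get configmaps -o name | sed 's|configmap/||')
printf '%s\n' \
  my-app-config-5f8d9 \
  my-app-config-7c2b1 \
  my-app-config-9a4e3 \
  other-config > /tmp/cm-names.txt

# Strip a trailing hash-like suffix, then count how many versions share each base name;
# a high count is a likely Helm upgrade trail
sed -E 's/-[0-9a-f]{5,10}$//' /tmp/cm-names.txt | sort | uniq -c | sort -rn
```

Anything with a count above one is worth a closer look, since only the newest version is typically referenced.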
Method 1: Manual Detection with kubectl
Step 1: List All ConfigMaps
```bash
kubectl get configmaps --all-namespaces -o custom-columns=\
NAMESPACE:.metadata.namespace,\
NAME:.metadata.name,\
AGE:.metadata.creationTimestamp
```
Exclude system ConfigMaps to reduce noise:
```bash
kubectl get configmaps --all-namespaces --field-selector metadata.namespace!=kube-system \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name
```
Step 2: Find All ConfigMap References
ConfigMaps can be referenced in several places within a Pod spec:
Volume mounts:
```bash
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[].spec.volumes[]? | select(.configMap) | .configMap.name' | \
  sort -u
```
Environment variables (valueFrom):
```bash
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[].spec.containers[].env[]? | select(.valueFrom.configMapKeyRef) | .valueFrom.configMapKeyRef.name' | \
  sort -u
```
envFrom:
```bash
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[].spec.containers[].envFrom[]? | select(.configMapRef) | .configMapRef.name' | \
  sort -u
```
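For orientation, the three queries above correspond to these fields in a Pod manifest (an illustrative fragment; the names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
    - name: config-volume
      configMap:
        name: app-config          # found by the volume-mount query
  containers:
    - name: app
      image: nginx
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config    # found by the valueFrom query
              key: log-level
      envFrom:
        - configMapRef:
            name: app-env         # found by the envFrom query
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
```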
Step 3: Compare the Lists
```bash
# Get all ConfigMap names (excluding kube-system)
kubectl get configmaps --all-namespaces \
  --field-selector metadata.namespace!=kube-system \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' | \
  sort > /tmp/all-configmaps.txt

# Get all referenced ConfigMaps from running pods
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | . as $pod |
    ((.spec.volumes[]? | select(.configMap) | "\($pod.metadata.namespace)/\(.configMap.name)"),
     (.spec.containers[].env[]? | select(.valueFrom.configMapKeyRef) | "\($pod.metadata.namespace)/\(.valueFrom.configMapKeyRef.name)"),
     (.spec.containers[].envFrom[]? | select(.configMapRef) | "\($pod.metadata.namespace)/\(.configMapRef.name)"))' | \
  sort -u > /tmp/referenced-configmaps.txt

# Find orphans
comm -23 /tmp/all-configmaps.txt /tmp/referenced-configmaps.txt
```
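The `comm -23` step prints lines unique to the first file, i.e. ConfigMaps that exist but are never referenced. A tiny self-contained demonstration of that behavior, using made-up names:

```bash
# comm expects sorted input; -2 hides lines only in the second file, -3 hides common lines
printf 'app/cm-a\napp/cm-b\nweb/cm-c\n' > /tmp/all.txt   # every ConfigMap
printf 'app/cm-a\nweb/cm-c\n' > /tmp/used.txt            # referenced ConfigMaps
comm -23 /tmp/all.txt /tmp/used.txt                      # → app/cm-b (the orphan)
```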
Limitations of the Manual Approach
This method has significant blind spots:
- Only checks running Pods — misses references from Deployments, StatefulSets, CronJobs, and DaemonSets that aren't currently running
- Doesn't check CRDs — custom resources may reference ConfigMaps in non-standard fields
- No dependency chain analysis — a ConfigMap referenced by a dead Deployment looks "used"
- Namespace-by-namespace — tedious across many clusters
- No cost context — you know it's orphaned but not whether it matters
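The first blind spot can be narrowed by extracting references from workload specs as well, since controllers keep their Pod template under `.spec.template` even when scaled to zero. A sketch of the adapted jq filter, run here against an inline sample Deployment so it is self-contained; on a real cluster the JSON would come from `kubectl get deployments --all-namespaces -o json`:

```bash
# Sample Deployment list (stand-in for: kubectl get deployments --all-namespaces -o json)
cat > /tmp/deploys.json <<'EOF'
{"items":[{"metadata":{"namespace":"prod","name":"web"},
  "spec":{"template":{"spec":{
    "volumes":[{"name":"cfg","configMap":{"name":"web-config"}}],
    "containers":[{"name":"app","envFrom":[{"configMapRef":{"name":"web-env"}}]}]}}}}]}
EOF

# Same three reference checks as the Pod version, one level deeper (.spec.template.spec)
jq -r '.items[] | . as $d |
  ((.spec.template.spec.volumes[]? | select(.configMap)
      | "\($d.metadata.namespace)/\(.configMap.name)"),
   (.spec.template.spec.containers[].env[]? | select(.valueFrom.configMapKeyRef)
      | "\($d.metadata.namespace)/\(.valueFrom.configMapKeyRef.name)"),
   (.spec.template.spec.containers[].envFrom[]? | select(.configMapRef)
      | "\($d.metadata.namespace)/\(.configMapRef.name)"))' /tmp/deploys.json | sort -u
```

The same filter works for StatefulSets and DaemonSets; CronJobs nest one level further (`.spec.jobTemplate.spec.template.spec`), so they need their own pass. This still leaves the other blind spots (CRDs, dead parents) untouched.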
Method 2: Automated Detection with Kor (Open Source)
Kor is an open-source CLI tool that automates orphan detection for ConfigMaps and many other resource types.
Installation
```bash
# Using Go
go install github.com/yonahd/kor@latest

# Using Homebrew
brew install kor

# Using Docker
docker run --rm -v ~/.kube/config:/root/.kube/config ghcr.io/yonahd/kor:latest
```
Scan for Orphaned ConfigMaps
```bash
# Scan all namespaces
kor configmap --all

# Scan a specific namespace
kor configmap -n production

# Output as JSON
kor configmap --all -o json
```
Scan All Resource Types
```bash
kor all --all
```
Kor checks references across Deployments, StatefulSets, DaemonSets, Jobs, CronJobs, and Pods — significantly more thorough than manual kubectl scripting.
Limitations of Kor
- Single cluster — you need to run it separately for each cluster
- Direct orphans only — no cascading/transitive orphan detection
- No cost estimates — flags orphans but doesn't tell you what they cost
- No history — each run is independent, no trend tracking
- CLI only — no web UI or API for team collaboration
Method 3: Automated Multi-Cluster Detection with KorPro
KorPro extends Kor with enterprise capabilities for teams managing multiple clusters.
How It Works
- Install the KorPro Inspector into each cluster via Helm (30 seconds per cluster)
- The Inspector runs as a CronJob, scanning every 6 hours by default
- Results are sent to the KorPro dashboard with cost estimates and health scores
- Review findings in Audit Mode before taking any action
What KorPro Adds Over Kor
- Cascading orphan detection — finds ConfigMaps referenced only by dead workloads
- Cost analysis — monthly and yearly cost estimates per resource
- Multi-cloud dashboard — single view across GKE, EKS, and AKS clusters
- Health scores — cluster-level efficiency and security metrics
- Scan history — track trends over time, see if waste is growing or shrinking
- REST API — integrate findings into CI/CD pipelines and Slack alerts
- Safe-to-prune checklist — exportable list for team review before cleanup
Example: Cascading ConfigMap Detection
A standard scanner sees this ConfigMap as "in use":
ConfigMap "feature-flags-v2"
└── referenced by Deployment "recommendation-engine" (0 replicas since January)
KorPro flags it as a transitive orphan with context:
⚠ ConfigMap "feature-flags-v2" (namespace: production)
Status: Transitive orphan
Parent: Deployment "recommendation-engine" (0 replicas, last active 2025-01-03)
Other references: None
Risk: Low (no sensitive data detected)
Action: Safe to remove after parent Deployment
Safe Deletion
Once you've identified orphaned ConfigMaps, delete them carefully:
Single ConfigMap
```bash
kubectl delete configmap <name> -n <namespace>
```
Bulk Deletion (from Kor output)
```bash
# Dry run first — review the list
kor configmap --all -o json | jq -r '.[] | .resources[]'

# Delete with confirmation
kor configmap --all --delete
```
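If you instead have a plain namespace/name list (for example, the `comm` output from Method 1 redirected to a file), a small loop can turn it into reviewable delete commands. This sketch only prints the commands, with made-up names; swap `echo` for the real command, or pipe the output to `sh`, once the list is vetted:

```bash
# Sample "namespace/name" list, one candidate per line
printf 'production/feature-flags-v2\nstaging/old-test-cm\n' > /tmp/orphans.txt

# Print one delete command per line for review; --dry-run=client keeps even
# an accidental execution from touching the cluster
while IFS=/ read -r ns name; do
  echo "kubectl delete configmap $name -n $ns --dry-run=client"
done < /tmp/orphans.txt
```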
Best Practice: Label Before Deleting
If you're unsure, label ConfigMaps as candidates before deleting:
```bash
kubectl label configmap <name> -n <namespace> cleanup-candidate=true

# Later, delete all candidates
kubectl delete configmaps -l cleanup-candidate=true -n <namespace>
```
Prevention: Stop Creating Orphans
- Use Helm's `--cleanup-on-fail` flag — deletes newly created resources when an upgrade or rollback fails
- Set owner references — let Kubernetes garbage collection handle child resources when parents are deleted
- Namespace-per-feature-branch — delete the entire namespace when the branch is merged
- Label everything — add `app`, `team`, and `managed-by` labels so orphans can be traced back to their origin
- Schedule regular scans — run Kor or KorPro on a schedule to catch orphans before they accumulate
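On a ConfigMap, the owner-reference approach looks roughly like this (an illustrative fragment; the `uid` shown is a placeholder and must be the live parent Deployment's actual UID, which is why owner references are normally set by controllers or operators rather than typed by hand). When the parent is deleted, the garbage collector removes the ConfigMap automatically:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
  namespace: production          # must be the parent's namespace
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: web                  # hypothetical parent workload
      uid: 1a2b3c4d-0000-0000-0000-000000000000   # placeholder; use the parent's real UID
      blockOwnerDeletion: true
data:
  LOG_LEVEL: info
```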
Conclusion
Orphaned ConfigMaps are the canary in the coal mine for Kubernetes resource waste. If you have orphaned ConfigMaps, you almost certainly have orphaned Secrets, Services, PVCs, and more. Start with ConfigMaps as your first cleanup pass, then expand to a full resource audit.
Scan Your Clusters in Minutes
How many orphaned ConfigMaps are in your clusters right now? Create your free KorPro account to scan across all your Kubernetes clusters and get a full orphan report — including cascading orphans, cost estimates, and a safe-to-prune checklist. Prefer to start with the CLI? Try the open-source Kor tool first.
Ready to Clean Up Your Clusters?
KorPro automatically detects unused resources, orphaned secrets, and wasted spend across all your Kubernetes clusters. Start optimizing in minutes.
Related Articles
Extended Kubernetes Support: How Kor Pro Helps Teams Reduce Risk, Optimize Cost, and Modernize Safely
Extended Kubernetes support helps teams manage aging clusters safely. Learn how Kor Pro improves visibility into workloads, pods, ingress, and cost to reduce risk and plan modernization.
Kor: The Open-Source Kubernetes Cleanup Tool (and How KorPro Extends It)
Kor is an open-source CLI that finds unused Kubernetes resources in your cluster. Learn how to install and use Kor, what it detects, and how KorPro extends it to multi-cloud with cost analysis.
Kubernetes End of Life and Extended Support: What Happens When Your Version Expires [2026]
Kubernetes versions lose support faster than most teams realize. Learn the release cycle, what extended support means on EKS, GKE, and AKS, and how to plan upgrades before your cluster becomes a liability.
Written by
KorPro Team