Taints and Tolerations
Node taints repel Pods from scheduling on a node; tolerations in a Pod spec allow it to be scheduled on tainted nodes.
What Are Taints and Tolerations?
Taints and tolerations are Kubernetes' mechanism for repelling Pods from nodes unless explicitly permitted. A taint is applied to a node (kubectl taint nodes my-node key=value:NoSchedule) and carries one of three effects: NoSchedule (Pods without a matching toleration will not be scheduled on the node), PreferNoSchedule (the scheduler avoids placing Pods there but will if necessary), and NoExecute (in addition to blocking scheduling, Pods without a matching toleration are evicted if they are already running on the node). A toleration is declared in a Pod spec and states that the Pod can tolerate a specific taint, allowing (but not forcing) the scheduler to place it on the tainted node.
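The three effects can be seen side by side in the taint command itself. The node names and the team=ml key below are purely illustrative:

```shell
# Hypothetical node names; each command applies one of the three effects.
kubectl taint nodes node-a team=ml:NoSchedule        # block new non-tolerating Pods
kubectl taint nodes node-b team=ml:PreferNoSchedule  # soft repulsion only
kubectl taint nodes node-c team=ml:NoExecute         # also evict running non-tolerating Pods

# Remove a taint by appending a trailing dash:
kubectl taint nodes node-a team=ml:NoSchedule-
```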
Taints and tolerations are commonly used for workload isolation: GPU nodes are tainted with nvidia.com/gpu:NoSchedule so only Pods specifically requesting GPU resources (with matching tolerations) run on them. Spot/preemptible nodes are often tainted so only fault-tolerant workloads tolerate them. Dedicated infrastructure nodes (running monitoring, logging, or ingress) are tainted to prevent application Pods from consuming their resources.
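As a sketch of the spot-node pattern, a fault-tolerant batch Pod might carry a toleration like the one below. The key spot and value true are assumptions for illustration, not a standard cloud-provider taint:

```yaml
# Assumes nodes were tainted with: kubectl taint nodes <node> spot=true:NoSchedule
spec:
  tolerations:
  - key: "spot"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```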
Taints and tolerations work together with node affinity for precise workload placement. A toleration only allows a Pod to be scheduled on a tainted node — it doesn't guarantee it will be. Node affinity expressions (requiredDuringSchedulingIgnoredDuringExecution) combined with tolerations both permit and direct scheduling to specific node types.
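A minimal sketch of pairing the two: the toleration permits scheduling onto the tainted nodes, while the required node affinity directs the Pod only to them. The taint key dedicated=gpu and the matching node label dedicated: gpu are illustrative:

```yaml
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values: ["gpu"]
```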
Example
# Taint a node to repel all non-tolerating pods
kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule

# A pod spec with the matching toleration
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  nodeSelector:
    dedicated: gpu
  containers:
  - name: ml-training
    image: my-org/ml-trainer:v1
    resources:
      limits:
        nvidia.com/gpu: 1
Cost & Waste Implications
Incorrectly configured taints can leave expensive nodes (particularly GPU nodes at $2–$30/hour) unschedulable by any Pod, running idle while billing at full rate. A taint applied to a node group with a typo in the key will block all workloads from that group without error, requiring manual investigation to diagnose the silent waste.
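A quick way to surface this kind of misconfiguration is to list each node's taints alongside any Pods stuck in Pending:

```shell
# Show every node with its taint keys (an empty column means untainted)
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# List Pods that cannot be scheduled anywhere
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Inspect why a specific Pod is Pending (look for taint-related scheduling events)
kubectl describe pod <pending-pod>
```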
How KorPro Helps
KorPro detects nodes that are Ready but have zero scheduled Pods due to taint mismatches, flagging the idle node cost and the taint configuration likely causing the scheduling blockage.
Related Terms
Node
Core Concepts: A physical or virtual machine in a Kubernetes cluster that runs Pods under the direction of the control plane.
Node and Pod Affinity
Operations: Scheduling rules that attract Pods to specific nodes (node affinity) or co-locate/separate Pods from each other (pod affinity/anti-affinity).
Cluster Autoscaler
Scaling: A component that automatically adds nodes when Pods are unschedulable and removes nodes when they are underutilized.
Karpenter
Scaling: An open-source Kubernetes node provisioner that launches the optimal nodes for pending Pods in seconds, without pre-configured node groups.