Operations

Node and Pod Affinity

Scheduling rules that attract Pods to specific nodes (node affinity) or co-locate/separate Pods from each other (pod affinity/anti-affinity).

What is Node and Pod Affinity?

Affinity rules are Kubernetes scheduling constraints that go beyond simple nodeSelector label matching. Node affinity (spec.affinity.nodeAffinity) expresses preferences or requirements for which nodes a Pod can be scheduled on based on node labels. Required rules (requiredDuringSchedulingIgnoredDuringExecution) are hard constraints — the Pod will not be scheduled if no node matches. Preferred rules (preferredDuringSchedulingIgnoredDuringExecution) are soft constraints — the scheduler tries to satisfy them but will place the Pod elsewhere if necessary.
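As a sketch of the soft form, a preferred rule carries a weight (1-100) that biases the scheduler's node scoring without making the match mandatory; the disktype: ssd label here is an illustrative example, not a standard Kubernetes label:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80               # 1-100; higher weights count more in node scoring
        preference:
          matchExpressions:
          - key: disktype        # illustrative node label, set by the cluster operator
            operator: In
            values: ["ssd"]

If no node carries the label, the Pod still schedules — the rule only influences ranking among feasible nodes.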

Pod affinity (spec.affinity.podAffinity) co-locates a Pod with other Pods sharing specific labels — useful for latency-sensitive components that benefit from being on the same node or in the same availability zone. Pod anti-affinity (spec.affinity.podAntiAffinity) spreads Pods away from each other; it is the standard way to ensure that multiple replicas of a Deployment land on different nodes (and thus survive a node failure) or in different availability zones.
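A co-location rule might look like the following sketch, which prefers to place the Pod in the same zone as Pods labeled app: cache (an assumed label for this illustration). The topologyKey decides the granularity: the standard zone label means "same zone", while kubernetes.io/hostname would mean "same node":

spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: cache         # assumed label of the Pods to co-locate with
          topologyKey: topology.kubernetes.io/zone   # same zone, not necessarily same node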

TopologySpreadConstraints (a more modern alternative to pod anti-affinity for spreading) let you specify how evenly Pods should be spread across topology domains (nodes, zones, regions) with a maxSkew parameter defining the maximum imbalance allowed. This is the recommended approach for zone-balanced deployments in multi-zone clusters.
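A zone-balanced spread for a Deployment labeled app: web-api (matching the example below) could be sketched like this; maxSkew: 1 means no zone may run more than one replica above the least-loaded zone:

spec:
  topologySpreadConstraints:
  - maxSkew: 1                         # zones may differ by at most one replica
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # hard constraint; ScheduleAnyway makes it a soft preference
    labelSelector:
      matchLabels:
        app: web-api

Unlike required anti-affinity, a spread constraint with whenUnsatisfiable: ScheduleAnyway degrades gracefully when zones fill up instead of leaving Pods Pending.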

Example

spec:
  affinity:
    # Require scheduling on nodes in us-central1-a or us-central1-b
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-central1-a", "us-central1-b"]
    # Spread replicas across different nodes
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web-api
        topologyKey: kubernetes.io/hostname

Cost & Waste Implications

Required affinity rules that target specific node labels can leave Pods stuck in Pending when no matching node is available, even while capacity sits idle on non-matching nodes. Overly strict required anti-affinity can block the HorizontalPodAutoscaler from scaling out: once every eligible node already runs a replica, new Pods cannot schedule, even under high load. And Pods spread across zones pay for the traffic between them — cross-zone data transfer typically costs around $0.01/GB on the major cloud providers.
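One mitigation for blocked scale-out is to soften the rule: rewriting the required anti-affinity from the example above as a preferred rule still spreads replicas when capacity allows, but lets the scheduler double up on a node rather than leave Pods Pending:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web-api
          topologyKey: kubernetes.io/hostname   # prefer different nodes, but don't block scheduling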


How KorPro Helps

KorPro identifies Pods stuck in Pending state due to unsatisfiable affinity rules and nodes sitting idle because their labels or taints don't match any pending Pod's affinity requirements.

