Liveness and Readiness Probes
Health checks that tell Kubernetes when to restart a container (liveness) or remove it from load balancing (readiness).
What Are Liveness and Readiness Probes?
Kubernetes probes are periodic health checks run by the kubelet against each container. The liveness probe determines whether a container is alive: if it fails consecutively (failureThreshold times), the kubelet kills the container and restarts it according to the Pod's restartPolicy. Liveness probes detect deadlocks and infinite loops — scenarios where the process is still running but no longer doing useful work.
The readiness probe determines whether a container is ready to serve traffic: if it fails, the Pod's IP is removed from all matching Service endpoints, stopping new requests from being routed to it until the probe recovers. This enables graceful handling of startup time, dependency availability, and temporary overload — the Pod stays alive but is temporarily taken out of rotation. A third probe type, the startup probe, delays liveness and readiness checks until initial startup completes, preventing premature restarts of slow-starting applications.
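A startup probe is typically paired with a liveness probe so that a slow boot is not mistaken for a deadlock. A minimal sketch (the /healthz path and port 8080 are illustrative assumptions, not values from any particular application):

```yaml
# While the startup probe has not yet succeeded, liveness and
# readiness checks are suspended. With failureThreshold: 30 and
# periodSeconds: 10, the container gets up to ~300s to start.
startupProbe:
  httpGet:
    path: /healthz   # illustrative endpoint
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```

Once the startup probe succeeds for the first time, it never runs again; the liveness probe takes over with its own, tighter schedule.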
Probes can use three mechanisms: httpGet (HTTP GET to a path and port), tcpSocket (TCP connection to a port), or exec (run a command inside the container). Configuration parameters — initialDelaySeconds, periodSeconds, timeoutSeconds, successThreshold, failureThreshold — must be carefully tuned to the actual startup and response time characteristics of the application to avoid false positives that cause unnecessary restarts.
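The three mechanisms can be sketched side by side; container names, ports, paths, and the sentinel file below are illustrative assumptions:

```yaml
containers:
- name: web              # httpGet: kubelet sends an HTTP GET; 2xx/3xx = success
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
- name: db               # tcpSocket: success if the TCP connection opens
  livenessProbe:
    tcpSocket:
      port: 5432
- name: worker           # exec: success if the command exits with status 0
  livenessProbe:
    exec:
      command: ["cat", "/tmp/healthy"]
```

httpGet suits services that already expose a health endpoint; tcpSocket is a lighter check for protocols without HTTP; exec is the fallback for processes that can only be inspected from inside the container.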
Example
containers:
- name: api
  image: my-org/api:v3
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
    failureThreshold: 3
Cost & Waste Implications
Missing readiness probes cause traffic to be sent to Pods before they are ready, increasing error rates during deployments and requiring additional replicas as buffer capacity. Liveness probes with overly aggressive thresholds cause CrashLoopBackOff spirals that waste compute on unnecessary container restarts and can cascade into node pressure if many Pods crash-loop simultaneously.
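One way to reduce restart churn is a deliberately tolerant liveness configuration: a failure window long enough to outlast a brief stall or GC pause. The numbers below are illustrative starting points under that assumption, not recommendations for any specific workload:

```yaml
livenessProbe:
  httpGet:
    path: /healthz    # illustrative endpoint
    port: 8080
  periodSeconds: 20
  timeoutSeconds: 5
  failureThreshold: 3  # ~60s of consecutive failures before a restart
```

A transient slowdown then surfaces as readiness-probe failures (Pod pulled from rotation) rather than a restart, which is usually the cheaper failure mode.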
How KorPro Helps
KorPro flags Deployments and StatefulSets with missing or misconfigured probes as part of its configuration health analysis, correlating probe absence with high restart counts in the cluster.
Related Terms
Pod (Core Concepts): The smallest deployable unit in Kubernetes — one or more containers that share a network namespace and storage volumes.
Deployment (Workloads): A controller that manages a ReplicaSet to keep a specified number of identical Pod replicas running and handles rolling updates.
Service (Networking): A stable network endpoint that load-balances traffic to a dynamic set of Pods selected by label.
Resource Requests and Limits (Configuration): Per-container declarations of guaranteed CPU/memory (requests) and hard maximums (limits) that drive scheduling and enforcement.