Kubernetes Fundamentals

What is a Pod in Kubernetes? Complete Guide [With Examples]

Kubernetes Pods explained: how they work, single vs multi-container Pods, the Pod lifecycle, and practical examples you can run in your own cluster.

KorPro Team
February 9, 2026
9 min read
Kubernetes · Pods · Containers · Architecture · Beginners

A Pod in Kubernetes is the smallest deployable unit: a wrapper around one or more containers that share the same network namespace (and therefore IP address), storage volumes, and lifecycle. Kubernetes schedules, scales, and manages Pods, not individual containers. Every workload you run on Kubernetes runs inside a Pod.

What is a Pod

A Pod is the smallest and most basic deployable unit in Kubernetes. It is not a container itself. It is a wrapper around one or more containers that share the same network identity, storage volumes, and lifecycle. When Kubernetes schedules, scales, or restarts a workload, it operates on Pods, not on individual containers.

Think of a Pod as a logical host. The containers inside a Pod share a localhost network interface, which means they can communicate with each other over 127.0.0.1 without any special configuration. They also share the same IP address as seen by the rest of the cluster. Any storage volumes attached to the Pod are accessible to every container within it.

This design exists because some workloads are made up of tightly coupled processes that need to run together on the same machine and share resources directly. Kubernetes groups those processes into a single schedulable unit rather than forcing you to manage each container independently.

Every Pod gets a unique IP address within the cluster. Other Pods and Services communicate with it using that address. When a Pod is deleted and recreated, it gets a new IP, which is why Kubernetes Services exist to provide stable endpoints.
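To make this concrete, here is a minimal single-container Pod manifest. It is a sketch: the name and labels are illustrative, and the nginx image is just a convenient public example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.27    # any container image works here
    ports:
    - containerPort: 80
```

After applying it with kubectl apply -f, running kubectl get pod hello-pod -o wide shows the node the Pod was scheduled onto and the cluster IP it was assigned.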

Single-Container Pods

The most common pattern is a Pod that runs exactly one container. This is the standard model for the vast majority of Kubernetes workloads. When people say they are deploying a container to Kubernetes, they are actually deploying a Pod that contains a single container.

In this pattern, the Pod is essentially a thin wrapper. It provides the scheduling, networking, and lifecycle management that Kubernetes needs, while the container inside it runs your application. A web server, an API backend, a worker process, or a database instance would each run as a single container inside its own Pod.

You manage these Pods through higher-level objects like Deployments, StatefulSets, or Jobs. Those controllers create and manage Pods on your behalf, handling replication, updates, and failure recovery. You rarely create single-container Pods directly in production. Instead, you define a Deployment that specifies the container image, resource requirements, and desired replica count, and Kubernetes creates the Pods for you.
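A typical Deployment wrapping that single-container pattern might look like the following sketch. The names and resource values are illustrative; the key detail is the Pod template nested under spec.template, which the controller uses to stamp out replicas.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:              # the Pod template the Deployment creates Pods from
    metadata:
      labels:
        app: web         # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```

If a node fails or a Pod is deleted, the Deployment notices the replica count has dropped below three and creates a replacement from this template.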

Multi-Container Pods

Sometimes a workload requires two or more processes that are tightly coupled and must run together. Kubernetes handles this with multi-container Pods. The containers inside a multi-container Pod are co-scheduled on the same node, share the same network namespace and IP address, and can access the same storage volumes.

There are several well-established patterns for multi-container Pods.

Sidecar Pattern

A sidecar container runs alongside the main application container and provides a supporting function. Common examples include a logging agent that collects and forwards logs from the main container, a proxy container that handles TLS termination or service mesh communication, or a configuration watcher that reloads settings when a ConfigMap changes.

The sidecar does not serve traffic on its own. It exists to enhance or extend the main container without modifying its code.
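A log-forwarding sidecar can be sketched like this. The application image name is hypothetical; the two containers share an emptyDir volume so the sidecar can read the files the main container writes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  volumes:
  - name: logs
    emptyDir: {}             # shared scratch volume, lives as long as the Pod
  containers:
  - name: app
    image: my-app:1.0        # hypothetical application image writing to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder      # sidecar: reads the same volume and ships logs onward
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

The application is unchanged; only the Pod spec knows the sidecar exists.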

Init Container Pattern

Init containers run before the main application containers start. They execute sequentially, and each must complete successfully before the next one begins. Only after all init containers finish does Kubernetes start the main containers.

Init containers are used for setup tasks like waiting for a database to become available, populating a shared volume with configuration files, or running database migrations. They ensure that preconditions are met before the application starts.
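A wait-for-database init container can be sketched as follows. The service name db, its port, and the application image are assumptions for illustration; the init container simply blocks until a TCP connection succeeds.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db        # must exit successfully before the app starts
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z db 5432; do echo waiting; sleep 2; done"]
  containers:
  - name: app
    image: my-app:1.0        # hypothetical application image
```

While the init container is running, kubectl get pods shows a status like Init:0/1, which makes this phase easy to spot during troubleshooting.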

Ambassador Pattern

An ambassador container acts as a proxy between the main container and external services. The main container connects to localhost, and the ambassador handles the complexity of connecting to the actual external endpoint, managing retries, connection pooling, or protocol translation.

This pattern keeps the main application container simple and unaware of the network topology around it.
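An ambassador Pod might be sketched like this. Both images here are hypothetical placeholders; the point is that the application talks only to 127.0.0.1 and the ambassador owns the external connection details.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: app
    image: my-app:1.0            # hypothetical app; connects to localhost only
    env:
    - name: REDIS_HOST
      value: "127.0.0.1"         # the ambassador, not the real endpoint
    - name: REDIS_PORT
      value: "6379"
  - name: redis-ambassador       # hypothetical proxy image handling retries,
    image: redis-proxy:1.0       # pooling, and the real external address
    ports:
    - containerPort: 6379
```

Swapping the external endpoint, adding TLS, or changing retry behavior then becomes a change to the ambassador container, not to the application.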

Pod Lifecycle

Every Pod moves through a defined set of phases from creation to termination. Understanding these phases helps you diagnose issues and design applications that behave correctly in a Kubernetes environment.

Pending

When a Pod is first created, it enters the Pending phase. During this phase, Kubernetes is working to schedule the Pod onto a node. The scheduler evaluates resource requirements, node affinity rules, taints, and tolerations to find a suitable node.

A Pod can remain in Pending for several reasons. The cluster may not have enough available CPU or memory to satisfy the Pod's resource requests. The Pod may have a node selector or affinity rule that no current node matches. A required PersistentVolume may not be available. If a Pod stays in Pending, these are the first things to check.

Once a node is assigned, Kubernetes begins pulling the container images. If the images are large or the registry is slow, this step can take time. The Pod remains in Pending until all images are pulled and the containers are ready to start.

Running

A Pod enters the Running phase once it has been bound to a node, all of its containers have been created, and at least one of them is running or starting. This does not necessarily mean the application inside is ready to serve traffic. The container process is up, but it may still be initializing, loading data, or waiting for dependencies.

This is where readiness probes become important. A readiness probe tells Kubernetes when the container is actually ready to receive requests. Until the readiness probe passes, Kubernetes will not route traffic to the Pod through a Service. A liveness probe, on the other hand, tells Kubernetes whether the container is still healthy. If a liveness probe fails, Kubernetes restarts the container.

During the Running phase, containers may restart if they crash or if a liveness probe fails. The Pod itself remains in the Running phase as long as Kubernetes is actively trying to keep its containers alive.
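Here is a sketch of both probe types on a single container. The paths, ports, and timing values are illustrative and should be tuned to the application's actual startup and response behavior.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: web
    image: nginx:1.27
    readinessProbe:             # gates Service traffic until the app answers
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:              # restarts the container if it stops answering
      httpGet:
        path: /
        port: 80
      periodSeconds: 15
      failureThreshold: 3       # three consecutive failures trigger a restart
```

A common mistake is pointing the liveness probe at an endpoint that depends on external services, which can cause restart loops when a dependency, rather than the container itself, is unhealthy.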

Succeeded

A Pod enters the Succeeded phase when all of its containers have terminated with an exit code of zero and will not be restarted. This phase is most relevant for Pods created by Jobs or CronJobs, which are designed to run a task to completion and then stop.

For long-running workloads like web servers or APIs, you will rarely see the Succeeded phase because those containers are not designed to exit.
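A minimal Job sketch shows how a Pod reaches Succeeded. The task here is a trivial placeholder command that exits with code zero.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3                # retry failed Pods up to 3 times
  template:
    spec:
      restartPolicy: Never       # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo task complete"]   # exits 0 → Succeeded
```

Once the command exits successfully, kubectl get pods shows the Pod in Completed status and the Job is marked as finished.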

Failed

A Pod enters the Failed phase when all of its containers have terminated and at least one of them failed, either by exiting with a non-zero code or by being terminated by the system, for example after exceeding its memory limit. This indicates that something went wrong. The container may have crashed due to an unhandled exception, a misconfiguration, or a missing dependency.

When a Pod fails, the controller that manages it, such as a Deployment or Job, decides what to do next. A Deployment will create a replacement Pod. A Job will retry up to a configured number of times.

Termination

When a Pod is deleted, Kubernetes sends a SIGTERM signal to every container in the Pod. This gives the application a chance to shut down gracefully, close connections, and finish in-progress work. By default, Kubernetes waits 30 seconds for the containers to exit. If they are still running after that grace period, Kubernetes sends a SIGKILL to force termination.

You can configure the grace period using the terminationGracePeriodSeconds field in the Pod spec. Applications that need more time to drain connections or complete transactions should set a longer grace period.
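The grace period is set directly on the Pod spec, as in this sketch. The image is hypothetical; the important line is terminationGracePeriodSeconds, and the application is assumed to handle SIGTERM by draining connections.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-drainer
spec:
  terminationGracePeriodSeconds: 60   # default is 30; allow a longer drain
  containers:
  - name: app
    image: my-app:1.0                 # hypothetical app that traps SIGTERM
```

Note that the grace period only helps if the application actually handles SIGTERM; a process that ignores the signal will simply be SIGKILLed 60 seconds later.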

How Pods Relate to Other Kubernetes Objects

Pods do not exist in isolation. They are created and managed by higher-level controllers that add replication, update strategies, and failure handling.

A Deployment manages a set of identical Pods and handles rolling updates and rollbacks. A StatefulSet manages Pods that need stable identities and persistent storage, such as databases. A DaemonSet ensures that a copy of a Pod runs on every node in the cluster, which is useful for logging agents and monitoring tools. A Job creates Pods that run to completion for batch processing tasks. All of these controllers operate within Namespaces, which provide logical isolation between teams and environments.

In all of these cases, you define a Pod template inside the controller's specification. The controller uses that template to create Pods as needed. You interact with the controller, and it manages the Pods for you.

Common Mistakes With Pods

When you are starting out, there are a few mistakes that are easy to make and important to avoid.

Creating Pods directly instead of using a Deployment or other controller. A standalone Pod will not be rescheduled if the node it runs on fails. Always use a controller for any workload that needs to stay running.

Putting unrelated containers in the same Pod. Multi-container Pods are for tightly coupled processes that must share resources. If two containers do not need to share a network namespace or storage volume, they should be in separate Pods. Putting them together creates unnecessary coupling and makes scaling and updating harder.

Ignoring resource requests and limits. Every container in a Pod should specify CPU and memory requests so that the scheduler can make informed placement decisions. Without them, Pods may be scheduled onto nodes that cannot support them, leading to performance problems or evictions.
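Requests and limits are set per container, as in this sketch. The values are illustrative; real values should come from observed usage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app
spec:
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:              # what the scheduler reserves on a node
        cpu: 250m            # 0.25 of a CPU core
        memory: 256Mi
      limits:                # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi        # exceeding this gets the container OOM-killed
```

Requests drive scheduling decisions, while limits are enforced while the container runs, so a Pod can be scheduled successfully and still be killed later if it exceeds its memory limit.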

Not configuring health checks. Without liveness and readiness probes, Kubernetes has no way to know if your application is actually working. It will continue sending traffic to a Pod that is running but not functioning correctly. For a hands-on guide to diagnosing Pod issues, see How to Monitor Pods in Kubernetes.

Conclusion

A Pod is the fundamental building block of every Kubernetes workload. It wraps one or more containers into a single schedulable unit with a shared network identity and storage. Most workloads use single-container Pods, while tightly coupled processes use multi-container patterns like sidecars and init containers. Understanding the Pod lifecycle, from Pending through Running to Succeeded or Failed, gives you the foundation to diagnose issues and design applications that run reliably on Kubernetes. Master Pods first, and every other Kubernetes concept will make more sense.


Optimize Your Pod Resources Automatically

Over-provisioned Pods waste money. Under-provisioned Pods crash. Get started with KorPro to get continuous visibility into Pod resource usage across all your clusters and identify optimization opportunities instantly. Questions about your Kubernetes setup? Contact our team for expert guidance.
