Kubernetes Fundamentals

What is Ingress in Kubernetes? How It Works + Examples

Kubernetes Ingress explained: how it routes external traffic, why Services alone aren't enough, and how to set up Nginx Ingress Controller step by step.

KorPro Team
February 9, 2026
10 min read
Kubernetes · Ingress · Networking · Nginx · Load Balancing · DevOps

Ingress in Kubernetes is an API resource that routes external HTTP/HTTPS traffic to Services inside the cluster based on hostnames and URL paths. It replaces the need for one LoadBalancer per service by providing a single entry point with rule-based routing, TLS termination, and centralized traffic management.

Introduction

Every application running in Kubernetes eventually needs to be reachable from outside the cluster. Internal service discovery handles communication between Pods, but users, APIs, and external systems need a way in. Kubernetes provides several mechanisms for this, but not all of them scale well or give you the routing control that production workloads demand.

This guide explains the problem with basic external access methods, introduces Ingress as the solution, and walks through how Ingress Controllers and Ingress Rules work together to give you fine-grained control over traffic entering your cluster. If you are new to Kubernetes, start with What is Kubernetes? for the foundational concepts.

The Problem With NodePort and LoadBalancer Services

Kubernetes Services are the standard way to expose pods to network traffic. The two most common types for external access are NodePort and LoadBalancer. Both work, but both have significant limitations when you move beyond simple use cases.

NodePort

A NodePort service opens a specific port on every node in the cluster and forwards traffic on that port to the target pods. If your node IP is 192.168.1.10 and the NodePort is 30080, external clients reach your application at 192.168.1.10:30080.

The problems with NodePort become clear quickly. Port numbers are limited to the range 30000 to 32767, which means your users need to remember non-standard ports. You cannot run two services on the same port. There is no built-in TLS termination, so you need to handle HTTPS elsewhere. And there is no host-based or path-based routing, so every service needs its own port. For a single test service this is fine. For a production environment with dozens of services, it becomes unmanageable.
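For reference, a minimal NodePort Service manifest looks like this (the service name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80         # cluster-internal port
      targetPort: 8080 # container port
      nodePort: 30080  # must fall in the 30000-32767 range
```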

LoadBalancer

A LoadBalancer service asks the underlying cloud provider to provision an external load balancer and assign it a public IP address. Traffic hits the load balancer, which forwards it to the correct pods. This solves the port problem because each service gets its own IP on port 80 or 443.
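The manifest differs from NodePort only in the type field (names here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer # cloud provider provisions an external load balancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```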

The issue is cost and waste. Every LoadBalancer service provisions a separate cloud load balancer. If you have 20 services that need external access, you get 20 load balancers, each with its own public IP and its own monthly charge. On AWS, each Application Load Balancer costs roughly 20 dollars per month before data transfer. Twenty services means 400 dollars per month just for load balancers, and that number grows linearly with every new service. There is also no way to share routing logic between them. Each load balancer is independent, so you cannot route app.example.com and api.example.com through the same entry point.

The Gap

What production environments actually need is a single entry point that can route traffic to many backend services based on the hostname, URL path, or other request attributes. That entry point should handle TLS termination in one place, support rate limiting and authentication, and not require a new cloud resource for every service you deploy. This is exactly what Ingress provides.

How Ingress Works

Ingress is a Kubernetes API resource that defines rules for routing external HTTP and HTTPS traffic to services inside the cluster. It is not a service type. It is a separate abstraction that sits in front of your services and acts as a smart router.

An Ingress resource on its own does nothing. It is a declaration of intent: route traffic matching these rules to these services. To make that declaration real, you need an Ingress Controller.

Ingress Controllers

An Ingress Controller is a pod running inside your cluster that watches for Ingress resources and configures itself to implement the routing rules they define. When you create, update, or delete an Ingress resource, the controller picks up the change and reconfigures its routing table.

The controller typically runs behind a single LoadBalancer service or NodePort, which serves as the single entry point for all external traffic. From there, the controller inspects each incoming request and routes it to the correct backend service based on the Ingress rules. This means you pay for one load balancer instead of one per service.

There are many Ingress Controllers available. Some of the most widely used include the Nginx Ingress Controller, Traefik, HAProxy, and cloud-native options like the AWS Load Balancer Controller (formerly the ALB Ingress Controller) and the GCE Ingress controller used by GKE. The choice depends on your environment and requirements, but the concepts are the same across all of them.

Ingress Rules

An Ingress resource contains one or more rules. Each rule matches incoming requests based on the hostname and URL path, then directs them to a specific Kubernetes service and port.

Here is a basic example that routes traffic for two different hostnames to two different services:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```

With this single Ingress resource, requests to app.example.com go to the frontend service and requests to api.example.com go to the API service. Both share the same entry point and the same load balancer.
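With an Ingress Controller installed (covered below), you can verify host-based routing with curl by sending different Host headers to the same entry point. The IP 203.0.113.10 is a placeholder; substitute your controller's external address:

```bash
# Look up the external IP of the controller's LoadBalancer service
kubectl get svc -n ingress-nginx

# Same IP, different Host headers, different backends
curl -H "Host: app.example.com" http://203.0.113.10/
curl -H "Host: api.example.com" http://203.0.113.10/
```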

You can also route based on URL paths within the same hostname:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /docs
            pathType: Prefix
            backend:
              service:
                name: docs-service
                port:
                  number: 80
```

This routes example.com/app, example.com/api, and example.com/docs to three separate backend services through a single entry point.

The Nginx Ingress Controller

The Nginx Ingress Controller is the most widely deployed Ingress Controller in the Kubernetes ecosystem. It is maintained by the Kubernetes community and uses Nginx as the underlying reverse proxy and load balancer.

Installing the Nginx Ingress Controller

The simplest way to install it is with a single kubectl command:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
```

This creates a namespace called ingress-nginx, deploys the controller, and provisions a LoadBalancer service as the entry point. For local development with Minikube (see our Kubernetes homelab guide), you can enable it as an add-on instead:

```bash
minikube addons enable ingress
```

After installation, verify the controller is running:

```bash
kubectl get pods -n ingress-nginx
```

You should see the controller pod in a Running state.

TLS Termination

One of the most valuable features of Ingress is centralized TLS termination. Instead of configuring HTTPS on every individual service, you configure it once at the Ingress level.

First, create a Kubernetes Secret containing your TLS certificate and private key:

```bash
kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
```
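If you do not have a certificate yet, a self-signed one can be generated for local testing before creating the secret (the CN is illustrative; use a CA-issued certificate in production):

```bash
# Generate a self-signed certificate and key, valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=app.example.com"
```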

Then reference it in your Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: example-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
```

The Nginx Ingress Controller terminates TLS at the edge and forwards unencrypted traffic to the backend service. This simplifies certificate management and reduces the operational burden on individual service teams.

Annotations for Advanced Configuration

The Nginx Ingress Controller supports a wide range of annotations that let you customize behavior without modifying the controller itself. Some commonly used annotations include:

Rate limiting to protect backend services from traffic spikes:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
```

URL rewriting to modify the path before forwarding to the backend:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
```

Proxy timeouts for services that need longer processing times:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
```

These annotations give you fine-grained control over traffic handling without deploying additional infrastructure.
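Annotations compose, so a single Ingress can combine several of them. A sketch with illustrative values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: annotated-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"          # cap requests per second
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120" # allow slow backends
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
```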

Default Backends and Error Handling

When a request arrives that does not match any Ingress rule, the controller needs to know what to do with it. You can configure a default backend that catches all unmatched traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-ingress
spec:
  defaultBackend:
    service:
      name: default-service
      port:
        number: 80
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
```

This is useful for serving custom 404 pages or redirecting unknown traffic to a landing page.

Ingress vs Gateway API

The Kubernetes community has been developing the Gateway API as a more expressive successor to Ingress. Gateway API introduces resources like Gateway, HTTPRoute, and GRPCRoute that provide richer routing capabilities, better role separation, and support for protocols beyond HTTP.
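For comparison, a minimal Gateway API HTTPRoute might look like this. It assumes a Gateway named example-gateway already exists in the cluster; all names are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: example-gateway # attaches this route to an existing Gateway
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: frontend-service
          port: 80
```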

Ingress is not going away. It remains stable, widely supported, and sufficient for the majority of use cases. If your routing needs are primarily HTTP-based with host and path matching, Ingress is the right choice today. If you need advanced traffic management like header-based routing, traffic splitting, or protocol-level control, the Gateway API is worth evaluating.

For most DevOps teams, starting with Ingress and migrating to Gateway API when specific needs arise is a practical approach.

Common Mistakes to Avoid

Running Ingress in production surfaces a few recurring issues that are worth knowing about upfront.

Forgetting to install an Ingress Controller is the most common mistake. Creating an Ingress resource without a controller means nothing happens. The resource exists in the API but no component is watching for it or acting on it.

Using multiple Ingress Controllers without specifying an IngressClass can cause conflicts. If you have both Nginx and Traefik running, set the ingressClassName field in each Ingress spec so each controller only processes the rules intended for it:

```yaml
spec:
  ingressClassName: nginx
```

Ignoring health checks on backend services leads to traffic being routed to unhealthy pods. Make sure your pod specs define readiness probes, since Service endpoints, and therefore the Ingress Controller, only include pods that report ready.
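A readiness probe is defined on the container, not the Service. A minimal sketch, with illustrative image, path, and port:

```yaml
# Excerpt from a Deployment's pod template
containers:
  - name: api
    image: example/api:1.0
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```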

Not monitoring the Ingress Controller itself is another gap. The controller is a critical piece of infrastructure. If it goes down, all external traffic stops. Set up monitoring and alerting for the controller pods, and run at least two replicas for high availability.
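Scaling the controller for high availability is a one-liner. The deployment name below matches the official manifest install; it may differ if you installed via Helm or a cloud add-on:

```bash
kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=2
```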

Conclusion

Ingress solves the problem of managing external access to Kubernetes services at scale. Where NodePort forces non-standard ports and LoadBalancer creates cost sprawl, Ingress provides a single entry point with flexible, rule-based routing. Combined with an Ingress Controller like Nginx, you get TLS termination, path-based and host-based routing, rate limiting, and centralized traffic management without provisioning a new load balancer for every service.

For DevOps engineers setting up application routing, Ingress is one of the first pieces of infrastructure to get right. It directly impacts cost, security, and the developer experience of deploying new services. Understanding how Ingress Controllers and Ingress Rules work together gives you the foundation to build a routing layer that scales with your organization.


Stop Paying for Unused Ingresses and LoadBalancers

Orphaned Ingress resources and idle LoadBalancers silently drain your cloud budget. Get started with KorPro to automatically detect unused networking resources across all your clusters and see exactly how much they cost. Need help optimizing your Kubernetes infrastructure? Contact us for a consultation.

