
How to Secure Your Kubernetes Cluster: Essential Steps

A practical guide to securing your Kubernetes cluster covering RBAC, Network Policies, Secrets Management, and Image Scanning. Built for DevSecOps teams and Enterprise Architects who need actionable steps, not just checklists.

KorPro Team
February 9, 2026
12 min read
Kubernetes · Security · RBAC · Network Policies · Secrets Management · Image Scanning · DevSecOps

Introduction

A Kubernetes cluster that is not deliberately secured is an open target. Default configurations are designed for ease of use, not for safety. Pods can communicate with any other pod. Service accounts receive tokens whether they need them or not. Secrets are stored with base64 encoding that provides no real protection. Containers often run as root with images pulled from public registries that no one has verified.

To secure a Kubernetes cluster, you need to address these gaps systematically. This guide covers the four areas that matter most: Role-Based Access Control, Network Policies, Secrets Management, and Image Scanning. For each area, the focus is on the underlying security principle and the concrete steps you need to take. The goal is not a checklist of tools but a clear understanding of why each measure exists and how to implement it correctly.

Every recommendation in this guide is grounded in a single idea: the Principle of Least Privilege. Every user, service account, pod, and container should have the minimum permissions required to do its job and nothing more. When you apply this principle consistently across access control, networking, secrets, and supply chain, you build a cluster that is resilient to both external attacks and internal mistakes.

Role-Based Access Control

RBAC is the mechanism Kubernetes uses to control who can do what inside the cluster. It determines which users and service accounts can create, read, update, or delete resources, and in which namespaces. A properly configured RBAC policy is the first line of defense in any secure Kubernetes cluster.

Why Default Access Is Dangerous

Many clusters start with overly broad permissions. Developers are given cluster-admin access because it is the fastest way to unblock them. Service accounts are left with default tokens that grant more access than the workload needs. In some cases, the system:anonymous user retains permissions that allow unauthenticated access to cluster resources.

Every unnecessary permission is an attack surface. If a pod is compromised and its service account has cluster-wide read access, the attacker can enumerate every secret, configmap, and deployment in the cluster. If a developer account has delete permissions across all namespaces, a single compromised credential can take down production workloads.

Applying Least Privilege to RBAC

Start by auditing what permissions currently exist. List all ClusterRoleBindings and RoleBindings to understand who has access to what:

```bash
kubectl get clusterrolebindings -o wide
kubectl get rolebindings --all-namespaces -o wide
```

Remove any bindings to cluster-admin that are not strictly necessary. In most organizations, only a small number of platform administrators need cluster-wide admin access.
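As a quick sketch, the bindings that grant cluster-admin can be surfaced from that listing with standard text tools (the grep pattern below is an illustration, not a complete audit):

```shell
# Sketch: surface ClusterRoleBindings that reference cluster-admin.
# Assumes kubectl access to the cluster; the grep keeps the header row for context.
kubectl get clusterrolebindings \
  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
  | grep -E '^NAME|cluster-admin'
```

Review every binding this returns and remove the ones that are not justified.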

Create namespace-scoped Roles instead of ClusterRoles whenever possible. A Role limits permissions to a single namespace, which contains the blast radius if credentials are compromised. Here is an example Role that allows a developer to manage deployments and pods in a specific namespace but nothing else:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team
  name: deployment-manager
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

Bind this Role to a specific user or group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: dev-deployment-manager
subjects:
- kind: User
  name: jane@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
```

Jane can now manage deployments in the app-team namespace but cannot access secrets, modify RBAC policies, or touch resources in other namespaces.

Service Account Discipline

Every pod in Kubernetes runs with a service account. If you do not specify one, it uses the default service account in the namespace, which may have more permissions than the workload needs.

Create dedicated service accounts for each workload and bind only the permissions that workload requires. Disable automatic token mounting for service accounts that do not need to call the Kubernetes API:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-app
  namespace: app-team
automountServiceAccountToken: false
```

This prevents a compromised frontend pod from using the service account token to query the Kubernetes API.
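Workloads then opt in to the dedicated account explicitly. A minimal pod spec sketch, reusing the names from the example above (the image reference is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: app-team
spec:
  serviceAccountName: frontend-app  # dedicated account instead of "default"
  containers:
  - name: app
    image: your-registry.example.com/frontend:latest
```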

Network Policies

By default, every pod in a Kubernetes cluster can communicate with every other pod. There are no firewalls, no segmentation, and no restrictions. This means that if an attacker compromises a single pod, they can reach every other service in the cluster, including databases, internal APIs, and control plane components.

Network Policies are Kubernetes resources that define which pods can communicate with which other pods. They act as firewall rules at the pod level and are essential for securing a Kubernetes cluster against lateral movement.

The Default-Deny Foundation

The most important Network Policy you can create is a default-deny rule. This blocks all ingress and egress traffic for pods in a namespace unless explicitly allowed by another policy. It flips the security model from open-by-default to closed-by-default:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-team
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

With this in place, no pod in the app-team namespace can send or receive traffic until you create policies that explicitly permit it. This is the Principle of Least Privilege applied to networking.

Allowing Only What Is Needed

After establishing default-deny, create policies that permit the specific communication paths your application requires. For example, allow the frontend pods to receive traffic from the ingress controller and to send traffic to the API service:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: app-team
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api-service
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
```

The DNS egress rule on port 53 is important. Without it, pods cannot resolve service names, which breaks virtually all inter-service communication.

CNI Requirements

Network Policies are only enforced if your cluster runs a CNI plugin that supports them. Calico, Cilium, and Weave Net all support Network Policies. The default kubenet CNI does not. If your CNI does not enforce policies, the NetworkPolicy resources will exist in the API but have no effect. Verify your CNI supports enforcement before relying on Network Policies as a security control.

Secrets Management

Kubernetes Secrets are the built-in mechanism for storing sensitive data like API keys, database passwords, TLS certificates, and tokens. However, the default behavior of Kubernetes Secrets has significant security limitations that you must address to secure your Kubernetes cluster.

The Problem With Default Secrets

By default, Kubernetes Secrets are stored in etcd with only base64 encoding. Base64 is not encryption. Anyone with read access to etcd, or with permission to get secrets from the Kubernetes API, can decode them instantly. And in many clusters, overly broad RBAC grants leave the default service account able to read secrets, which means every pod in a namespace can potentially access every secret in it.
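One command round-trips a value, which makes the point concrete:

```shell
# Base64 is a reversible encoding, not encryption: it decodes with no key.
printf 's3cr3t-p4ss' | base64            # -> czNjcjN0LXA0c3M=
printf 'czNjcjN0LXA0c3M=' | base64 -d    # -> s3cr3t-p4ss
```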

Encrypting Secrets at Rest

The first step is enabling encryption at rest for etcd. This ensures that even if someone gains access to the etcd data files, they cannot read the secret values without the encryption key.

Create an EncryptionConfiguration file:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
```

Configure the API server to use this file via the --encryption-provider-config flag. On managed Kubernetes services like EKS, AKS, and GKE, envelope encryption with a cloud KMS is available and should be enabled.
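On a self-managed control plane, that flag typically goes into the kube-apiserver static pod manifest. The sketch below assumes a kubeadm-style layout; the file path is an assumption and must match a volume actually mounted into the API server pod:

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm-style layout assumed)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
    # ...existing flags...
    volumeMounts:
    - name: enc-config
      mountPath: /etc/kubernetes/enc
      readOnly: true
  volumes:
  - name: enc-config
    hostPath:
      path: /etc/kubernetes/enc
      type: DirectoryOrCreate
```

Note that encryption only applies to writes. Existing secrets can be rewritten so they are stored encrypted with `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`.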

External Secrets Management

For production environments, storing secrets directly in Kubernetes is often insufficient. External secrets managers like HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, and Google Cloud Secret Manager provide stronger access controls, audit logging, automatic rotation, and centralized management.

The External Secrets Operator is a Kubernetes controller that synchronizes secrets from external providers into Kubernetes Secrets automatically. This lets your pods consume secrets through the standard Kubernetes API while the actual secret values are managed and audited in a dedicated secrets platform:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: app-team
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
  - secretKey: username
    remoteRef:
      key: secret/data/database
      property: username
  - secretKey: password
    remoteRef:
      key: secret/data/database
      property: password
```

Limiting Secret Access

Apply RBAC to restrict which users and service accounts can read secrets. Most workloads do not need access to all secrets in a namespace. Create Roles that grant access only to the specific secrets a workload requires:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team
  name: db-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["db-credentials"]
  verbs: ["get"]
```

The resourceNames field is critical. Without it, the role grants access to every secret in the namespace. With it, access is limited to the single secret the workload needs. This is the Principle of Least Privilege applied to secrets.
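To take effect, the Role still needs a binding. A sketch that binds it to a hypothetical backend service account (the account name is an assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: backend-db-secret-reader
subjects:
- kind: ServiceAccount
  name: backend-app        # hypothetical workload service account
  namespace: app-team
roleRef:
  kind: Role
  name: db-secret-reader
  apiGroup: rbac.authorization.k8s.io
```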

Detecting Unused and Orphaned Secrets

Over time, clusters accumulate secrets that are no longer referenced by any workload. These orphaned secrets are a security risk because they expand the attack surface without providing any value. An attacker who gains secret-read access can harvest credentials that may still be valid even though no application uses them.

Regularly audit your secrets to identify which ones are actively mounted or referenced by pods and which ones are orphaned. KorPro automates this detection across clusters and cloud providers, flagging unused secrets and helping teams clean them up before they become a liability.
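A rough manual version of that audit can be sketched with kubectl and standard text tools. It only checks volume mounts and env references in one namespace's running pods, so treat it as a starting point rather than a complete analysis:

```shell
#!/usr/bin/env bash
# Sketch: compare Secrets that exist against Secrets referenced by pods.
# Only covers volume mounts and env secretKeyRef; assumes kubectl access.
ns=app-team
all=$(kubectl get secrets -n "$ns" -o jsonpath='{.items[*].metadata.name}' \
  | tr ' ' '\n' | sort -u)
used=$(kubectl get pods -n "$ns" -o jsonpath='{range .items[*]}{.spec.volumes[*].secret.secretName}{" "}{.spec.containers[*].env[*].valueFrom.secretKeyRef.name}{" "}{end}' \
  | tr ' ' '\n' | sed '/^$/d' | sort -u)
# Lines only in "all" are candidates for cleanup.
comm -23 <(printf '%s\n' "$all") <(printf '%s\n' "$used")
```

Secrets can also be consumed in ways this misses (imagePullSecrets, envFrom, CSI drivers, operators), which is why automated, cross-cluster detection is more reliable.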

Image Scanning

Every container image you deploy is part of your attack surface. Images can contain known vulnerabilities in operating system packages, application dependencies, or base layers. They can also contain malware, hardcoded credentials, or misconfigurations that weaken your security posture.

Image scanning analyzes container images for these issues before they reach your cluster. It is a critical step in securing your Kubernetes cluster because it shifts security left, catching problems before deployment rather than after.

Scanning in the CI/CD Pipeline

The most effective place to scan images is in your CI/CD pipeline, immediately after the image is built and before it is pushed to a registry. This ensures that no image with known critical vulnerabilities ever reaches your cluster.

Tools like Trivy, Grype, and Snyk Container can be integrated into your pipeline. Trivy is open source and widely adopted:

```bash
trivy image --severity HIGH,CRITICAL your-registry/your-app:latest
```

Configure your pipeline to fail the build if critical or high-severity vulnerabilities are found. This creates a hard gate that prevents vulnerable images from being deployed.
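The same gate can be expressed directly in CI configuration. A hypothetical GitHub Actions step using Trivy's published action; the action reference and inputs below are assumptions to verify against its documentation:

```yaml
# Hypothetical CI step; exit-code: '1' fails the job on HIGH/CRITICAL findings
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: your-registry/your-app:latest
    severity: HIGH,CRITICAL
    exit-code: '1'
```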

Admission Control

Pipeline scanning catches issues at build time, but it does not prevent someone from deploying an unscanned image directly with kubectl. Admission controllers close this gap by enforcing policies at the cluster level.

Kyverno and OPA Gatekeeper are the two most common policy engines for Kubernetes. Here is a Kyverno policy that blocks any pod using an image from an untrusted registry:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  rules:
  - name: validate-registries
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must come from the trusted registry."
      pattern:
        spec:
          containers:
          - image: "your-registry.example.com/*"
```

This ensures that even if a developer bypasses the CI/CD pipeline, the cluster itself rejects images from unauthorized sources.

Minimizing the Image Attack Surface

Beyond scanning, reduce the attack surface of your images by using minimal base images. Distroless images from Google and Alpine-based images contain far fewer packages than full Ubuntu or Debian images, which means fewer potential vulnerabilities.

Run containers as a non-root user whenever possible. Many vulnerabilities require root access to exploit. Setting a non-root security context eliminates an entire class of attacks:

```yaml
spec:
  containers:
  - name: app
    image: your-registry.example.com/app:latest
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
```

The readOnlyRootFilesystem setting prevents the container from writing to its filesystem, which blocks many post-exploitation techniques. The allowPrivilegeEscalation setting prevents a process from gaining more privileges than its parent, which stops container escape attacks that rely on privilege escalation.

Putting It All Together

Securing a Kubernetes cluster is not about deploying a single tool or flipping a single switch. It is a layered approach where each control reinforces the others.

RBAC controls who can access the cluster and what they can do. Network Policies control which pods can communicate with each other. Secrets Management controls how sensitive data is stored, accessed, and rotated. Image Scanning controls what software is allowed to run in the cluster.

When all four layers are in place, a compromised component is contained. A breached pod cannot reach other services because Network Policies block lateral movement. It cannot read secrets it does not need because RBAC restricts access. The image it runs has been scanned for known vulnerabilities. And the user or service account that deployed it had only the minimum permissions required.

This is the Principle of Least Privilege applied across every layer of the stack. It does not make your cluster invulnerable, but it dramatically reduces the blast radius of any incident and makes your Kubernetes environment defensible.

Conclusion

To secure a Kubernetes cluster, you need deliberate action across access control, networking, secrets, and supply chain. Default configurations leave too many doors open. RBAC with namespace-scoped roles and dedicated service accounts limits who can do what. Default-deny Network Policies stop lateral movement. Encrypted and externally managed secrets protect sensitive data. Image scanning and admission control ensure that only verified, minimal images run in your cluster.

Each of these measures is an application of the same principle: grant only what is needed, deny everything else, and verify continuously. For DevSecOps teams and Enterprise Architects, this is the foundation of a Kubernetes security posture that scales with your organization.


Strengthen Your Security Posture With KorPro

Orphaned secrets and unused credentials are attack vectors hiding in your clusters. Create your free KorPro account to automatically detect unused Secrets, orphaned ServiceAccounts, and exposed resources across all your clusters and cloud providers. Need a security review? Contact us to schedule an assessment with our team.
