How to Use a Homelab for Kubernetes Practice (Free Setup Guide)
Set up a free Kubernetes homelab in under 5 minutes with Minikube, K3s, or Kind. Step-by-step commands, multi-node configs, and 6 hands-on exercises included.
Why a Homelab Beats Cloud Learning
Cloud-based Kubernetes services like EKS, AKS, and GKE are excellent for production workloads, but they are not the best place to learn. Every minute a managed cluster runs costs money. Experimenting with configurations, breaking things on purpose, and rebuilding from scratch are essential parts of the learning process, and doing that on a cloud provider adds up fast.
A homelab eliminates that cost entirely. You run Kubernetes on your own machine, whether that is a laptop, a desktop, or a spare server. There is no billing meter. You can create and destroy clusters as many times as you want without worrying about a surprise invoice at the end of the month.
Beyond cost, a homelab gives you deeper control and understanding. Managed Kubernetes services abstract away the control plane, networking, and storage layers. That abstraction is valuable in production, but it hides the mechanics that you need to understand as a DevOps engineer. When you run Kubernetes locally, you see everything: how the API server starts, how networking is configured, how storage is provisioned, and what happens when components fail. That visibility builds the kind of intuition that separates someone who can use Kubernetes from someone who truly understands it.
A homelab also lets you work offline. You can practice on a plane, in a coffee shop without reliable Wi-Fi, or anywhere else. Your learning environment is always available because it lives on your own hardware.
Choosing the Right Tool
There are several tools designed to run Kubernetes locally. Each has strengths that make it a better fit for different situations. Here are the three most popular options.
Minikube
Minikube is the official Kubernetes local development tool maintained by the Kubernetes project itself. It creates a single-node cluster inside a virtual machine or container on your local machine. Minikube supports multiple container runtimes and hypervisors, and it includes add-ons for common features like the Kubernetes dashboard, metrics server, and ingress controller.
Minikube is the best starting point if you are completely new to Kubernetes. It closely mirrors a standard Kubernetes cluster and has extensive documentation. The add-on system makes it easy to enable features without manually installing Helm charts or YAML manifests.
K3s
K3s is a lightweight Kubernetes distribution created by Rancher Labs. It packages the entire Kubernetes control plane into a single binary that is less than 100 megabytes. K3s strips out cloud-provider-specific code and, by default, replaces etcd with an embedded SQLite datastore, which makes it fast to start and easy to run on resource-constrained hardware.
K3s is an excellent choice if you want to simulate a more realistic multi-node cluster on limited hardware. You can run a K3s server node and multiple agent nodes on a single machine using virtual machines or even on a cluster of Raspberry Pi boards. It is also the tool of choice if you plan to explore edge computing or IoT use cases with Kubernetes.
Kind (Kubernetes in Docker)
Kind runs Kubernetes clusters inside Docker containers. Each Kubernetes node is a Docker container, which makes it extremely fast to create and destroy clusters. Kind was originally built for testing Kubernetes itself, and it remains the preferred tool for CI pipelines and automated testing scenarios.
Kind is the best option if you want to spin up clusters quickly for testing specific configurations, or if you need to run multiple clusters simultaneously on the same machine. It is also a strong choice if you are already comfortable with Docker and want minimal setup overhead.
Prerequisites
Before setting up your homelab cluster, make sure you have the following on your machine.
A computer with at least 4 GB of RAM and 2 CPU cores available for the cluster. 8 GB of RAM and 4 cores will give you a much more comfortable experience, especially if you plan to run multiple services.
A container runtime installed. Docker Desktop is the most common choice for macOS and Windows. On Linux, Docker Engine or containerd works well.
kubectl, the Kubernetes command-line tool, installed and available in your terminal. This is how you will interact with your cluster regardless of which tool you use to create it.
Setting Up a Cluster With Minikube
Install Minikube using your system package manager or by downloading the binary directly.
On macOS with Homebrew:
```bash
brew install minikube
```
On Linux:
```bash
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```
Start a cluster with the default settings:
```bash
minikube start
```
This creates a single-node Kubernetes cluster. Minikube automatically configures kubectl to point at the new cluster. Verify the cluster is running:
```bash
kubectl get nodes
```
You should see one node with a Ready status. To enable the Kubernetes dashboard:
```bash
minikube addons enable dashboard
minikube dashboard
```
This opens the dashboard in your browser, giving you a visual interface to explore your cluster.
Setting Up a Cluster With K3s
K3s installs with a single command on Linux:
```bash
curl -sfL https://get.k3s.io | sh -
```
After installation, K3s starts automatically as a service. The kubeconfig file is written to /etc/rancher/k3s/k3s.yaml. To use kubectl with it:
```bash
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```
If you want to add a second node to simulate a multi-node cluster, run the following on another machine or virtual machine, replacing the server URL and token with values from your server node:
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://your-server-ip:6443 K3S_TOKEN=your-node-token sh -
```
The node token is stored at /var/lib/rancher/k3s/server/node-token on the server node.
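As an alternative to passing environment variables on the command line, K3s also reads settings from a configuration file before starting. A minimal agent configuration might look like the following sketch, where the server address and token are placeholders you would replace with your server node's values:

```yaml
# /etc/rancher/k3s/config.yaml on the agent node (placeholder values)
server: https://your-server-ip:6443
token: your-node-token
```

With this file in place, running the install script on the agent without any environment variables picks up the same settings.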
Setting Up a Cluster With Kind
Install Kind using Go or a package manager:
```bash
brew install kind
```
Or with Go:
```bash
go install sigs.k8s.io/kind@latest
```
Create a cluster:
```bash
kind create cluster --name homelab
```
Kind creates the cluster in seconds. To create a multi-node cluster, define a configuration file. Save the following as kind-config.yaml:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```
Then create the cluster using that configuration:
```bash
kind create cluster --name homelab-multi --config kind-config.yaml
```
Verify the nodes:
```bash
kubectl get nodes
```
You should see one control-plane node and two worker nodes.
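One quirk worth knowing: because each Kind node is a Docker container, NodePort services are not reachable from your host machine unless you map the ports when the cluster is created. A sketch of a config that does this (the port numbers are illustrative) looks like:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # the NodePort inside the cluster
    hostPort: 8080         # reachable as localhost:8080 on your machine
    protocol: TCP
```

Port mappings cannot be changed after creation, so plan them before running kind create cluster. Otherwise, kubectl port-forward remains the simplest way to reach services in a Kind cluster.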
Your First Deployment
Once your cluster is running, deploy a simple application to confirm everything works. This process is the same regardless of which tool you used to create the cluster.
Create a deployment running the nginx web server:
```bash
kubectl create deployment nginx --image=nginx
```
Expose it as a service:
```bash
kubectl expose deployment nginx --port=80 --type=NodePort
```
Check that the pod is running:
```bash
kubectl get pods
```
If you are using Minikube, access the service with:
```bash
minikube service nginx
```
For Kind, you will need to use port forwarding:
```bash
kubectl port-forward service/nginx 8080:80
```
Then open http://localhost:8080 in your browser. You should see the default nginx welcome page. Congratulations, you have a working Kubernetes cluster and your first deployed application.
Practice Exercises to Build Real Skills
Having a cluster is just the beginning. Here are practical exercises that will build the skills you need as a DevOps engineer, ordered from foundational to advanced.
Deployments and Scaling
Create a deployment with three replicas. Scale it up to five, then back down to two. Delete a pod manually and watch Kubernetes recreate it automatically. This teaches you how the Deployment controller maintains desired state.
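As a starting point for this exercise, a minimal manifest might look like the following (the name web and the nginx image are illustrative). Apply it with kubectl apply -f, scale it with kubectl scale deployment web --replicas=5, then delete a pod and watch the replacement appear with kubectl get pods -w:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # edit this and re-apply, or use kubectl scale
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
```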
ConfigMaps and Secrets
Create a ConfigMap with application settings and mount it into a pod as a file or environment variable. Do the same with a Secret. Update the ConfigMap and observe how the change propagates. This is how real applications manage configuration in Kubernetes.
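A sketch of this exercise's starting point, with hypothetical names and values, could look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "debug"
  WELCOME_MESSAGE: "hello from the homelab"
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config     # injects each key as an environment variable
```

One detail worth observing: ConfigMap updates propagate automatically to volume-mounted files (after a short delay), but environment variables are only read at container start, so pods consuming a ConfigMap this way must be restarted to pick up changes.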
Services and Networking
Create multiple deployments and expose them with different service types: ClusterIP, NodePort, and LoadBalancer. Set up an ingress controller and route traffic to different services based on the URL path. Understanding Kubernetes networking is one of the most valuable skills you can develop.
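Assuming an ingress controller is already running (on Minikube, minikube addons enable ingress), a path-based routing rule might be sketched like this, where app1 and app2 are hypothetical services you have already created:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1          # requests to /app1 go to the app1 service
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80
      - path: /app2          # requests to /app2 go to the app2 service
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80
```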
Resource Requests and Limits
Set CPU and memory requests and limits on your pods. Deploy a workload that exceeds its limits and observe what happens. This teaches you how Kubernetes schedules workloads and enforces resource boundaries, which is critical for production clusters.
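A minimal pod spec for this exercise might look like the following sketch (the values are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:            # what the scheduler reserves for the pod
        cpu: "100m"
        memory: "64Mi"
      limits:              # hard ceilings enforced at runtime
        cpu: "250m"
        memory: "128Mi"
```

The two limits behave differently: a container exceeding its memory limit is killed (OOMKilled), while one exceeding its CPU limit is merely throttled. Observing both outcomes is the point of the exercise.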
Namespaces and RBAC
Create multiple namespaces and deploy applications into each one. Set up RBAC roles and role bindings to restrict access. Try to perform actions that your role does not allow and observe the errors. This is essential knowledge for any multi-team Kubernetes environment.
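As a sketch, a read-only role in a hypothetical dev namespace, bound to a hypothetical user alice, might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]               # "" is the core API group (pods live there)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

You can check the effect without switching credentials: kubectl auth can-i delete pods --as alice -n dev should answer no, while kubectl auth can-i list pods --as alice -n dev should answer yes.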
Helm Charts
Install Helm and deploy a community chart like Prometheus or Grafana. Customize the values file and upgrade the release. This teaches you how most real-world Kubernetes applications are packaged and distributed.
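As an illustration of customizing a release, a values override file for the community Grafana chart might look roughly like this; the key names vary between charts and versions, so verify them against the chart's own values.yaml before using:

```yaml
# my-values.yaml — illustrative overrides; confirm keys against the chart
adminUser: admin
adminPassword: change-me
service:
  type: NodePort        # expose the UI without a LoadBalancer
persistence:
  enabled: true
  size: 1Gi
```

You would then install with helm install grafana grafana/grafana -f my-values.yaml and later practice helm upgrade with a modified file.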
From Homelab to Production Thinking
A homelab is a safe space to experiment, but the goal is to build habits and skills that transfer to production environments. As you work through the exercises above, start thinking about the operational concerns that matter in real clusters.
How would you monitor this application? Install Prometheus and Grafana in your homelab and set up dashboards and alerts. How would you handle persistent data? Create PersistentVolumeClaims and test what happens when a pod using persistent storage is rescheduled. How would you manage secrets securely? Explore tools like Sealed Secrets or External Secrets Operator.
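All three tools ship a default StorageClass (Minikube's hostpath provisioner, and the local-path provisioner in K3s and Kind), so a claim as simple as the following sketch is enough to start the persistence experiment; the name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

Mount it into a pod via a persistentVolumeClaim volume, write some data, delete the pod, and check whether the replacement still sees the data.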
These are the questions that come up in every production Kubernetes environment. Practicing them in your homelab means you will have answers ready when they matter.
Comparing Your Options at a Glance
| Feature | Minikube | K3s | Kind |
|---|---|---|---|
| Best for | Beginners | Multi-node and edge | Fast testing and CI |
| Setup speed | Moderate | Fast | Very fast |
| Multi-node | Limited | Native | Native via config |
| Resource usage | Medium | Low | Low |
| Add-on ecosystem | Built-in add-ons | Helm-based | Manual |
| Offline support | Yes | Yes | Yes |
Conclusion
Building a Kubernetes homelab is one of the most effective ways to develop real DevOps skills. It costs nothing beyond the hardware you already own, it gives you complete control over every layer of the stack, and it lets you experiment freely without risk. Whether you choose Minikube for its beginner-friendly approach, K3s for its lightweight multi-node capabilities, or Kind for its speed and simplicity, the important thing is to start building, breaking, and rebuilding.
The engineers who understand Kubernetes deeply are the ones who have spent time running it themselves, not just reading about it. Your homelab is where that understanding begins. Once you are running a cluster, explore the core concepts: Pods, Namespaces, Ingress, and what Kubernetes is actually used for in production.
Ready to Manage Production Clusters?
Once you move from homelab to production, keeping clusters clean and cost-efficient becomes critical. Get started with KorPro to automatically detect unused resources, orphaned secrets, and wasted spend across all your Kubernetes clusters. Have questions about optimizing your environment? Contact our team for a personalized walkthrough.
Written by
KorPro Team