What You’ll Need
- At least two nodes (one master, one worker) with Linux installed (e.g., Ubuntu 20.04 or newer).
- 2 GB RAM and 2 CPUs per node (minimum recommended).
- Basic networking knowledge.
Step 1: Prepare the Nodes
- Update each node:
  sudo apt update && sudo apt upgrade -y
- Install required dependencies:
  sudo apt install -y curl apt-transport-https
- Set unique hostnames for each node:
  - On the master node:
    sudo hostnamectl set-hostname master-node
  - On the worker node(s):
    sudo hostnamectl set-hostname worker-node-1
- Disable swap on all nodes:
  sudo swapoff -a
  To make the change permanent, comment out the swap line in /etc/fstab.
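If you'd rather script the permanent change than edit /etc/fstab by hand, a minimal sed sketch could look like the following. It is shown as a dry run that prints the result rather than editing in place; the filename argument is only there so you can try it safely on a copy first.

```shell
# Sketch: print an fstab with swap entries commented out. Review the output,
# then rerun with `sudo sed -i` to change /etc/fstab in place.
fstab_file="${1:-/etc/fstab}"
if [ -r "$fstab_file" ]; then
  # Prefix non-comment lines containing a "swap" field with '#'.
  sed -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' "$fstab_file"
fi
```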
Step 2: Install K3s on the Master Node
- Download and install K3s:
  curl -sfL https://get.k3s.io | sh -
- Verify the installation:
  kubectl get nodes
  You should see the master node listed as Ready.
- Retrieve the K3s join token:
  sudo cat /var/lib/rancher/k3s/server/node-token
  Save the token, as you'll need it to connect the worker nodes.
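By default the installer writes a kubeconfig to /etc/rancher/k3s/k3s.yaml, which is why kubectl works as root right away. One common approach (a sketch, not the only way) to using kubectl as a regular user is to copy that file into your home directory:

```shell
# Sketch: make kubectl usable without sudo by copying the kubeconfig K3s
# writes at its standard path. Use `sudo cp` if the file is root-only.
src="/etc/rancher/k3s/k3s.yaml"
mkdir -p "$HOME/.kube"
if [ -r "$src" ]; then
  cp "$src" "$HOME/.kube/config"
  export KUBECONFIG="$HOME/.kube/config"
fi
```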
Step 3: Install K3s on Worker Nodes
- Download and install K3s. Replace <master_ip> with the IP address of the master node and <node-token> with the token you saved in Step 2:
  curl -sfL https://get.k3s.io | K3S_URL=https://<master_ip>:6443 K3S_TOKEN=<node-token> sh -
- Verify the worker node is connected. On the master node, run:
  kubectl get nodes
  The worker node(s) should now appear in the list.
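When joining several workers, it can help to assemble the command from variables once and reuse it. MASTER_IP and NODE_TOKEN below are placeholder example values (assumptions), not real credentials; substitute your master's address and the token from Step 2.

```shell
# Sketch: build the worker join command from the master's address and token.
MASTER_IP="192.168.1.10"               # assumption: example address
NODE_TOKEN="K10example::server:token"  # assumption: example token
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${NODE_TOKEN} sh -"
echo "$JOIN_CMD"
```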
Step 4: Deploy a Test Application
- Create a deployment YAML file:
  nano nginx-deployment.yaml
- Add the following configuration:
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
- Apply the deployment:
  kubectl apply -f nginx-deployment.yaml
- Verify the deployment:
  kubectl get pods
  You should see two running pods for the Nginx application.
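If you want to check the pods non-interactively (for example in a script), you can filter the `kubectl get pods` output with awk. The `sample` variable below is stand-in output so the snippet runs anywhere; on the cluster, pipe the real command through the same filter as shown in the comment.

```shell
# Sketch: count Running pods from `kubectl get pods --no-headers` output.
# "sample" is fake stand-in output; on a live cluster use:
#   kubectl get pods -l app=nginx --no-headers | awk '$3 == "Running"' | wc -l
sample='nginx-deployment-7d9f-abcde  1/1  Running  0  30s
nginx-deployment-7d9f-fghij  1/1  Running  0  30s'
running=$(printf '%s\n' "$sample" | awk '$3 == "Running"' | wc -l)
echo "Running pods: $running"
```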
Step 5: Expose the Application
- Create a service to expose Nginx:
  kubectl expose deployment nginx-deployment --type=NodePort --port=80
- Get the service details:
  kubectl get service nginx-deployment
- Access the application:
  - Note the NodePort value (e.g., 30008).
  - Open a browser and navigate to http://<node_ip>:<NodePort> to see the Nginx welcome page.
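The NodePort can also be extracted in a script instead of read by eye. The `sample` variable below is stand-in `kubectl get service` output so the parsing runs anywhere; on a live cluster you can query the value directly with the jsonpath command shown in the comment.

```shell
# Sketch: pull the NodePort out of `kubectl get service` output. "sample" is
# stand-in output; on a live cluster you can instead run:
#   kubectl get service nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'
sample='nginx-deployment  NodePort  10.43.12.34  <none>  80:30008/TCP  1m'
node_port=$(printf '%s\n' "$sample" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "Open http://<node_ip>:${node_port} in a browser"
```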
Step 6: Manage and Scale Your Cluster
- Scale the deployment:
  kubectl scale deployment nginx-deployment --replicas=4
- Monitor cluster resources:
  kubectl top nodes
  kubectl top pods
- Delete the deployment and service (optional):
  kubectl delete deployment nginx-deployment
  kubectl delete service nginx-deployment
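When you're done experimenting entirely, K3s can be removed from a node with the uninstall helpers its installer drops at standard paths (k3s-uninstall.sh on servers, k3s-agent-uninstall.sh on agents). This sketch only reports which helper is present; the bin_dir variable is parameterized purely so the loop can be exercised anywhere.

```shell
# Sketch: find the K3s uninstall helper on this node. The installer places
# k3s-uninstall.sh (server) or k3s-agent-uninstall.sh (agent) in /usr/local/bin.
bin_dir="${K3S_BIN_DIR:-/usr/local/bin}"
for script in "$bin_dir/k3s-uninstall.sh" "$bin_dir/k3s-agent-uninstall.sh"; do
  if [ -x "$script" ]; then
    echo "To remove K3s from this node, run: sudo $script"
  fi
done
```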
FAQs
Q: Why use K3s instead of full Kubernetes?
A: K3s is lightweight and optimized for resource-constrained environments, making it ideal for homelabs.
Q: Can I run K3s on a Raspberry Pi?
A: Yes, K3s works well on Raspberry Pi (preferably Pi 4) for building ARM-based clusters.
Q: How do I back up my K3s cluster?
A: Back up the /etc/rancher/k3s configuration directory and, if you run the embedded etcd datastore, take regular etcd snapshots. Single-server installs default to SQLite, whose state lives under /var/lib/rancher/k3s/server/db.
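A minimal backup sketch along those lines, assuming the standard /etc/rancher/k3s location (the K3S_CONF_DIR override and the destination name are illustrative only; pair this with `k3s etcd-snapshot save` when using the embedded etcd datastore):

```shell
# Sketch: archive the K3s config directory into a dated tarball. K3s keeps
# its config in /etc/rancher/k3s by default; K3S_CONF_DIR is an override
# hook for testing, and the destination path is just an example.
src_dir="${K3S_CONF_DIR:-/etc/rancher/k3s}"
dest="/tmp/k3s-backup-$(date +%F).tar.gz"
if [ -d "$src_dir" ]; then
  tar czf "$dest" -C "$(dirname "$src_dir")" "$(basename "$src_dir")"
  echo "Backup written to $dest"
fi
```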
Q: Can I add more worker nodes later?
A: Yes, use the same installation process with the master node’s token.
Q: How do I secure my cluster?
A: Configure Role-Based Access Control (RBAC), use HTTPS for the API server, and regularly update K3s.
Q: What happens if the master node fails?
A: Without HA (High Availability), the cluster’s control plane will be unavailable. For HA, deploy multiple master nodes.
Q: How do I monitor my K3s cluster?
A: Use tools like Prometheus, Grafana, or Kubernetes Dashboard for cluster monitoring.
By setting up a lightweight K3s cluster, you can explore the world of container orchestration and learn Kubernetes basics in a resource-efficient way. Happy experimenting!