About This Guide
This comprehensive guide walks you through setting up a production-ready, highly available Kubernetes cluster using kubeadm. You'll learn to deploy a multi-master cluster with external load balancing, eliminating single points of failure.
Prerequisites
Before You Begin
Ensure you have the following ready before starting the installation process.
Infrastructure Requirements
| Component | Quantity | Specifications |
|---|---|---|
| Master Nodes | 3 (odd number recommended) | 2 CPU, 4GB RAM, 50GB disk |
| Worker Nodes | 3+ (based on workload) | 2 CPU, 4GB RAM, 50GB disk |
| Load Balancers | 2 (for HA) | 1 CPU, 2GB RAM, 20GB disk |
System Requirements
- Operating System: Ubuntu 20.04/22.04 LTS or RHEL/CentOS 8+
- Network: Full connectivity between all nodes (private or public)
- Firewall: Required ports open (see Required Ports below)
- Privileges: Root or sudo access on all nodes
- Unique Hostnames: Each node must have a unique hostname
- MAC & Product UUID: Must be unique for each node
Required Ports
| Node Type | Port Range | Purpose |
|---|---|---|
| Master | 6443 | Kubernetes API server |
| Master | 2379-2380 | etcd server client API |
| Master | 10250 | Kubelet API |
| Master | 10257 | kube-controller-manager |
| Master | 10259 | kube-scheduler |
| Worker | 10250 | Kubelet API |
| Worker | 30000-32767 | NodePort Services |
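If a host firewall is enabled, open these ports on each node before installing. A minimal sketch assuming ufw on Ubuntu (translate to equivalent firewalld rules on RHEL/CentOS):

```bash
# On each master node
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd server client API
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 10257/tcp         # kube-controller-manager
sudo ufw allow 10259/tcp         # kube-scheduler

# On each worker node
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort Services
```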
Software Versions (as of 2025)
- Kubernetes: v1.29+ (kubeadm, kubelet, kubectl)
- Container Runtime: containerd v1.7+ or CRI-O v1.29+
- CNI Plugin: Calico, Flannel, or Cilium (latest)
- Load Balancer: HAProxy v2.8+ or NGINX
High Availability Architecture
Our HA cluster consists of 8 nodes configured for production workloads:
Cluster Topology
- 3 Master Nodes: master1, master2, master3 (stacked etcd)
- 3 Worker Nodes: worker1, worker2, worker3
- 2 Load Balancers: loadbalancer1, loadbalancer2 (HAProxy)
High Availability Benefits
This configuration eliminates single points of failure: with three stacked etcd members, the cluster keeps quorum (two of three) if any one master fails, and the paired load balancers keep the API endpoint reachable, so cluster operations continue seamlessly.
Installation Steps
Step 1: Configure HAProxy Load Balancers
Set up HAProxy on both load balancer nodes to distribute API server traffic across the masters. For the pair to share a single LOAD_BALANCER_IP, front them with a floating virtual IP (see the keepalived sketch at the end of this step).
Install HAProxy:
```bash
sudo apt update && sudo apt install -y haproxy
```
Configure HAProxy (/etc/haproxy/haproxy.cfg):
```
frontend kubernetes_frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes_backend

backend kubernetes_backend
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 192.168.1.101:6443 check fall 3 rise 2
    server master2 192.168.1.102:6443 check fall 3 rise 2
    server master3 192.168.1.103:6443 check fall 3 rise 2
```
Restart HAProxy:
```bash
sudo systemctl restart haproxy
sudo systemctl enable haproxy
sudo systemctl status haproxy
```
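Because there are two load balancers, the LOAD_BALANCER_IP used throughout this guide should be a floating virtual IP that fails over between them; a common way to manage it is keepalived (VRRP). The sketch below is a minimal example, not a definitive setup: the VIP 192.168.1.100, the interface name eth0, and the auth password are assumptions to adapt to your environment.

Install keepalived on both load balancer nodes:

```bash
sudo apt update && sudo apt install -y keepalived
```

Example /etc/keepalived/keepalived.conf for loadbalancer1:

```
vrrp_instance VI_1 {
    state MASTER              # use BACKUP on loadbalancer2
    interface eth0            # adjust to your NIC name
    virtual_router_id 51
    priority 101              # use a lower value (e.g. 100) on loadbalancer2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip     # assumed shared secret; change it
    }
    virtual_ipaddress {
        192.168.1.100/24      # the floating LOAD_BALANCER_IP (assumed)
    }
}
```

```bash
sudo systemctl enable --now keepalived
```

Clients always target the VIP; if loadbalancer1 goes down, loadbalancer2 claims the address after about a second of missed VRRP advertisements.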
Step 2: Prepare All Nodes (Masters & Workers)
Run these commands on all master and worker nodes:
Disable Swap:
```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
Load Kernel Modules:
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
```
Configure Sysctl:
```bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```
Install Container Runtime (containerd):
```bash
sudo apt update
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Enable SystemdCgroup (kubelet expects the systemd cgroup driver)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
```
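Before moving on, it's worth a quick sanity check that containerd is running with the systemd cgroup driver (the grep should print SystemdCgroup = true):

```bash
sudo systemctl is-active containerd
grep 'SystemdCgroup' /etc/containerd/config.toml
```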
Install Kubernetes Components:
```bash
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
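Confirm all three components landed at matching v1.29.x versions on every node:

```bash
kubeadm version -o short
kubelet --version
kubectl version --client
```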
Step 3: Initialize the First Control Plane
On master1 only (replace LOAD_BALANCER_IP with your load balancers' virtual IP):
```bash
sudo kubeadm init \
  --control-plane-endpoint="LOAD_BALANCER_IP:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16
```
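If you want to preview what kubeadm will do before it touches the node, the same command accepts --dry-run, which prints the planned actions without persisting any state:

```bash
# Preview only; makes no changes to the node
sudo kubeadm init \
  --control-plane-endpoint="LOAD_BALANCER_IP:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16 \
  --dry-run
```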
Save the output! You'll need the join commands for:
- Additional control plane nodes
- Worker nodes
Configure kubectl:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
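As a quick sanity check, the API server should already answer through the load balancer; note that master1 will report NotReady until the pod network is deployed in Step 4:

```bash
kubectl get nodes
kubectl get --raw='/readyz?verbose' | head   # API server health checks
```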
Step 4: Deploy Pod Network (Calico)
Install Calico CNI on master1:
```bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
```
Verify pod network:
```bash
kubectl get pods -n kube-system
```
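Rather than polling by eye, you can block until the Calico node agents are Ready (the k8s-app=calico-node label is set by the manifest applied above):

```bash
kubectl -n kube-system wait --for=condition=Ready pod \
  -l k8s-app=calico-node --timeout=300s
```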
Step 5: Join Additional Master Nodes
On master2 and master3, run the control plane join command from Step 3:
```bash
sudo kubeadm join LOAD_BALANCER_IP:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
```
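Note that the certificate key uploaded by --upload-certs expires after two hours. If master2 or master3 joins later than that, regenerate it on master1 and splice the new key into a freshly printed join command:

```bash
# Re-uploads control plane certificates and prints a new certificate key
sudo kubeadm init phase upload-certs --upload-certs
# Prints a fresh join command; append --control-plane --certificate-key <new-key>
kubeadm token create --print-join-command
```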
Step 6: Join Worker Nodes
On each worker node, run the worker join command from Step 3:
```bash
sudo kubeadm join LOAD_BALANCER_IP:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```
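Freshly joined workers show no role in kubectl get nodes. Populating the ROLES column is purely cosmetic, but if you want it, add the conventional label from master1:

```bash
kubectl label node worker1 node-role.kubernetes.io/worker=
```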
Step 7: Verify Cluster Health
Check all nodes are ready:
```bash
kubectl get nodes -o wide
```
Verify component status:
```bash
kubectl get pods -n kube-system
kubectl cluster-info
```
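As an end-to-end smoke test, you can schedule a throwaway workload and reach it through a NodePort (the deployment name here is arbitrary):

```bash
kubectl create deployment hello --image=nginx --replicas=2
kubectl expose deployment hello --port=80 --type=NodePort
kubectl get svc hello        # note the assigned port in the 30000-32767 range
# curl http://<worker-ip>:<nodeport> should return the nginx welcome page
kubectl delete svc,deployment hello
```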
Cluster Ready!
Your HA Kubernetes cluster is now operational and ready for production workloads!
Troubleshooting Tips
- Token expired: generate a new token with `kubeadm token create --print-join-command`
- Pod network issues: verify the CNI plugin is running with `kubectl get pods -n kube-system`
- Node not ready: check kubelet logs with `sudo journalctl -u kubelet -f`
- HAProxy not forwarding: test connectivity with `nc -zv LOAD_BALANCER_IP 6443`
Next Steps
- Configure RBAC and access controls
- Set up cluster security policies
- Deploy monitoring with Prometheus & Grafana
- Configure persistent storage solutions
- Implement backup strategies for etcd (a minimal snapshot sketch follows)
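For the etcd backup item, a minimal snapshot sketch is below; it assumes the stacked-etcd layout kubeadm creates (certificates under /etc/kubernetes/pki/etcd) and that etcdctl is installed on the master:

```bash
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```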