K3s Cluster with Rancher on Alpine Linux VMs (Clone Method)
Complete Step-by-Step Guide – 50GB Storage with Isolated Cluster Network
Your Network Architecture
- eth0: Proxmox NAT network (for internet access, management)
- eth1: Isolated cluster network (for Kubernetes communication) – Alpine sees this as a regular network interface, not VXLAN
- Master Node: k3s-1 – Cluster IP: 10.50.1.101
- Worker Node 1: k3s-2 – Cluster IP: 10.50.1.102
- Worker Node 2: k3s-3 – Cluster IP: 10.50.1.103
Note: The VXLAN is configured at the Proxmox level. Your Alpine VMs just see eth1 as a normal network interface with an IP address. K3s will use this interface for all cluster communication.
Part 1: Create and Configure the Master VM Template
Step 1.1: Install Alpine Linux on Master VM
- Create a new VM in Proxmox with these specs:
  - Name: k3s-master-template
  - OS: Alpine Linux ISO
  - Disk: 50GB
  - CPU: 2 cores
  - RAM: 2048 MB (2GB)
  - Network: two network interfaces
    - eth0: NAT mode (for internet)
    - eth1: isolated network (VXLAN configured at Proxmox level), gets IP 10.50.1.101
- Boot the VM and install Alpine. Log in as root (no password) and run:
# Run Alpine setup
setup-alpine
Follow these prompts:
  - Keyboard layout: us
  - Hostname: k3s-master-template (temporary)
  - Network interface: configure both interfaces
    - First interface (eth0): DHCP
    - Second interface (eth1): we'll configure it statically after install
  - Password: alpine (temporary)
  - Timezone: UTC
  - Proxy: none
  - NTP client: chrony
  - Available disks: sda
  - How to use disk: sys
  - Erase disk? y
  - Continue? y
- After installation completes, the VM will reboot. Log in as root with password alpine.
Step 1.2: Update and Install Basic Packages
# Update package repository
apk update
apk upgrade
# Install essential tools
apk add curl bash nano htop vim
# Install network tools
apk add net-tools bind-tools
# Install iptables (REQUIRED for K3s)
apk add iptables ip6tables
# Enable iptables to start at boot
rc-update add iptables boot
rc-update add ip6tables boot
# Start iptables now
rc-service iptables start
rc-service ip6tables start
Step 1.3: Configure Network Interfaces
The VMs see eth1 as a regular network interface – they don’t know it’s VXLAN:
# Edit network configuration
cat > /etc/network/interfaces << EOF
# Loopback
auto lo
iface lo inet loopback
# NAT interface (for internet)
auto eth0
iface eth0 inet dhcp
# Cluster interface (isolated network - VXLAN at Proxmox level)
auto eth1
iface eth1 inet static
address 10.50.1.101
netmask 255.255.255.0
# No gateway - cluster traffic stays within this network
EOF
# Restart networking
rc-service networking restart
# Verify both interfaces are configured
ip addr show eth0
ip addr show eth1
# Check that eth1 has the correct IP
# Should show: inet 10.50.1.101/24 scope global eth1
Step 1.4: Configure Hostname Resolution
# Create hosts file with all node cluster IPs
cat > /etc/hosts << EOF
127.0.0.1 localhost
::1 localhost
# K3s cluster nodes (cluster network)
10.50.1.101 k3s-1
10.50.1.102 k3s-2
10.50.1.103 k3s-3
EOF
# Verify
cat /etc/hosts
Step 1.5: Enable IP Forwarding
# Enable IP forwarding (required for Kubernetes)
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# Verify
sysctl net.ipv4.ip_forward
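Note that the `>>` append above adds a duplicate line every time it is re-run. A hedged idempotent variant, sketched against a demo path (`/tmp/sysctl.conf.demo` is an illustration; on the node you would point it at `/etc/sysctl.conf`):

```shell
# Idempotent sketch: append net.ipv4.ip_forward=1 only if it is missing.
# SYSCTL_CONF is a demo file here; on the node use /etc/sysctl.conf.
SYSCTL_CONF="/tmp/sysctl.conf.demo"
: > "$SYSCTL_CONF"                       # start from an empty demo file
add_forwarding() {
  grep -q '^net.ipv4.ip_forward=1$' "$SYSCTL_CONF" \
    || echo 'net.ipv4.ip_forward=1' >> "$SYSCTL_CONF"
}
add_forwarding
add_forwarding   # second call is a no-op, no duplicate line
cat "$SYSCTL_CONF"
```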
Step 1.6: Configure SSH (Optional but Recommended)
# Install and enable SSH server
apk add openssh-server
rc-update add sshd default
rc-service sshd start
# Change root password
passwd
Step 1.7: Verify Everything
# Check that eth1 is ready
ip link show eth1
ip addr show eth1
# Test internet connectivity (via eth0)
ping -c 2 8.8.8.8
# Verify iptables is installed
iptables --version
Step 1.8: Prepare for Cloning
# Clean up logs and temporary files
apk cache clean
rm -f /var/log/*.log
rm -f /root/.bash_history
history -c
# Remove SSH host keys (will be regenerated on boot)
rm -f /etc/ssh/ssh_host_*
# Shutdown the VM
poweroff
Part 2: Clone the Master to Create Workers
Step 2.1: Create Worker VMs from Template
- In Proxmox, right-click on the k3s-master-template VM
- Select "Clone"
- Create two clones with these settings:
Worker 1:
  - Name: k3s-2
  - VM ID: 102
  - Mode: Full Clone
  - Target storage: your storage
Worker 2:
  - Name: k3s-3
  - VM ID: 103
  - Mode: Full Clone
  - Target storage: your storage
Note: Both clones inherit the two network interfaces. The VXLAN at Proxmox level will handle isolation.
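If you prefer the Proxmox shell to the GUI, `qm` can create the same clones. A dry-run sketch that only prints the commands (TEMPLATE_ID=101 is an assumption; substitute your template's actual VM ID before running them):

```shell
# Print the qm clone commands for both workers (dry run - nothing is executed).
TEMPLATE_ID=101   # assumed VM ID of k3s-master-template
CMDS=""
for spec in "102:k3s-2" "103:k3s-3"; do
  vmid="${spec%%:*}"
  name="${spec##*:}"
  CMDS="${CMDS}qm clone ${TEMPLATE_ID} ${vmid} --name ${name} --full true
"
done
printf '%s' "$CMDS"
```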
Step 2.2: Configure Network for Master (k3s-1)
Start the master VM, set its permanent hostname, and verify the network. (The hostname matters: K3s uses it as the node name, so the temporary template name must go.)
# On the master VM, set the hostname
echo "k3s-1" > /etc/hostname
hostname -F /etc/hostname
# Check network configuration
ip addr show eth0 # Should have a DHCP IP
ip addr show eth1 # Should show 10.50.1.101/24
# If the eth1 IP is wrong, fix it:
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.101
netmask 255.255.255.0
EOF
rc-service networking restart
Step 2.3: Configure Network for Worker 1 (k3s-2)
# On worker 1 VM, configure eth1 with its cluster IP
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.102
netmask 255.255.255.0
EOF
# Restart networking
rc-service networking restart
# Change hostname
echo "k3s-2" > /etc/hostname
hostname -F /etc/hostname
# Verify
hostname
ip addr show eth1 # Should show 10.50.1.102
Step 2.4: Configure Network for Worker 2 (k3s-3)
# On worker 2 VM, configure eth1 with its cluster IP
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.103
netmask 255.255.255.0
EOF
# Restart networking
rc-service networking restart
# Change hostname
echo "k3s-3" > /etc/hostname
hostname -F /etc/hostname
# Verify
hostname
ip addr show eth1 # Should show 10.50.1.103
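The three per-node network configs above differ only in the last octet. A sketch that derives the file from a node number (NODE_NUM and the output path /tmp/interfaces.new are illustrative; copy the result to /etc/network/interfaces on the matching node):

```shell
# Generate the per-node network config from a node number (1, 2, or 3).
NODE_NUM=2                               # 1 = k3s-1, 2 = k3s-2, 3 = k3s-3
CLUSTER_IP="10.50.1.10${NODE_NUM}"
cat > /tmp/interfaces.new << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address ${CLUSTER_IP}
netmask 255.255.255.0
EOF
echo "Generated config for k3s-${NODE_NUM} (${CLUSTER_IP})"
```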
Step 2.5: Verify Cluster Network Connectivity
On ALL nodes, test connectivity using eth1 IPs:
# Test ping to all nodes via cluster network
ping -c 2 10.50.1.101
ping -c 2 10.50.1.102
ping -c 2 10.50.1.103
# All should succeed - Proxmox VXLAN is working transparently
# Also verify internet access via eth0
ping -c 2 8.8.8.8
Part 3: Install K3s on Master Node (k3s-1)
Step 3.1: Install K3s Master
# On master node (10.50.1.101)
# Install K3s - tell it to use eth1 for cluster communication
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
--node-ip=10.50.1.101 \
--advertise-address=10.50.1.101 \
--flannel-iface=eth1 \
--disable=traefik" \
sh -
What these flags do:
- --node-ip=10.50.1.101: K3s uses this IP for cluster communication
- --advertise-address=10.50.1.101: API server advertises this IP
- --flannel-iface=eth1: Flannel uses eth1 for pod networking
- --disable=traefik: Disable Traefik to save resources
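As an alternative to installer flags, K3s also reads a config file at startup, which survives re-running the installer. A sketch of the equivalent /etc/rancher/k3s/config.yaml (keys mirror the CLI flags above):

```yaml
# /etc/rancher/k3s/config.yaml - equivalent of the flags above
node-ip: 10.50.1.101
advertise-address: 10.50.1.101
flannel-iface: eth1
disable:
  - traefik
```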
Step 3.2: Configure kubectl
# Setup kubectl config
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 600 ~/.kube/config
# Edit the config to use IP instead of hostname (more reliable)
sed -i 's/127.0.0.1/10.50.1.101/g' ~/.kube/config
# Add to bashrc
echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc
source ~/.bashrc
Step 3.3: Get Node Token for Workers
# Save this token - you'll need it for worker nodes
cat /var/lib/rancher/k3s/server/node-token
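This token feeds the K3S_URL and K3S_TOKEN values used in Part 4. A sketch of how they fit together (the token value and /tmp path are placeholders so the snippet is self-contained; on a real worker you would read the actual token from the master):

```shell
# Assemble the agent join parameters (placeholder token for illustration only).
TOKEN_FILE="/tmp/node-token.demo"        # stands in for /var/lib/rancher/k3s/server/node-token
echo "K10demo::server:secret" > "$TOKEN_FILE"
TOKEN=$(cat "$TOKEN_FILE")
K3S_URL="https://10.50.1.101:6443"       # master's cluster IP + K3s API port
echo "K3S_URL=${K3S_URL}"
echo "K3S_TOKEN=${TOKEN}"
```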
Step 3.4: Verify Master is Ready
# Wait 30 seconds for K3s to initialize
sleep 30
# Check nodes (only master should appear)
kubectl get nodes -o wide
# You should see:
# NAME STATUS ROLES INTERNAL-IP ...
# k3s-1 Ready control-plane,master 10.50.1.101 ...
Part 4: Install K3s on Worker Nodes
Step 4.1: On Worker 1 (k3s-2)
# On worker 1 (10.50.1.102)
# Set variables (replace TOKEN with actual value from master)
MASTER_IP="10.50.1.101"
TOKEN="YOUR_TOKEN_FROM_MASTER"
# Install K3s agent - tell it to use eth1 for cluster communication
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} sh -s - agent \
--node-ip=10.50.1.102 \
--flannel-iface=eth1
# Check status
rc-service k3s-agent status
Step 4.2: On Worker 2 (k3s-3)
# On worker 2 (10.50.1.103)
# Set variables
MASTER_IP="10.50.1.101"
TOKEN="YOUR_TOKEN_FROM_MASTER"
# Install K3s agent
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} sh -s - agent \
--node-ip=10.50.1.103 \
--flannel-iface=eth1
# Check status
rc-service k3s-agent status
Step 4.3: Verify Cluster from Master
# On master node (k3s-1)
kubectl get nodes -o wide
# You should see all nodes with INTERNAL-IP from cluster network:
# NAME STATUS ROLES INTERNAL-IP ...
# k3s-1 Ready control-plane,master 10.50.1.101
# k3s-2 Ready <none> 10.50.1.102
# k3s-3 Ready <none> 10.50.1.103
Part 5: Verify Cluster Network
Step 5.1: Check Flannel Interface
# On any node, check the flannel interface
ip link show flannel.1
ip addr show flannel.1
# This shows the overlay network created by Flannel on top of eth1
Step 5.2: Deploy Test Pods
# On master, deploy test pods
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
spec:
replicas: 3
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: alpine
image: alpine:latest
command: ["sleep", "3600"]
EOF
# Wait for pods
kubectl get pods -o wide
# Note the pod IPs (they'll be in a different range, e.g., 10.42.x.x)
Step 5.3: Test Cross-Node Pod Communication
# Get pod IPs
kubectl get pods -o wide
# Exec into first pod (replace with actual pod name)
kubectl exec -it test-app-xxxx-yyyy -- sh
# Inside the pod, busybox already provides ping; installing iputils is optional
apk add iputils
# Ping pod on another node (use its IP from get pods output)
ping <other-pod-ip>
# Exit
exit
Note: The pods communicate over the Flannel overlay network, which is built on top of your eth1 network. The VXLAN at Proxmox level is completely transparent to Kubernetes.
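The manual exec-and-ping above can be scripted. A sketch that looks up the first two test pods by label (assumes the test-app deployment from Step 5.2 and a reachable cluster; it degrades to a message where kubectl is unavailable):

```shell
# Ping the second test pod from the first, resolving names and IPs by label.
if command -v kubectl >/dev/null 2>&1; then
  POD1=$(kubectl get pods -l app=test -o jsonpath='{.items[0].metadata.name}')
  POD2_IP=$(kubectl get pods -l app=test -o jsonpath='{.items[1].status.podIP}')
  kubectl exec "$POD1" -- ping -c 2 "$POD2_IP" \
    && RESULT="cross-node ping OK (${POD1} -> ${POD2_IP})" \
    || RESULT="cross-node ping FAILED"
else
  RESULT="kubectl not available - run this on the master node"
fi
echo "$RESULT"
```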
Part 6: Install Helm on Master
# On master node (k3s-1)
# Download and install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# Verify installation
helm version
# Add required Helm repositories
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo add jetstack https://charts.jetstack.io
helm repo update
# List added repos
helm repo list
Part 7: Install cert-manager
# On master node (k3s-1)
# Create namespace for cert-manager
kubectl create namespace cert-manager
# Install cert-manager
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--set installCRDs=true
# Wait for cert-manager pods to be ready
kubectl get pods -n cert-manager -w
# After all pods are running (Ctrl+C to exit), verify:
kubectl get pods -n cert-manager
Expected output:
NAME READY STATUS RESTARTS AGE
cert-manager-cainjector-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
cert-manager-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
cert-manager-webhook-xxxxxxxxxx-xxxxx 1/1 Running 0 1m
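cert-manager issues nothing on its own until an Issuer exists. A minimal self-signed ClusterIssuer sketch you could apply for lab certificates (the name selfsigned is an arbitrary choice, not part of the Rancher install that follows):

```yaml
# Apply with: kubectl apply -f selfsigned-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
```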
Part 8: Install Rancher
Step 8.1: Install Rancher
# On master node (k3s-1)
# Create namespace for Rancher
kubectl create namespace cattle-system
# Install Rancher
helm install rancher rancher-stable/rancher \
--namespace cattle-system \
--set hostname=rancher.local \
--set bootstrapPassword=admin123 \
--set replicas=1 \
--set global.cattle.psp.enabled=false
# Check installation status
kubectl -n cattle-system get pods -w
# Wait for Rancher to be ready (this may take 2-3 minutes)
kubectl -n cattle-system rollout status deploy/rancher
Step 8.2: Verify Rancher Installation
# Check all Rancher pods are running
kubectl -n cattle-system get pods
# Check services
kubectl -n cattle-system get svc
Step 8.3: Expose Rancher via NodePort
# Expose Rancher as NodePort
kubectl -n cattle-system patch svc rancher -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 443, "nodePort": 30443}]}}'
# Get the NodePort
NODE_PORT=$(kubectl -n cattle-system get svc rancher -o jsonpath='{.spec.ports[0].nodePort}')
echo "=================================================="
echo "Rancher is available at: https://10.50.1.101:${NODE_PORT}"
echo "Username: admin"
echo "Password: admin123"
echo "=================================================="
# Verify the service is properly exposed
kubectl -n cattle-system get svc rancher
Step 8.4: Access Rancher Dashboard
- Open your web browser and navigate to: `https://10.50.1.101:30443`
- Accept the security warning (self-signed certificate); this is expected
- Log in with the bootstrap credentials:
  - Username: admin
  - Password: admin123
- Change the admin password to something secure
- Accept the terms and conditions
- Set the server URL (use `https://10.50.1.101:30443`)
- Your cluster appears automatically as “local”
- Click “Cluster Explorer” to view nodes, workloads, etc.
Part 9: Test Your Complete Cluster
Step 9.1: Deploy a Test Application
# On master node, deploy nginx test
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-test
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-test
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30080
selector:
app: nginx
EOF
Step 9.2: Verify the Application
# Check pods are distributed across nodes
kubectl get pods -o wide
# Check service
kubectl get svc nginx-test
# Test the service from any node
curl http://10.50.1.101:30080
curl http://10.50.1.102:30080
curl http://10.50.1.103:30080
Step 9.3: View in Rancher
- Go to Rancher UI: `https://10.50.1.101:30443`
- Navigate to “Cluster Explorer” > “Workloads”
- You should see the nginx-test deployment
- Click on it to view details
Part 10: Post-Installation Tasks
Step 10.1: Install Local Path Provisioner (Storage)
# Note: K3s normally ships its own local-path storage class already.
# Only apply this if `kubectl get storageclass` shows none:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# Set as default storage class
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# Test with PVC
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 1Gi
EOF
# Check PVC status
kubectl get pvc
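The local-path provisioner uses the WaitForFirstConsumer binding mode, so the PVC will show Pending until a pod mounts it. A sketch of a consumer pod to trigger provisioning (the pod name test-pvc-pod is arbitrary):

```yaml
# Apply with: kubectl apply -f - ; the PVC binds once this pod is scheduled
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
  - name: alpine
    image: alpine:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
```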
Step 10.2: Install Metrics Server
# Note: K3s normally bundles metrics-server as a packaged component.
# Only apply this manually if `kubectl top nodes` fails:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Wait for it to start
kubectl -n kube-system rollout status deploy/metrics-server
# Test it
kubectl top nodes
kubectl top pods
Part 11: Maintenance Commands
On Master Node
# Check cluster status
kubectl get nodes
kubectl get pods -A
# View logs
cat /var/log/k3s.log
# Restart K3s
rc-service k3s restart
On Worker Nodes
# Check status
rc-service k3s-agent status
# View logs
cat /var/log/k3s-agent.log
# Restart agent
rc-service k3s-agent restart
Network Verification
# Check cluster network interface
ip addr show eth1
# Test cluster connectivity
ping -c 2 10.50.1.102
# Check Flannel overlay
ip link show flannel.1
Part 12: Troubleshooting Guide
Issue: Node Not Ready
# Check node details
kubectl describe node k3s-2
# On worker, check logs
cat /var/log/k3s-agent.log
# Restart agent
rc-service k3s-agent restart
Issue: Pods Stuck in Pending
# Check events
kubectl get events --all-namespaces
# Check node resources
kubectl describe nodes
# Check pod details
kubectl describe pod <pod-name>
Issue: Can’t Access Rancher
# Verify service
kubectl -n cattle-system get svc rancher
kubectl -n cattle-system get pods
# Port-forward for testing
kubectl -n cattle-system port-forward svc/rancher 8443:443
# Test locally
curl -k https://localhost:8443
Issue: Cluster Network Problems
# Check eth1 is up
ip link set eth1 up
ip addr show eth1
# Test basic connectivity
ping -c 2 10.50.1.102
# Check iptables (temporarily disable for testing)
rc-service iptables stop
# Test again, then re-enable:
rc-service iptables start
Part 13: Quick Reference
Node Information
| Node | Role | Cluster IP | Hostname |
|---|---|---|---|
| k3s-1 | Master | 10.50.1.101 | k3s-1 |
| k3s-2 | Worker | 10.50.1.102 | k3s-2 |
| k3s-3 | Worker | 10.50.1.103 | k3s-3 |
Important Ports
| Service | Port | Purpose |
|---|---|---|
| Kubernetes API | 6443 | K3s server API |
| Rancher NodePort | 30443 | Rancher dashboard |
| Test app NodePort | 30080 | Nginx test app |
Important Files
| File | Purpose |
|---|---|
| /var/lib/rancher/k3s/server/node-token | Worker join token |
| /etc/rancher/k3s/k3s.yaml | kubeconfig file |
| /var/log/k3s.log | Master logs |
| /var/log/k3s-agent.log | Worker logs |
| /etc/network/interfaces | Network config |
Useful Commands
# Cluster status
kubectl get nodes
kubectl get pods -A
# Logs
cat /var/log/k3s.log # master
cat /var/log/k3s-agent.log # worker
# Service management
rc-service k3s restart # master
rc-service k3s-agent restart # worker
# Network verification
ip addr show eth1
ping -c 2 10.50.1.102
Part 14: Final Verification
Run this on your master node:
echo "========== CLUSTER VERIFICATION =========="
echo ""
echo "1. NODE STATUS:"
kubectl get nodes
echo ""
echo "2. SYSTEM PODS:"
kubectl get pods -A | head -10
echo ""
echo "3. RANCHER STATUS:"
kubectl -n cattle-system get pods
echo ""
echo "4. TEST APPLICATION:"
kubectl get pods | grep nginx-test || echo "No test app deployed"
echo ""
echo "5. NETWORK INTERFACE:"
ip addr show eth1 | grep inet
echo ""
echo "6. ACCESS INFORMATION:"
echo "Rancher UI: https://10.50.1.101:30443"
echo "Login: admin / admin123"
echo ""
echo "=========================================="
Summary
You now have a fully functional K3s cluster with:
✅ 3 nodes (1 master, 2 workers)
✅ 50GB storage per node
✅ Dual network interfaces:
  - eth0: Internet access (NAT)
  - eth1: Cluster communication (VXLAN at Proxmox level, transparent to VMs)
✅ iptables properly installed
✅ Rancher for web-based management
✅ cert-manager for certificates
✅ Local path provisioner for storage
✅ Metrics server for monitoring
The VXLAN is completely transparent – your Alpine VMs just see eth1 as a normal network interface. All cluster communication runs over this isolated network.
Access Rancher at: `https://10.50.1.101:30443` (username: admin, password: admin123)