K3s Cluster with Rancher on Alpine Linux VMs (Clone Method)
50GB Storage – With VXLAN Networking
Your Network Architecture
- eth0: Proxmox NAT network (for internet access, management)
- eth1: VXLAN network (10.50.1.0/24 for cluster communication)
- Master Node: k3s-1 – VXLAN IP: 10.50.1.101
- Worker Node 1: k3s-2 – VXLAN IP: 10.50.1.102
- Worker Node 2: k3s-3 – VXLAN IP: 10.50.1.103
Part 1: Create and Configure the Master VM Template
Step 1.1: Install Alpine Linux on Master VM
- Create a new VM in Proxmox with these specs:
  - Name: k3s-master-template
  - OS: Alpine Linux ISO
  - Disk: 50GB
  - CPU: 2 cores
  - RAM: 2048 MB (2GB)
  - Network: two network interfaces (important!)
    - eth0: NAT mode (for internet)
    - eth1: Bridge mode (for VXLAN)
- Boot the VM and install Alpine:
Log in as root (no password) and run:
# Run the Alpine setup script
setup-alpine

Follow these prompts:
- Keyboard layout: us
- Hostname: k3s-master-template (temporary)
- Network interface: you'll configure both interfaces
  - First interface (eth0): DHCP (for NAT/internet)
  - Second interface (eth1): we'll configure it statically later
- Password: alpine (temporary)
- Timezone: UTC
- Proxy: none
- NTP client: chrony
- Available disks: sda
- How to use disk: sys
- Erase disk? y
- Continue? y
- After installation completes, the VM will reboot. Log in as root with password alpine.
Step 1.2: Update and Install Basic Packages
# Update package repository
apk update
apk upgrade
# Install essential tools
apk add curl bash nano htop vim
# Install network tools (important for VXLAN)
apk add net-tools bind-tools bridge-utils iptables
Step 1.3: Configure Network Interfaces
# Edit network configuration
cat > /etc/network/interfaces << EOF
# Loopback
auto lo
iface lo inet loopback
# NAT interface (for internet)
auto eth0
iface eth0 inet dhcp
# VXLAN interface (for cluster communication)
auto eth1
iface eth1 inet static
address 10.50.1.101
netmask 255.255.255.0
# NO gateway on eth1 - cluster traffic stays within VXLAN
EOF
# Restart networking
rc-service networking restart
# Verify both interfaces are configured
ip addr show eth0
ip addr show eth1
# Check that eth1 has the correct IP
# Should show: inet 10.50.1.101/24 scope global eth1
Step 1.4: Configure Hostname Resolution
# Create hosts file with all node VXLAN IPs
cat > /etc/hosts << EOF
127.0.0.1 localhost
::1 localhost
# K3s cluster nodes (VXLAN network)
10.50.1.101 k3s-1
10.50.1.102 k3s-2
10.50.1.103 k3s-3
EOF
# Verify
cat /etc/hosts
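Since the same three entries must land on every node, you can also generate the block from a single node list instead of typing it by hand; this keeps all nodes identical if you later add a fourth. A sketch, assuming the node list from this guide (`generate_hosts_block` is an illustrative helper name):

```shell
#!/bin/sh
# Generate the K3s cluster block for /etc/hosts from one node list,
# so every node gets identical, typo-free entries.
NODES="k3s-1=10.50.1.101 k3s-2=10.50.1.102 k3s-3=10.50.1.103"

generate_hosts_block() {
    echo "# K3s cluster nodes (VXLAN network)"
    for node in $NODES; do
        name=${node%%=*}   # part before '='
        ip=${node##*=}     # part after '='
        echo "$ip $name"
    done
}

# Append to the hosts file (run as root):
#   generate_hosts_block >> /etc/hosts
generate_hosts_block
```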
Step 1.5: Enable IP Forwarding
# Enable IP forwarding (required for Kubernetes networking)
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# Verify
sysctl net.ipv4.ip_forward
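The same flag can be checked straight from procfs, which works even in a stripped-down image where the `sysctl` tool is missing. A minimal sketch (`ip_forward_enabled` is an illustrative helper name):

```shell
#!/bin/sh
# Read the forwarding flag directly from procfs; this is the same value
# that `sysctl net.ipv4.ip_forward` reports.
ip_forward_enabled() {
    [ "$(cat /proc/sys/net/ipv4/ip_forward)" = "1" ]
}

if ip_forward_enabled; then
    echo "IP forwarding: enabled"
else
    echo "IP forwarding: disabled - check /etc/sysctl.conf"
fi
```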
Step 1.6: Configure SSH (Optional but Recommended)
# Install and enable SSH server
apk add openssh-server
rc-update add sshd default
rc-service sshd start
# Change root password
passwd
Step 1.7: Verify VXLAN Interface
# Check that eth1 is ready for cluster communication
ip link show eth1
ip addr show eth1
# Test internet connectivity (via eth0)
ping -c 2 8.8.8.8
Step 1.8: Prepare for Cloning
# Clean up logs and temporary files
apk cache clean
rm -f /var/log/*.log
rm -f /root/.bash_history
history -c
# Remove SSH host keys (will be regenerated on boot)
rm -f /etc/ssh/ssh_host_*
# Shutdown the VM
poweroff
Part 2: Clone the Master to Create Workers
Step 2.1: Create Worker VMs from Template
- In Proxmox, right-click on the k3s-master-template VM
- Select “Clone”
- Create two clones with these settings:

Worker 1:
- Name: k3s-2
- VM ID: 102
- Mode: Full Clone
- Target storage: your storage

Worker 2:
- Name: k3s-3
- VM ID: 103
- Mode: Full Clone
- Target storage: your storage
Important: Both clones will inherit the two network interfaces (eth0 NAT, eth1 VXLAN).
Step 2.2: Configure Network for Master (k3s-1)
Start the master VM and verify network:
# On master VM, check network configuration
ip addr show eth0 # Should have DHCP IP for internet
ip addr show eth1 # Should show 10.50.1.101/24
# If eth1 IP is wrong, fix it:
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.101
netmask 255.255.255.0
EOF
rc-service networking restart

# The master still carries the template's temporary hostname - set it to k3s-1
echo "k3s-1" > /etc/hostname
hostname -F /etc/hostname

# Verify
hostname
Step 2.3: Configure Network for Worker 1 (k3s-2)
# On worker 1 VM, configure eth1 with its VXLAN IP
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.102
netmask 255.255.255.0
EOF
# Restart networking
rc-service networking restart
# Change hostname
echo "k3s-2" > /etc/hostname
hostname -F /etc/hostname
# Verify
hostname
ip addr show eth1 # Should show 10.50.1.102
Step 2.4: Configure Network for Worker 2 (k3s-3)
# On worker 2 VM, configure eth1 with its VXLAN IP
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.50.1.103
netmask 255.255.255.0
EOF
# Restart networking
rc-service networking restart
# Change hostname
echo "k3s-3" > /etc/hostname
hostname -F /etc/hostname
# Verify
hostname
ip addr show eth1 # Should show 10.50.1.103
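Steps 2.2 through 2.4 write the same file and differ only in the eth1 address, so you can generate it from one parameter and avoid editing three copies by hand. A sketch (`write_interfaces` is an illustrative helper name):

```shell
#!/bin/sh
# Emit the /etc/network/interfaces content for a node, given its VXLAN IP.
# Usage on a node (as root): write_interfaces 10.50.1.102 > /etc/network/interfaces
write_interfaces() {
    vxlan_ip=$1
    cat << EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address ${vxlan_ip}
    netmask 255.255.255.0
EOF
}

# Example: print the config for worker 1
write_interfaces 10.50.1.102
```

Remember to restart networking (`rc-service networking restart`) after writing the file, as in the steps above.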
Step 2.5: Verify VXLAN Network Connectivity
On ALL nodes, test VXLAN connectivity (using eth1 IPs):
# Test ping to all nodes via VXLAN network
ping -c 2 10.50.1.101
ping -c 2 10.50.1.102
ping -c 2 10.50.1.103
# All should succeed - this confirms VXLAN is working
# Also verify internet access via NAT (eth0)
ping -c 2 8.8.8.8
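The per-node pings above can be wrapped in one loop that prints a pass/fail line per address, which is handy to rerun on each node. A sketch (`check_vxlan` is an illustrative helper name):

```shell
#!/bin/sh
# Ping each cluster VXLAN IP once and report the result per node.
check_vxlan() {
    for ip in "$@"; do
        if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
            echo "OK   $ip"
        else
            echo "FAIL $ip"
        fi
    done
}

# Run on every node; all three lines should read OK
check_vxlan 10.50.1.101 10.50.1.102 10.50.1.103
```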
Part 3: Install K3s on Master Node (k3s-1)
Step 3.1: Install K3s Master with VXLAN Configuration
# On master node (10.50.1.101)
# Install K3s with VXLAN networking on eth1
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
--node-ip=10.50.1.101 \
--advertise-address=10.50.1.101 \
--flannel-iface=eth1 \
--flannel-backend=vxlan \
--disable=traefik" \
sh -
What these flags do:
- --node-ip=10.50.1.101: K3s uses this IP for cluster communication
- --advertise-address=10.50.1.101: the API server advertises this IP to the other nodes
- --flannel-iface=eth1: Flannel uses the VXLAN interface
- --flannel-backend=vxlan: use VXLAN for pod networking
- --disable=traefik: disable the bundled Traefik ingress (optional)
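The same settings can live in K3s's config file instead of the install command, which keeps them in place if you ever rerun the installer. A config sketch, assuming the defaults from this guide (K3s reads /etc/rancher/k3s/config.yaml at startup):

```yaml
# /etc/rancher/k3s/config.yaml - equivalent to the INSTALL_K3S_EXEC flags above
node-ip: 10.50.1.101
advertise-address: 10.50.1.101
flannel-iface: eth1
flannel-backend: vxlan
disable:
  - traefik
```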
Step 3.2: Configure kubectl
# Setup kubectl config
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 600 ~/.kube/config
# Add to bashrc
echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc
source ~/.bashrc
Step 3.3: Get Node Token for Workers
# Save this token - you'll need it for worker nodes
cat /var/lib/rancher/k3s/server/node-token
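The token is a long string, and a truncated copy/paste is a common cause of agents failing to join. A quick shape check catches that before you use it on the workers; a sketch, assuming the usual token layout (`K10<64 hex chars>::server:<secret>`, which can vary by K3s version):

```shell
#!/bin/sh
# Sanity-check a K3s node token before pasting it into the agent install.
# Assumed shape (typical, but version-dependent): K10<64 hex chars>::server:<secret>
is_k3s_token() {
    echo "$1" | grep -Eq '^K10[0-9a-f]{64}::server:.+$'
}

# Example with a dummy token built from 64 zeros:
dummy="K10$(printf '%064d' 0)::server:mysecret"
if is_k3s_token "$dummy"; then
    echo "token looks well-formed"
else
    echo "token looks truncated or malformed"
fi
```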
Step 3.4: Verify Master is Ready
# Wait 30 seconds for K3s to initialize
sleep 30
# Check nodes (only master should appear)
kubectl get nodes -o wide
# You should see:
# NAME STATUS ROLES INTERNAL-IP ...
# k3s-1 Ready control-plane,master 10.50.1.101 ...
Part 4: Install K3s on Worker Nodes with VXLAN
Step 4.1: On Worker 1 (k3s-2)
# On worker 1 (10.50.1.102)
# Set variables (replace TOKEN with actual value from master)
MASTER_IP="10.50.1.101"
TOKEN="YOUR_TOKEN_FROM_MASTER"
# Install K3s agent with VXLAN configuration
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} sh -s - agent \
--node-ip=10.50.1.102 \
--flannel-iface=eth1
# Check status
rc-service k3s-agent status
Step 4.2: On Worker 2 (k3s-3)
# On worker 2 (10.50.1.103)
# Set variables
MASTER_IP="10.50.1.101"
TOKEN="YOUR_TOKEN_FROM_MASTER"
# Install K3s agent with VXLAN configuration
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} sh -s - agent \
--node-ip=10.50.1.103 \
--flannel-iface=eth1
# Check status
rc-service k3s-agent status
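As on the master, the agent flags can instead be kept in K3s's config file so they survive reinstalls. A config sketch for worker 2, assuming the values from this guide (the token placeholder stays as-is):

```yaml
# /etc/rancher/k3s/config.yaml on k3s-3 - equivalent to the agent flags above
server: https://10.50.1.101:6443
token: YOUR_TOKEN_FROM_MASTER
node-ip: 10.50.1.103
flannel-iface: eth1
```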
Step 4.3: Verify Cluster from Master
# On master node (k3s-1)
kubectl get nodes -o wide
# You should see all nodes with INTERNAL-IP from VXLAN network:
# NAME STATUS ROLES INTERNAL-IP ...
# k3s-1 Ready control-plane,master 10.50.1.101
# k3s-2 Ready <none> 10.50.1.102
# k3s-3 Ready <none> 10.50.1.103
# Verify all traffic is over VXLAN (eth1)
# On any node, check connections:
netstat -an | grep 10.50.1
Part 5: Verify VXLAN Network is Working
Step 5.1: Check Flannel Interface
# On any node, check the flannel interface
ip link show flannel.1
ip addr show flannel.1
# Check VXLAN forwarding entries (the VTEPs of the other nodes)
bridge fdb show dev flannel.1
Step 5.2: Deploy Test Pods Across Nodes
# On master, deploy test pods
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: alpine
        image: alpine:latest
        command: ["sleep", "3600"]
EOF
# Wait for pods
kubectl get pods -o wide
# Check pod IPs (they should be on the VXLAN network)
# Typically 10.42.x.x
Step 5.3: Test Cross-Node Pod Communication
# Get pod IPs
kubectl get pods -o wide
# Exec into first pod
kubectl exec -it test-app-xxxx-yyyy -- sh
# Inside the pod: busybox ping is usually present; install the full version if needed
apk add iputils
# Ping a pod on another node (use its IP from kubectl get pods -o wide)
ping -c 2 <other-pod-ip>
# Exit
exit
Part 6: Install Rancher (Same as before)
# On master node, continue with Helm, cert-manager, and Rancher
# (Steps 5-8 from previous guide)
Important Notes About Your Network
Traffic Flow
- eth0 (NAT): Internet access, pulling container images, updates
- eth1 (VXLAN): All Kubernetes cluster traffic
- Control plane communication
- Pod-to-pod networking
- Flannel overlay network
- Service traffic
Verification Commands
# Check which interface K3s is using
kubectl get nodes -o wide # Shows INTERNAL-IP (should be 10.50.1.x)
# Check routes
ip route show
# Should show routes for 10.42.0.0/16 via flannel.1
# Check VXLAN traffic counters (flannel's VXLAN runs over UDP port 8472)
ip -s link show flannel.1
ip -s link show eth1
# Monitor cluster traffic (install tcpdump first: apk add tcpdump)
tcpdump -i eth1 -n not port 22
Benefits of This Setup
- Isolation: Cluster traffic separate from management traffic
- Performance: VXLAN is Flannel's default backend and adds only minimal encapsulation overhead
- Scalability: Can add nodes across different Proxmox hosts
- Security: Cluster network isolated from NAT network
Troubleshooting VXLAN
Issue: Nodes can’t ping each other on 10.50.1.x
# Check if eth1 is up
ip link set eth1 up
# Check IP assignment
ip addr show eth1
# Check firewall
rc-service iptables stop # Temporarily disable for testing
Issue: Pods can’t communicate across nodes
# K3s runs flannel inside the k3s binary - there is no separate flannel pod
# Check the K3s service logs instead (the OpenRC install writes them here)
tail -n 50 /var/log/k3s.log        # on the master
tail -n 50 /var/log/k3s-agent.log  # on the workers
# Check VXLAN port (8472) is open
netstat -uln | grep 8472
Issue: Internet not working (eth0)
# Renew the DHCP lease (Alpine's ifupdown uses udhcpc)
ifdown eth0 && ifup eth0
# Check DNS
cat /etc/resolv.conf