Version: 1.0
Date: May 30, 2025
Environment: VMware Workstation with Ubuntu 22.04
Cluster Configuration: 1 control-plane node (ubuntu0, 192.168.9.131, also schedulable for workloads) + 1 worker node (ubuntu1, 192.168.9.132)
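If the two VMs cannot yet resolve each other by hostname, a small /etc/hosts addition helps. This sketch assumes the IPs used throughout this guide; adjust them to your VMware network:

```bash
# Optional: let the nodes resolve each other by name (IPs from this guide)
cat <<EOF | sudo tee -a /etc/hosts
192.168.9.131 ubuntu0
192.168.9.132 ubuntu1
EOF
```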
Execute on both ubuntu0 and ubuntu1:
```bash
# Update system packages
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install -y apt-transport-https ca-certificates curl gpg

# Disable swap (required for Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Verify swap is disabled
free -h
```
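As a quick sanity check beyond free -h, you can confirm that the sed command above actually commented out the swap entry:

```bash
# The swap line in /etc/fstab should now start with '#'
grep swap /etc/fstab
```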
Execute on both nodes:
```bash
# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Verify modules are loaded
lsmod | grep br_netfilter
lsmod | grep overlay
```
Execute on both nodes:
```bash
# Set up required sysctl params
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl parameters
sudo sysctl --system

# Verify settings
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.ipv4.ip_forward
```
Execute on both nodes:
```bash
# Install containerd
sudo apt install -y containerd

# Create containerd configuration directory
sudo mkdir -p /etc/containerd

# Generate default configuration
containerd config default | sudo tee /etc/containerd/config.toml

# Enable SystemdCgroup in containerd config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

# Verify containerd is running
sudo systemctl status containerd
```
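Before moving on, it is worth confirming the sed edit took effect, since a cgroup driver mismatch between containerd and the kubelet is a common cause of crash-looping clusters:

```bash
# Should print: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml
```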
Execute on both nodes:
```bash
# Create the apt keyrings directory if it does not already exist
sudo mkdir -p /etc/apt/keyrings

# Add Kubernetes apt repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update apt package index
sudo apt update
```
Execute on both nodes:
```bash
# Install kubelet, kubeadm and kubectl
sudo apt install -y kubelet kubeadm kubectl

# Mark packages as held back from automatic updates
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet service
sudo systemctl enable kubelet

# Verify installation
kubeadm version
kubectl version --client
kubelet --version
```
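To double-check that the hold is in place (so a routine apt upgrade cannot move the cluster to an unplanned Kubernetes version):

```bash
# Should list kubelet, kubeadm and kubectl
apt-mark showhold
```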
Execute only on ubuntu0 (192.168.9.131):
```bash
# Initialize the cluster
sudo kubeadm init \
  --apiserver-advertise-address=192.168.9.131 \
  --pod-network-cidr=10.244.0.0/16 \
  --node-name=ubuntu0

# Set up kubectl for regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Remove taint from control plane to allow scheduling pods
kubectl taint nodes ubuntu0 node-role.kubernetes.io/control-plane:NoSchedule-

# Verify control plane node
kubectl get nodes
```
Important: Save the kubeadm join command from the init output. Example:
```bash
kubeadm join 192.168.9.131:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```
Execute only on ubuntu0:
```bash
# Install Flannel CNI
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Wait for Flannel pods to be ready
kubectl wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=60s

# Verify Flannel installation
kubectl get pods -n kube-flannel
kubectl get nodes
```
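Flannel's default network must match the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 in this guide). A quick way to check, assuming the standard kube-flannel.yml manifest:

```bash
# The "Network" field should read 10.244.0.0/16
kubectl get configmap kube-flannel-cfg -n kube-flannel -o jsonpath='{.data.net-conf\.json}'
```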
Execute on ubuntu1 (192.168.9.132):
```bash
# Use the join command from Step 7
sudo kubeadm join 192.168.9.131:6443 --token <your-token> \
  --discovery-token-ca-cert-hash sha256:<your-hash>
```
If you lost the join command, generate a new one on ubuntu0:
```bash
kubeadm token create --print-join-command
```
Execute on ubuntu0:
```bash
# Check all nodes
kubectl get nodes -o wide

# Check system pods
kubectl get pods --all-namespaces

# Check cluster info
kubectl cluster-info

# Check component status (deprecated since Kubernetes 1.19; informational only)
kubectl get componentstatuses
```
Expected output:
```
NAME      STATUS   ROLES           AGE   VERSION
ubuntu0   Ready    control-plane   5m    v1.28.x
ubuntu1   Ready    <none>          2m    v1.28.x
```
Execute on ubuntu0:
```bash
# Download and install metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Patch metrics-server for insecure kubelet connections
kubectl patch deployment metrics-server -n kube-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-insecure-tls"
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-preferred-address-types=InternalIP"
  }
]'

# Wait for metrics-server to be ready
kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=60s

# Verify metrics are working
kubectl top nodes
kubectl top pods --all-namespaces
```
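Note that kubectl top may report that metrics are not yet available for a minute or two after the pod becomes Ready, while the first scrape completes. Watching the command is an easy way to wait it out:

```bash
# Re-run every 5 seconds until node metrics appear (Ctrl+C to stop)
watch -n 5 kubectl top nodes
```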
Execute on ubuntu0:
```bash
# Install Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Wait for dashboard to be deployed
kubectl wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=60s

# Configure dashboard to skip authentication
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--enable-skip-login"
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--disable-settings-authorizer"
  }
]'

# Create service account with cluster-admin permissions
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF

# Patch dashboard deployment to use service account
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"template":{"spec":{"serviceAccountName":"dashboard-admin"}}}}'

# Wait for dashboard to restart
kubectl rollout status deployment/kubernetes-dashboard -n kubernetes-dashboard
```
```bash
# Start proxy on ubuntu0
kubectl proxy --address='0.0.0.0' --disable-filter=true &

# Access dashboard from your host machine at:
# http://192.168.9.131:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Click "Skip" when prompted for authentication
```
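The proxy runs in the background; when you are done with the dashboard, one simple way to stop it:

```bash
# Stop the background kubectl proxy
pkill -f 'kubectl proxy'
```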
```bash
# Edit dashboard service to use NodePort
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'

# Get the NodePort
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard

# Access dashboard at:
# https://192.168.9.131:<NodePort>
# https://192.168.9.132:<NodePort>
# Click "Skip" when prompted for authentication
```
```bash
# Create insecure dashboard service
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard-insecure
  name: kubernetes-dashboard-insecure
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
EOF

# Patch dashboard for insecure access
kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--insecure-bind-address=0.0.0.0"
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--insecure-port=9090"
  }
]'

# Get NodePort for insecure access
kubectl get svc kubernetes-dashboard-insecure -n kubernetes-dashboard

# Access via HTTP (no HTTPS, no authentication):
# http://192.168.9.131:<NodePort>
# http://192.168.9.132:<NodePort>
```
```bash
# Create test deployment
kubectl create deployment nginx-test --image=nginx --replicas=3

# Expose as service
kubectl expose deployment nginx-test --port=80 --type=NodePort

# Check deployment
kubectl get deployments
kubectl get pods -o wide
kubectl get services

# Test service access
kubectl get svc nginx-test
curl http://192.168.9.131:<NodePort>
curl http://192.168.9.132:<NodePort>
```
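Rather than copying the NodePort by hand, you can extract it with jsonpath; a small sketch using the nginx-test service created above:

```bash
# Grab the assigned NodePort and test the service from either node IP
NODE_PORT=$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.9.131:${NODE_PORT}
curl http://192.168.9.132:${NODE_PORT}
```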
```bash
# View resource usage
kubectl top nodes
kubectl top pods --all-namespaces

# View cluster information
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide

# View cluster events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# Monitor continuously
watch kubectl top nodes
watch kubectl top pods --all-namespaces
```
```bash
# Create monitoring script
cat <<'EOF' > ~/monitor-k8s.sh
#!/bin/bash
echo "=== Kubernetes Cluster Monitor ==="
echo "Date: $(date)"
echo ""
echo "=== Cluster Nodes ==="
kubectl get nodes -o wide
echo ""
echo "=== Resource Usage ==="
kubectl top nodes 2>/dev/null || echo "Metrics not available yet"
echo ""
echo "=== Pod Resource Usage ==="
kubectl top pods --all-namespaces 2>/dev/null || echo "Metrics not available yet"
echo ""
echo "=== System Pods Status ==="
kubectl get pods -n kube-system
echo ""
echo "=== User Workloads ==="
kubectl get pods --all-namespaces | grep -v kube-system | grep -v kubernetes-dashboard | grep -v kube-flannel
echo ""
echo "=== Services ==="
kubectl get svc --all-namespaces
echo ""
echo "=== Recent Events ==="
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -10
EOF

chmod +x ~/monitor-k8s.sh

# Run monitoring script
~/monitor-k8s.sh
```
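If you want the report captured periodically rather than run by hand, a cron entry works; this is one possible schedule (every 5 minutes, appending to a log file):

```bash
# Append the monitor output to ~/monitor-k8s.log every 5 minutes
(crontab -l 2>/dev/null; echo "*/5 * * * * $HOME/monitor-k8s.sh >> $HOME/monitor-k8s.log 2>&1") | crontab -
```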
```bash
# Check kubelet logs
sudo journalctl -xeu kubelet

# Check container runtime
sudo systemctl status containerd

# Restart kubelet
sudo systemctl restart kubelet

# Check node conditions
kubectl describe nodes
```
```bash
# Check Flannel pods
kubectl get pods -n kube-flannel

# Check Flannel logs
kubectl logs -n kube-flannel -l app=flannel

# Restart Flannel pods
kubectl delete pods -n kube-flannel -l app=flannel
```
```bash
# Check metrics-server logs
kubectl logs -n kube-system -l k8s-app=metrics-server

# Restart metrics-server
kubectl rollout restart deployment/metrics-server -n kube-system

# Verify metrics-server arguments
kubectl describe deployment metrics-server -n kube-system
```
```bash
# Check dashboard pods
kubectl get pods -n kubernetes-dashboard

# Check dashboard logs
kubectl logs -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

# Restart dashboard
kubectl rollout restart deployment/kubernetes-dashboard -n kubernetes-dashboard

# Check service endpoints
kubectl get endpoints -n kubernetes-dashboard
```
```bash
# Monitor resource usage
kubectl top nodes
kubectl describe nodes

# Check pod resource requests/limits
kubectl describe pods --all-namespaces | grep -A 5 -B 5 "Requests\|Limits"

# Check for pending pods
kubectl get pods --all-namespaces | grep Pending
```
```bash
# On worker node (ubuntu1)
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo rm -rf $HOME/.kube/config
```
```bash
# On control plane (ubuntu0)
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo rm -rf $HOME/.kube
```
```bash
# After reset, restart from Step 7 (Initialize Control Plane)
# Clean up any leftover containers and images; this cluster runs on containerd,
# not Docker, so use crictl (installed as a kubeadm dependency)
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock rm --all --force
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock rmi --prune
```
```bash
# View cluster information
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Resource monitoring
kubectl top nodes
kubectl top pods --all-namespaces

# Service discovery
kubectl get svc --all-namespaces
kubectl get endpoints --all-namespaces

# Logs and events
kubectl logs <pod-name> -n <namespace>
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
```
```bash
# Deploy applications
kubectl create deployment <name> --image=<image>
kubectl expose deployment <name> --port=<port> --type=NodePort

# Scale applications
kubectl scale deployment <name> --replicas=<number>

# Update applications
kubectl set image deployment/<name> <container>=<new-image>

# Delete applications
kubectl delete deployment <name>
kubectl delete service <name>
```
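As a concrete example of these commands, using the nginx-test deployment from the verification step (kubectl create deployment names the container after the image, here nginx; the 1.27 tag is only an illustration, adjust to a tag that exists):

```bash
# Scale the test deployment up, then roll it to a specific image tag
kubectl scale deployment nginx-test --replicas=5
kubectl set image deployment/nginx-test nginx=nginx:1.27
kubectl rollout status deployment/nginx-test
```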
```bash
# Apply YAML configurations
kubectl apply -f <file.yaml>

# Edit resources
kubectl edit deployment <name>
kubectl edit service <name>

# View resource definitions
kubectl get deployment <name> -o yaml
kubectl describe deployment <name>
```
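For the kubectl apply workflow, a minimal manifest looks like this (nginx-demo is a hypothetical name used only for illustration):

```bash
# Write a minimal Deployment manifest and apply it
cat <<EOF > nginx-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-demo.yaml
```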
For production deployments:
- Use multiple control-plane nodes for high availability instead of a single ubuntu0.
- Keep the control-plane NoSchedule taint in place rather than removing it.
- Require dashboard authentication (token or kubeconfig login) instead of skip-login and the insecure HTTP service.
- Avoid --kubelet-insecure-tls on metrics-server; provision proper kubelet serving certificates.
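For instance, if you later decide to keep workloads off the control plane, the taint removed during cluster initialization can be restored (this simply reverses the earlier command):

```bash
# Re-apply the NoSchedule taint on the control-plane node
kubectl taint nodes ubuntu0 node-role.kubernetes.io/control-plane:NoSchedule
```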
This guide provides a complete setup for a 2-node Kubernetes cluster suitable for learning and development purposes. The cluster includes:
✅ Functional Control Plane: ubuntu0 serves as both control plane and worker
✅ Worker Node: ubuntu1 provides additional compute capacity
✅ Pod Networking: Flannel CNI for pod-to-pod communication
✅ Metrics Monitoring: metrics-server for resource monitoring
✅ Web Dashboard: Kubernetes Dashboard with authentication disabled (lab use only)
✅ Resource Monitoring: node and pod metrics, cluster events, and a reusable monitoring script
For additional help:
- Kubernetes documentation: https://kubernetes.io/docs/
- kubeadm troubleshooting guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
Document End
This document was generated on May 30, 2025, for VMware Workstation environment with Ubuntu 22.04.