How hard is the hard way?
My Experience: Kubernetes The Hard Way (Ubuntu & OrbStack)
I provisioned four Ubuntu 22.04 LTS VMs inside OrbStack on macOS. With a few extra packages and a bit of configuration, the setup behaves much like bare metal.
Kelsey’s original “Kubernetes The Hard Way” is a fantastic deep dive. This is my spin with detailed callouts, full configs, and macOS/OrbStack/Ubuntu-specific notes.
Overview & Prerequisites
You’ll build a self-hosted Kubernetes cluster across four Ubuntu 22.04 LTS VMs (AMD64) in OrbStack:
| Host    | Role               | CPU | RAM  | Disk  |
|---------|--------------------|-----|------|-------|
| jumpbox | Admin workstation  | 1   | 1 GB | 10 GB |
| server  | Control plane node | 2   | 2 GB | 20 GB |
| node-0  | Worker node        | 1   | 2 GB | 20 GB |
| node-1  | Worker node        | 1   | 2 GB | 20 GB |
Bring your own IPs or let OrbStack assign them; just keep track of them and update `machines.txt` accordingly.
On your macOS host:
# 1. Install OrbStack if not already
brew install orbstack
# 2. Create the four Ubuntu 22.04 VMs (OrbStack shares host CPU/RAM dynamically, so the table above is a guideline)
#    On Apple Silicon, add -a amd64 to match the amd64 binaries below, or switch ARCH to arm64 in section 3
for vm in jumpbox server node-0 node-1; do
  orb create ubuntu:jammy $vm
done
Inside each VM, as root (or `sudo -i`):
apt update && apt -y install \
wget curl vim git socat conntrack ipset kmod jq sshpass openssl
`jq` helps parse JSON in scripts, and `sshpass` simplifies copying SSH keys in this lab.
Verify Ubuntu release:
grep PRETTY_NAME /etc/os-release
1. Provisioning Compute Resources
1.1 VM Hostnames & Networking
- Gather VM IPs and assign FQDNs. On jumpbox, create `machines.txt`:

10.0.0.10 server.k8s.local server 10.200.0.0/24
10.0.0.11 node-0.k8s.local node-0 10.200.1.0/24
10.0.0.12 node-1.k8s.local node-1 10.200.2.0/24
- Configure SSH root login (if needed) on each VM:
# run on each VM:
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd
1.2 Distribute SSH Keys
On jumpbox:
# 1. Generate keypair
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# 2. Copy keys to all machines
while read IP FQDN HOST SUBNET; do
sshpass -p ubuntu ssh-copy-id -o StrictHostKeyChecking=no ubuntu@$IP
done < machines.txt
# 3. Verify SSH access to each host
while read IP FQDN HOST SUBNET; do
  # -n stops ssh from eating the rest of machines.txt on stdin
  ssh -n -o BatchMode=yes ubuntu@$IP "hostname" || echo "SSH to $IP failed"
done < machines.txt
If your VM image uses a different default user, change `ubuntu@` accordingly.
1.3 Hostname & Hosts File
On jumpbox:
# 1. Set hostnames
echo "Setting hostnames..."
while read IP FQDN HOST SUBNET; do
  ssh -n ubuntu@$IP "sudo hostnamectl set-hostname $HOST"
done < machines.txt
# 2. Build a cluster-wide hosts file
cat > hosts <<EOF
# Kubernetes The Hard Way - Cluster Hosts
$(awk '{print $1, $2, $3}' machines.txt)
EOF
# 3. Distribute and append to /etc/hosts
while read IP FQDN HOST SUBNET; do
  scp hosts ubuntu@$IP:~/
  ssh -n ubuntu@$IP "sudo tee -a /etc/hosts < hosts"
done < machines.txt
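The jumpbox needs the same entries, since every later step reaches the nodes (and eventually the API server) by name. On the jumpbox itself:
sudo tee -a /etc/hosts < hosts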
Avoid duplicate `/etc/hosts` entries; clean up old ones before re-running this step.
1.4 Confirm Networking
# From jumpbox: ping and SSH test by hostname
for HOST in server node-0 node-1; do
ping -c1 $HOST &>/dev/null && echo "$HOST ping OK"
ssh ubuntu@$HOST "echo SSH to $HOST OK"
done
2. TLS Certificate Authority & Certificates
We’ll generate a root CA and component-specific certs using a single OpenSSL config file.
2.1 Create ca.conf
cat > ca.conf <<EOF
[ req ]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = v3_req
distinguished_name = dn
[ dn ]
C = US
ST = Oregon
L = Portland
O = kubernetes
CN = kubernetes-ca
[ v3_req ]
basicConstraints = CA:TRUE, pathlen:0
keyUsage = critical, digitalSignature, keyCertSign, keyEncipherment
subjectKeyIdentifier= hash
authorityKeyIdentifier = keyid:always,issuer
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
EOF
This config covers the CA itself (via `v3_req`). We’ll append profiles for the component CSRs next.
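One addition worth making before signing anything: clients validate the kube-apiserver cert against whatever name or IP they dial, so `[alt_names]` should also carry the server's FQDN and IPs. A small append, assuming the host IP from `machines.txt`, the first address of the 10.32.0.0/24 service range used later, and loopback; these lines land inside `[alt_names]` because it is still the last section of the file at this point:
cat >> ca.conf <<'EOF'
DNS.5 = server.k8s.local
IP.1 = 10.0.0.10
IP.2 = 10.32.0.1
IP.3 = 127.0.0.1
EOF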
2.2 Generate CA Key & Certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key \
-days 3650 -sha256 -out ca.crt -config ca.conf -extensions v3_req
Store `ca.key` securely; if it's compromised, all certs must be reissued.
2.3 CSR Profiles for Components
Append to `ca.conf`:
cat >> ca.conf <<'EOF'
[ admin_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
subjectAltName = @alt_names
[ kube-apiserver_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[ kube-controller-manager_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
[ kube-scheduler_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
[ kube-proxy_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
[ service-account_ext ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature
extendedKeyUsage = clientAuth
EOF
Use a quoted heredoc delimiter (`<<'EOF'`) so the brackets and slashes are written literally. Note that the controller manager and scheduler profiles use `clientAuth`: those certs authenticate the components to the API server via their kubeconfigs, they don't serve TLS.
2.4 Generate Keys, CSRs & Signed Certs
On jumpbox:
components=("admin" "kube-apiserver" "kube-controller-manager" "kube-scheduler" "kube-proxy" "service-account")
for comp in "${components[@]}"; do
  # 1. Generate private key
  openssl genrsa -out ${comp}.key 4096
  # 2. Create CSR using the matching extension profile
  ext="${comp}_ext"
  case $comp in
    admin) subj="/CN=admin/O=system:masters" ;;  # the system:masters group maps to cluster-admin
    *)     subj="/CN=system:${comp}" ;;
  esac
  openssl req -new -key ${comp}.key -out ${comp}.csr -subj "$subj" \
    -config ca.conf -reqexts $ext
  # 3. Sign CSR with CA
  openssl x509 -req -in ${comp}.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out ${comp}.crt -days 365 -sha256 -extfile ca.conf -extensions $ext
done
The `-subj` flag overrides the DN section for each CSR, and `-extfile` points back at `ca.conf` for the signing extensions. The admin cert is the one exception to the `system:<component>` naming: it carries `O=system:masters` so RBAC treats it as cluster-admin, which is why the loop special-cases it.
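Section 2.5 below also hands each worker a node-0.crt / node-0.key pair, which the loop above never produces. A sketch of those per-node kubelet certs, assuming the `system:node:<host>` naming the Node authorizer expects and the columns of `machines.txt`:
while read IP FQDN HOST SUBNET; do
  [ "$HOST" = "server" ] && continue
  openssl genrsa -out ${HOST}.key 4096
  openssl req -new -key ${HOST}.key -out ${HOST}.csr \
    -subj "/CN=system:node:${HOST}/O=system:nodes"
  openssl x509 -req -in ${HOST}.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out ${HOST}.crt -days 365 -sha256 \
    -extfile <(printf "subjectAltName=DNS:%s,DNS:%s,IP:%s" "$HOST" "$FQDN" "$IP")
done < machines.txt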
2.5 Distribute Certificates
- Server (control plane): `/var/lib/kubernetes`
- Workers: `/var/lib/kubelet`
Example:
# To server:
scp ca.crt ca.key kube-apiserver.crt kube-apiserver.key service-account.crt service-account.key ubuntu@server:~/
ssh ubuntu@server "sudo mkdir -p /var/lib/kubernetes && sudo mv ~/*.crt ~/*.key /var/lib/kubernetes/"
# To workers:
for host in node-0 node-1; do
  scp ca.crt ${host}.crt ${host}.key ubuntu@$host:~/
  ssh ubuntu@$host "sudo mkdir -p /var/lib/kubelet && \
    sudo mv ca.crt /var/lib/kubelet/ca.crt && \
    sudo mv ${host}.crt /var/lib/kubelet/kubelet.crt && \
    sudo mv ${host}.key /var/lib/kubelet/kubelet.key"
done
The worker cert and key must end up as `kubelet.crt` and `kubelet.key` in `/var/lib/kubelet`; the loop above renames them on the way in, which is what the kubelet config expects.
3. Download & Organize Binaries
3.1 Create Directory Structure
mkdir -p ~/downloads/{client,controller,worker,cni-plugins}
cd ~/downloads
3.2 Fetch Kubernetes Binaries (v1.32.3)
export ARCH=amd64
# Client (kubectl)
curl -sSLO https://dl.k8s.io/release/v1.32.3/bin/linux/$ARCH/kubectl
chmod +x kubectl && mv kubectl client/
# etcd & etcdctl (control plane)
curl -sSLO https://dl.etcd.io/v3.6.0/etcd-v3.6.0-linux-$ARCH.tar.gz
tar xzf etcd-v3.6.0-linux-$ARCH.tar.gz --strip-components=1 -C controller \
  etcd-v3.6.0-linux-$ARCH/etcd etcd-v3.6.0-linux-$ARCH/etcdctl
# Control plane binaries
downloads=(kube-apiserver kube-controller-manager kube-scheduler)
for bin in "${downloads[@]}"; do
curl -sSLO https://dl.k8s.io/release/v1.32.3/bin/linux/$ARCH/$bin
chmod +x $bin
mv $bin controller/
done
# Worker binaries
downloads=(kubelet kube-proxy)
for bin in "${downloads[@]}"; do
curl -sSLO https://dl.k8s.io/release/v1.32.3/bin/linux/$ARCH/$bin
chmod +x $bin
mv $bin worker/
done
# CNI plugins
curl -sSL https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-$ARCH-v1.6.2.tgz \
-o cni.tgz
tar xzf cni.tgz -C cni-plugins/
If `curl` throws TLS certificate errors inside the VMs, add `--insecure` temporarily or update the CA certs with `apt install ca-certificates`.
3.3 Install kubectl on Jumpbox
sudo mv client/kubectl /usr/local/bin/
kubectl version --client
4. Systemd Units & Control Plane Bootstrapping
4.1 etcd (Control Plane)
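Everything under ~/downloads so far lives on the jumpbox, so ship the control plane pieces (including etcd and etcdctl) to the server first; a quick way from the jumpbox, assuming the layout from section 3:
ssh ubuntu@server "mkdir -p ~/downloads/controller"
scp ~/downloads/controller/* ubuntu@server:~/downloads/controller/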
Install binaries & setup directories on server:
ssh ubuntu@server
sudo mv ~/downloads/controller/etcd* /usr/local/bin/
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp /var/lib/kubernetes/{ca.crt,kube-apiserver.crt,kube-apiserver.key} /etc/etcd/
sudo chown -R root:root /etc/etcd /var/lib/etcd
Unit file: /etc/systemd/system/etcd.service
[Unit]
Description=etcd server daemon
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
--name server \
--data-dir /var/lib/etcd \
--listen-peer-urls https://127.0.0.1:2380 \
--listen-client-urls https://127.0.0.1:2379 \
--advertise-client-urls https://server.k8s.local:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster server=https://127.0.0.1:2380 \
--client-cert-auth \
--trusted-ca-file /etc/etcd/ca.crt \
--cert-file /etc/etcd/kube-apiserver.crt \
--key-file /etc/etcd/kube-apiserver.key \
--peer-client-cert-auth \
--peer-trusted-ca-file /etc/etcd/ca.crt \
--peer-cert-file /etc/etcd/kube-apiserver.crt \
--peer-key-file /etc/etcd/kube-apiserver.key
Restart=on-failure

[Install]
WantedBy=multi-user.target
Flag notes:
- `--listen-client-urls`: where etcd listens for client (API) traffic.
- `--advertise-client-urls`: the endpoint other components are told to use.
sudo systemctl daemon-reload
sudo systemctl enable --now etcd
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert /etc/etcd/ca.crt --cert /etc/etcd/kube-apiserver.crt \
  --key /etc/etcd/kube-apiserver.key member list
If `member list` hangs, verify that ports 2379/2380 aren't firewalled (`sudo ufw status` on Ubuntu).
4.2 Kubernetes API Server
Move the control plane binaries into place on server (the certs already landed in `/var/lib/kubernetes` back in section 2.5):
sudo mv ~/downloads/controller/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/local/bin/
sudo mkdir -p /var/lib/kubernetes /etc/kubernetes/config
sudo mv ~/encryption-config.yaml /var/lib/kubernetes/
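That encryption-config.yaml hasn't appeared anywhere yet in this walkthrough. A minimal sketch, generated on the server (in your home directory, before the mv above), assuming you want Secrets encrypted at rest with a random AES key:
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > ~/encryption-config.yaml <<EOF
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
The API server only honors it if you also add --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml to the unit below; without that flag the file is simply ignored.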
Unit file: /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=network.target etcd.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=10.0.0.10 \
--allow-privileged=true \
--authorization-mode=Node,RBAC \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--etcd-servers=https://127.0.0.1:2379 \
--etcd-cafile=/var/lib/kubernetes/ca.crt \
--etcd-certfile=/var/lib/kubernetes/kube-apiserver.crt \
--etcd-keyfile=/var/lib/kubernetes/kube-apiserver.key \
--service-account-key-file=/var/lib/kubernetes/service-account.crt \
--service-account-signing-key-file=/var/lib/kubernetes/service-account.key \
--service-account-issuer=https://server.k8s.local:6443 \
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
--service-cluster-ip-range=10.32.0.0/24 \
--v=2
Restart=on-failure
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now kube-apiserver
Check `sudo journalctl -u kube-apiserver -f` for live logs.
4.3 Controller Manager & Scheduler
Controller Manager unit (`/etc/systemd/system/kube-controller-manager.service`):
[Unit]
Description=Kubernetes Controller Manager
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--leader-elect=true \
--bind-address=127.0.0.1 \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \
--root-ca-file=/var/lib/kubernetes/ca.crt \
--service-account-private-key-file=/var/lib/kubernetes/service-account.key \
--use-service-account-credentials=true \
--v=2
Restart=on-failure
[Install]
WantedBy=multi-user.target
Scheduler unit (`/etc/systemd/system/kube-scheduler.service`):
[Unit]
Description=Kubernetes Scheduler
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2
Restart=on-failure
[Install]
WantedBy=multi-user.target
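Both units reference kubeconfigs that this walkthrough hasn't generated. A sketch for one of them (kube-controller-manager), built on the jumpbox with the certs from section 2 and then copied into /var/lib/kubernetes on the server; kube-scheduler, kube-proxy, the per-node kubelet kubeconfigs, and the admin kubeconfig used later all follow the same four commands with their own cert pair and user name:
kubectl config set-cluster ktw \
  --certificate-authority=ca.crt --embed-certs=true \
  --server=https://server.k8s.local:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.crt \
  --client-key=kube-controller-manager.key --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default --cluster=ktw \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
scp kube-controller-manager.kubeconfig ubuntu@server:~/
ssh ubuntu@server "sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/"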
sudo systemctl daemon-reload
sudo systemctl enable --now kube-controller-manager kube-scheduler
5. CNI & Containerd on Worker Nodes
5.1 CNI Configuration
Create `/etc/cni/net.d/10-bridge.conf` on each worker:
{
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "k8s-br0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [ [ { "subnet": "10.200.0.0/24" } ] ],
"routes": [ { "dst": "0.0.0.0/0" } ]
}
}
Create `/etc/cni/net.d/99-loopback.conf`:
{
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
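The bridge subnet has to differ per worker, which makes this a good candidate for templating. A sketch that renders and ships both CNI config files from the jumpbox, assuming you saved the two JSON snippets above locally as 10-bridge.conf and 99-loopback.conf:
while read IP FQDN HOST SUBNET; do
  [ "$HOST" = "server" ] && continue
  sed "s|10.200.0.0/24|$SUBNET|" 10-bridge.conf > 10-bridge-$HOST.conf
  scp 10-bridge-$HOST.conf 99-loopback.conf ubuntu@$HOST:~/
  # -n stops ssh from consuming the rest of machines.txt
  ssh -n ubuntu@$HOST "sudo mkdir -p /etc/cni/net.d && \
    sudo mv 10-bridge-$HOST.conf /etc/cni/net.d/10-bridge.conf && \
    sudo mv 99-loopback.conf /etc/cni/net.d/99-loopback.conf"
done < machines.txt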
However you create them, each worker's 10-bridge.conf must use that host's Pod CIDR from machines.txt (10.200.1.0/24 on node-0, 10.200.2.0/24 on node-1); the 10.200.0.0/24 shown above is the server's range.
5.2 Containerd Config & Service
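Containerd and runc never appear in the downloads section, and the CNI plugin binaries and worker binaries fetched in section 3 still have to reach the workers. On Ubuntu the quickest route for the runtime is the distro packages (which also ship their own containerd.service, making the repo unit in 5.3 optional). A sketch from the jumpbox:
for host in node-0 node-1; do
  ssh ubuntu@$host "sudo apt -y install containerd runc && \
    sudo mkdir -p /opt/cni/bin && mkdir -p ~/downloads/worker /tmp/cni"
  scp ~/downloads/worker/* ubuntu@$host:~/downloads/worker/
  scp ~/downloads/cni-plugins/* ubuntu@$host:/tmp/cni/
  ssh ubuntu@$host "sudo mv /tmp/cni/* /opt/cni/bin/"
done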
Generate default config and adjust for CRI:
ssh ubuntu@$HOST
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Optional tuning: set SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
sudo systemctl restart containerd
The default config works, but verify your `runc` path and cgroup settings if pods fail to start.
5.3 Kubelet & Kube-Proxy
Copy and install:
ssh ubuntu@$HOST
sudo mv ~/downloads/worker/{kubelet,kube-proxy} /usr/local/bin/
sudo mkdir -p /var/lib/kubelet /var/lib/kube-proxy /var/lib/kubernetes
# ca.crt, kubelet.crt and kubelet.key already landed in /var/lib/kubelet in section 2.5
sudo mv ~/kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
sudo mv ~/${HOST}.kubeconfig /var/lib/kubelet/kubeconfig
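The kubelet also wants a config file; the kubelet.service unit moved in a moment presumably points at one. A sketch of what /var/lib/kubelet/kubelet-config.yaml could look like given the cert and kubeconfig paths above (the cluster.local domain and the systemd cgroup driver are assumptions; the latter should agree with the SystemdCgroup tuning from 5.2):
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /var/lib/kubelet/ca.crt
authorization:
  mode: Webhook
cgroupDriver: systemd
clusterDomain: cluster.local
resolvConf: /run/systemd/resolve/resolv.conf
tlsCertFile: /var/lib/kubelet/kubelet.crt
tlsPrivateKeyFile: /var/lib/kubelet/kubelet.key
EOF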
Enable pod networking prerequisites:
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Systemd services (on each worker):
# containerd, kubelet, kube-proxy units already in repo
sudo mv ~/downloads/units/{containerd.service,kubelet.service,kube-proxy.service} /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd kubelet kube-proxy
6. Networking & Pod Routing
By default, the Linux kernel on server doesn’t know how to route Pod subnets on workers.
On server:
export NODE0_IP=10.0.0.11 NODE1_IP=10.0.0.12
export NODE0_SUB=10.200.1.0/24 NODE1_SUB=10.200.2.0/24
sudo ip route add $NODE0_SUB via $NODE0_IP
sudo ip route add $NODE1_SUB via $NODE1_IP
On node-0:
export NODE1_IP=10.0.0.12 NODE1_SUB=10.200.2.0/24
sudo ip route add $NODE1_SUB via $NODE1_IP
On node-1:
export NODE0_IP=10.0.0.11 NODE0_SUB=10.200.1.0/24
sudo ip route add $NODE0_SUB via $NODE0_IP
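Typing routes by hand doesn't scale past a couple of nodes and has to be repeated after a reboot. A sketch that programs the full mesh from the jumpbox by reading machines.txt (ip route replace makes it safe to re-run):
mapfile -t rows < machines.txt
for row in "${rows[@]}"; do
  read -r IP FQDN HOST SUBNET <<< "$row"
  for other in "${rows[@]}"; do
    read -r OIP OFQDN OHOST OSUBNET <<< "$other"
    [ "$HOST" = "$OHOST" ] && continue
    ssh ubuntu@$IP "sudo ip route replace $OSUBNET via $OIP"
  done
done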
Verify:
ip route show
# once the admin kubeconfig from section 7.1 exists, from the jumpbox:
kubectl get nodes --kubeconfig admin.kubeconfig
If pods remain in `ContainerCreating`, check `sudo ctr -n k8s.io containers ls` (the containerd equivalent of `docker ps -a`) and `journalctl -u kubelet`.
7. Smoke Tests & Validation
On jumpbox (with `kubectl`):
7.1 Cluster Info
kubectl config set-cluster ktw \
  --certificate-authority=ca.crt --embed-certs=true \
  --server=https://server.k8s.local:6443 --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.crt --client-key=admin.key --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context ktw --cluster=ktw --user=admin --kubeconfig=admin.kubeconfig
kubectl config use-context ktw --kubeconfig=admin.kubeconfig
export KUBECONFIG=$PWD/admin.kubeconfig
kubectl version && kubectl get nodes
7.2 Deploy Nginx
kubectl create deployment nginx --image=nginx:stable --replicas=2
kubectl get pods -o wide
7.3 Port Forward & Access
POD=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD 8080:80 &
curl -I http://127.0.0.1:8080
Use `kubectl logs -f $POD` and `kubectl exec -it $POD -- /bin/sh` for debugging.
8. Cleanup & OrbStack Teardown
8.1 Delete Cluster Resources
kubectl delete deployment nginx
kubectl delete secret kubernetes-the-hard-way --ignore-not-found
8.2 Destroy VMs
On your macOS host:
for vm in jumpbox server node-0 node-1; do
  orb stop $vm
  orb delete $vm
done
Take a copy of the VMs before major changes so you can roll back easily; check `orb help` for the clone/export options your OrbStack version offers.
And that's a wrap! Your Ubuntu + OrbStack cluster should now be humming. If you hit roadblocks, check your systemd logs, verify certs, and review the `ca.conf` profiles. Good luck!