When I created a small "bare-metal" Kubernetes cluster on Linode, I used `kubeadm` to perform the installation.
Since I wanted most of the communication to happen over the Linode internal network, I ran `kubeadm init --apiserver-advertise-address=192.168.X.X`.
Unfortunately, this doesn't allow me to control the cluster remotely, because the Kubernetes apiserver's SSL certificate doesn't include SANs for the host's public IP address or hostname.
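You can check which names the apiserver certificate actually covers with `openssl`. This is a sketch run on the master node, and the path assumes kubeadm's default certificate location:

```shell
# Print the SANs baked into the current apiserver certificate.
# /etc/kubernetes/pki is kubeadm's default cert directory.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
    | grep -A1 'Subject Alternative Name'
```

If the public IP or hostname isn't in that list, remote `kubectl` connections will fail TLS verification.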
After a bit of searching and troubleshooting, it seems that all I needed to do was generate new SSL certificates with these additional SANs.
`kubeadm` includes an alpha tool to do this, which is fairly workable.
To regenerate the SSL certificates:
- Delete or move the old certificates out of `/etc/kubernetes/pki`.
- Run `sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address 192.168.X.X --apiserver-cert-extra-sans <sans>`, where `<sans>` is a comma-separated list of IP addresses and/or hostnames.
- Restart the apiserver. I don't quite know the proper way to do this, so I just deleted the local Docker container and let Kubernetes recreate it.
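The container-deletion approach from the last step can be sketched like this. The `k8s_kube-apiserver` name filter is an assumption based on the kubelet's usual `k8s_<container>_<pod>_...` container-naming scheme:

```shell
# The apiserver runs as a static pod managed by the kubelet, so removing
# its Docker container forces the kubelet to start a fresh one that picks
# up the new certificates.
docker ps --quiet --filter name=k8s_kube-apiserver \
    | xargs --no-run-if-empty docker rm -f
```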
The catch is that running `kubectl` from Sydney against the Fremont cluster is so much slower than running `kubectl` locally inside an SSH session that I might just continue to run `kubectl` over SSH rather than on my local PCs.
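For the record, pointing a local `kubectl` at the cluster would look roughly like this. It assumes kubeadm's default admin kubeconfig path and cluster name (`kubernetes`), with `<public-ip>` as a placeholder for an address that is now in the certificate's SANs:

```shell
# Copy the cluster's admin kubeconfig down, then rewrite its server
# address to the public IP instead of the internal 192.168.X.X one.
scp root@<public-ip>:/etc/kubernetes/admin.conf ~/.kube/linode.conf
kubectl --kubeconfig ~/.kube/linode.conf config set-cluster kubernetes \
    --server=https://<public-ip>:6443
kubectl --kubeconfig ~/.kube/linode.conf get nodes
```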
1. SAN: Subject Alternative Name, an additional name in the certificate that the certificate can be used for.
2. Now that I think about it, I should have probably just restarted the local Docker container.