My Kubernetes Journey

Kubernetes (a.k.a. k8s) has been one of the hottest topics in the infrastructure world for about two years. It wasn’t until late last year that I really got on the container bandwagon via Docker. Kubernetes, though, was something I didn’t want to touch. I thought it was overkill for my modest needs at home and too complicated to delve into without solid time set aside for setup and education. The stars aligned when I received access to some labs at work and the gift of a quiet week thanks to the 4th of July falling on a Thursday.

A few weeks ago I talked to a co-worker who set up a cluster for himself using Rancher Kubernetes Engine (RKE). RKE stands up the environment in containers, driving the configuration from a questionnaire. After a couple of hours of experimenting, it appeared to work at first but threw errors whenever I went beyond the basic configuration. Out of time for the day, I stopped and came back a couple of days later.
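For context, the RKE workflow boils down to two commands. This is a hedged sketch of how I understand it works, not a record of exactly what my co-worker ran:

```sh
# RKE's questionnaire writes a cluster.yml describing nodes and roles.
rke config

# rke up then stands the cluster up as containers on those nodes.
rke up
```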

Instead of continuing with the existing installation, I threw out the VM and built a new one. The main difference is that I took the approach described on kubernetes.io and used kubeadm instead of RKE. The instructions are more manual, with more room for error. A typical beginner’s k8s cluster is a single master with two worker nodes. I could have gone for HA, but I wanted to make sure it worked before getting fancy. The most error-prone part was installing kubectl, thanks to paste problems over VDI. In the end, kubectl get nodes showed three nodes, two workers and one master, and kubectl get pods --all-namespaces listed the system pods as expected.
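For the record, the kubeadm flow looked roughly like the sketch below. The pod CIDR shown is the flannel default that Canal uses, and the join parameters are placeholders, not values from my lab:

```sh
# On the master: initialize the control plane. The CIDR must match
# the network add-on's expectations (Canal/flannel default shown).
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Give the regular user a working kubectl config.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker: join using the token kubeadm init printed
# (placeholders below; in practice, copied from the init output).
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Verify: one master and two workers, plus the system pods.
kubectl get nodes
kubectl get pods --all-namespaces
```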

If you are looking to set up Kubernetes, I’d recommend the getting started guide. Be careful to read the details, as there are a couple of things you can miss. For example, using Canal as the network add-on requires a pod network CIDR to be specified when the cluster is initialized. You can copy and paste most of the commands, but you still have to satisfy requirements like that one.
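Concretely, that means passing the CIDR at init time (as in the sketch above) and then applying Canal’s manifest. The URL below is a placeholder for whatever the guide currently links to:

```sh
# Apply the Canal manifest linked from the getting started guide
# (<canal-manifest-url> is a placeholder, not a real address).
kubectl apply -f <canal-manifest-url>

# Nodes report NotReady until the network add-on's pods come up.
kubectl get nodes
```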

The configuration guide recommends running Sonobuoy to test the environment. It took a while to run, so I cancelled it when I had to go. Cancelled mid-run, it wasn’t able to delete the namespaces and pods it had created, so I had to clean those up manually.
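For anyone trying it, the Sonobuoy flow is roughly the following. The namespace in the manual cleanup step is an assumption; the name varies between Sonobuoy versions:

```sh
# Kick off the conformance tests and poll for progress.
sonobuoy run
sonobuoy status

# Normal cleanup path once a run finishes (or after cancelling):
sonobuoy delete

# Fallback if delete doesn't catch everything, as happened to me.
# The namespace name depends on the Sonobuoy version in use.
kubectl delete namespace heptio-sonobuoy
```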

Once the environment was stood up, I tried to configure the Web UI, with little success at first. Enabling it is trivial, but by default the Dashboard only listens for requests from 127.0.0.1. Since a master node is unlikely to have a graphical interface or browser installed, this required a little research. The Dashboard UI requires kubectl proxy to run on the local machine, which opens a connection to the cluster kubectl is communicating with. Then another bump in the road: a token is needed to access the UI. The documentation provides the command to find the token, but contrary to what I thought, the service account behind it hadn’t been created yet. The sample user documentation says:

“Copy provided snippets to some dashboard-adminuser.yaml file and use kubectl apply -f dashboard-adminuser.yaml to create them.”

Following the instructions, I put their examples in a file and applied it, but it never created a service account. A few kubectl get commands showed me the account was missing, so I ran each snippet on its own. After that, Kubernetes gave me the token as expected.
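For reference, the snippets amounted to something like the YAML below. Treat this as a hedged reconstruction of the Dashboard docs of that era, not a drop-in file; the admin-user name and kube-system namespace follow their sample:

```yaml
# dashboard-adminuser.yaml: a service account plus a binding that
# grants it cluster-admin (fine for a lab, too broad for production).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```

With the account in place, the token lookup the docs give is along these lines, and kubectl proxy exposes the UI locally:

```sh
# Pull the token out of the service account's secret (the command the
# Dashboard docs suggested at the time).
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

# Proxy the API server to localhost; the Dashboard is then reachable
# under the proxy path (the exact path varies by Dashboard version).
kubectl proxy
```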

I also installed Helm, a package manager for k8s. That was almost trivial, but nothing worked when I tried to install a sample stable/mysql deployment. It appears Helm doesn’t create a service account when deploying Tiller. After taking the recommended steps to create one, it worked.
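The fix, as commonly recommended for Helm 2, was roughly this: create a service account for Tiller, bind it to cluster-admin (acceptable in a lab, not in production), and deploy Tiller with it:

```sh
# Create Tiller's service account and grant it cluster-admin.
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# (Re)deploy Tiller using that account.
helm init --service-account tiller

# The sample chart then installs cleanly.
helm install stable/mysql
```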

At this point the cluster appears to work. But I feel like a dog who caught a car: I don’t know what to do with it. I don’t have a need for a Kubernetes cluster, but it’s good experience to gain. Next time I have an opportunity to work on it, I’ll finish setting up the web UI and try to build a small proof-of-concept application. Where I take it from there is anyone’s guess.

My original concern about k8s being fiddly and fickle seems to be true. The documentation is clear, but everything I did ran into a problem that others have experienced too. Kubernetes is still relatively immature and under active development, so these problems should go away in time. But there is a real learning curve and time commitment to deploying it successfully.