Rockford Lhotka

 Tuesday, September 25, 2018

As people reading my blog know, I'm an advocate of container-based deployment of server software. The most commonly used container technology at the moment is Docker. And the most popular way to orchestrate or manage clusters of Docker containers is via Kubernetes (K8s).

Azure, AWS, and GCP all have managed K8s offerings, so if you can use the public cloud there's no good reason I can see for installing and managing your own K8s environment. It is far simpler to just use one of the pre-existing managed offerings.

But if you need to run K8s in your own datacenter then you'll need some way to get a cluster going. I thought I'd figure this out from the base up, installing everything. There are some pretty good docs out there, but as with all things, I ran into some sharp edges and bumps on the way, so I thought I'd write up my experience in the hopes that it helps someone else (or me, next time I need to do an install).

Note: I haven't been a system admin of any sort since 1992, when I was admin for a Novell file server, a Windows NT server, and a couple VAX computers. So if you are a system admin for Linux and you think I did dumb stuff, I probably did 😃

The environment I set up is this:

  1. 1 K8s control node
  2. 2 K8s worker nodes
  3. All running as Ubuntu server VMs in Hyper-V on a single host server in the Magenic data center

Install Ubuntu server

So step 1 was to install Ubuntu 64-bit server on three VMs on our Hyper-V server.

Make sure to install Ubuntu server, not desktop; beyond that, this article has good instructions.

This is pretty straightforward; the only notes are:

  1. When the server install offers to pre-install stuff, DON'T (at least don't pre-install Docker)
  2. Make sure the IP addresses won't change over time - K8s doesn't like that
  3. Make sure the MAC addresses for the virtual network cards won't change over time - Ubuntu doesn't like that
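On Ubuntu 18.04 server, one way to pin a static IP (note 2 above) is a netplan config file. The sketch below uses a hypothetical interface name (eth0) and hypothetical addresses - adjust them for your network. It writes to /tmp so you can review it first; on a real node the file goes in /etc/netplan/ followed by `sudo netplan apply`:

```shell
# Sketch of a static-IP netplan config (hypothetical eth0, addresses, and DNS).
# Written to /tmp for review; real target is /etc/netplan/01-static-ip.yaml,
# applied with: sudo netplan apply
cat > /tmp/01-static-ip.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
cat /tmp/01-static-ip.yaml
```

The MAC address pinning (note 3) is done on the Hyper-V side by giving each VM's network adapter a static MAC in its settings.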

Install Docker

The next step is to install Docker on all three nodes. There's a web page with instructions

⚠ Just make sure to read to the bottom, because most Linux install docs seem to read like those elementary school trick tests where the last step in the quiz is to do nothing - you know, a PITA.

In this case, the catch is that there is a stable release version of Docker, so don't use the test version. Here are the bash steps to get stable Docker for Ubuntu 18.04:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce

Later on the K8s instructions will say to install docker.io, but that's the package maintained by Ubuntu, while docker-ce is the release maintained by Docker itself.

You will probably want to grant your current user id permission to interact with docker:

sudo usermod -aG docker $USER

Repeat this process on all three nodes.

Install kubeadm, kubectl, and kubelet

All three nodes need Docker and also the Kubernetes tools: kubeadm, kubectl, and kubelet.

There's a good instruction page on how to install the tools. Notes:

  1. Ignore the part about installing Docker, we already did that
  2. In fact, you can read-but-ignore all of the page except for the section titled Installing kubeadm, kubelet and kubectl. Only this one bit of bash is necessary:

Become root:

sudo su -

Then install the tools:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

I have no doubt that all the other instructions are valuable if you don't follow the default path. But in my case I wanted a basic K8s cluster install, so I followed the default path - after a lot of extra reading related to the non-critical parts of the doc about optional features and advanced scenarios.

Install Kubernetes on the master node

One of my three nodes is the master node. By default this node doesn't run any worker containers, only containers necessary for K8s itself.

Again, there's a good instruction page on how to create a Kubernetes cluster. The first part of the doc describes the master node setup, followed by the worker node setup.

This doc is another example of read-to-the-bottom. I found it kind of confusing and had some false starts following these instructions: hence this blog post.

Select pod network

One key thing is that before you start you need to figure out the networking scheme you'll be using between your K8s nodes and pods: the pod network.

I've been unable to find a good comparison or explanation as to why someone might use any of the numerous options. In all my reading the one that came up most often is Flannel, so that's what I chose. Kind of arbitrary, but what are you going to do?

Note: A colleague of mine found that Flannel didn't interact well with the DHCP server when installing on his laptop. He used Weave instead and it seemed to resolve the issue. I guess your mileage may vary when it comes to pod networks.

Once you've selected your pod network then you can proceed with the instructions to set up the master node.

Set up master node

Read the doc referenced earlier for more details, but here are the distilled steps to install the master K8s node with the Flannel pod network.

Kubernetes can't run if swap is enabled, so turn it off:

swapoff -a

This command only affects the current session. You also need to make sure swap stays off after a reboot. To do this, edit the /etc/fstab file and comment out the lines regarding the swap file. For example, I've added a # to comment out these two lines in mine:

#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults 0 0
#/swap.img	none	swap	sw	0	0
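If you'd rather script that edit, a sed one-liner can comment out the swap entries. The sketch below runs against a sample copy (with the same hypothetical UUID placeholder as above) so it's safe to try; on a real node you'd target /etc/fstab with sudo, as noted in the comments:

```shell
# Sketch: comment out swap lines in fstab with sed.
# Demo operates on a sample copy; on a real node the equivalent is:
#   sudo sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults 0 0
/swap.img	none	swap	sw	0	0
EOF
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The `-i.bak` variant keeps a backup of the original file, which is worth having before touching /etc/fstab.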

You can, optionally, pre-download the required container images before continuing. They'll get downloaded either way, but if you want to make sure they are local before initializing the cluster you can do this:

kubeadm config images pull

Now it is possible to initialize the cluster:

kubeadm init --pod-network-cidr=10.244.0.0/16

💡 The output of kubeadm init includes a lot of information. It is important that you take note of the kubeadm join statement that's part of the output, as we'll need that later.

Next make kubectl work for the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

Pass bridged IPv4 traffic to iptables (as per the Flannel requirements):

sysctl net.bridge.bridge-nf-call-iptables=1
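That sysctl setting doesn't survive a reboot. One way to persist it is a drop-in file under /etc/sysctl.d; the sketch below writes to a /tmp stand-in directory so it can be reviewed (the real path and the sudo reload command are noted in comments):

```shell
# Sketch: persist net.bridge.bridge-nf-call-iptables=1 across reboots.
# Real target is /etc/sysctl.d/99-kubernetes.conf (needs sudo); /tmp stands in here.
SYSCTL_DIR=/tmp/sysctl.d-demo
mkdir -p "$SYSCTL_DIR"
echo 'net.bridge.bridge-nf-call-iptables = 1' > "$SYSCTL_DIR/99-kubernetes.conf"
cat "$SYSCTL_DIR/99-kubernetes.conf"
# On the real node, reload all sysctl config files with: sudo sysctl --system
```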

Apply the Flannel v0.10.0 pod network configuration to the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

⚠ Apparently once v0.11.0 is released this URL will change.

Now the master node is set up, so you can test to see if it is working:

kubectl get pods --all-namespaces

Optionally allow workers to run on master

If you are ok with the security ramifications (such as in a dev environment), you might consider allowing the master node to run worker containers.

To do this run the following command on the master node (as root):

kubectl taint nodes --all node-role.kubernetes.io/master-

Configure kubectl for admin users

The last step in configuring the master node is to allow the use of kubectl if you aren't root, but are a cluster admin.

⚠ This step should only be followed for the K8s admin users. Regular users are different, and I'll cover that later in the blog post.

First, if you are still root, exit back to your normal user:

exit
Then the following bash is used to configure kubectl for the K8s admin user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

What this does is take the K8s keys that are in a secure location and copy them to the admin user's ~/.kube directory. That's where kubectl looks for the config file with the information necessary to run as cluster admin.

At this point the master node should be up and running.

Set up worker nodes

In my case I have 2 worker nodes. Earlier I talked about installing Docker and the K8s tools on each node, so all that work is done. All that remains is to join each worker node to the cluster controlled by the master node.

Like the master node, worker nodes must have swap turned off. Follow the same instructions from earlier to turn off swap on each worker node before joining it to the cluster.

That 'kubeadm join' statement that was displayed as a result of 'kubeadm init' is the key here.

Log onto each worker node and run that bash command as root. Something like:

sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

That'll join the worker node to the cluster.

Once you've joined the worker nodes, back on the master node you can see if they are connected:

kubectl get nodes

That is it - the K8s cluster is now up and running.

Grant access to developers

The final important requirement is to allow developers access to the cluster.

Earlier I copied the admin.conf file for use by the cluster admin, but for regular users you need to create a different conf file. This is done with the following command on the master node:

sudo kubeadm alpha phase kubeconfig user --client-name user > ~/user.conf

The result is a user.conf file that provides non-admin access to the cluster. Users need to put that file in their own `~/.kube/` directory with the file name of `config`: `~/.kube/config`.

If you plan to create user accounts going forward, you can put this file into the /etc/skel/ directory as a default for new users:

sudo mkdir /etc/skel/.kube
sudo cp ~/user.conf /etc/skel/.kube/config
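To sketch what /etc/skel actually does: when a user account is created with a home directory (useradd -m), the contents of /etc/skel are copied into the new home. A minimal simulation under /tmp, with hypothetical paths and file content, for illustration only:

```shell
# Sketch: simulate how useradd -m seeds a new home directory from /etc/skel.
# The /tmp paths and file content are hypothetical, for illustration only.
mkdir -p /tmp/skel-demo/skel/.kube
echo 'demo-kubeconfig' > /tmp/skel-demo/skel/.kube/config
mkdir -p /tmp/skel-demo/home/newuser
# copy the skeleton (including dotfiles) into the new user's home
cp -r /tmp/skel-demo/skel/. /tmp/skel-demo/home/newuser/
cat /tmp/skel-demo/home/newuser/.kube/config
```

Note that this only affects accounts created after the file is placed in /etc/skel; existing users still need the conf file copied into their home directories manually.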

As you create new users (on the master node server) they'll now already have the keys necessary to use kubectl to deploy their images.

Recovering from a mistake

If you somehow end up with a non-working cluster (which I did a few times as I experimented), you can use:

kubeadm reset

Do this on the master node to reset the install so you can start over.


There are a lot of options and variations on how to install Kubernetes using kubeadm. My intent with this blog post is to have a linear walkthrough of the process based as much as possible on defaults; the exception being my choice of Flannel as the pod network.

Of course the world is dynamic and things change over time, so we'll see how long this blog post remains valid into the future.

Tuesday, September 25, 2018 9:17:32 AM (Central Standard Time, UTC-06:00)
 Monday, September 17, 2018

I'm not 100% sure of the cause here, but today I ran into an issue getting the latest Visual Studio 2017 to work with Docker for Windows.

My device does have the latest VS preview installed too, and I suspect that's the core of my issue, but I don't know for sure.

So here's the problem I encountered.

  1. Open VS 2017 15.8.4
  2. Create a new ASP.NET Core web project with Docker selected
  3. Press F5 to run
  4. The docker container gets built, but doesn't run

I tried a lot of stuff. Eventually I just ran the image from the command line:

docker run -i 3247987a3

By using -i I got an interactive view into the container as it failed to launch.

The problem turns out to be that the container doesn't have Microsoft.AspNetCore.App 2.1.1, and apparently the newly-created project wants that version. The container only has 2.1.0 installed.

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.1' was not found.
  - Check application dependencies and target a framework version installed at:
  - Installing .NET Core prerequisites might help resolve this problem:
  - The .NET Core framework and SDK can be installed from:
  - The following versions are installed:
      2.1.0 at [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]

The solution turns out to be to specify the version number in the csproj file.

    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0" />

Microsoft's recent guidance has been to not specify the version at all and their new project template reflects that guidance. Unfortunately there's something happening on my machine (I assume the VS preview) that makes things fail if the version is not explicitly marked as 2.1.0.

Monday, September 17, 2018 10:49:26 PM (Central Standard Time, UTC-06:00)
 Wednesday, September 12, 2018

I've recently become a bit addicted to Quora. It is probably because of their BNBR (be nice, be respectful) policy, so it isn't as nasty as Twitter and Facebook have become over the past couple years.

It also turns out that there are tech communities found on the site, and I've answered some questions recently. Stuff I probably would have (should have?) put on my blog, but wrote there instead.

Wednesday, September 12, 2018 3:13:35 PM (Central Standard Time, UTC-06:00)