Rockford Lhotka's Blog


 Tuesday, 25 September 2018

As people reading my blog know, I'm an advocate of container-based deployment of server software. The most commonly used container technology at the moment is Docker. And the most popular way to orchestrate or manage clusters of Docker containers is via Kubernetes (K8s).

Azure, AWS, and Google Cloud all have managed K8s offerings, so if you can use the public cloud there's no good reason I can see for installing and managing your own K8s environment. It is far simpler to just use one of the pre-existing managed offerings.

But if you need to run K8s in your own datacenter then you'll need some way to get a cluster going. I thought I'd figure this out from the base up, installing everything. There are some pretty good docs out there, but as with all things, I ran into some sharp edges and bumps on the way, so I thought I'd write up my experience in the hopes that it helps someone else (or me, next time I need to do an install).

Note: I haven't been a system admin of any sort since 1992, when I was admin for a Novell file server, a Windows NT server, and a couple VAX computers. So if you are a system admin for Linux and you think I did dumb stuff, I probably did 😃

The environment I set up is this:

  1. 1 K8s control node
  2. 2 K8s worker nodes
  3. All running as Ubuntu server VMs in Hyper-V on a single host server in the Magenic data center

Install Ubuntu server

So step 1 was to install 64-bit Ubuntu server on three VMs on our Hyper-V host.

Make sure to install Ubuntu server, not the desktop edition; beyond that caveat, this article has good instructions.

This is pretty straightforward; the only notes are:

  1. When the server install offers to pre-install stuff, DON'T (at least don't pre-install Docker)
  2. Make sure the IP addresses won't change over time - K8s doesn't like that (see the netplan sketch after this list)
  3. Make sure the MAC addresses for the virtual network cards won't change over time - Ubuntu doesn't like that
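
For the IP side of that, Ubuntu 18.04 server uses netplan, so one option is to give each VM a static address. Here's a minimal sketch of the idea - the file name, interface name, and addresses below are placeholders for whatever your environment uses. (The MAC side is handled in Hyper-V itself, by giving the VM's network adapter a static MAC address.)

sudo tee /etc/netplan/01-static.yaml > /dev/null <<EOF
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
sudo netplan apply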

Install Docker

The next step is to install Docker on all three nodes. There's a web page with instructions.

⚠ Just make sure to read to the bottom, because most Linux install docs seem to read like those elementary school trick tests where the last step in the quiz is to do nothing - you know, a PITA.

In this case, the catch is that there is a stable release version of Docker, so don't use the test version. Here are the bash steps to get stable Docker for Ubuntu 18.04:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce

Later on the K8s instructions will say to install docker.io, but that's for Ubuntu desktop, while docker-ce is for Ubuntu server.

You will probably want to grant your current user id permission to interact with docker:

sudo usermod -aG docker $USER
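
Note that the group change doesn't take effect until you log out and back in. Once you have, a quick sanity check (hello-world is just Docker's standard test image) confirms the install works without sudo:

docker --version
docker run --rm hello-world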

Repeat this process on all three nodes.

Install kubeadm, kubectl, and kubelet

All three nodes need Docker and also the Kubernetes tools: kubeadm, kubectl, and kubelet.

There's a good instruction page on how to install the tools. Notes:

  1. Ignore the part about installing Docker, we already did that
  2. In fact, you can read-but-ignore all of the page except for the section titled Installing kubeadm, kubelet and kubectl. Only this one bit of bash is necessary:

Become root:

sudo su -

Then install the tools:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
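
To confirm the tools landed correctly you can check their versions; something like:

kubeadm version
kubectl version --client
kubelet --version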

I have no doubt that all the other instructions are valuable if you don't follow the default path. But in my case I wanted a basic K8s cluster install, so I followed the default path - after a lot of extra reading related to the non-critical parts of the doc about optional features and advanced scenarios.

Install Kubernetes on the master node

One of my three nodes is the master node. By default this node doesn't run any worker containers, only containers necessary for K8s itself.

Again, there's a good instruction page on how to create a Kubernetes cluster. The first part of the doc describes the master node setup, followed by the worker node setup.

This doc is another example of read-to-the-bottom. I found it kind of confusing and had some false starts following these instructions: hence this blog post.

Select pod network

One key thing is that before you start you need to figure out the networking scheme you'll be using between your K8s nodes and pods: the pod network.

I've been unable to find a good comparison or explanation as to why someone might use any of the numerous options. In all my reading the one that came up most often is Flannel, so that's what I chose. Kind of arbitrary, but what are you going to do?

Note: A colleague of mine found that Flannel didn't interact well with the DHCP server when installing on his laptop. He used Weave instead and it seemed to resolve the issue. I guess your mileage may vary when it comes to pod networks.

Once you've selected your pod network then you can proceed with the instructions to set up the master node.

Set up master node

Read the doc referenced earlier for more details, but here are the distilled steps to install the master K8s node with the Flannel pod network.

Kubernetes can't run if swap is enabled, so turn it off (run this as root, or with sudo):

swapoff -a

This command only affects the current session; you also need to make sure swap stays off after a reboot. To do this, edit the /etc/fstab file and comment out the lines regarding the swap file. For example, I've added a # to comment out these two lines in mine:

#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults 0 0
#/swap.img	none	swap	sw	0	0
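
If you'd rather script that than edit the file by hand, a sed one-liner along these lines comments out any fstab line that mentions swap (it keeps a .bak copy of the original - check the result afterward):

sudo sed -i.bak '/swap/ s/^/#/' /etc/fstab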

You can, optionally, pre-download the required container images before continuing. They'll get downloaded either way, but if you want to make sure they are local before initializing the cluster you can do this:

kubeadm config images pull

Now it is possible to initialize the cluster:

kubeadm init --pod-network-cidr=10.244.0.0/16

💡 The output of kubeadm init includes a lot of information. It is important that you take note of the kubeadm join statement that's part of the output, as we'll need that later.
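
If you do lose that output, you can have kubeadm print a fresh join command (with a new token) whenever you need it - run this on the master node:

kubeadm token create --print-join-command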

Next make kubectl work for the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

Pass bridged IPv4 traffic to iptables (as per the Flannel requirements):

sysctl net.bridge.bridge-nf-call-iptables=1

Apply the Flannel v0.10.0 pod network configuration to the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

⚠ Apparently once v0.11.0 is released this URL will change.

Now the master node is set up, so you can test to see if it is working:

kubectl get pods --all-namespaces
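
It can take a minute or two after applying the Flannel configuration for the DNS pods to move from Pending to Running. If you want to watch that happen rather than re-running the command, add the --watch flag:

kubectl get pods --all-namespaces --watch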

Optionally allow workers to run on master

If you are ok with the security ramifications (such as in a dev environment), you might consider allowing the master node to run worker containers.

To do this run the following command on the master node (as root):

kubectl taint nodes --all node-role.kubernetes.io/master-
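
If you change your mind later, my understanding is that you can put the restriction back by re-applying the taint to the master node (replace <node-name> with the name shown by kubectl get nodes):

kubectl taint nodes <node-name> node-role.kubernetes.io/master=:NoSchedule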

Configure kubectl for admin users

The last step in configuring the master node is to allow the use of kubectl if you aren't root, but are a cluster admin.

⚠ This step should only be followed for the K8s admin users. Regular users are different, and I'll cover that later in the blog post.

First, if you are still root, exit:

exit

Then the following bash is used to configure kubectl for the K8s admin user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

What this does is take the K8s keys that are in a secure location and copy them to the admin user's ~/.kube directory. That's where kubectl looks for the config file with the information necessary to run as cluster admin.
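
A quick way to confirm the admin config is working is to run a couple of read-only commands as that (non-root) user:

kubectl cluster-info
kubectl get nodes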

At this point the master node should be up and running.

Set up worker nodes

In my case I have 2 worker nodes. Earlier I talked about installing Docker and the K8s tools on each node, so all that work is done. All that remains is to join each worker node to the cluster controlled by the master node.

Like the master node, worker nodes must have swap turned off. Follow the same instructions from earlier to turn off swap on each worker node before joining it to the cluster.

That 'kubeadm join' statement that was displayed by 'kubeadm init' is the key here.

Log onto each worker node and run that bash command as root. Something like:

sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

That'll join the worker node to the cluster.

Once you've joined the worker nodes, back on the master node you can see if they are connected:

kubectl get nodes
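
Newly joined workers may show NotReady for a short while as the pod network rolls out to them. Adding -o wide also shows each node's IP address and kubelet version, which is handy for confirming you joined the machines you think you did:

kubectl get nodes -o wide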

That is it - the K8s cluster is now up and running.

Grant access to developers

The final important requirement is to allow developers access to the cluster.

Earlier I copied the admin.conf file for use by the cluster admin, but for regular users you need to create a different conf file. This is done with the following command on the master node:

sudo kubeadm alpha phase kubeconfig user --client-name user > ~/user.conf

The result is a user.conf file that provides non-admin access to the cluster. Users need to put that file in their own ~/.kube/ directory with the file name config (i.e. ~/.kube/config).
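
For example, a developer who is handed user.conf would put it in place on their own machine with something like this (the assumption here being that the file was copied to their home directory first):

mkdir -p ~/.kube
cp ~/user.conf ~/.kube/config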

If you plan to create user accounts going forward, you can put this file into the /etc/skel/ directory as a default for new users:

sudo mkdir /etc/skel/.kube
sudo cp ~/user.conf /etc/skel/.kube/config

As you create new users (on the master node server) they'll now already have the keys necessary to use kubectl to deploy their images.

Recovering from a mistake

If you somehow end up with a non-working cluster (which I did a few times as I experimented), you can use:

kubeadm reset

Do this on the master node to reset the install so you can start over.

Summary

There are a lot of options and variations on how to install Kubernetes using kubeadm. My intent with this blog post is to have a linear walkthrough of the process based as much as possible on defaults; the exception being my choice of Flannel as the pod network.

Of course the world is dynamic and things change over time, so we'll see how long this blog post remains valid into the future.

Tuesday, 25 September 2018 09:17:32 (Central Standard Time, UTC-06:00)
 Monday, 17 September 2018

I'm not 100% sure of the cause here, but today I ran into an issue getting the latest Visual Studio 2017 to work with Docker for Windows.

My device does have the latest VS preview installed too, and I suspect that's the core of my issue, but I don't know for sure.

So here's the problem I encountered.

  1. Open VS 2017 15.8.4
  2. Create a new ASP.NET Core web project with Docker selected
  3. Press F5 to run
  4. The docker container gets built, but doesn't run

I tried a lot of stuff. Eventually I just ran the image from the command line:

docker run -i 3247987a3

By using -i I got an interactive view into the container as it failed to launch.

The problem turns out to be that the container doesn't have Microsoft.AspNetCore.App 2.1.1, and apparently the newly-created project wants that version. The container only has 2.1.0 installed.

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.1' was not found.
  - Check application dependencies and target a framework version installed at:
      /usr/share/dotnet/
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.0 at [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]

The solution turns out to be to specify the version number in the csproj file.

    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0" />

Microsoft's recent guidance has been to not specify the version at all and their new project template reflects that guidance. Unfortunately there's something happening on my machine (I assume the VS preview) that makes things fail if the version is not explicitly marked as 2.1.0.
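
If you want to see for yourself which runtime versions a given base image carries, you can ask the image directly. This assumes the microsoft/dotnet 2.1 aspnetcore-runtime image the ASP.NET Core template was using at the time - adjust the image name and tag to match your project's Dockerfile:

docker run --rm microsoft/dotnet:2.1-aspnetcore-runtime dotnet --list-runtimes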

Monday, 17 September 2018 22:49:26 (Central Standard Time, UTC-06:00)
 Thursday, 30 August 2018

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system (OS)) has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized with Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps, streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.

Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all; the OS is already there, so it just loads and starts your application code. This takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, nodejs, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.

VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.

Another key aspect to keep in mind is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must be then deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!
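
In concrete terms, the tail end of a container-based build pipeline is usually just a couple of commands like these (the registry name and tag are placeholders), and the pushed image is the artifact that gets deployed:

docker build -t registry.example.com/myapp:1.0.42 .
docker push registry.example.com/myapp:1.0.42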

No longer are IT professionals needed to deploy apps and dependencies onto the OS. Or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should be deployed separately as well, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.

This approach means that each service, along with its dependencies, becomes a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology provides a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.

Thursday, 30 August 2018 12:47:24 (Central Standard Time, UTC-06:00)