Rockford Lhotka's Blog


Thursday, 30 August 2018

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system itself) has pushed most business software development to rely entirely on the browser as a runtime, or in some cases to leverage the per-platform "store" deployment models from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past several years the industry has mostly gravitated toward virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized in the form of Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps, streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.

Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all; the OS is already there. It just loads and starts your application code, which takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.
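As a sketch of that flow, using the standard Docker CLI with a hypothetical image name and registry (not any real service), it looks something like this:

$ docker tag myservice:1.0.0 registry.example.com/myservice:1.0.0
$ docker push registry.example.com/myservice:1.0.0

# later, on any server running Docker
$ docker pull registry.example.com/myservice:1.0.0
$ docker run -d registry.example.com/myservice:1.0.0

The whole deployment is a pull and a run; there is no installer, and nothing else on the server changes.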

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, Node.js, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.

VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.
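To make that concrete, here is a minimal sketch of defining and building such an image (the image and app names are hypothetical; the base image is one of Microsoft's published ASP.NET Core runtime images):

$ cat > Dockerfile <<'EOF'
# A consistent base image that we pick, not the vendor
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
# Just our published app and its dependencies, nothing else
COPY ./publish .
ENTRYPOINT ["dotnet", "MyService.dll"]
EOF
$ docker build -t myservice:1.0.0 .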

Another key aspect to keep in mind is that PaaS/FaaS models are vendor-specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must then be deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS, or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.
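As a tiny sketch of that management model (hypothetical names, standard kubectl commands):

$ kubectl create deployment myservice --image=registry.example.com/myservice:1.0.0
$ kubectl scale deployment myservice --replicas=3
$ kubectl expose deployment myservice --port=80 --target-port=8080

K8S then keeps those three container instances running across the nodes of the cluster, restarting or rescheduling them as needed.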

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should be deployed separately as well, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.

This approach means that each service, along with its dependencies, becomes a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.
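As a sketch (again with hypothetical names), rolling out a new version of just one service through K8S is a single command that touches nothing else in the system:

$ kubectl set image deployment/ordering ordering=registry.example.com/ordering:1.0.1
$ kubectl rollout status deployment/ordering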

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology provides a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.

Thursday, 23 August 2018

Git can be confusing, or at least intimidating. In particular, if you end up working on a project that relies on a pull request (PR) model, and even more so if forks are involved.

This is pretty common when working on GitHub open source projects. Rarely is anyone allowed to directly update the master branch of the primary repository (repo). The way changes get into master is by submitting a PR.

In a GitHub scenario any developer is usually interacting with three repos:

  1. The primary repo in the cloud (MarimerLLC/csla in my examples)
  2. Their fork of the primary repo, also in the cloud
  3. The dev workstation repo: a clone of the fork on their own computer

Forks are created using the GitHub web interface, and they basically create a virtual "copy" of the primary repo in the developer's GitHub workspace. That fork is then cloned to the developer's workstation.

In many corporate environments everyone works in the same repo, but the only way to update master (or dev or a shared branch) is via a PR.

In a corporate scenario developers often interact with just two repos:

  1. The primary repo in the cloud
  2. The dev workstation repo: a clone of the primary repo on their own computer

The developer clones the primary repo to their workstation.

Whether from a GitHub fork or a corporate repo, cloning looks something like this (at the command line):

$ git clone https://github.com/rockfordlhotka/csla.git

This copies the cloud repo onto the dev workstation. It also creates a connection (called a remote) back to the cloud repo. By default this remote is named "origin".

Whether originally from a GitHub fork or a corporate repo, the developer does their work against the clone - what I'm calling the dev workstation repo in this post.

First though, if you are using the GitHub model where you have the primary repo, a fork, and a clone, then you'll need to add an upstream repo to your dev workstation repo. Something like this:

$ git remote add MarimerLLC https://github.com/MarimerLLC/csla.git

This basically creates a (read-only) connection between your dev workstation repo and the primary repo, in addition to the existing connection to your fork. In my case I've named the upstream (primary) repo "MarimerLLC".

This is important, because you are very likely to need to refresh your dev workstation repo from the primary repo from time to time.
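You can confirm both remotes with git remote -v; with the repos from my examples the output looks something like this:

$ git remote -v
MarimerLLC  https://github.com/MarimerLLC/csla.git (fetch)
MarimerLLC  https://github.com/MarimerLLC/csla.git (push)
origin      https://github.com/rockfordlhotka/csla.git (fetch)
origin      https://github.com/rockfordlhotka/csla.git (push)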

Again, developers do their work against the dev workstation repo, and they should do that work in a branch other than master. Mostly work should be done in a feature branch, usually based on some work item in VSTS, GitHub, Jira, or whatever you are using for project and issue management.

Back to creating a branch in the dev workstation repo. Personally I name my branches with the issue number, a dash, and a word or two that reminds me what I'm working on in this branch.

$ git fetch MarimerLLC
$ git checkout -b 123-work MarimerLLC/master

This is where things get a little tricky.

First, the git fetch command makes sure my dev workstation repo has the latest changes from the primary repo. You might think I'd want the latest from my fork, but in most cases what I really want is the latest from the primary repo, because that's where changes from other developers might have been merged - and I want their changes!

The git checkout command creates a new branch named "123-work" based on MarimerLLC/master - in other words, based on the real master branch from the primary repo, the one I just made sure was current.

This means my working directory on my computer is now using the 123-work branch, and that branch is identical to master from the primary repo. What a great starting point for any new work.

Now the developer does any work necessary. Editing, adding, removing files, etc.

One note on moving or renaming files: if you want to keep a file's history intact as you move or rename it, it is best to use git to make the changes.

$ git mv OldFile.cs NewFile.cs

At any point while you are doing your work you can commit your changes to the dev workstation repo. This isn't a "backup", because it is on your computer. But it is a snapshot of your work, and you can always roll back to earlier snapshots. So it isn't a bad idea to commit after you've done some work, especially if you are about to take any risks with other changes!

Personally I often use a Windows shell add-in called TortoiseGit to do my local commits, because I like the GUI experience integrated into the Windows Explorer tool. Other people like different GUI tools, and some like the command line.

At the command line a "commit" is really a two-part process.

$ git add .
$ git commit -m '#123 My comment here'

The git add command stages any changes you've made into the local git index. Though it says "add", this includes all move/rename/delete/edit/add operations you've done to any files.

The git commit command actually commits the changes you just added, so they become part of the permanent record within your dev workstation repo. Note my use of the -m switch to add a comment (including the issue number) about this commit. I think this is critical! Not only does it help you and your colleagues, but putting the issue number as a tag allows tools like GitHub and VSTS to hyperlink to the issue details.

OK, so now my changes are committed to my dev workstation repo, and I'm ready to push them up into the cloud.

If I'm using GitHub and a fork then I'll push to my personal fork. If I'm directly using a corporate repo I'll push to the corporate repo. Keep in mind though, that I'm pushing my feature branch, not master!

$ git push origin

This will push my current branch (123-work) to origin, which is the cloud-based repo I cloned to create my dev workstation repo.
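One caveat: with default git configuration, the very first push of a brand-new branch may fail with a "no upstream branch" error. If that happens, name the branch explicitly and set its upstream on the first push:

$ git push -u origin 123-work

After that, a plain git push origin works as described.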


The 123-work branch in the cloud is a copy of that branch in my dev workstation repo. There are a couple of immediate benefits to having it in the cloud:

  1. It is backed up to a server
  2. It is (typically) visible to other developers on my team

I'll often push even non-working code into the cloud to enable collaboration with other people. At least in GitHub and VSTS, my team members can view my branch and we can work together to solve problems I might be facing. Very powerful!

(Even better, though more advanced than I want to get in this post: they can actually pull my branch down onto their workstation, make changes, and create a PR so I can merge their changes back into my working branch.)

At this point my work is both on my workstation and in the cloud. Now I can create a pull request (PR) if I'm ready for my work to be merged into the primary master.

BUT FIRST, I need to make sure my 123-work branch is current with any changes that might have been made to the primary master while I've been working locally. Other developers (or even me) may have submitted a PR to master in the meantime, so master may have changed.

This is where terms like "rebase" come into play. But I'm going to skip the rebase concept for now and show a simple merge approach:

$ git pull MarimerLLC master

The git pull command fetches any changes in the MarimerLLC primary repo, and then merges the master branch into my local working branch (123-work). If the merge can be done automatically it'll just happen. If not, I'll get a list of files that I need to edit to resolve conflicts. The files will contain both my changes and any changes from the cloud, and I'll need to edit them in Visual Studio or some other editor to resolve the conflicts.
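For reference, a conflicted file contains marker sections that look something like this sketch (the code lines here are just hypothetical placeholders); you keep or blend the competing edits, then delete the markers:

<<<<<<< HEAD
    var result = MyLocalVersion();
=======
    var result = TheUpstreamVersion();
>>>>>>> MarimerLLC/master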

Once any conflicts are resolved I can move forward. Even if there weren't conflicts I'll need to commit the merged changes from the cloud into my local repo.

$ git add .
$ git commit -m 'Merge upstream changes from MarimerLLC/master'

It is critical at this point that you make sure the code compiles and that your unit tests run locally! If so, proceed. If not, fix any issues, then proceed.

Push your latest changes into the cloud.

$ git push origin

With the latest code in the cloud you can create a PR. A PR is created using the web UI of GitHub, VSTS, or whatever cloud tool you are using. The PR simply requests that the code from your branch be merged into the primary master branch.

In GitHub with a fork, the PR requests a merge from the 123-work branch in your fork into master in the primary repo. In a corporate setting it requests a merge from your 123-work branch into master within the same repo.

In many cases submitting a PR will trigger a continuous integration (CI) build. In the case of CSLA I use AppVeyor, and of course VSTS has great build tooling. I can't imagine working on a project where a PR doesn't trigger a CI build and automatic run of unit tests.

The great thing about a CI build at this point is that you can tell that your PR builds and your unit tests pass before merging it into master. This isn't 100% proof of no issues, but it sure helps!

It is really important to understand that there is an ongoing link from the 123-work branch in the cloud to the PR. If I change anything in the 123-work branch in the cloud, that changes the PR.

The upside to this is that GitHub and VSTS have really good web UI tools for code reviews and commenting on code in a PR. And the developer can just go change their 123-work branch on the dev workstation to respond to any comments, then

  1. git add
  2. git commit
  3. git push origin

as shown above to get those changes into the cloud-based 123-work branch, thus updating the PR.

Assuming any changes requested to the PR have been made and the CI build and unit tests pass, the PR can be accepted. This is done through the web UI of GitHub or VSTS. The result is that the 123-work branch is merged into master in the primary repo.

At this point the 123-work branch can (and should) be deleted from the cloud and the dev workstation repo. This branch no longer has value because it has been merged into master. Don't worry about losing history or anything; that won't happen. Getting rid of feature branches once merged keeps the cloud and local repos tidy.

The web UI can be used to delete a branch in the cloud. To delete the branch from your dev workstation repo you need to move out of that branch, then delete it.

$ git checkout master
$ git branch -D 123-work
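If you prefer, the cloud copy of the branch can also be deleted from the command line instead of the web UI:

$ git push origin --delete 123-work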

Now you are ready to repeat this process from the top based on the next work item in the backlog.


Does anyone understand how System.Data.SqlClient assemblies get pulled into projects?

I have a netstandard 2.0 project where I reference System.Data.SqlClient. I then reference/use that assembly in a Xamarin project. This seems to work, but it creates a compile-time warning in the Xamarin project:

The assembly 'System.Data.SqlClient.dll' was loaded from a different 
  path than the provided path

provided path: /Users/user135287/Library/Caches/Xamarin/mtbs/builds/
  UI.iOS/4a61fb5d59d8c2875723f6d1e7f44ce3/bin/iPhoneSimulator/Debug/
  System.Data.SqlClient.dll

actual path: /Library/Frameworks/Xamarin.iOS.framework/Versions/
  11.6.1.4/lib/mono/Xamarin.iOS/Facades/System.Data.SqlClient.dll

I don't think the warning actually causes any issues - but (like a lot of people) I dislike warnings during my builds. Sadly, I don't know how to get rid of this particular warning.

I guess I also don't know whether it has anything to do with my Class Library project using System.Data.SqlClient, or if this is just a weird thing with Xamarin.iOS?
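In case it helps anyone reproduce this: the reference in the netstandard 2.0 project is just the System.Data.SqlClient NuGet package, added with something like this (assuming the dotnet CLI):

$ dotnet add package System.Data.SqlClient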
