Rockford Lhotka's Blog


 Friday, January 11, 2019

During 2018 I gave a talk at some VS Live events discussing how one might migrate existing .NET Framework enterprise apps/code to .NET Core. In this talk I make some assumptions I think are reasonable:

  • Most of us can't do a "big bang" rewrite of our apps/code all in one shot
    • It'll take months or years to migrate from .NET to .NET Core
    • During this time it is necessary to maintain the existing code while working on the new code
  • A lot of existing code is still on .NET 2, 3, and 4
  • We're talking Windows Forms, WPF, and ASP.NET code - lots of variety
    • In most cases business logic is embedded in the UI - code-behind forms/pages or in controllers
    • Editorial observation: More people should be using CSLA to gain separation of concerns: keep their business logic in a separate and reusable layer from the UI or data access 😃
  • In most cases you are not just migrating from .NET Framework to .NET Core, but also modernizing or rewriting the UI to be modern
    • Replacing Windows Forms and WPF with ASP.NET Core Razor Pages or MVC, or Xamarin Forms
    • Maybe upgrading Windows Forms or WPF to the new .NET Core 3.0 support once it is available

Several people have asked if I'd blog the gist of my presentation, so here it is.

In summary:

  • Step 0: Understand .NET Core vs .NET Standard
  • Step 1: Get to .NET 4.6.1 or Higher
  • Step 2: Separation of Concerns
  • Step 3: Move Business Code to Shared Library
  • Step 4: Create .NET Standard Project
  • Step 5: Mitigate Dependency Conflicts
  • Step 6: Mitigate Code Conflicts
  • Step 7: Have a Glass of Bourbon

The code used in my talk and this post is the Net2NetStandard solution on GitHub.

Step 0: Understand .NET Core vs .NET Standard

I've encountered a lot of confusion between .NET Core and .NET Standard and .NET Framework. It is important to have a good understanding of these terms before moving forward at all, so you end up in the right place.

  • .NET Framework is the "legacy" .NET implementation we've been using since 2002, and the long-term goal is to move off .NET Framework onto something more modern
  • .NET Core is a new implementation of .NET that currently supports two types of UI: console and web server. .NET Core 3 is slated to also support Windows Forms and WPF UI frameworks. It does not currently support Xamarin (iOS, Android, Mac, Linux), or WebAssembly (mono-wasm/Blazor).
  • .NET Standard is an interface against which you can write code, and that interface is implemented by .NET Framework 4.6.1+ and by .NET Core 2+ and by Xamarin (and by mono and mono-wasm). If you write your code against .NET Standard, then your compiled DLL can be deployed to .NET Framework, .NET Core, Xamarin, and other .NET implementations.

As a result, my recommendation is that you should always get as much of your code into .NET Standard as possible, because the resulting compiled DLL can run essentially anywhere.

If all you do is get your code to .NET Core, that currently blocks you from reusing that code on .NET Framework, Xamarin, WebAssembly, and other .NET implementations.

All that said, it is important to understand that your UI code will almost certainly be .NET platform specific. In other words, you'll choose to write a console app, a web site, a mobile app, or a desktop app in a specific implementation of .NET. So your UI is not portable or reusable to the same degree as non-UI code.

Your non-UI code should always be built with .NET Standard so it is as portable as possible, enabling reuse of that code in current and future .NET implementations and UI technologies.

This is why my talk (and this post) is about how to get to .NET Standard, not .NET Core. .NET Standard gets you to .NET Core plus Xamarin and other platforms.

Step 1: Get to .NET 4.6.1 or Higher

Version 4.6.1 of the .NET Framework is special, because this is the earliest version that is compatible with .NET Standard 2.0. In reality you'll probably want to get to 4.7.1 or whatever version exists when you start this journey, but the minimum bar is 4.6.1.

Basically, if your existing code won't run on .NET 4.6.1, you'll need to take whatever steps are necessary to get from your older unsupported version (2? 3? 3.5? 4.0? 4.5?) to 4.6.1 or higher.

Fortunately this is usually not that difficult, because Microsoft has done a good job of minimizing breaking changes and preserving backward compatibility over time.

Step 2: Separation of Concerns

This is almost certainly the hardest step: if your existing code is "typical" it probably has tons of non-UI logic in button click or LostFocus event handlers, postback handlers, or controller methods. People have "enjoyed" this style of coding since VB3 back in the early 1990s, and it persists through today.

The problem is that moving the UI to .NET Standard is a whole different thing from moving business logic or even data access logic to .NET Standard. Yes, .NET Core 3.0 is planned to have Windows Forms and WPF support, so that should help. But I suspect for most people the migration from .NET Framework to .NET Core ultimately means rewriting the UI into something more modern.

As a result, any code embedded in the UI or presentation layer needs to be cleaned up. You need to apply the concept of separation of concerns and get non-UI code out of the UI. That means no business or data access logic in code-behind or controllers or viewmodels. The goal should be (in my view) that all business logic (validation, calculations, manipulation, rules, authorization) is in a separate business layer, and all data access logic is in its own layer.
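As a rough sketch of what that refactoring looks like (the class and member names here are hypothetical, not from the sample), a validation rule buried in a button click handler moves into a plain business class that any UI can call:

public class PersonEdit
{
  public string Name { get; set; }

  // Business rule that used to live in a SaveButton_Click handler
  public bool IsValid => !string.IsNullOrWhiteSpace(Name);
}

// The code-behind or controller then shrinks to UI-only concerns:
//   if (!person.IsValid) { ShowError("Name is required"); return; }
//   dataAccessLayer.Save(person);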

In short, you'll have a much easier time migrating code outside the UI to .NET Standard than any code inside the UI.

Step 3: Move Business Code to Shared Library

Now we get to the fun part. This step is in some ways the simplest and yet the most scary.

Right now your code is in a .NET Framework Class Library project. That means it compiles specifically for the .NET Framework and uses .NET Framework-specific dependency references. This is your existing, running code, so you want to minimize the risk of changing it: changes to the code, its references, or even the csproj file have a direct impact on your production environment.

The Net2NetStandard solution is intentionally stripped down to the bare minimum. My talk is often a 20 minute lightning talk, so the demo needs to be concise, and this qualifies. The start point is a .NET Framework Class Library project with some existing production code. That code uses Newtonsoft.Json and Entity Framework, with NuGet references to both dependencies.

Importantly, this project is already targeting .NET Framework 4.6.1.

What we want to do is get the code from this project into a location where it can continue to be used to build the existing .NET Framework DLL and also build a .NET Standard DLL. And we want to do this without duplicating the code or files, as that would make maintainability much harder.

Fortunately Visual Studio includes a feature called Shared Projects that solves this issue. A Shared Project is not a normal project at all, it is nothing more than a location to store code files. Those code files are then pulled into a real project at compile time as though they were part of that real project.

To see this in action, add a new C# Shared Project to the solution; in the sample it is named SharedLibrary.

What you'll see in Solution Explorer is that this new project is missing common things like a References or Dependencies node, or a Properties folder. Again, this is not a normal project, it is nothing more than a placeholder to contain code files.

Next, select the source files from the .NET Framework project and drag-drop them into the new SharedLibrary project. That'll copy the files, so there's no risk here.

Before proceeding with any real code, now is the time to make sure you've done a commit to source control so you have an easy way to revert in case something does go wrong!

However, this next step might make your heart race and palms sweat a little, because I want you to highlight and delete the source files from the original .NET Framework project. I know, this sounds scary, but trust me (and your backups).

And here's the key: go to the .NET Framework project and add a reference to the SharedLibrary project.

At this point you can build the original .NET Framework project and you'll get the exact same DLL output as before. Zero changes to your existing code or build result. And yet your code is now in a physical location that'll enable forward movement.

Hopefully your heart has slowed and your palms are now dry 😃

This is the point where you'd do a commit/push/PR of your code to finalize the shift of the files to their new shared project home. All in preparation for the next step where you'll finally get to .NET Standard.

Step 4: Create .NET Standard Project

To recap, you've updated to .NET Framework 4.6.1+, you've moved non-UI code out of the UI to its own class library, and now those code files are in a shared project, while still being compiled by the .NET Framework class library so production is unaffected.

Now you can add a new .NET Standard Class Library project to the solution, the first real step toward the future!

With that done you can add a reference to the same SharedLibrary project so that exact same set of code files will be compiled by this new project as well.

If you try to build the solution or the .NET Standard project now you'll find that it won't build, because the project is missing some dependencies. However, the original .NET Framework project should keep building fine, so production remains unaffected.

Step 5: Mitigate Dependency Conflicts

The new .NET Standard project needs references to Newtonsoft.Json and the Entity Framework, much like the original .NET Framework project. The code makes use of these two packages and won't build without them.

I didn't pick these two dependencies by accident. Newtonsoft.Json has a NuGet package that supports .NET Standard. Entity Framework does not. These two dependencies exemplify likely scenarios you'll encounter with real code. The possible scenarios are that your existing dependencies:

  1. Do not have .NET Standard support, and there's no alternative
  2. Already have .NET Standard support with the current version
  3. Already have .NET Standard support if you upgrade to the latest version
  4. Do not have .NET Standard support, but a new equivalent exists

Scenario 1

Scenario 1 is a worst-case scenario that may be a roadblock to forward movement. If you have a dependency on a DLL or NuGet package that has no .NET Standard support, and there's no modern equivalent to the functionality, then you'll almost certainly have to wait until such support does exist or write it yourself.

Scenarios 2 and 3

If you are in scenario 2, where the existing version of your dependency already has .NET Standard support, then reference the same version in your .NET Standard project as in your existing projects and your code should continue to compile and work as-is. This is the simplest scenario.

The dependency may fit into scenario 3, where a newer version of the package supports .NET Standard, but not the version you are currently using. This is quite common with Newtonsoft.Json, where the most commonly used version is quite old, but the more recent versions support .NET Standard.

In this case you may be able to upgrade your production projects to the latest version and use the same version for both .NET Framework and .NET Standard. This incurs some risk to production, because you are upgrading a dependency, but it is often the best solution.

In the case that you can't upgrade the version used by production, you'll need to leave the old package version reference in your .NET Framework project and use a newer version in the .NET Standard project. In this case however, you may have to deal with behavior or API differences between package versions and you should treat this as scenario 4.

Scenario 4

Entity Framework is an example of scenario 4. Microsoft chose not to carry the existing (legacy?) Entity Framework forward. Instead they implemented something new called Entity Framework Core. This new equivalent offers the same conceptual functionality, but with a new implementation and API, so it is absolutely not code-compatible with the old Entity Framework in use in production.

I'll discuss two solutions to scenario 4: compiler directives and upgrading production.

Scenario 4: Compiler Directives

In the .NET Standard project, add references to the latest Newtonsoft.Json and EntityFrameworkCore packages from NuGet.

You'll find that the project still won't build, because the existing code uses the old Entity Framework API. It is a scenario 4 dependency.

But you shouldn't get any errors compiling the code using Newtonsoft.Json, because it is a scenario 2 dependency.

The offending Entity Framework code is in the PersonFactory class:

using System.Data.Entity;

namespace FullNetLibrary
{
  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new DbContext(""))
      {
      }
    }
  }
}

There are two problems in this trivial case. First, the namespaces are different, so the using statement is invalid. Second, the API for interacting with entity contexts has changed, so the new DbContext statement is invalid. In a more realistic scenario more parts of the API would be invalid as well.

The goal is to minimize changes and risk to production code, while enabling the .NET Standard code to move forward. Remember that this exact same code file is being compiled for two different targets: once for .NET Framework, and once for .NET Standard (where it fails).

The solution is to use compiler directives so the code file can include code that is only compiled for one target or the other. The first step is to define a constant in the .NET Standard project's Build tab.

You can name the constant whatever you'd like, but NETSTANDARD2_0 is the de facto standard (and newer SDK-style projects define it for you automatically).

Then in your code file you can use this constant in a compiler directive. For example:

#if NETSTANDARD2_0
using Microsoft.EntityFrameworkCore;
#else
using System.Data.Entity;
#endif

What happens here is that when the .NET Framework project builds there's no NETSTANDARD2_0 constant defined, so the compiler only uses the using System.Data.Entity; code. Conversely, when the .NET Standard project builds the constant is defined, so the compiler only uses the using Microsoft.EntityFrameworkCore; code.

At this point you may worry that having these #if statements scattered throughout your code will get extremely messy. That is a valid concern. There are three scenarios to consider within a code file:

  1. No code differences exist between the .NET Framework and .NET Core targets
  2. Very few code differences exist between the targets
  3. Many code differences exist between the targets

In scenario 1 you don't need compiler directives, so there's no issue. And that'll happen quite often with business logic, where the use of external dependencies is often very low.

Scenario 2 is a judgment call. What qualifies as "few"? My recommendation is that if 80% of the code is common and 20% is different, then you should use #if statements on a line-by-line or focused block-by-block basis. This will result in a code file having numerous compiler directives, but most of the code will remain common across both targets.
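As a rough sketch of that focused, block-by-block approach (using the PersonContext class defined for the .NET Standard target in the larger example below), only the line that differs is wrapped in a directive:

public void GetPerson()
{
#if NETSTANDARD2_0
  using (var db = new PersonContext())   // EF Core needs a derived DbContext
#else
  using (var db = new DbContext(""))     // legacy EF allows constructing DbContext directly
#endif
  {
    // code that is common to both targets works against the context here
  }
}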

Scenario 3 is where so much code is different that if you start scattering compiler directives through the code it would become unreadable. Again, my recommendation is that if more than 20% of your code will be different you should consider scenario 3. In this case you should duplicate the code within the file, essentially creating a different set of code for each platform. For example:

#if NETSTANDARD2_0
using Microsoft.EntityFrameworkCore;

namespace FullNetLibrary
{
  public class PersonContext : DbContext
  {
    public DbSet<Person> Persons { get; set; }
  }

  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new PersonContext())
      {

      }
    }
  }
}
#else
using System.Data.Entity;

namespace FullNetLibrary
{
  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new DbContext(""))
      {
      }
    }
  }
}
#endif

Notice that there's no code that's compiled for both targets. Instead the #if statement is used to create an implementation for .NET Standard, and another implementation for .NET Framework.

In a sense this is the lowest risk solution, because the .NET Framework production code is entirely unchanged. However, it is also the least maintainable solution, because the entire class has been duplicated, so future changes must be made to both sets of code.

Scenario 4: Upgrading Production Code

There's another alternative to using compiler directives, and that is to upgrade your production code to use the new dependency. This solution is only available in the case that the new NuGet package not only supports .NET Standard, but also supports .NET Framework. EntityFrameworkCore is an example of this, where you can use the new EntityFrameworkCore package from .NET Framework code.

Obviously this solution brings risk, because you are rewriting your existing production code to use the new library. That'll require good unit and acceptance testing of your production code to make sure nothing is broken by the changes.

On the upside, this solution helps keep the common codebase clean and unified. In the Net2NetStandard example, the PersonFactory code can end up looking like this:

using Microsoft.EntityFrameworkCore;

namespace FullNetLibrary
{
  public class PersonContext : DbContext
  {
    public DbSet<Person> Persons { get; set; }
  }

  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new PersonContext())
      {

      }
    }
  }
}

Same code for both the .NET Framework and .NET Standard targets. But only if the old Entity Framework reference in the production .NET Framework project is replaced with the new EntityFrameworkCore reference.

This often comes dangerously close to a "big bang" solution, and incurs real risk to the existing software. But there's also a very real upside in terms of maintaining a common codebase for development, testing, and maintenance over time.

Step 6: Mitigate Code Conflicts

The final issue you may encounter is pure code conflicts between your .NET Framework code and what can be done in .NET Standard. This is very uncommon, because .NET Standard describes so much of the functionality normally used by .NET code. However, if you are using some fancy bit of reflection or other "non-mainstream" parts of .NET you could find that your code won't compile for .NET Standard.

Solving this is really the same as handling the dependency conflicts in Step 5: use compiler directives, or rewrite your "non-mainstream" production code to use techniques that are supported by .NET Standard.
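For example (a sketch that is not part of the Net2NetStandard sample), legacy code that reads config through System.Configuration.ConfigurationManager won't compile for .NET Standard unless you add the System.Configuration.ConfigurationManager NuGet package, so you might bridge it like this:

using System;

public static class AppConfig
{
  public static string GetConnectionString(string name)
  {
#if NETSTANDARD2_0
    // Hypothetical replacement: read from the environment, which fits the
    // .NET Core configuration model and container-based deployment
    return Environment.GetEnvironmentVariable($"CONNSTR_{name}");
#else
    // Existing .NET Framework code keeps reading app.config/web.config
    return System.Configuration.ConfigurationManager
      .ConnectionStrings[name]?.ConnectionString;
#endif
  }
}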

Step 7: Have a Glass of Bourbon

or your beverage of choice

Not that you are done at this point, but you are on the path. In some ways finding the path and getting onto the path is the hardest part. The rest of the work might take months or years, but at least your code is in a structure where it is possible to migrate forward, while still maintaining the legacy deployment.

Yes there's some risk and additional unit testing (and acceptance testing) required as you make changes to the legacy code, which also changes the future code. That's a net benefit though, because at least you don't have to write those changes twice every time thanks to having a unified codebase.

There's a bit more risk (and therefore testing) required when making changes to the unified codebase for future code, because those changes will usually also impact the legacy app. But you have some control over that impact via compiler directives, and in many cases your business stakeholders will see this also as an advantage because they'll get some new features/capabilities in the existing legacy app even as you build them for the future state.

The point is that you've done the heavy lifting to establish a way forward that is at least achievable. So take a little time and have a small celebration. You deserve it!

Wednesday, December 12, 2018

CSLA .NET 4.9.0 includes some exciting new features, primarily focused on server-side capabilities for containers, .NET Core, and ASP.NET Core. There are a number of data portal enhancements, as well as powerful new configuration options based around the .NET Core configuration subsystem.

Data Portal Enhancements

Many of the enhancements are focused on enabling powerful cloud/container based scenarios with the data portal. Most notably, the data portal now supports two different types of routing to enable the use of multiple server instances. It also has the option to track recent activity with the intent of supporting a basic health dashboard, the ability to force a client "offline", and support for the .NET Core IoC/DI model.

Client-side data portal routing

One common request is that it would be nice if a subset of data portal requests could be routed to some server endpoint other than the default. This idea supports a number of scenarios, including security, logging, multi-tenant server farms, and I'm sure many others.

When you configure the client-side data portal on a web server, mobile device, PC, Mac, or Linux desktop you can now provide mappings so specific business domain types (root object types) cause the data portal to call server-side endpoints other than the default. This can be done on a per-type basis, or by applying the DataPortalServerResource attribute to a root domain type.

As the client app starts up, the data portal must be configured with mappings from domain types or DataPortalServerResource attribute values to specific server endpoint definitions. For example:

      // set up default data portal for most types
      ApplicationContext.DataPortalProxy = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName;
      ApplicationContext.DataPortalUrlString = "https://default.example.com/dataportal";

      // add mapping for DataPortalServerResource attribute
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        (int)ServerResources.SpecializedAlgorithm,
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://specialized.example.com/dataportal" });

      // add mapping for specific business type
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        typeof(MyImportantType),
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://important.example.com/dataportal" });

Notice that the server-side data portal endpoint is defined not just by a URL, but also by the proxy type, so you can use different network transport technologies to access different data portal servers.
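For example, decorating a root domain type might look like the following sketch (AnalysisRequest is hypothetical, and I'm assuming the attribute accepts the same resource id used as the descriptor key above):

  [Serializable]
  [DataPortalServerResource((int)ServerResources.SpecializedAlgorithm)]
  public class AnalysisRequest : Csla.BusinessBase<AnalysisRequest>
  {
    // data portal calls for this root type go to the "specialized" endpoint
  }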

Server-side data portal routing

One of the most common issues we all face when hosting server-side endpoints is dealing with client-side app versioning and having server endpoints that support clients running older software. This is particularly important when deploying mobile apps through the Apple/Google/Microsoft stores, when you can't directly control the pace of rollout for client apps.

Additionally, in a clustered server-side environment such as Kubernetes or Cloud Foundry it is likely that you'll want to organize your containers based on various characteristics, such as CPU consumption, memory requirements, or the need for specialized hardware.

The HTTP data portal (Csla.Server.Hosts.HttpPortalController) now supports a solution for both scenarios via server-side data portal routing. It is now possible to set up a public gateway server that hosts a data portal endpoint, which in turn routes all calls to other data portal endpoints within your server environment. Typically these other endpoints are not public, forcing all client apps to route all data portal requests through that gateway router endpoint.

Note that using technologies like Kubernetes it is entirely realistic for the gateway router to be composed of multiple container instances to provide scaling and fault tolerance.

To minimize overhead, the data portal router does not deserialize the inbound data stream from the client. Instead, it uses a header value to determine how to route the request to a worker node where the data stream is deserialized and processed. That header value is created by the client-side data portal by combining two concepts.

First is an optional application version value. If this value is set by the client-side app it is then passed through to the server-side data portal where the value is used to route requests to a worker node running a corresponding version of the server-side app. This allows you to leave versionA nodes running while also running versionB nodes, and once all client devices have migrated off versionA then you can shut down those versionA nodes with no disruption to your users.

Second is a DataPortalServerRoutingTag attribute you can apply to root domain types. This attribute provides a routing tag value that is also passed to the server-side data portal. The intent of this attribute is that you can specify that certain root domain types have characteristics that should be used for server-side routing. For example, you might tag domain types that you know will require a lot of CPU or memory on the server so they are routed to Kubernetes pods that are hosted on fast or large physical servers (Kubernetes nodes), while all other requests go to pods running on normal sized Kubernetes nodes.

Another example would be where the server-side processing requires access to specialized hardware. In that case your Kubernetes nodes would have a taint indicating that they have that hardware, and the DataPortalServerRoutingTag is used to tell the data portal router to send requests for that root domain type only to server-side data portal instances running on those nodes.
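Applying the attribute might look like this sketch (the class name and tag value are hypothetical; the tag must match a key configured on the gateway router, as shown below):

  [Serializable]
  [DataPortalServerRoutingTag("highcpu")]
  public class PortfolioAnalysis : Csla.BusinessBase<PortfolioAnalysis>
  {
    // requests for this root type are routed to nodes tagged for heavy CPU work
  }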

What makes this data portal feature a little complex is that the data portal router uses both the version and routing tag to route calls. This is obviously required to support both scenarios at the same time, but it does mean you need to define your routes such that they consider both version and routing tag elements.

The client-side data portal passes a routing tag to the server with the following format:

  • null - neither routing tag nor version are specified
  • routingTag-versionTag - both routing tag and version are specified
  • routingTag- - a routing tag is specified, but no version
  • -versionTag - a version is specified, but no routing tag

Your server-side data portal gateway router is configured on startup with code like this (in the data portal controller class):

  public class DataPortalController : HttpPortalController
  {
    public DataPortalController()
    {
      RoutingTagUrls.Add("routingTag-versionTag", "https://serviceName:36123/api/DataPortal");
    }
  }

You can specify as many routing tag/version keys as necessary to handle routing to the data portal endpoints running in your server environment.

If a key can't be found, or if the target URL is localhost then the request will be processed directly on the data portal router instance. Of course you may choose not to deploy any of your business DLLs to the router instance, in which case all such requests will fail, returning an exception to the client.

Offline mode

A common feature request, especially for mobile devices and laptops, is for the application to force itself into "offline mode" where all data portal calls are directed only to the local data portal. The client-side data portal now supports this via the Csla.ApplicationContext.IsOffline property.

If you set IsOffline to true the client-side data portal will immediately route all data portal requests to the local app rather than a remote server. It is as if all your data portal methods had the RunLocal attribute applied.

Clearly you need to build your DataPortal_XYZ methods to behave properly when they are running on the client (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Client) vs the server (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Server). Typically you'd have your code delegate to a DAL provider specific to the client or server based on execution location using the encapsulated invocation model as described in Using CSLA 4: Data Access.
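A minimal sketch of that pattern inside a BusinessBase-derived class (the DAL provider and DTO names are hypothetical):

  private void DataPortal_Fetch(int id)
  {
    if (Csla.ApplicationContext.ExecutionLocation == Csla.ApplicationContext.ExecutionLocations.Client)
    {
      // running locally (IsOffline or RunLocal): use the client-side DAL,
      // for example a local cache or a queue of pending changes
      LoadFromDto(ClientDalFactory.GetPersonDal().Fetch(id));
    }
    else
    {
      // running on the server: use the real server-side DAL
      LoadFromDto(ServerDalFactory.GetPersonDal().Fetch(id));
    }
  }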

Save and Merge

CSLA has included the GraphMerger type for some time now. This type allows you to merge the results of a SaveAsync call back into the existing domain object graph. The BusinessBase and BusinessListBase types now implement a simpler SaveAndMergeAsync method you can use to save a root domain object such that the results are automatically merged into the object graph.
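A quick usage sketch (person being an editable root already bound to the UI):

  // Before: SaveAsync returns a new object graph you must swap in yourself
  person = await person.SaveAsync();

  // Now: save and merge the result back into the graph already bound to the UI
  await person.SaveAndMergeAsync();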

DataPortalFactory in ASP.NET Core dependency injection

ASP.NET Core includes a dependency injection model based on IServiceCollection, with dependencies defined in the ConfigureServices method of the Startup.cs file. CSLA .NET now supports that scenario via an AddCsla extension method for IServiceCollection.

    public void ConfigureServices(IServiceCollection services)
    {
      // ...
      services.AddCsla();
    }

The only service that is defined is a new IDataPortalFactory, implemented by DataPortalFactory. The factory is a singleton, and is used to create instances of the data portal for use in a page. For example, in a Razor Page you might have code like this:

  public class EditModel : PageModel
  {
    private IDataPortal<Customer> dataPortal;

    public EditModel(IDataPortalFactory factory)
    {
      dataPortal = factory.GetPortal<Customer>();
    }
    
    // ...
  }

Configuration Enhancements

An important design pattern for success in both DevOps and container-based server deployment is to store config in the environment. The .NET Core configuration subsystem supports this pattern, and now CSLA .NET does as well. Not only that, but we now support an optional fluent configuration model if you want to set configuration in code (common in Xamarin apps for example), and integration with the .NET Core and ASP.NET Core configuration models.

.NET Core configuration subsystem integration

The .NET Core configuration subsystem supports modern configuration concepts, while still supporting legacy concepts like config files. It is also extensible, allowing providers to load configuration values from many different sources.

The base type used in .NET Core configuration is a ConfigurationBuilder. An instance of this type is provided to you by ASP.NET Core, or you can create your own instance for use in console apps. CSLA .NET provides an extension method to integrate into this configuration model. For example, in a console app you might write code like this:

  var config = new ConfigurationBuilder()
     .AddJsonFile("appsettings.coresettings.test.json")
     .Build()
     .ConfigureCsla();

In ASP.NET Core you'd simply use the ConfigurationBuilder instance already available to invoke the ConfigureCsla method.

The result of the ConfigureCsla method is that the configuration values are loaded for use by CSLA .NET based on any configuration sources defined by the ConfigurationBuilder. In many cases that includes a config file, environment values, and command line parameters. It might also include secrets loaded from Azure, Docker, or whatever environment is hosting your code. That's all outside the scope of CSLA itself - those are features of .NET Core. The point is that the ConfigureCsla method takes the results of the Build method and maps the config values for use by CSLA .NET.

Note that nothing we've done here has any impact on .NET Framework code. All your existing .NET 4.x code will continue to function against web.config/app.config files.

Fluent configuration API

Although it is best to follow the 12 Factor guidance around config, there are times when you just need to set config values through code. This is particularly true in Xamarin and WebAssembly scenarios, where the configuration systems are weak or non-existent.

In the past you've been able to directly set property values on Csla.ApplicationContext, various data portal types, and elsewhere in CSLA. That's confusing and inconsistent, and it is a source of bugs in many people's code.

There's a new fluent API for configuration you can use when you need to set configuration through code. The entry point for this API is Csla.Configuration.CslaConfiguration, which implements Csla.Configuration.ICslaConfiguration. The model uses extension methods for sub-configuration, extending the ICslaConfiguration type.

Basic configuration is done like this:

      new Csla.Configuration.CslaConfiguration()
        .PropertyChangedMode(Csla.ApplicationContext.PropertyChangedModes.Windows)
        .PropertyInfoFactory("a,b")
        .RuleSet("abc")
        .UseReflectionFallback(false)
        .SettingsChanged();

Data portal configuration can be chained like this:

      new Csla.Configuration.CslaConfiguration()
        .DataPortal().AuthenticationType("custom")
        .DataPortal().AutoCloneOnUpdate(false)
        .DataPortal().ActivatorType(typeof(TestActivator).AssemblyQualifiedName)
        .DataPortal().ProxyFactoryType("abc")
        .DataPortal().DataPortalReturnObjectOnException(true)
        .DataPortal().DefaultProxy(typeof(HttpProxy), "https://myserver.com/api/DataPortal")
        .DataPortal().ExceptionInspectorType("abc")
        .DataPortal().FactoryLoaderType("abc")
        .DataPortal().InterceptorType("abc")
        .DataPortal().ServerAuthorizationProviderType("abc")
        .SettingsChanged();

Client-side data portal descriptors for client-side data portal routing (see above) can be configured like this:

      new CslaConfiguration()
        .DataPortal().ProxyDescriptors(new List<Tuple<string, string, string>>
        {
          Tuple.Create(typeof(TestType).AssemblyQualifiedName, 
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, 
                       "https://example.com/test"),
          Tuple.Create(((int)ServerResources.SpecializedAlgorithm).ToString(),
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName,
                       "https://example.com/test")
        });

Configuration of Csla.Data options:

      new CslaConfiguration()
        .Data().DefaultTransactionIsolationLevel(Csla.TransactionIsolationLevel.RepeatableRead)
        .Data().DefaultTransactionTimeoutInSeconds(123);

Security options:

      new CslaConfiguration()
        .Security().PrincipalCacheMaxCacheSize(123);

You should consider using this new fluent API instead of the older, fragmented configuration options whenever you need to set config values. CSLA .NET itself continues to rely on Csla.ApplicationContext to read config values for use throughout the code.

Conclusion

CSLA .NET version 4.9 is very exciting for server-side developers, or people using a remote data portal where versioning or advanced routing scenarios are important. Between the data portal enhancements and more modern configuration techniques available in this version, your life should be improved as a CSLA .NET developer.

Friday, November 30, 2018

For some time now people have been very excited about "microservices".

I want to start by saying that I think this is a horrible name for what we're trying to accomplish, because the term attempts to put the concept of size onto services.

Calling a service is expensive. You are creating a message, serializing that message, and sending it over a network via HTTP or a queue (or both), where it is deserialized before the service can do some work. Any response goes through all that same overhead to get back to the caller. That overhead is measured in milliseconds - usually tens of milliseconds.

So if your "micro"service does work that takes less than 10-20 milliseconds to complete, then you've paid this massive performance cost to get to/from the service over the network, only to have the actual work take less time than the overhead.
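To put numbers on it: if the network and serialization overhead is 15 milliseconds and the service's actual work takes 5 milliseconds, then 75% of the 20 millisecond round trip is pure overhead.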

That's like paying your auto mechanic for a full hour, just so they can do a 5 minute fix to your car. Nobody would do that without being forced (and sometimes we are - due to specialized skills or tools/parts availability).

This analogy translates to the services world too. Calling a service is fine if the service:

  1. does enough work to be worth the overhead of calling the service (the work took longer than the network overhead)
  2. needs to run in a secure location
  3. is required to run in a geo-bounded location (think GDPR)
  4. must run on specialized hardware that doesn't exist elsewhere
  5. contains code that is so valuable that, if reverse engineered, would destroy your organization (variation on security)

I've tried to put those in order from most to least likely, but everyone's scenario is a bit different.

The point I'm really getting at is that in the majority of cases it is important to make sure a service does enough work to make it worth the non-trivial overhead of getting to/from the service over HTTP and/or a queue.

In those other less common scenarios there's a business decision to be made. If the service is very small and fast, you will be paying massive overhead for very little work to be done. Presumably the business is saying that those other concerns outweigh the fact that your app will be far slower than it could be.

(hint: as a developer you should make this clear to the business stakeholders up front - head off the question you know is coming later: "why is this app so slow???" - you want to be able to pull out the email thread where you explained that it will be slow because the business decision maker chose to put some tiny-but-critical bit of functionality in a service)

Tuesday, October 2, 2018

For many months (maybe years) I tried to use the Netflix app on my Surface Pro 3 and then Surface Pro 4.

The audio would always get out of sync with the video when streaming shows. Once Netflix allowed downloading of shows it worked OK - but only with downloaded content.

This was true as recently as February 2018.

Searching the web revealed that other people had the issue too, and the consensus was that it had something to do with the audio driver for the chipset used in Surface (and maybe other devices).

Recently I got a Surface Go, and to my surprise Netflix worked fine.

So I tried it again on my Surface Pro 4, and it works fine there too now.

I assume that either a Surface driver update or a Netflix app update (or both) occurred since February that finally resolved the issue with streaming and audio playback sync.

In any case, the news is good, and the Netflix app is working great on both my Surface Pro and Surface Go. This makes life in hotels a whole lot nicer, as the last thing I want is to be stuck with cable! 😃

Tuesday, September 25, 2018

As people reading my blog know, I'm an advocate of container-based deployment of server software. The most commonly used container technology at the moment is Docker. And the most popular way to orchestrate or manage clusters of Docker containers is via Kubernetes (K8s).

Azure, AWS, and GCE all have managed K8s offerings, so if you can use the public cloud there's no good reason I can see for installing and managing your own K8s environment. It is far simpler to just use one of the pre-existing managed offerings.

But if you need to run K8s in your own datacenter then you'll need some way to get a cluster going. I thought I'd figure this out from the base up, installing everything. There are some pretty good docs out there, but as with all things, I ran into some sharp edges and bumps on the way, so I thought I'd write up my experience in the hopes that it helps someone else (or me, next time I need to do an install).

Note: I haven't been a system admin of any sort since 1992, when I was admin for a Novell file server, a Windows NT server, and a couple VAX computers. So if you are a system admin for Linux and you think I did dumb stuff, I probably did 😃

The environment I set up is this:

  1. 1 K8s control node
  2. 2 K8s worker nodes
  3. All running as Ubuntu server VMs in HyperV on a single host server in the Magenic data center

Install Ubuntu server

So step 1 was to install Ubuntu 64 bit server on three VMs on our HyperV server.

Make sure to install Ubuntu server, not the desktop edition; beyond that, this article has good instructions.

This is pretty straightforward, the only notes are:

  1. When the server install offers to pre-install stuff, DON'T (at least don't pre-install Docker)
  2. Make sure the IP addresses won't change over time - K8s doesn't like that
  3. Make sure the MAC addresses for the virtual network cards won't change over time - Ubuntu doesn't like that

Install Docker

The next step is to install Docker on all three nodes. There's a web page with instructions.

⚠ Just make sure to read to the bottom, because most Linux install docs seem to read like those elementary school trick quizzes where the last step is to do nothing - you know, a PITA.

In this case, the catch is that there is a release version of Docker, so don't use the test version. Here are the bash steps to get stable Docker for Ubuntu 18.04:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce

Later on the K8s instructions will say to install docker.io, which is the version packaged in the Ubuntu repositories; docker-ce is the package from Docker's own repository.

You will probably want to grant your current user id permission to interact with docker:

sudo usermod -aG docker $USER

Repeat this process on all three nodes.

Install kubeadm, kubectl, and kubelet

All three nodes need Docker and also the Kubernetes tools: kubeadm, kubectl, and kubelet.

There's a good instruction page on how to install the tools. Notes:

  1. Ignore the part about installing Docker, we already did that
  2. In fact, you can read-but-ignore all of the page except for the section titled Installing kubeadm, kubelet and kubectl. Only this one bit of bash is necessary:

Become root:

sudo su -

Then install the tools:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

I have no doubt that all the other instructions are valuable if you don't follow the default path. But in my case I wanted a basic K8s cluster install, so I followed the default path - after a lot of extra reading related to the non-critical parts of the doc about optional features and advanced scenarios.

Install Kubernetes on the master node

One of my three nodes is the master node. By default this node doesn't run any worker containers, only containers necessary for K8s itself.

Again, there's a good instruction page on how to create a Kubernetes cluster. The first part of the doc describes the master node setup, followed by the worker node setup.

This doc is another example of read-to-the-bottom. I found it kind of confusing and had some false starts following these instructions: hence this blog post.

Select pod network

One key thing is that before you start you need to figure out the networking scheme you'll be using between your K8s nodes and pods: the pod network.

I've been unable to find a good comparison or explanation as to why someone might use any of the numerous options. In all my reading the one that came up most often is Flannel, so that's what I chose. Kind of arbitrary, but what are you going to do?

Note: A colleague of mine found that Flannel didn't interact well with the DHCP server when installing on his laptop. He used Weave instead and it seemed to resolve the issue. I guess your mileage may vary when it comes to pod networks.

Once you've selected your pod network then you can proceed with the instructions to set up the master node.

Set up master node

Read the doc referenced earlier for more details, but here are the distilled steps to install the master K8s node with the Flannel pod network.

Kubernetes can't run if swap is enabled, so turn it off:

swapoff -a

This command only affects the current session; to keep swap off after a reboot you need to edit the /etc/fstab file and comment out the lines regarding the swap file. For example, I've added a # to comment out these two lines in mine:

#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults 0 0
#/swap.img	none	swap	sw	0	0

You can, optionally, pre-download the required container images before continuing. They'll get downloaded either way, but if you want to make sure they are local before initializing the cluster you can do this:

kubeadm config images pull

Now it is possible to initialize the cluster:

kubeadm init --pod-network-cidr=10.244.0.0/16

💡 The output of kubeadm init includes a lot of information. It is important that you take note of the kubeadm join statement that's part of the output, as we'll need that later.

Next make kubectl work for the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

Pass bridged IPv4 traffic to iptables (as per the Flannel requirements):

sysctl net.bridge.bridge-nf-call-iptables=1

Apply the Flannel v0.10.0 pod network configuration to the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

⚠ Apparently once v0.11.0 is released this URL will change.

Now the master node is set up, so you can test to see if it is working:

kubectl get pods --all-namespaces

Optionally allow workers to run on master

If you are ok with the security ramifications (such as in a dev environment), you might consider allowing the master node to run worker containers.

To do this run the following command on the master node (as root):

kubectl taint nodes --all node-role.kubernetes.io/master-

Configure kubectl for admin users

The last step in configuring the master node is to allow the use of kubectl if you aren't root, but are a cluster admin.

⚠ This step should only be followed for the K8s admin users. Regular users are different, and I'll cover that later in the blog post.

First, if you are still root, exit:

exit

Then the following bash is used to configure kubectl for the K8s admin user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

What this does is take the K8s keys that are in a secure location and copy them to the admin user's ~/.kube directory. That's where kubectl looks for the config file with the information necessary to run as cluster admin.

At this point the master node should be up and running.

Set up worker nodes

In my case I have 2 worker nodes. Earlier I talked about installing Docker and the K8s tools on each node, so all that work is done. All that remains is to join each worker node to the cluster controlled by the master node.

Like the master node, worker nodes must have swap turned off. Follow the same instructions from earlier to turn off swap on each worker node before joining it to the cluster.

That 'kubeadm join' statement that was displayed in the output of 'kubeadm init' is the key here.

Log onto each worker node and run that bash command as root. Something like:

sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

That'll join the worker node to the cluster.

Once you've joined the worker nodes, back on the master node you can see if they are connected:

kubectl get nodes

That is it - the K8s cluster is now up and running.

Grant access to developers

The final important requirement is to allow developers access to the cluster.

Earlier I copied the admin.conf file for use by the cluster admin, but for regular users you need to create a different conf file. This is done with the following command on the master node:

sudo kubeadm alpha phase kubeconfig user --client-name user > ~/user.conf

The result is a user.conf file that provides non-admin access to the cluster. Users need to put that file in their own ~/.kube/ directory with the file name of config: ~/.kube/config.

If you plan to create user accounts going forward, you can put this file into the /etc/skel/ directory as a default for new users:

sudo mkdir /etc/skel/.kube
sudo cp ~/user.conf /etc/skel/.kube/config

As you create new users (on the master node server) they'll now already have the keys necessary to use kubectl to deploy their images.

Recovering from a mistake

If you somehow end up with a non-working cluster (which I did a few times as I experimented), you can use:

kubeadm reset

Do this on the master node to reset the install so you can start over.

Summary

There are a lot of options and variations on how to install Kubernetes using kubeadm. My intent with this blog post is to have a linear walkthrough of the process based as much as possible on defaults; the exception being my choice of Flannel as the pod network.

Of course the world is dynamic and things change over time, so we'll see how long this blog post remains valid into the future.

Monday, September 17, 2018

I'm not 100% sure of the cause here, but today I ran into an issue getting the latest Visual Studio 2017 to work with Docker for Windows.

My device does have the latest VS preview installed too, and I suspect that's the core of my issue, but I don't know for sure.

So here's the problem I encountered.

  1. Open VS 2017 15.8.4
  2. Create a new ASP.NET Core web project with Docker selected
  3. Press F5 to run
  4. The docker container gets built, but doesn't run

I tried a lot of stuff. Eventually I just ran the image from the command line:

docker run -i 3247987a3

By using -i I got an interactive view into the container as it failed to launch.

The problem turns out to be that the container doesn't have Microsoft.AspNetCore.App 2.1.1, and apparently the newly-created project wants that version. The container only has 2.1.0 installed.

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.1' was not found.
  - Check application dependencies and target a framework version installed at:
      /usr/share/dotnet/
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.0 at [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]

The solution turns out to be to specify the version number in the csproj file.

    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0" />

Microsoft's recent guidance has been to not specify the version at all and their new project template reflects that guidance. Unfortunately there's something happening on my machine (I assume the VS preview) that makes things fail if the version is not explicitly marked as 2.1.0.

Wednesday, September 12, 2018

I've recently become a bit addicted to Quora. That is probably because of their BNBR (be nice, be respectful) policy; it isn't as nasty as Twitter and Facebook have become over the past couple of years.

It also turns out that there are tech communities found on the site, and I've answered some questions recently. Stuff I probably would have (should have?) put on my blog, but wrote there instead.

Thursday, August 30, 2018

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system (OS)) has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized with Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps; streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.

Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot an OS at all - the OS is already there. It just loads and starts your application code, which takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, nodejs, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.

VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So you get consistency and maintainability far superior to a VM, without the restrictions of PaaS/FaaS.

Another key aspect to keep in mind is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning the code you host in your containers is entirely separate from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must then be deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. There is no need to deploy the app or its dependencies separately, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS, or even to configure the OS, because the app, its dependencies, and its configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking, where you can see the history of every change.
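As one illustration, here is roughly what such a build definition might look like when checked into source control. I'm using the general shape of an Azure Pipelines YAML file, but the same idea applies to any CI tool; the names are hypothetical and registry authentication is omitted for brevity:

    # CI pipeline definition lives in source control alongside the app code
    trigger:
    - master
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    # The output of the build is the container image itself
    - script: docker build -t registry.example.com/ordersvc:$(Build.BuildId) .
      displayName: Build container image
    - script: docker push registry.example.com/ordersvc:$(Build.BuildId)
      displayName: Push image to registry

Because this definition is just a text file in the repository, changes to the build and deployment process get the same history and review as changes to the app itself.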

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.
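For example, a minimal Kubernetes Deployment definition (hypothetical names again) tells K8S to keep a given number of instances of a container image running, and K8S takes care of placing those instances on nodes and replacing any that fail:

    # Minimal Kubernetes Deployment sketch
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ordersvc
    spec:
      replicas: 3                 # K8S keeps three instances running
      selector:
        matchLabels:
          app: ordersvc
      template:
        metadata:
          labels:
            app: ordersvc
        spec:
          containers:
          - name: ordersvc
            image: registry.example.com/ordersvc:1.0
            ports:
            - containerPort: 80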

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should also be deployed separately, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.

This approach means that each service, along with its dependencies, becomes a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently, while services that change rarely can be deployed only when necessary. So you can easily envision some services deploying hourly, daily, or weekly, while others deploy once and remain stable and unchanged for months or years.
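For instance, rolling out a new version of just one service, without touching anything else in the system, can be as simple as pointing its Deployment at a new image tag (hypothetical names, assuming a Deployment like the sketch above is already running):

    # Update one service to a new image version; other services are untouched
    kubectl set image deployment/ordersvc ordersvc=registry.example.com/ordersvc:1.1
    # Watch the rolling update replace old instances with new ones
    kubectl rollout status deployment/ordersvc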

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology offers a nice compromise between virtual machines and PaaS/FaaS, while providing a vendor-neutral model for hosting apps and services.

Thursday, August 30, 2018 12:47:24 PM (Central Standard Time, UTC-06:00)