Rockford Lhotka

 Wednesday, September 4, 2019

I recently blogged about the new support coming in CSLA 5 for Blazor. Shipping in .NET Core 3, Blazor is an HTML-based UI framework for WebAssembly.

There's another very cool UI framework for WebAssembly that sits on top of .NET: Uno Platform.

Uno Platform CSLA

This UI framework relies on XAML (specifically the UWP dialect) to not only reach WebAssembly, but also Android and iOS, as well as Windows 10 of course. In short, the Uno approach allows you to write one codebase that can run on:

  1. Any modern browser via WebAssembly
  2. Android devices
  3. iOS devices
  4. Windows 10

Uno is similar to Xamarin Forms, in that it leverages the mono runtime to run code on iOS, Android, and WebAssembly. The primary differences are that Xamarin Forms has its own dialect of XAML (vs the UWP dialect), and doesn't target WebAssembly.

Solution Structure

When you create an Uno solution you get a number of projects:

  1. A project with the implementation shared across all platforms
  2. An iOS project
  3. An Android project
  4. A UWP (Windows 10) project
  5. A wasm (WebAssembly) project

You can see a working example of this in the CSLA UnoExample sample app.

Following typical CSLA best practices, you'll also add a .NET Standard 2.0 Class Library project for your business domain types, and at least one other for your data access layer implementation. You'll also normally have an ASP.NET Core project that acts as your application server, because a typical business app needs to interact with server-side resources like databases.

It is important to understand that, like Xamarin Forms, the platform-specific projects for iOS, Android, UWP, and wasm have almost no code. They exist to bootstrap the app on each type of platform. This is true of the app server code as well; it is just there to provide an endpoint for the apps. All the real code is in the shared project, your business library, and your data access library.

Business Layer

The purpose of CSLA .NET is to provide a home for business logic. This is done by enabling the creation of business domain classes that encapsulate all business logic, and that's supported through consistent coding structures and a rules engine.

To improve developer productivity CSLA also abstracts many platform differences around data binding to various types of UI and interactions with app servers and data access layers.

ℹ Much more information about CSLA is available in the free Using CSLA 2019: CSLA Overview ebook.

As a demo, nothing in the UnoExample is terribly complex, but it does demonstrate some very basic types of rules: informational messages, warning messages, and validation errors. It doesn't demonstrate things like calculated values, cross-object rules, etc. There's a lot more info about the rules engine in the CSLA RuleTutorial sample.

The BusinessLayer project has a couple custom rules: CheckCase and InfoText, and it relies on the Required attribute from System.ComponentModel.DataAnnotations for a simple validation rule.

The PersonEdit class relies on these rules to enable basic creation and editing of a "person" domain concept:

  [Serializable]
  public class PersonEdit : BusinessBase<PersonEdit>
  {
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id
    {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
    }

    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(nameof(Name));
    [Required]
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
      base.AddBusinessRules();
      BusinessRules.AddRule(new InfoText(NameProperty, "Person name (required)"));
      BusinessRules.AddRule(new CheckCase(NameProperty));
    }
    
    // data access abstraction below...
  }

The PersonEdit class also leverages the CSLA data portal to abstract the concept of an application server (or not) and prescribes how to interact with the data access layer. This code also leverages the new CSLA version 5 support for method-level dependency injection:

    // properties and business rules above...

    [Create]
    private void Create()
    {
      Id = -1;
      BusinessRules.CheckRules();
    }

    [Fetch]
    private void Fetch(int id, [Inject]DataAccess.IPersonDal dal)
    {
      var data = dal.Get(id);
      using (BypassPropertyChecks)
        Csla.Data.DataMapper.Map(data, this);
      BusinessRules.CheckRules();
    }

    [Insert]
    private void Insert([Inject]DataAccess.IPersonDal dal)
    {
      using (BypassPropertyChecks)
      {
        var data = new DataAccess.PersonEntity
        {
          Name = Name
        };
        var result = dal.Insert(data);
        Id = result.Id;
      }
    }

    [Update]
    private void Update([Inject]DataAccess.IPersonDal dal)
    {
      using (BypassPropertyChecks)
      {
        var data = new DataAccess.PersonEntity
        {
          Id = Id,
          Name = Name
        };
        dal.Update(data);
      }
    }

As is normal with CSLA, any interaction with the database is handled by the data access layer, while private fields and data within the domain object are managed in these data portal methods, preserving a clean separation of concerns.

Data Access Layer

I am not going to dive into the data access layer (DAL) in any depth. The implementation in the sample is a pure in-memory model relying on LINQ and a static collection as a stand-in for a database Person table.

This approach is valuable for two scenarios:

  1. A demo or example where I want the code to "just work" without forcing you to create a database
  2. Integration testing scenarios where automated integration or user acceptance testing should be fast, and should be performed on a known set of data - without having to reinitialize a real database each time

This sample code isn't quite at the level needed to accomplish the second goal, but that is easily achieved, especially since the DAL is injected into the code via DI.
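
For reference, here is a minimal sketch of the general shape such an in-memory DAL might take. The member signatures follow the calls made from PersonEdit above; the actual sample may differ in detail (for example, it also needs a way to return the full list of people for PersonList):

  using System.Collections.Generic;
  using System.Linq;

  namespace DataAccess
  {
    public class PersonEntity
    {
      public int Id { get; set; }
      public string Name { get; set; }
    }

    public interface IPersonDal
    {
      PersonEntity Get(int id);
      PersonEntity Insert(PersonEntity data);
      void Update(PersonEntity data);
    }

    // A static list stands in for a database Person table
    public class PersonDal : IPersonDal
    {
      private static readonly List<PersonEntity> _people = new List<PersonEntity>();

      public PersonEntity Get(int id) => _people.First(p => p.Id == id);

      public PersonEntity Insert(PersonEntity data)
      {
        data.Id = _people.Count == 0 ? 1 : _people.Max(p => p.Id) + 1;
        _people.Add(data);
        return data;
      }

      public void Update(PersonEntity data) => _people.First(p => p.Id == data.Id).Name = data.Name;
    }
  }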

App Server

The AppServer project is an ASP.NET Core project using the empty API template. So it just contains a Controllers folder and some configuration. It also references the business and data access layers so they are available on the hosting server (maybe IIS, maybe in a Docker container in Kubernetes - thanks to .NET Core there's a lot of flexibility here).

Configuration

In the Startup class CSLA and the DAL are added to the available services in the ConfigureServices method:

      services.AddCsla();
      services.AddTransient(typeof(DataAccess.IPersonDal), typeof(DataAccess.PersonDal));

And the Configure method configures CSLA:

      app.UseCsla();

It is also important to note that the app server is configured for CORS, because the wasm client runs in a browser, and isn't deployed from the app server. Without CORS configuration the app server would reject HTTP requests from the wasm client app.
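
The exact policy will vary, but a minimal sketch of that CORS setup in Startup might look like this (the policy name and allowed origin are assumptions, not the sample's actual values):

      // In ConfigureServices
      services.AddCors(options =>
        options.AddPolicy("AllowWasmClient", policy =>
          policy.WithOrigins("https://localhost:44300") // origin of the wasm client (assumed)
                .AllowAnyHeader()
                .AllowAnyMethod()));

      // In Configure, before the data portal endpoints are used
      app.UseCors("AllowWasmClient");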

Data Portal Controllers

The reason for the app server is to expose endpoints for use by the client apps on iOS, Android, Windows, and wasm. CSLA has a component called the data portal that abstracts the entire idea of an app server and the network transport used to interact with any app server. As a result, an app server exposes "data portal endpoints" using components supplied by CSLA.

In the Controllers folder are DataPortalController and DataPortalTextController classes. The DataPortalController looks like this:

  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
    [HttpGet]
    public string Get()
    {
      return "Running";
    }
  }

The Get method is entirely optional. I tend to implement one because it makes it easier to troubleshoot my web server. But the real behavior of a data portal endpoint is in its ability to handle a POST request, and that's already provided via the HttpPortalController base class.

The only difference in the DataPortalTextController is one line in the constructor:

    public DataPortalTextController()
    {
      UseTextSerialization = true;
    }

This is due to a current limitation of .NET running in WebAssembly in the browser: the HttpClient can't transfer binary data, only text data.

Normally the CSLA data portal transfers data as binary, often compressed. The whole point is to minimize data over the network and maximize performance. However, in the case of a wasm client app that binary data needs to be Base64 encoded into text for transfer over the network.

The result is two data portal endpoints: one binary, the other text. Otherwise they do the same thing.

Uno UI Apps

The remaining projects in the solution rely on Uno to implement a common XAML-based UI with apps for iOS, Android, Windows, and wasm. Each of these apps is called a "head", and they all rely on a common implementation from the UnoExample.Shared project.

Platform-Specific UI Projects

Each head project is a bootstrap for the client app, providing platform or operating system specific startup code and then handing off to the shared code for all the "real work". I'm not going to explore the various head apps, because they are template code - nothing I wrote.

⚠ The current Uno templates start with an Assets folder in the shared project. That'll cause compiler warnings from Android. Move that Assets folder from the shared project to the UWP project to solve this problem.

⚠ Do not update the console logging dependency in NuGet. It starts at version 1.1.1, and if you upgrade that'll cause a runtime issue with threading in the wasm head.

⚠ You may need to update the current target OS versions for Android and UWP, as the Uno template targets older versions of both.

Shared Project

The way Uno works is that you implement nearly all of your application's UI logic in a shared project, and that project is compiled and deployed via each platform-specific head project. In other words, the code in the shared project is compiled into the iOS project, and into the Android project, and into the Windows project, and into the wasm project.

The result is that, as long as you don't do anything platform-specific, all your UI client code ends up in this one shared project.

Configuration

When each platform-specific head app starts up it hands off control to the shared code as soon as any platform-specific details are handled. The entry point to the shared project is via App.xaml and any code behind in App.xaml.cs.

CSLA is configured in the constructor of the App class. In the UnoExample code there are two sets of configuration, only one of which should be uncommented at a time.

Use Text-based Data Transfer for WebAssembly

As I mentioned when discussing the app server, the HttpClient object in .NET on WebAssembly can't currently transfer binary data. This means that CSLA needs to be configured to Base64 encode the data sent to the app server. In the constructor of the App class there is this code:

#if __WASM__
      Csla.DataPortalClient.HttpProxy.UseTextSerialization = true;
#endif

This uses a compiler directive that is predefined by the wasm UI project to conditionally compile a line of code. In other words, this line of code is only compiled into the wasm project, and it is totally ignored for all the other project types.

Run "App Server" In-Process

After that, the constructor configures CSLA to run the "app server" code in-proc on the client. This is useful for demo purposes as it means each app will "just run" without you having to set up a real app server:

      CslaConfiguration.Configure().
        ContextManager(new Csla.Xaml.ApplicationContextManager());
      var services = new ServiceCollection();
      services.AddCsla();
      services.AddTransient(typeof(DataAccess.IPersonDal), typeof(DataAccess.PersonDal));

CSLA is configured to use an ApplicationContextManager designed for Uno. This kind of gets into the internals of CSLA, but CSLA relies on different context managers for different runtime environments, because in some cases context is managed via HttpContext, in others on a per-thread basis, and in still others via static fields.

The use of ServiceCollection configures the .NET Core dependency injection subsystem so CSLA can inject the correct implementation of the data access layer upon request.

ℹ In a real app of this sort the data access layer would not be deployed to, or available on, the client apps. It would only be deployed to the app server. But this is a demo, and it is far more convenient for you to be able to just run the various head apps without first having to set up an app server.

Invoke a Remote App Server

For the UWP and wasm heads you can easily comment out the "local app server" configuration and uncomment the configuration for the actual app server:

      string appserverUrl = "http://localhost:60223/api/dataportal";
      if (Csla.DataPortalClient.HttpProxy.UseTextSerialization)
        appserverUrl += "Text";
      CslaConfiguration.Configure().
        ContextManager(new Csla.Xaml.ApplicationContextManager()).
        DataPortal().
          DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy), appserverUrl);

This code sets up a URL for the app server endpoint, appending "Text" to the controller name if text-based encoding should be used.

It then configures the application context manager, and also tells the CSLA data portal to use the HttpProxy type to communicate with the app server, along with the endpoint URL.

⚠ This only works with the Windows and wasm client apps. It won't work with the iOS or Android client apps because they won't have access to your localhost. If you want to use an app server with the Android or iOS apps you'll need to deploy the AppServer project to a real app server.

Notice that there is no configuration of the data access layer in this scenario. That's because the data access layer is only invoked on the app server, not on the client. This is a more "normal" scenario, and in this case the client-side head projects would not reference the DataAccess project at all, so that code wouldn't be deployed to the client devices.

UI Implementation

I'm not going to walk through the UI implementation in great detail. If you know XAML it is pretty straightforward, and if you don't know XAML there are tons of resources out there about how UWP apps work.

The really cool thing, though, is how Uno manages to provide a largely consistent experience for UWP-style XAML across four different platforms. In particular, I found building this example quite enjoyable because I could use standard debugging against the UWP head, and then just run the code in the wasm head. Usually that means the wasm UI "just works", and that's wonderful!

ℹ In the cases where I had difficulties, the Uno team is active on the Uno Gitter channel (like a public Teams or Slack), and between the team and the community I got over any hurdles very rapidly.

Loading List of People

The MainPage displays a list of people in the database, and has a button to add a new person. It also has a couple text output lines to show status and errors. I don't claim that this is a nice or pretty user interface, but it demonstrates important concepts 😊

The important bit is in the RefreshData method, where the CSLA data portal is used to retrieve the list of people:

    private async Task RefreshData()
    {
      this.InfoText.Text = "Loading ...";
      try
      {
        DataContext = await DataPortal.FetchAsync<PersonList>();
        this.InfoText.Text = "Loaded";
      }
      catch (Exception ex)
      {
        OutputText.Text = ex.ToString();
      }
    }

This demonstrates a key feature of CSLA: location transparency. The data portal call will work regardless of whether the data portal was configured to run the app server code in-process on the client, or remotely on a server. Even better, though this example app uses HTTP as a transport, you could configure the data portal to use gRPC or RabbitMQ or other network transports, and the code here in the UI wouldn't be affected at all!

ℹ When editing XAML or the codebehind for a page you might find that you get all sorts of Intellisense errors. This can be resolved by making sure the Project dropdown in the upper-left of the code editor is set to UnoExample.UWP. It'll often default to some other project, and that causes the errors.

For example, if the Project dropdown is set to UnoExample.Droid, the code editor gets confused in exactly this way.

Editing a PersonEdit Object

The EditPerson page is a typical forms-over-data scenario. As the page loads it is data bound to a new or existing domain object:

    public int PersonId { get; set; } = -1;

    protected override async void OnNavigatedTo(NavigationEventArgs e)
    {
      if (e.Parameter != null)
        PersonId = (int)e.Parameter;

      PersonEdit person;
      this.InfoText.Text = "Loading ...";
      if (PersonId > -1)
        person = await DataPortal.FetchAsync<PersonEdit>(PersonId);
      else
        person = await DataPortal.CreateAsync<PersonEdit>();
      DataContext = person;
      this.InfoText.Text = "Loaded";
    }

The user is then able to edit the Name property, which has rules associated with it from the business library. Details about those rules (and other metastate about the domain object and individual properties) can be displayed to the user via data binding. The Csla.Xaml.PropertyInfo type provides data binding with access to all sorts of metastate about a specific property, and that is used in the XAML:

    <TextBox Text="{Binding Name, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
    <csla:PropertyInfo x:Name="NameInfo" Property="{Binding Name, Mode=TwoWay}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=Value}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=IsValid}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=InformationText}" Foreground="Blue" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=WarningText}" Foreground="DarkOrange" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=ErrorText}" Foreground="Red" />

Again, I don't claim to be a UX designer, but this does demonstrate some of the capabilities available to a UI developer given the rich metastate provided by CSLA.

Once the user has edited the values on the page, they can click the button to save the person. That also relies on CSLA to provide location transparent code:

    private async void SavePerson(object sender, RoutedEventArgs e)
    {
      try
      {
        var person = (PersonEdit)DataContext;
        await person.SaveAsync();
        var rootFrame = Window.Current.Content as Frame;
        rootFrame.Navigate(typeof(MainPage));
      }
      catch (Exception ex)
      {
        OutputText.Text = ex.ToString();
      }
    }

The SaveAsync method uses the data portal to have the DAL insert or update (or delete) the data associated with the domain object. In this case, once the object's data has been saved the user is navigated to the list of people.

Conclusion

I am very excited about WebAssembly and the ability to run native .NET code in any modern browser. The Uno Platform offers a powerful UI framework for building apps that run native on mobile devices, in Windows, and in any modern browser.

CSLA .NET has always been about providing a home for your business logic, allowing you to write your logic once and to then leverage it on any platform or environment supported by .NET. Thanks to .NET running in WebAssembly, this means that you can take your business logic directly into any modern browser on any device or platform.

 Monday, September 2, 2019

I'm excited about two things in our industry right now: containers (specifically Kubernetes) and WebAssembly (specifically Blazor and Uno).

CSLA .NET version 4.11 got a lot of new and exciting features to support container and Kubernetes scenarios, and there are some more coming in CSLA version 5 as well.

But over the past few days I've been building a new Csla.Blazor package to provide some basic UI support when using CSLA 5 with Blazor. Specifically client-side Blazor, which is the really exciting part, though this should all work fine on server-side Blazor as well.

Application Context Manager

Part of this is a context manager that helps simplify configuration and context management within the Blazor client environment. Specifically:

  1. The HttpProxy is set to use text-based serialization, because Blazor (wasm) doesn't currently support passing binary data via HttpClient
  2. The User property is maintained in a static field, just like in all other smart client scenarios

Configuring a Blazor Client App

Blazor relies on the standard Startup class like ASP.NET Core for configuring a client app. CSLA supports this model via an AddCsla method and the fluent CslaConfiguration system. As a result, basic configuration looks like this:

using Csla;
using Csla.Configuration;
using Microsoft.AspNetCore.Components.Builder;
using Microsoft.Extensions.DependencyInjection;

namespace BlazorExample.Client
{
  public class Startup
  {
    public void ConfigureServices(IServiceCollection services)
    {
      services.AddCsla();
      services.AddTransient(typeof(IDataPortal<>), typeof(DataPortal<>));
      services.AddTransient(typeof(Csla.Blazor.ViewModel<>), typeof(Csla.Blazor.ViewModel<>));
    }

    public void Configure(IComponentsApplicationBuilder app)
    {
      app.AddComponent<App>("app");

      CslaConfiguration.Configure().
        ContextManager(typeof(Csla.Blazor.ApplicationContextManager)).
        DataPortal().
          DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy), "/api/DataPortal");
    }
  }
}

Blazor defaults to providing HttpClient as a service, and this code adds mappings for IDataPortal<T> and ViewModel<T>.

Notice that it also configures the app to use the ApplicationContextManager designed to support Blazor.

The HttpProxy data portal channel will gain access to the environment's HttpClient via dependency injection.

Data Portal Server Needs to Use Text

As noted above, the HttpClient implementation currently used by .NET in wasm can't transfer binary data. As a result both client and server need to be configured to use text-based data transfer (basically Base64 encoded binary data). This is automatic on the Blazor client, but the data portal server controller needs to use text as well. Here's the controller code from the server's endpoint:

  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
    public DataPortalController()
    {
      UseTextSerialization = true;
    }
  }

If your data portal needs to support Blazor and non-wasm clients, you'll need two controllers, one for wasm clients and one for everything else.
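
For example, the pair of controllers might look like this (the class names and routes are your choice; the text-enabled controller simply sets the flag in its constructor):

  // Binary endpoint for non-wasm clients
  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
  }

  // Text endpoint for Blazor/wasm clients
  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalTextController : Csla.Server.Hosts.HttpPortalController
  {
    public DataPortalTextController()
    {
      UseTextSerialization = true;
    }
  }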

ViewModel Type

The new Csla.Blazor.ViewModel type provides basic support for creating a Razor Page that binds to a business domain object via the viewmodel.

As with all the previous XAML-based viewmodel types, this one exposes the domain object via a Model property, because the CSLA-based domain object already fully supports data binding. It would be a waste of code (to write, debug, test, and maintain) to duplicate all the properties from a CSLA-based domain class in a viewmodel.

Also, like the previous XAML-based viewmodel types, this one supports some basic verbs/operations that are likely to be triggered by the UI. Specifically the create/fetch and save operations.

Finally, the viewmodel type exposes a set of metastate methods designed to allow the Razor Page to easily understand and bind to the state of the business object. For example, is the object currently saveable? What are the information/warning/error validation messages for a given property? Is a property currently running any async business rules?

You can use all these metastate values to create a rich UI, much like in XAML, with no code. The Blazor data binding model, combined with the new ViewModel type, typically provides everything necessary.

To use this type in a page, make sure to add it to the services in Startup as shown earlier. Then inject it into the page:

@inject Csla.Blazor.ViewModel<PersonEdit> vm

And in the @code block call the RefreshAsync method:

@code {
  protected override async Task OnInitializedAsync()
  {
    await vm.RefreshAsync();
  }
}
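
Putting these pieces together, a page can bind its inputs directly to the business object exposed through the viewmodel's Model property. Here is a minimal sketch; the route and markup are illustrative rather than taken from the sample, and only Model and RefreshAsync are relied on from the ViewModel type:

@page "/editperson"
@inject Csla.Blazor.ViewModel<PersonEdit> vm

@if (vm.Model == null)
{
  <p>Loading...</p>
}
else
{
  <input @bind="vm.Model.Name" />
}

@code {
  protected override async Task OnInitializedAsync()
  {
    await vm.RefreshAsync();
  }
}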

The RefreshAsync method has various default behaviors around how it knows to fetch or create an instance of the business domain type:

  1. If the domain type is read-only it always does a fetch
  2. If the domain type is editable and no criteria parameters are provided it does a create
  3. If the domain type is editable and criteria is provided it does a fetch

You can override this behavior, but these defaults work well in many cases.

BlazorExample Sample

You can look at the Samples/BlazorExample sample to see how this comes together. That sample is the basic Blazor start template, plus the ability to add/edit person objects, and get a list of people in the "database" (a mock in-memory data store).

 Thursday, August 22, 2019

I recently posted about the new gRPC data portal channel coming in CSLA 5.

I've also been working on a data portal channel based on using RabbitMQ as the underlying transport.

Now this might seem odd, because the CSLA .NET data portal is essentially a "synchronous" model, much like HTTP. The caller sends a message to the server and waits for a response, which is one of:

  1. A response message indicating some result (success or failure)
  2. An exception due to the transport failing
  3. A timeout due to the server not responding

This makes sense with gRPC and HTTP, because they both follow that bi-directional communication model. But (by themselves) queues don't. Queues are "fire and forget", providing a one-way message protocol.

However, it has been a common practice for decades to use queues for bi-directional messaging through the use of a reply queue.

In this model callers (logical client-side code) send requests to the data portal by sending a message to the data portal server's queue.

The data portal server processes those messages exactly as though they came in via HTTP or gRPC. The calls are routed to your business code, which can do whatever it wants on the server (typically talk to a database). When your business code is done, the response is sent back to each caller's respective reply queue.

This seems pretty intuitive and straightforward. The various request/response pairs are coordinated using something called a correlation id, which is just a unique value for each original request. Also, each request includes the name of its reply queue, making it easy to respond to the original caller.
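
CSLA's RabbitMQ channel implements all of this for you, but to make the pattern concrete, here is a rough sketch using the raw RabbitMQ.Client API (not CSLA's actual implementation) showing how a request carries a correlation id and the name of its reply queue:

using System;
using System.Text;
using RabbitMQ.Client;

class RequestSender
{
  static void Main()
  {
    var factory = new ConnectionFactory { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
      // Server-named, exclusive queue where replies for this caller will arrive
      var replyQueue = channel.QueueDeclare().QueueName;

      var props = channel.CreateBasicProperties();
      props.CorrelationId = Guid.NewGuid().ToString(); // unique id ties a reply to its request
      props.ReplyTo = replyQueue;                      // tells the server where to send the reply

      var body = Encoding.UTF8.GetBytes("serialized request payload");
      channel.BasicPublish("", "rmqserver", props, body); // publish to the server's queue

      // A consumer on replyQueue would then match each response to its request by CorrelationId
    }
  }
}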

The data portal server can handle many inbound requests at the same time, because they are all uniquely identified via correlation id and reply queue. In fact there are some amazing benefits to this approach:

  1. If the data portal server crashes and comes back up it'll pick up where it left off - a valuable attribute in an environment such as Kubernetes
  2. Multiple instances of the data portal server can run at the same time to spread the workload across multiple servers - useful in traditional data centers and in Kubernetes
  3. Fault tolerance can be achieved by configuring RabbitMQ itself to run in a redundant clustered environment

It is also the case that the caller might be something like a web server. So a given caller might send multiple concurrent requests to the data portal. And that's fine, because each request has a unique correlation id, allowing replies from the data portal server to be mapped back to the original requester.

The one primary limitation is that if a caller crashes then its "client-side" state is lost. This is an inherent part of the bi-directional, caller-driven model used by the data portal.

You can think of it as being no different from an HTTP caller (e.g. a browser) shutting down after making a request to a web server and before the server responds. The server may complete its work, but even if the user opens a new browser window they'll never get the response from the server.

The same thing is true in this implementation of the data portal using RabbitMQ. So it has total parity with HTTP or gRPC in this regard.

The great thing is how CSLA abstracts the use of RabbitMQ, just like it does for HTTP, gRPC, and any other network transport.

Identifying the RabbitMQ Service

Everything assumes you have a RabbitMQ instance running. It might be a single node or a cluster; either way RabbitMQ has an IP address and a port. The data portal also requires that you provide a name for the data portal server queue, and you can optionally manually name the reply queues.

To make this fit within the URL-based model for other transports, CSLA relies on a URI for the RabbitMQ service. It looks like this:

rabbitmq://servername/queuename

And optionally on the client, if you want to manually specify the reply queue name:

rabbitmq://servername/queuename?reply=replyqueuename

In advanced scenarios you can use more of the URI scheme:

rabbitmq://username:password@servername:port/queuename

Think of this like a URL for an HTTP or gRPC endpoint.

Implementing a Client

On the client all that's needed is:

  1. Reference the Csla.Channels.RabbitMq NuGet package (CSLA v5.0.0-R19082201 or higher)
  2. Configure the data portal to use the new channel:
  CslaConfiguration.Configure().
    DataPortal().
      DefaultProxy(typeof(Csla.Channels.RabbitMq.RabbitMqProxy), "rabbitmq://localhost/rmqserver");

This configures the data portal to use the RabbitMQ channel, and to find the server using the provided URI.

Implementing the Server

Unlike with HTTP and gRPC where the server is probably hosted in ASP.NET Core, RabbitMQ servers are usually implemented as a console app. This is ideal for hosting in lightweight containers in Docker or Kubernetes, as there's no need for the overhead of ASP.NET.

  1. Create a console app (.NET Core 2.0 or higher)
  2. Create an instance of the data portal host
  3. Tell the data portal to start listening for requests

Here's a complete implementation:

using System;
using System.Threading.Tasks;

namespace rmqserver
{
  class Program
  {
    static async Task Main(string[] args)
    {
      Console.WriteLine("Start listener; ctl-c to exit");
      var host = new Csla.Channels.RabbitMq.RabbitMqPortal("rabbitmq://localhost/rmqserver");
      host.StartListening();

      await new Csla.Reflection.AsyncManualResetEvent().WaitAsync();
    }
  }
}

Shared Business Logic

Of course the centerpiece of CSLA .NET is the idea of shared business logic in a common assembly. So any solution would contain the client code as shown above, the server, and a .NET Standard 2.0 Class Library that contains all the business classes that encapsulate business logic.

Both the client and server projects must reference the business class library assembly. That business assembly needs to be available to both client and server code. The data portal takes care of the rest.

In that shared assembly you might have a simple type like this:

using Csla;
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;

namespace ClassLibrary1
{
  [Serializable]
  public class PersonEdit : BusinessBase<PersonEdit>
  {
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id
    {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
    }

    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(nameof(Name));
    [Required]
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    [Create]
    private void Create(int id)
    {
      using (BypassPropertyChecks)
        Id = id;
    }

    [Fetch]
    private async Task Fetch(int id)
    {
      // TODO: get object's data
    }

    [Insert]
    private async Task Insert()
    {
      // TODO: insert object's data
    }

    [Update]
    private async Task Update()
    {
      // TODO: update object's data
    }

    [DeleteSelf]
    private async Task DeleteSelf()
    {
      await Delete(ReadProperty(IdProperty));
    }

    [Delete]
    private async Task Delete(int id)
    {
      // TODO: delete object's data
    }
  }
}

The client can interact with this type via the data portal. For example:

  var obj = await DataPortal.CreateAsync<PersonEdit>(42);
  obj.Name = "Arnold";
  if (obj.IsSaveable)
  {
    await obj.SaveAndMergeAsync();
  }

And that's it. The data portal takes care of relaying that call to the server (in this case via RabbitMQ). The server creates an instance of PersonEdit and invokes the method marked with the Create attribute so the object can invoke a data access layer (DAL) to initialize itself or do whatever is necessary.

In CSLA 5 those create/fetch/insert/update/delete methods can all accept parameters that are provided via dependency injection, but that's a topic for another blog post. Keep in mind that DI is the appropriate way to gain access to the DAL, and that any interaction with databases is encapsulated within the DAL.
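
As a quick preview, an operation method can simply declare extra parameters marked with the Inject attribute, and the data portal supplies them from the service container. A sketch, assuming a hypothetical IPersonDal service has been registered:

    [Fetch]
    private async Task Fetch(int id, [Inject] IPersonDal dal)
    {
      // IPersonDal and GetAsync are hypothetical; substitute your own DAL abstraction
      var data = await dal.GetAsync(id);
      using (BypassPropertyChecks)
      {
        Id = data.Id;
        Name = data.Name;
      }
    }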

 Wednesday, August 21, 2019

The new .NET Core 3 release includes support for the gRPC protocol. This is an efficient binary protocol for making network calls, and so is something that CSLA .NET should obviously support.

CSLA already has an extensible channel-based model for network communication via the data portal. Over the years there have been numerous channels, including:

  • .NET Remoting (obsolete)
  • asmx services (obsolete)
  • WCF (of limited value in modern .NET)
  • Http

I'm sure there have been others as well. The current recommended channel is via Http (using the HttpProxy type), as it best supports performance, routing, and various other features native to HTTP and to the data portal channel implementation.

CSLA .NET version 5.0.0 will include a new gRPC channel. Like all the other channels, this is a drop-in replacement for your existing channel.

⚠ This requires CSLA .NET version 5.0.0-R19082107 or higher

Client Configuration

On the client it requires a new NuGet package reference and a configuration change.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. On app startup, configure the data portal as shown here:
      CslaConfiguration.Configure().
        DataPortal().DefaultProxy(typeof(Csla.Channels.Grpc.GrpcProxy), "https://localhost:5001");

This configures the data portal to use the new GrpcProxy and provides the URL to the service endpoint. Obviously you need to provide a valid URL.

Server Configuration

On the server it requires a new NuGet package reference and a bit of code in Startup.cs to set up the service endpoint.

⚠ This requires an ASP.NET Core 3.0 project.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. In the ConfigureServices method you must configure gRPC: services.AddGrpc();
  3. In the Configure method add the data portal endpoint:
      app.UseRouting();

      app.UseEndpoints(endpoints =>
      {
        endpoints.MapGrpcService<Csla.Channels.Grpc.GrpcPortal>();
      });

General Notes

As usual, both client and server need to reference the same business library assembly, which should be a .NET Standard 2.0 library that references the Csla NuGet package. This assembly contains implementations of all your business domain classes based on the CSLA .NET base classes.

The gRPC data portal channel uses MobileFormatter to serialize and deserialize all object graphs, and so your business classes need to use modern CSLA coding conventions so they work with that serializer.

All the version and routing features added to the Http data portal channel in CSLA version 4.9.0 are also supported in this new gRPC channel, allowing it to take full advantage of container orchestration environments such as Kubernetes.

Also, as with the Http channel, the GrpcProxy and GrpcPortal types have virtual methods you can optionally override to implement compression on the data stream, and (on the client) to support advanced configuration scenarios when creating the underlying HttpClient and gRPC client objects.

 Tuesday, June 25, 2019

Containers and Kubernetes (k8s) are useful for building and deploying distributed systems in general. This includes service-based architectures (SOA, microservices) as well as n-tier client/server endpoints.

I do think container-based runtimes and service-based architecture go hand-in-hand. However, a lot of the benefits of container-based runtimes apply just as effectively to a well-architected n-tier client/server application as well.

By “well architected” I mean an n-tier app that has:

  1. Good separation of concerns between interface, interface control, business, data access, and data storage layers
  2. “Chunky” communication between client and server
    1. A limited number of server endpoints - the n-tier services have been well-considered and are cohesive
    2. Effective use of things like the unit of work pattern to minimize calls over the network by bundling “multiple calls” into a single unit of work (single call)
  3. Efficient use of data transfer - no blind use of ORM tools where extraneous data flows over the network just because it “made things easier to implement”
    1. Focus on decoupling over reuse (two sides of the same coin, where reuse leads to coupling and coupling is extremely bad)

In such an n-tier app, the client is often quite smart (whether mobile, Windows, Mac, WebAssembly, or even TypeScript), and the overall solution is architected to enable this smart client to efficiently interact with n-tier endpoints (really a type of service) to leverage server-side behaviors.

These n-tier endpoints are not designed for use by other apps. They are not “open”. This is arguably a good thing, because it means they can (and should) use much more efficient binary serialization protocols, as compared to the abysmally inefficient JSON or XML protocols used by “open” services.

At the end of the day, the key is that nothing stops you from hosting a n-tier endpoint/service in a container. Well, nothing beyond what might stop you from hosting any code in a container.

In other words, if you build (or update) your n-tier endpoint code to follow cloud-native best practices (such as embracing 12factor design and avoiding the fallacies of distributed computing) - just like you must with microservice implementations - your endpoint code can take advantage of container-based runtimes very effectively.

Now a lot of folks tend to look at any n-tier app and think “monolith”. Which can be true, but doesn’t have to be true. People can create good or bad n-tier solutions, just like they can create good or bad microservice solutions. These architectures aren’t silver bullets.

Look at MVC - a great design pattern - unless you put your business logic in the controller. And LOTS of people write business logic in their controllers. Horrible! Totally defeats the purpose of the design pattern. But it is expedient, so people do it.

What I’m saying is that if you’ve done a good job designing your n-tier endpoints to be cohesive around business behaviors, and you can make them (at least mostly) 12factor-compliant, you can get container-based runtime benefits such as:

  1. Scaling/Elasticity - quickly spin up/down workers based on active load
  2. Functional grouping - have certain endpoints run on designated server nodes based on CPU, IO, or other workload requirements
  3. Manageability - the same management features that draw people to k8s for microservices are available for n-tier endpoints as well
  4. Resiliency - auto-fail over of k8s pods applies to n-tier endpoints just as effectively as microservices
  5. Infrastructure abstraction - k8s is basically the same (from a dev or code perspective) regardless of whether it is in your datacenter, or in Azure, or AWS

I’ll confess that I’m a little biased. I’ve spent many years talking about good n-tier client server architecture, and have over 22 years of experience maintaining the open source CSLA framework based on such an architecture.

The most recent versions of CSLA have some key features that allow folks to truly exploit container-based runtimes. Usually with little or no change to any existing code.

My key point is this: container-based runtimes offer fantastic benefits to organizations, for both service-based and n-tier client/server architectures.

 Friday, April 5, 2019

One thing I’ve observed in my career is something I call the “pit of success”. People (often including me when I was younger) write quick-and-dirty software because what we’re doing is a “simple project” or “for just a couple users” or a one-off or a stopgap or a host of other diminutive descriptions.

What so often happens is that the software works well - it is successful. And months later you get a panicked phone call in the middle of the night because your simple app for a couple users that was only a stop-gap until the “real solution came online” is now failing for some users in Singapore.

You ask how it has users in Singapore, the couple users you wrote it for were in Chicago? The answer: oh, people loved it so much we rolled it out globally and it is used by a few hundred users.

OF COURSE IT FAILS, because you wrote it as a one-off (no architecture or thoughtful implementation) for a couple users. You tell them that fixing the issues requires a complete rearchitect and implementation job. And that it’ll take a team of 4 people 9 months. They are shocked, because you wrote this thing initially in 3 weeks.

This is often a career limiting move, bad news all around.

Now if you’d put a little more thought into the original architecture and implementation, perhaps using some basic separation of concerns, a little DDD or real OOD (not just using an ORM) the original system may well have scaled globally to a hundred users.

Even if you do have to enhance it to support the fact that you fell into the pit of success, at least the software is maintainable and can be enhanced without a complete rewrite.

This "pit of success" concept was one of the major drivers behind the design of CSLA .NET and the data portal. If you follow the architecture prescribed by CSLA you'll have clear separation of concerns:

  1. Interface
  2. Interface control
  3. Business logic
  4. Data access
  5. Data storage

And you'll be able to deploy your initial app as a 1- or 2-tier quick-and-easy thing for those couple users. Better yet, when that emergency call comes in the night, you can just:

  1. Stand up an app server
  2. Change configuration on all clients to use the app server

And now your quick app for a couple users is capable of global scaling for hundreds of users. No code changes. Just an app server and a configuration change.
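
To make that concrete, with CSLA that configuration change typically amounts to pointing the client-side data portal at the new app server. A sketch, with a placeholder URL:

  CslaConfiguration.Configure().
    DataPortal().
      DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy), "https://myappserver.example.com/api/DataPortal");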

(and that just scratches the surface - with a little more work you could stand up a Kubernetes cluster instead of just an app server and support tens of thousands of users)

So instead of a career limiting move, you are a hero. You get a raise, extra vacation, and the undying adoration of your users 😃


How can a .NET developer remain relevant in the industry?

Over my career I've noticed that all technologies require developers to work to stay relevant. Sometimes that means updating your skills within the technology, sometimes it means shifting to a whole new technology (which is much harder).

Fortunately for .NET developers, the .NET platform is advancing rapidly, keeping up with new software models around containers, cloud-native, and cross-platform client development.

First it is important to recognize that the .NET Framework is not the same as .NET Core. The .NET Framework is effectively now in maintenance mode, and all innovation is occurring in the open source .NET Core now and into the future. So step one to remaining relevant is to understand .NET Core (and the closely related .NET Standard).

Effectively, you should plan for .NET Framework 4.8 to become stable and essentially unchanging for decades to come, while all new features and capabilities are built into .NET Core.

Right now, you should be working to have as much of your code as possible target .NET Standard 2.0, because that makes your code compatible with .NET Framework, .NET Core, and mono. See Migrating from .NET to .NET Standard.
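
In an SDK-style class library project that is simply a matter of the TargetFramework value; a minimal sketch of the project file:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>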

Second, if you are a client-side developer (Windows Forms, WPF, Xamarin) you need to watch .NET Core 3, which is slated to support Windows Forms and WPF. This will require migration of existing apps, but is a way forward for Windows client developers. Xamarin is a cross-platform client technology, and in this space you should learn Xamarin.Forms because it lets you write a single app that can run on iOS, Android, Mac, Linux desktop, and Windows.

Third, if you are a client-side developer (web or smart client) you should be watching WebAssembly. Right now .NET has experimental support for WebAssembly via the open source mono runtime, along with a wasm UI framework called Blazor and another XAML-based UI framework called the Uno Platform. This is an evolving space, but in my opinion WebAssembly has great promise and is something I’m watching closely.

Fourth, if you are a server-side developer it is important to understand the major industry trends around containers and container orchestration. Kubernetes, Microsoft Azure, Amazon AWS, and others all have support for containers, and containers are rapidly becoming the defacto deployment model for server-side code. Fortunately .NET Core and ASP.NET both have very good support for cloud-native development.

Finally, that’s all from a technology perspective. In a more general sense, regardless of technology, modern developers need good written and verbal communication skills, the ability to understand business and user requirements, at least a basic understanding of agile workflows, and a good understanding of devops.

I follow my own guidance, by the way. CSLA .NET 4.10 (the current version) targets .NET Standard 2.0, supports WebAssembly, and has a bunch of cool new features to help you leverage container-based cloud-native server environments.

 Friday, January 11, 2019

During 2018 I gave a talk at some VS Live events discussing how one might migrate existing .NET Framework enterprise apps/code to .NET Core. In this talk I have some assumptions I think are reasonable:

  • Most of us can't do a "big bang" rewrite of our apps/code all in one shot
    • It'll take months or years to migrate from .NET to .NET Core
    • During this time it is necessary to maintain the existing code while working on the new code
  • A lot of existing code is still on .NET 2, 3, and 4
  • We're talking Windows Forms, WPF, and ASP.NET code - lots of variety
    • In most cases business logic is embedded in the UI - code-behind forms/pages or in controllers
    • Editorial observation: More people should be using CSLA to gain separation of concerns: keep their business logic in a separate and reusable layer from the UI or data access 😃
  • In most cases you are not just migrating from .NET Framework to .NET Core, but also modernizing/rewriting the UI to also be modern
    • Replacing Windows Forms and WPF with ASP.NET Core Razor Pages or MVC, or Xamarin Forms
    • Maybe upgrading Windows Forms or WPF to the new .NET Core 3.0 support once it is available

Several people have asked if I'd blog the gist of my presentation, so here it is.

In summary:

  • Step 0: Understand .NET Core vs .NET Standard
  • Step 1: Get to .NET 4.6.1 or Higher
  • Step 2: Separation of Concerns
  • Step 3: Move Business Code to Shared Library
  • Step 4: Create .NET Standard Project
  • Step 5: Mitigate Dependency Conflicts
  • Step 6: Mitigate Code Conflicts
  • Step 7: Have a Glass of Bourbon

The code used in my talk and this post is the Net2NetStandard solution on GitHub.

Step 0: Understand .NET Core vs .NET Standard

I've encountered a lot of confusion between .NET Core and .NET Standard and .NET Framework. It is important to have a good understanding of these terms before moving forward at all, so you end up in the right place.

  • .NET Framework is the "legacy" .NET implementation we've been using since 2002, and the long-term goal is to move off .NET Framework onto something more modern
  • .NET Core is a new implementation of .NET that currently supports two types of UI: console and web server. .NET Core 3 is slated to also support Windows Forms and WPF UI frameworks. It does not currently support Xamarin (iOS, Android, Mac, Linux), or WebAssembly (mono-wasm/Blazor).
  • .NET Standard is an interface against which you can write code, and that interface is implemented by .NET Framework 4.6.1+ and by .NET Core 2+ and by Xamarin (and by mono and mono-wasm). If you write your code against .NET Standard, then your compiled DLL can be deployed to .NET Framework, .NET Core, Xamarin, and other .NET implementations.

As a result, my recommendation is that you should always get as much of your code into .NET Standard as possible, because the resulting compiled DLL can run essentially anywhere.

If all you do is get your code to .NET Core, that currently blocks you from reusing that code on .NET Framework, Xamarin, WebAssembly, and other .NET implementations.

All that said, it is important to understand that your UI code will almost certainly be .NET platform specific. In other words, you'll choose to write a console app, a web site, a mobile app, or a desktop app in a specific implementation of .NET. So your UI is not portable or reusable to the same degree as non-UI code.

Your non-UI code should always be built with .NET Standard so it is as portable as possible, enabling reuse of that code in current and future .NET implementations and UI technologies.

This is why my talk (and this post) are about how to get to .NET Standard, not .NET Core. .NET Standard gets you to .NET Core plus Xamarin and other platforms.

Step 1: Get to .NET 4.6.1 or Higher

Version 4.6.1 of the .NET Framework is special, because this is the earliest version that is compatible with .NET Standard 2.0. In reality you'll probably want to get to 4.7.1 or whatever version exists when you start this journey, but the minimum bar is 4.6.1.

Basically, if your existing code won't run on .NET 4.6.1, you'll need to take whatever steps are necessary to get from your older unsupported version (2? 3? 3.5? 4.0? 4.5?) to 4.6.1 or higher.

Fortunately this is usually not that difficult, because Microsoft has done a good job of minimizing breaking changes and preserving backward compatibility over time.

Step 2: Separation of Concerns

This is almost certainly the hardest step: if your existing code is "typical" it probably has tons of non-UI logic in button click or lostfocus event handlers, postback handlers, or controller methods. People have "enjoyed" this style of coding since VB3 back in the early 1990's and it persists through today.

The problem is that moving the UI to .NET Standard is a whole different thing from moving business logic or even data access logic to .NET Standard. Yes, .NET Core 3.0 is planned to have Windows Forms and WPF support, so that should help. But I suspect for most people the migration from .NET Framework to .NET Core ultimately means rewriting the UI into something more modern.

As a result, any code embedded in the UI or presentation layer needs to be cleaned up. You need to apply the concept of separation of concerns and get non-UI code out of the UI. That means no business or data access logic in code-behind or controllers or viewmodels. The goal should be (in my view) that all business logic (validation, calculations, manipulation, rules, authorization) is in a separate business layer, and all data access logic is in its own layer.

In short, you'll have a much easier time migrating code outside the UI to .NET Standard than any code inside the UI.
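
As a trivial, hypothetical illustration of the kind of refactoring involved (the control and class names here are invented), the goal is to move logic out of an event handler into a plain class the UI merely calls:

  // Before: business rule and persistence live in the form's event handler
  private void SaveButton_Click(object sender, EventArgs e)
  {
    if (string.IsNullOrWhiteSpace(nameTextBox.Text))
    {
      MessageBox.Show("Name is required");
      return;
    }
    // ...direct ADO.NET/ORM calls here...
  }

  // After: the same rule lives in a UI-agnostic class any UI (or .NET Standard library) can use
  public class PersonService
  {
    public void Save(string name)
    {
      if (string.IsNullOrWhiteSpace(name))
        throw new ArgumentException("Name is required", nameof(name));
      // ...delegate persistence to a separate data access layer...
    }
  }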

Step 3: Move Business Code to Shared Library

Now we get to the fun part. This step is in some ways the simplest and yet the most scary.

Right now your code is in a .NET Framework Class Library project. That means it compiles specifically for the .NET Framework, and uses .NET Framework specific dependency references. And this is your existing, running code, so we want to minimize risk in changing it, because changes to this code and existing references and even the csproj file will have a direct impact on your production environment.

The Net2NetStandard solution is intentionally stripped down to the bare minimum. My talk is often a 20 minute lightning talk, so the demo needs to be concise, and this qualifies. The start point is a .NET Framework Class Library project with some existing production code. That code uses Newtonsoft.Json and Entity Framework, with NuGet references to both dependencies.

Importantly, this project is already targeting .NET Framework 4.6.1.

What we want to do is get the code from this project into a location where it can continue to be used to build the existing .NET Framework DLL and also build a .NET Standard DLL. And we want to do this without duplicating the code or files, as that would make maintainability much harder.

Fortunately Visual Studio includes a feature called Shared Projects that solves this issue. A Shared Project is not a normal project at all, it is nothing more than a location to store code files. Those code files are then pulled into a real project at compile time as though they were part of that real project.

To see this in action, add a new C# Shared Project to the solution.

What you'll see in Solution Explorer is that this new project is missing common things like a References or Dependencies node, or a Properties folder. Again, this is not a normal project, it is nothing more than a placeholder to contain code files.

Next, select the source files from the .NET Framework project and drag-drop them into the new SharedLibrary project. That'll copy the files, so there's no risk here.

Before proceeding with any real code, now is the time to make sure you've done a commit to source control so you have an easy way to revert in case something does go wrong!

However, this next step might make your heart race and palms sweat a little, because I want you to highlight and delete the source files from the original .NET Framework project. I know, this sounds scary, but trust me (and your backups).

And here's the key: go to the .NET Framework project and add a reference to the SharedLibrary project.
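
Under the hood, that reference shows up in the csproj as an import of the shared project's .projitems file, something like this (the path reflects this example's project name):

  <Import Project="..\SharedLibrary\SharedLibrary.projitems" Label="Shared" />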

At this point you can build the original .NET Framework project and you'll get the exact same DLL output as before. Zero changes to your existing code or build result. And yet your code is now in a physical location that'll enable forward movement.

Hopefully your heart has slowed and your palms are now dry 😃

This is the point where you'd do a commit/push/PR of your code to finalize the shift of the files to their new shared project home. All in preparation for the next step where you'll finally get to .NET Standard.

Step 4: Create .NET Standard Project

To recap, you've updated to .NET Framework 4.6.1+, you've moved non-UI code out of the UI to its own class library, and now those code files are in a shared project, while still being compiled by the .NET Framework class library so production is unaffected.

Now you can add a new .NET Standard Class Library project to the solution, the first real step toward the future!

With that done you can add a reference to the same SharedLibrary project so that exact same set of code files will be compiled by this new project as well.

If you try and build the solution or .NET Standard project now you'll find that it won't build. That's because the project is missing some dependencies. However, the original .NET Framework project should keep building fine, production remains unaffected.

Step 5: Mitigate Dependency Conflicts

The new .NET Standard project needs references to Newtonsoft.Json and the Entity Framework, much like the original .NET Framework project. The code makes use of these two packages and won't build without them.

I didn't pick these two dependencies by accident. Newtonsoft.Json has a NuGet package that supports .NET Standard. Entity Framework does not. These two dependencies exemplify likely scenarios you'll encounter with real code. The possible scenarios are that your existing dependencies:

  1. Do not have .NET Standard support, and there's no alternative
  2. Already have .NET Standard support with the current version
  3. Already have .NET Standard support if you upgrade to the latest version
  4. Do not have .NET Standard support, but a new equivalent exists

Scenario 1

Scenario 1 is a worst-case scenario that may be a roadblock to forward movement. If you have a dependency on a DLL or NuGet package that has no .NET Standard support, and there's no modern equivalent to the functionality, then you'll almost certainly have to wait until such support does exist or write it yourself.

Scenarios 2 and 3

If you are in scenario 2, where the existing version of your dependency already has .NET Standard support, then reference the same version in your .NET Standard project as in your existing projects, and your code should continue to compile and work as-is. This is the simplest scenario.

The dependency may fit into scenario 3, where a newer version of the package supports .NET Standard, but not the version you are currently using. This is quite common with Newtonsoft.Json, where the most commonly used version is quite old, but the more recent versions support .NET Standard.

In this case you may be able to upgrade your production projects to the latest version and use the same version for both .NET Framework and .NET Standard. This incurs some risk to production, because you are upgrading a dependency, but it is often the best solution.

In the case that you can't upgrade the version used by production, you'll need to leave the old package version reference in your .NET Framework project and use a newer version in the .NET Standard project. In this case however, you may have to deal with behavior or API differences between package versions and you should treat this as scenario 4.

Scenario 4

Entity Framework is an example of scenario 4. Microsoft chose not to carry the existing (legacy?) Entity Framework forward. Instead they implemented something new called Entity Framework Core. This new equivalent offers the same conceptual functionality, but with a new implementation and API, so it is absolutely not code-compatible with the old Entity Framework in use in production.

I'll discuss two solutions to scenario 4: compiler directives and upgrading production.

Scenario 4: Compiler Directives

In the .NET Standard project, add references to the latest Newtonsoft.Json and EntityFrameworkCore packages from NuGet.

You'll find that the project still won't build, because the existing code uses the old Entity Framework API; it is a scenario 4 dependency.

But you shouldn't get any errors compiling the code using Newtonsoft.Json, because it is a scenario 2 dependency.

The offending Entity Framework code is in the PersonFactory class:

using System.Data.Entity;

namespace FullNetLibrary
{
  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new DbContext(""))
      {
      }
    }
  }
}

There are two problems in this trivial case. First, the namespaces are different, so the using statement is invalid. Second, the API for interacting with entity contexts has changed, so the new DbContext statement is invalid. In a more realistic scenario more parts of the API would be invalid as well.

The goal is to minimize changes and risk to production code, while enabling the .NET Standard code to move forward. Remember that this exact same code file is being compiled for two different targets: once for .NET Framework, and once for .NET Standard (where it fails).

The solution is to use compiler directives so the code file can include code that is only compiled for one target or the other. The first step is to define a constant in the .NET Standard project's Build tab.

You can name the constant whatever you'd like, but NETSTANDARD2_0 is the de facto standard; it matches the symbol the .NET SDK defines automatically for projects targeting .NET Standard 2.0.

Then in your code file you can use this constant in a compiler directive. For example:

#if NETSTANDARD2_0
using Microsoft.EntityFrameworkCore;
#else
using System.Data.Entity;
#endif

What happens here is that when the .NET Framework project builds there's no NETSTANDARD2_0 constant defined, so the compiler only uses the using System.Data.Entity; code. Conversely, when the .NET Standard project builds the constant is defined, so the compiler only uses the using Microsoft.EntityFrameworkCore; code.

At this point you may be wondering whether these #if statements will become extremely messy once they're scattered throughout your code. That is a valid concern. There are three scenarios to consider within a code file:

  1. No code differences exist between the .NET Framework and .NET Standard targets
  2. Very few code differences exist between the targets
  3. Many code differences exist between the targets

In scenario 1 you don't need compiler directives, so there's no issue. And that'll happen quite often with business logic, where the use of external dependencies is often very low.

Scenario 2 is a judgment call. What qualifies as "few"? My recommendation is that if roughly 80% of the code is common and 20% is different, then you should use #if statements on a line-by-line or focused block-by-block basis. This will result in a code file having numerous compiler directives, but most of the code will remain common across both targets.
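
As a concrete illustration of that focused, line-by-line approach, here's a minimal sketch using the same PersonFactory example; it assumes the PersonContext class from the scenario 3 listing below exists for the .NET Standard build, and only the line that creates the context differs between targets:

#if NETSTANDARD2_0
using Microsoft.EntityFrameworkCore;
#else
using System.Data.Entity;
#endif

namespace FullNetLibrary
{
  public class PersonFactory
  {
    public void GetPerson()
    {
      // only the context creation differs between the two targets;
      // everything else in the method stays common to both builds
#if NETSTANDARD2_0
      using (var db = new PersonContext())
#else
      using (var db = new DbContext(""))
#endif
      {
        // code here is compiled for both targets
      }
    }
  }
}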

Scenario 3 is where so much code is different that scattering compiler directives through it would make the file unreadable. Again, my recommendation is that if more than 20% of your code will be different you should treat it as scenario 3. In this case you should duplicate the code within the file, essentially creating a separate implementation for each target. For example:

#if NETSTANDARD2_0
using Microsoft.EntityFrameworkCore;

namespace FullNetLibrary
{
  public class PersonContext : DbContext
  {
    public DbSet<Person> Persons { get; set; }
  }

  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new PersonContext())
      {

      }
    }
  }
}
#else
using System.Data.Entity;

namespace FullNetLibrary
{
  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new DbContext(""))
      {
      }
    }
  }
}
#endif

Notice that there's no code that's compiled for both targets. Instead the #if statement is used to create an implementation for .NET Standard, and another implementation for .NET Framework.

In a sense this is the lowest risk solution, because the .NET Framework production code is entirely unchanged. However, it is also the least maintainable solution, because the entire class has been duplicated, so future changes must be made to both sets of code.

Scenario 4: Upgrading Production Code

There's another alternative to using compiler directives, and that is to upgrade your production code to use the new dependency. This solution is only available in the case that the new NuGet package not only supports .NET Standard, but also supports .NET Framework. EntityFrameworkCore is an example of this, where you can use the new EntityFrameworkCore package from .NET Framework code.

Obviously this solution brings risk, because you are rewriting your existing production code to use the new library. That'll require good unit and acceptance testing of your production code to make sure nothing is broken by the changes.

On the upside, this solution helps keep the common codebase clean and unified. In the Net2NetStandard example, the PersonFactory code can end up looking like this:

using Microsoft.EntityFrameworkCore;

namespace FullNetLibrary
{
  public class PersonContext : DbContext
  {
    public DbSet<Person> Persons { get; set; }
  }

  public class PersonFactory
  {
    public void GetPerson()
    {
      using (var db = new PersonContext())
      {

      }
    }
  }
}

Same code for both the .NET Framework and .NET Standard targets. But only if the old Entity Framework reference in the production .NET Framework project is replaced with the new EntityFrameworkCore reference.

This often comes dangerously close to a "big bang" solution, and incurs real risk to the existing software. But there's also a very real upside in terms of maintaining a common codebase for development, testing, and maintenance over time.

Step 6: Mitigate Code Conflicts

The final issue you may encounter is pure code conflicts between your .NET Framework code and what can be done in .NET Standard. This is very uncommon, because .NET Standard describes so much of the functionality normally used by .NET code. However, if you are using some fancy bit of reflection or other "non-mainstream" parts of .NET you could find that your code won't compile for .NET Standard.

Solving this is really the same as dealing with a scenario 4 dependency: use compiler directives, or rewrite your "non-mainstream" production code to use techniques that are supported by .NET Standard.
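
For example, here's a minimal hedged sketch of that approach, isolating a .NET Framework-only API behind a compiler directive (System.Web.HttpContext is not part of .NET Standard; the method and the fallback shown here are purely hypothetical):

public static class UserInfo
{
  public static string GetCurrentUserName()
  {
#if NETSTANDARD2_0
    // System.Web is not available to the .NET Standard build,
    // so fall back to the thread principal (hypothetical substitute)
    return System.Threading.Thread.CurrentPrincipal?.Identity?.Name;
#else
    // legacy production code that relied on ASP.NET's HttpContext
    return System.Web.HttpContext.Current?.User?.Identity?.Name;
#endif
  }
}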

Step 7: Have a Glass of Bourbon

or your beverage of choice

Not that you are done at this point, but you are on the path. In some ways finding the path and getting onto the path is the hardest part. The rest of the work might take months or years, but at least your code is in a structure where it is possible to migrate forward, while still maintaining the legacy deployment.

Yes, there's some risk and additional unit testing (and acceptance testing) required as you make changes to the legacy code, since those changes now also affect the future code. That's a net benefit though, because thanks to the unified codebase you don't have to write those changes twice.

There's a bit more risk (and therefore testing) required when making changes to the unified codebase for future code, because those changes will usually also impact the legacy app. But you have some control over that impact via compiler directives, and in many cases your business stakeholders will see this also as an advantage because they'll get some new features/capabilities in the existing legacy app even as you build them for the future state.

The point is that you've done the heavy lifting to establish a way forward that is at least achievable. So take a little time and have a small celebration. You deserve it!

 Wednesday, December 12, 2018

CSLA .NET 4.9.0 includes some exciting new features, primarily focused on server-side capabilities for containers, .NET Core, and ASP.NET Core. There are a number of data portal enhancements, as well as powerful new configuration options based around the .NET Core configuration subsystem.

Data Portal Enhancements

Many of the enhancements are focused on enabling powerful cloud/container based scenarios with the data portal. Most notably, the data portal now supports two different types of routing to enable the use of multiple server instances. It also has the option to track recent activity with the intent of supporting a basic health dashboard, the ability to force a client "offline", and support for the .NET Core IoC/DI model.

Client-side data portal routing

One common request is that it would be nice if a subset of data portal requests could be routed to some server endpoint other than the default. This idea supports a number of scenarios, including security, logging, multi-tenant server farms, and I'm sure many others.

When you configure the client-side data portal on a web server, mobile device, PC, Mac, or Linux desktop you can now provide mappings so specific business domain types (root object types) cause the data portal to call server-side endpoints other than the default. This can be done on a per-type basis, or by applying the DataPortalServerResource attribute to a root domain type.

As the client app starts up, the data portal must be configured with mappings from domain types or DataPortalServerResource attribute values to specific server endpoint definitions. For example:

      // set up default data portal for most types
      ApplicationContext.DataPortalProxy = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName;
      ApplicationContext.DataPortalUrlString = "https://default.example.com/dataportal";

      // add mapping for DataPortalServerResource attribute
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        (int)ServerResources.SpecializedAlgorithm,
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://specialized.example.com/dataportal" });

      // add mapping for specific business type
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        typeof(MyImportantType),
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://important.example.com/dataportal" });

Notice that the server-side data portal endpoint is defined not just by a URL, but also by the proxy type, so you can use different network transport technologies to access different data portal servers.
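
For reference, applying the DataPortalServerResource attribute mentioned above to a root domain type might look like the following sketch; the domain type is hypothetical, and it assumes the attribute accepts the same integer resource id used in the descriptor mapping:

  [Serializable]
  [DataPortalServerResource((int)ServerResources.SpecializedAlgorithm)]
  public class SpecializedReport : ReadOnlyBase<SpecializedReport>
  {
    // data portal requests for this type are routed to the endpoint
    // registered for ServerResources.SpecializedAlgorithm above
  }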

Server-side data portal routing

One of the most common issues we all face when hosting server-side endpoints is dealing with client-side app versioning and having server endpoints that support clients running older software. This is particularly important when deploying mobile apps through the Apple/Google/Microsoft stores, when you can't directly control the pace of rollout for client apps.

Additionally, in a clustered server-side environment such as Kubernetes or Cloud Foundry it is likely that you'll want to organize your containers based on various characteristics, such as CPU consumption, memory requirements, or the need for specialized hardware.

The HTTP data portal (Csla.Server.Hosts.HttpPortalController) now supports a solution for both scenarios via server-side data portal routing. It is now possible to set up a public gateway server that hosts a data portal endpoint, which in turn routes all calls to other data portal endpoints within your server environment. Typically these other endpoints are not public, forcing all client apps to route all data portal requests through that gateway router endpoint.

Note that using technologies like Kubernetes it is entirely realistic for the gateway router to be composed of multiple container instances to provide scaling and fault tolerance.

To minimize overhead, the data portal router does not deserialize the inbound data stream from the client. Instead, it uses a header value to determine how to route the request to a worker node where the data stream is deserialized and processed. That header value is created by the client-side data portal by combining two concepts.

First is an optional application version value. If this value is set by the client-side app it is then passed through to the server-side data portal where the value is used to route requests to a worker node running a corresponding version of the server-side app. This allows you to leave versionA nodes running while also running versionB nodes, and once all client devices have migrated off versionA then you can shut down those versionA nodes with no disruption to your users.

Second is a DataPortalServerRoutingTag attribute you can apply to root domain types. This attribute provides a routing tag value that is also passed to the server-side data portal. The intent of this attribute is that you can specify that certain root domain types have characteristics that should be used for server-side routing. For example, you might tag domain types that you know will require a lot of CPU or memory on the server so they are routed to Kubernetes pods that are hosted on fast or large physical servers (Kubernetes nodes), while all other requests go to pods running on normal sized Kubernetes nodes.

Another example would be where the server-side processing requires access to specialized hardware. In that case your Kubernetes nodes would have a taint indicating that they have that hardware, and the DataPortalServerRoutingTag is used to tell the data portal router to send requests for that root domain type only to server-side data portal instances running on those nodes.
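
As a sketch, tagging such a root type might look like this (the type name and tag value are hypothetical, and it assumes the attribute takes the routing tag as a string):

  [Serializable]
  [DataPortalServerRoutingTag("cpuintensive")]
  public class PortfolioRecalc : CommandBase<PortfolioRecalc>
  {
    // requests for this type carry the "cpuintensive" routing tag so the
    // gateway router can send them to appropriately sized worker nodes
  }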

What makes this data portal feature a little complex is that the data portal router uses both the version and routing tag to route calls. This is obviously required to support both scenarios at the same time, but it does mean you need to define your routes such that they consider both version and routing tag elements.

The client-side data portal passes a routing tag to the server with the following format:

  • null - neither routing tag nor version are specified
  • routingTag-versionTag - both routing tag and version are specified
  • routingTag- - a routing tag is specified, but no version
  • -versionTag - a version is specified, but no routing tag

Your server-side data portal gateway router is configured on startup with code like this (in the data portal controller class):

  public class DataPortalController : HttpPortalController
  {
    public DataPortalController()
    {
      RoutingTagUrls.Add("routingTag-versionTag", "https://serviceName:36123/api/DataPortal");
    }
  }

You can specify as many routing tag/version keys as necessary to handle routing to the data portal endpoints running in your server environment.

If a key can't be found, or if the target URL is localhost then the request will be processed directly on the data portal router instance. Of course you may choose not to deploy any of your business DLLs to the router instance, in which case all such requests will fail, returning an exception to the client.

Offline mode

A common feature request, especially for mobile devices and laptops, is for the application to force itself into "offline mode" where all data portal calls are directed only to the local data portal. The client-side data portal now supports this via the Csla.ApplicationContext.IsOffline property.

If you set IsOffline to true the client-side data portal will immediately route all data portal requests to the local app rather than a remote server. It is as if all your data portal methods had the RunLocal attribute applied.

Clearly you need to build your DataPortal_XYZ methods to behave properly when they are running on the client (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Client) vs the server (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Server). Typically you'd have your code delegate to a DAL provider specific to the client or server based on execution location using the encapsulated invocation model as described in Using CSLA 4: Data Access.
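
A minimal sketch of that pattern, with a hypothetical domain type and purely illustrative DAL comments:

  using System;
  using Csla;

  [Serializable]
  public class CustomerEdit : BusinessBase<CustomerEdit>
  {
    private void DataPortal_Fetch(int id)
    {
      if (ApplicationContext.ExecutionLocation == ApplicationContext.ExecutionLocations.Client)
      {
        // call a client-side DAL provider (for example a local cache or embedded database)
      }
      else
      {
        // call a server-side DAL provider (for example SQL Server)
      }
    }
  }

Setting Csla.ApplicationContext.IsOffline to true from, say, a connectivity event handler then forces the Fetch call above down the client-side branch.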

Save and Merge

CSLA has included the GraphMerger type for some time now. This type allows you to merge the results of a SaveAsync call back into the existing domain object graph. The BusinessBase and BusinessListBase types now implement a simpler SaveAndMergeAsync method you can use to save a root domain object such that the results are automatically merged into the object graph.
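
A hedged usage sketch, based on the description above (the CustomerEdit type and the surrounding method are hypothetical):

  private async Task SaveCustomerAsync(CustomerEdit customer)
  {
    // SaveAsync returns a new object, which callers must rebind:
    //   customer = await customer.SaveAsync();

    // SaveAndMergeAsync saves the root object and merges the result back
    // into the existing object graph, so data-bound references stay valid
    await customer.SaveAndMergeAsync();
  }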

DataPortalFactory in ASP.NET Core dependency injection

ASP.NET Core includes a dependency injection model based on IServiceCollection, with dependencies defined in the ConfigureServices method of the Startup.cs file. CSLA .NET now supports that scenario via an AddCsla extension method for IServiceCollection.

    public void ConfigureServices(IServiceCollection services)
    {
      // ...
      services.AddCsla();
    }

The only service that is defined is a new IDataPortalFactory, implemented by DataPortalFactory. The factory is a singleton, and is used to create instances of the data portal for use in a page. For example, in a Razor Page you might have code like this:

  public class EditModel : PageModel
  {
    private IDataPortal<Customer> dataPortal;

    public EditModel(IDataPortalFactory factory)
    {
      dataPortal = factory.GetPortal<Customer>();
    }
    
    // ...
  }
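
The // ... placeholder above might then be a page handler that uses the injected data portal, something like this hedged sketch (the Customer type and the single-id criteria are hypothetical, and it assumes the data portal exposes a FetchAsync overload accepting that criteria):

    public Customer Customer { get; set; }

    public async Task OnGetAsync(int id)
    {
      // fetch the domain object via the injected data portal
      Customer = await dataPortal.FetchAsync(id);
    }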

Configuration Enhancements

An important design pattern for success in both DevOps and container-based server deployment is to store config in the environment. The .NET Core configuration subsystem supports this pattern, and now CSLA .NET does as well. Not only that, but we now support an optional fluent configuration model if you want to set configuration in code (common in Xamarin apps for example), and integration with the .NET Core and ASP.NET Core configuration models.

.NET Core configuration subsystem integration

The .NET Core configuration subsystem supports modern configuration concepts, while still supporting legacy concepts like config files. It is also extensible, allowing providers to load configuration values from many different sources.

The base type used in .NET Core configuration is a ConfigurationBuilder. An instance of this type is provided to you by ASP.NET Core, or you can create your own instance for use in console apps. CSLA .NET provides an extension method to integrate into this configuration model. For example, in a console app you might write code like this:

  var config = new ConfigurationBuilder()
     .AddJsonFile("appsettings.coresettings.test.json")
     .Build()
     .ConfigureCsla();

In ASP.NET Core you'd simply use the ConfigurationBuilder instance already available to invoke the ConfigureCsla method.

The result of the ConfigureCsla method is that the configuration values are loaded for use by CSLA .NET based on any configuration sources defined by the ConfigurationBuilder. In many cases that includes a config file, environment values, and command line parameters. It might also include secrets loaded from Azure, Docker, or whatever environment is hosting your code. That's all outside the scope of CSLA itself - those are features of .NET Core. The point is that the ConfigureCsla method takes the results of the Build method and maps the config values for use by CSLA .NET.

Note that nothing we've done here has any impact on .NET Framework code. All your existing .NET 4.x code will continue to function against web.config/app.config files.

Fluent configuration API

Although it is best to follow the 12 Factor guidance around config, there are times when you just need to set config values through code. This is particularly true in Xamarin and WebAssembly scenarios, where the configuration systems are weak or non-existent.

In the past you've been able to directly set property values on Csla.ApplicationContext, various data portal types, and elsewhere in CSLA. That's confusing and inconsistent, and it has been a source of bugs in many people's code.

There's a new fluent API for configuration you can use when you need to set configuration through code. The entry point for this API is Csla.Configuration.CslaConfiguration, which implements Csla.Configuration.ICslaConfiguration. The model uses extension methods for sub-configuration, extending the ICslaConfiguration type.

Basic configuration is done like this:

      new Csla.Configuration.CslaConfiguration()
        .PropertyChangedMode(Csla.ApplicationContext.PropertyChangedModes.Windows)
        .PropertyInfoFactory("a,b")
        .RuleSet("abc")
        .UseReflectionFallback(false)
        .SettingsChanged();

Data portal configuration can be chained like this:

      new Csla.Configuration.CslaConfiguration()
        .DataPortal().AuthenticationType("custom")
        .DataPortal().AutoCloneOnUpdate(false)
        .DataPortal().ActivatorType(typeof(TestActivator).AssemblyQualifiedName)
        .DataPortal().ProxyFactoryType("abc")
        .DataPortal().DataPortalReturnObjectOnException(true)
        .DataPortal().DefaultProxy(typeof(HttpProxy), "https://myserver.com/api/DataPortal")
        .DataPortal().ExceptionInspectorType("abc")
        .DataPortal().FactoryLoaderType("abc")
        .DataPortal().InterceptorType("abc")
        .DataPortal().ServerAuthorizationProviderType("abc")
        .SettingsChanged();

Client-side data portal descriptors for client-side data portal routing (see above) can be configured like this:

      new CslaConfiguration()
        .DataPortal().ProxyDescriptors(new List<Tuple<string, string, string>>
        {
          Tuple.Create(typeof(TestType).AssemblyQualifiedName, 
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, 
                       "https://example.com/test"),
          Tuple.Create(((int)ServerResources.SpecializedAlgorithm).ToString(),
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName,
                       "https://example.com/test")
        });

Configuration of Csla.Data options:

      new CslaConfiguration()
        .Data().DefaultTransactionIsolationLevel(Csla.TransactionIsolationLevel.RepeatableRead)
        .Data().DefaultTransactionTimeoutInSeconds(123);

Security options:

      new CslaConfiguration()
        .Security().PrincipalCacheMaxCacheSize(123);

You should consider using this new fluent API instead of the older, fragmented configuration options whenever you need to set config values in code. CSLA .NET itself continues to rely on Csla.ApplicationContext to read config values throughout the codebase.

Conclusion

CSLA .NET version 4.9 is very exciting for server-side developers, or people using a remote data portal where versioning or advanced routing scenarios are important. Between the data portal enhancements and more modern configuration techniques available in this version, your life should be improved as a CSLA .NET developer.
