Rockford Lhotka

 Tuesday, June 30, 2020

Although this initiative from Microsoft is focused on COVID, we know that the pandemic has impacted certain communities far more than others from an economic perspective. So this, in my mind, is also an excellent move from a diversity perspective, in that it may provide opportunities for a lot of folks to get free training and up-skill their careers.

The company today announced a wide-ranging, global portal for free skills training for people who are out of work. Alongside that, Microsoft said it plans to disburse $20 million in grants to nonprofit organizations that are working to help those who have lost jobs due to COVID-19 and subsequent shifts in the economy, with a specific emphasis on those that are working with groups that are underrepresented in the tech world.

https://techcrunch.com/2020/06/30/microsoft-to-distribute-20m-in-grants-to-non-profits-offers-free-skills-training-via-linkedin/

Tuesday, June 30, 2020 2:18:55 PM (Central Standard Time, UTC-06:00)
 Thursday, September 5, 2019

After 15+ years of self-hosting my blog with dasBlog, I've decided to shift over to using GitHub and Jekyll - primarily for you as a reader, because I think the new site offers a much better reading experience.

As an author I've been using Markdown Monster to write and publish my content for the last 18-24 months. I imagine I'll continue with that approach, but now the content will flow through GitHub instead of being posted directly to dasBlog.

This does mean a new blog URL: https://blog.lhotka.net

The old http://lhotka.net/weblog will remain in place as an archive, at least for now. I suppose I'll eventually take it down, but I'm not in a hurry.

So friends, farewell to this old blog location. I'll see you over at the new Rockford Lhotka blog.

Android | CSLA .NET | iOS | UWP | WebAssembly
Thursday, September 5, 2019 1:24:55 PM (Central Standard Time, UTC-06:00)
 Wednesday, September 4, 2019

I recently blogged about the new support coming in CSLA 5 for Blazor. Shipping in .NET Core 3, Blazor is an HTML-based UI framework for WebAssembly.

There's another very cool UI framework for WebAssembly that sits on top of .NET: Uno Platform.

This UI framework relies on XAML (specifically the UWP dialect) to reach not only WebAssembly, but also Android and iOS, as well as Windows 10 of course. In short, the Uno approach allows you to write one codebase that can run on:

  1. Any modern browser via WebAssembly
  2. Android devices
  3. iOS devices
  4. Windows 10

Uno is similar to Xamarin.Forms in that it leverages the mono runtime to run code on iOS, Android, and WebAssembly. The primary differences are that Xamarin.Forms has its own dialect of XAML (vs the UWP dialect) and doesn't target WebAssembly.

Solution Structure

When you create an Uno solution you get a number of projects:

  1. A project with the implementation shared across all platforms
  2. An iOS project
  3. An Android project
  4. A UWP (Windows 10) project
  5. A wasm (WebAssembly) project

You can see a working example of this in the CSLA UnoExample sample app.

Following typical CSLA best practices, you'll also add a .NET Standard 2.0 Class Library project for your business domain types, and at least one other for your data access layer implementation. You'll also normally have an ASP.NET Core project that acts as your application server, because a typical business app needs to interact with server-side resources like databases.

It is important to understand that, like Xamarin.Forms, the platform-specific projects for iOS, Android, UWP, and wasm have almost no code. They exist to bootstrap the app on each type of platform. This is true of the app server code as well: it is just there to provide an endpoint for the apps. All the real code is in the shared project, your business library, and your data access library.

Business Layer

The purpose of CSLA .NET is to provide a home for business logic. This is done by enabling the creation of business domain classes that encapsulate all business logic, and that's supported through consistent coding structures and a rules engine.

To improve developer productivity CSLA also abstracts many platform differences around data binding to various types of UI and interactions with app servers and data access layers.

ℹ Much more information about CSLA is available in the free Using CSLA 2019: CSLA Overview ebook.

As a demo, nothing in the UnoExample is terribly complex, but it does demonstrate some very basic types of rules: informational messages, warning messages, and validation errors. It doesn't demonstrate things like calculated values, cross-object rules, etc. There's a lot more info about the rules engine in the CSLA RuleTutorial sample.

The BusinessLayer project has a couple custom rules: CheckCase and InfoText, and it relies on the Required attribute from System.ComponentModel.DataAnnotations for a simple validation rule.
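
For reference, here's a minimal sketch of what a rule like InfoText might look like. This is my assumption based on the CSLA 5 rules API, not necessarily the exact code in the sample - see the BusinessLayer project for the real implementation:

  using Csla.Core;
  using Csla.Rules;

  // Hypothetical sketch of an informational rule
  public class InfoText : BusinessRule
  {
    private readonly string _text;

    public InfoText(IPropertyInfo primaryProperty, string text)
      : base(primaryProperty)
    {
      _text = text;
    }

    protected override void Execute(IRuleContext context)
    {
      // Attach a non-blocking informational message to the property
      context.AddInformationResult(_text);
    }
  }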

The PersonEdit class relies on these rules to enable basic creation and editing of a "person" domain concept:

  [Serializable]
  public class PersonEdit : BusinessBase<PersonEdit>
  {
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id
    {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
    }

    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(nameof(Name));
    [Required]
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
      base.AddBusinessRules();
      BusinessRules.AddRule(new InfoText(NameProperty, "Person name (required)"));
      BusinessRules.AddRule(new CheckCase(NameProperty));
    }
    
    // data access abstraction below...
  }

The PersonEdit class also leverages the CSLA data portal to abstract the concept of an application server (or not) and prescribes how to interact with the data access layer. This code also leverages the new CSLA version 5 support for method-level dependency injection:

    // properties and business rules above...

    [Create]
    private void Create()
    {
      Id = -1;
      BusinessRules.CheckRules();
    }

    [Fetch]
    private void Fetch(int id, [Inject]DataAccess.IPersonDal dal)
    {
      var data = dal.Get(id);
      using (BypassPropertyChecks)
        Csla.Data.DataMapper.Map(data, this);
      BusinessRules.CheckRules();
    }

    [Insert]
    private void Insert([Inject]DataAccess.IPersonDal dal)
    {
      using (BypassPropertyChecks)
      {
        var data = new DataAccess.PersonEntity
        {
          Name = Name
        };
        var result = dal.Insert(data);
        Id = result.Id;
      }
    }

    [Update]
    private void Update([Inject]DataAccess.IPersonDal dal)
    {
      using (BypassPropertyChecks)
      {
        var data = new DataAccess.PersonEntity
        {
          Id = Id,
          Name = Name
        };
        dal.Update(data);
      }
    }

As is normal with CSLA, any interaction with the database is managed by the data access layer, and the domain object's private fields and data are managed in these data portal methods, providing a clean separation of concerns.

Data Access Layer

I am not going to dive into the data access layer (DAL) in any depth. The implementation in the sample is a pure in-memory model relying on LINQ and a static collection as a stand-in for a database Person table.

This approach is valuable for two scenarios:

  1. A demo or example where I want the code to "just work" without forcing you to create a database
  2. Integration testing scenarios where automated integration or user acceptance testing should be fast, and should be performed on a known set of data - without having to reinitialize a real database each time

This sample code isn't quite at the level needed to accomplish the second goal, but that is easily achieved, especially given that the DAL is injected into the code via DI.
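
To give a sense of the approach, here's a rough sketch of what such an in-memory DAL can look like. The type and method names are inferred from how the business layer code above uses the DAL; see the UnoExample DataAccess project for the real implementation:

using System.Collections.Generic;
using System.Linq;

namespace DataAccess
{
  public class PersonEntity
  {
    public int Id { get; set; }
    public string Name { get; set; }
  }

  public interface IPersonDal
  {
    List<PersonEntity> Get();
    PersonEntity Get(int id);
    PersonEntity Insert(PersonEntity person);
    PersonEntity Update(PersonEntity person);
  }

  public class PersonDal : IPersonDal
  {
    // Static collection acts as the stand-in for a database table
    private static readonly List<PersonEntity> _people = new List<PersonEntity>();

    public List<PersonEntity> Get()
      => _people.ToList();

    public PersonEntity Get(int id)
      => _people.First(p => p.Id == id);

    public PersonEntity Insert(PersonEntity person)
    {
      person.Id = _people.Any() ? _people.Max(p => p.Id) + 1 : 1;
      _people.Add(person);
      return person;
    }

    public PersonEntity Update(PersonEntity person)
    {
      var existing = Get(person.Id);
      existing.Name = person.Name;
      return existing;
    }
  }
}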

App Server

The AppServer project is an ASP.NET Core project using the empty API template. So it just contains a Controllers folder and some configuration. It also references the business and data access layers so they are available on the hosting server (maybe IIS, maybe in a Docker container in Kubernetes - thanks to .NET Core there's a lot of flexibility here).

Configuration

In the Startup class CSLA and the DAL are added to the available services in the ConfigureServices method:

      services.AddCsla();
      services.AddTransient(typeof(DataAccess.IPersonDal), typeof(DataAccess.PersonDal));

And the Configure method configures CSLA:

      app.UseCsla();

It is also important to note that the app server is configured for CORS, because the wasm client runs in a browser, and isn't deployed from the app server. Without CORS configuration the app server would reject HTTP requests from the wasm client app.
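
A typical ASP.NET Core CORS setup along these lines might look like the following sketch (the policy in the actual sample may be more restrictive):

      // In ConfigureServices
      services.AddCors(options =>
        options.AddPolicy("AllowAllOrigins", builder =>
          builder.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader()));

      // In Configure, before requests reach the controllers
      app.UseCors("AllowAllOrigins");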

Data Portal Controllers

The reason for the app server is to expose endpoints for use by the client apps on iOS, Android, Windows, and wasm. CSLA has a component called the data portal that abstracts the entire idea of an app server and the network transport used to interact with any app server. As a result, an app server exposes "data portal endpoints" using components supplied by CSLA.

In the Controllers folder are DataPortalController and DataPortalTextController classes. The DataPortalController looks like this:

  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
    [HttpGet]
    public string Get()
    {
      return "Running";
    }
  }

The Get method is entirely optional. I tend to implement one because it makes it easier to troubleshoot my web server. But the real behavior of a data portal endpoint is in its ability to handle a POST request, and that's already provided via the HttpPortalController base class.

The only difference in the DataPortalTextController is one line in the constructor:

    public DataPortalTextController()
    {
      UseTextSerialization = true;
    }

This is due to a current limitation of .NET running in WebAssembly in the browser: the HttpClient can't transfer binary data, only text data.

Normally the CSLA data portal transfers data as binary, often compressed. The whole point is to minimize data over the network and maximize performance. However, in the case of a wasm client app that binary data needs to be Base64 encoded into text for transfer over the network.

The result is two data portal endpoints: one binary, the other text. Otherwise they do the same thing.

Uno UI Apps

The remaining projects in the solution rely on Uno to implement a common XAML-based UI with apps for iOS, Android, Windows, and wasm. Each of these apps is called a "head", and they all rely on a common implementation from the UnoExample.Shared project.

Platform-Specific UI Projects

Each head project is a bootstrap for the client app, providing platform or operating system specific startup code and then handing off to the shared code for all the "real work". I'm not going to explore the various head apps, because they are template code - nothing I wrote.

⚠ The current Uno templates start with an Assets folder in the shared project. That'll cause compiler warnings from Android. Move that Assets folder from the shared project to the UWP project to solve this problem.

⚠ Do not update the console logging dependency in NuGet. It starts at version 1.1.1, and if you upgrade that'll cause a runtime issue with threading in the wasm head.

⚠ You may need to update the current target OS versions for Android and UWP, as the Uno template targets older versions of both.

Shared Project

The way Uno works is that you implement nearly all of your application's UI logic in a shared project, and that project is compiled and deployed via each platform-specific head project. In other words, the code in the shared project is compiled into the iOS project, and into the Android project, and into the Windows project, and into the wasm project.

The result is that, as long as you don't do anything platform-specific, all your UI client code ends up in this one shared project.

Configuration

When each platform-specific head app starts up it hands off control to the shared code as soon as any platform-specific details are handled. The entry point to the shared project is via App.xaml and its code-behind in App.xaml.cs.

CSLA is configured in the constructor of the App class. In the UnoExample code there are two sets of configuration, only one of which should be uncommented at a time.

Use Text-based Data Transfer for WebAssembly

As I mentioned when discussing the app server, the HttpClient object in .NET on WebAssembly can't currently transfer binary data. This means that CSLA needs to be configured to Base64 encode the data sent to the app server. In the constructor of the App class there is this code:

#if __WASM__
      Csla.DataPortalClient.HttpProxy.UseTextSerialization = true;
#endif

This uses a compiler directive that is predefined by the wasm UI project to conditionally compile a line of code. In other words, this line of code is only compiled into the wasm project, and it is totally ignored for all the other project types.

Run "App Server" In-Process

After that, the constructor configures CSLA to run the "app server" code in-proc on the client. This is useful for demo purposes as it means each app will "just run" without you having to set up a real app server:

      CslaConfiguration.Configure().
        ContextManager(new Csla.Xaml.ApplicationContextManager());
      var services = new ServiceCollection();
      services.AddCsla();
      services.AddTransient(typeof(DataAccess.IPersonDal), typeof(DataAccess.PersonDal));

CSLA is configured to use an ApplicationContextManager designed for Uno. This kind of gets into the internals of CSLA, but CSLA relies on different context managers for different runtime environments, because in some cases context is managed via HttpContext, in others on a per-thread basis, and in still others via static fields.

The use of ServiceCollection configures the .NET Core dependency injection subsystem so CSLA can inject the correct implementation of the data access layer upon request.

ℹ In a real app of this sort the data access layer would not be deployed to, or available on, the client apps. It would only be deployed to the app server. But this is a demo, and it is far more convenient for you to be able to just run the various head apps without first having to set up an app server.

Invoke a Remote App Server

For the UWP and wasm heads you can easily comment out the "local app server" configuration and uncomment the configuration for the actual app server:

      string appserverUrl = "http://localhost:60223/api/dataportal";
      if (Csla.DataPortalClient.HttpProxy.UseTextSerialization)
        appserverUrl += "Text";
      CslaConfiguration.Configure().
        ContextManager(new Csla.Xaml.ApplicationContextManager()).
        DataPortal().
          DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy), appserverUrl);

This code sets up a URL for the app server endpoint, appending "Text" to the controller name if text-based encoding should be used.

It then configures the application context manager, and also tells the CSLA data portal to use the HttpProxy type to communicate with the app server, along with the endpoint URL.

⚠ This only works with the Windows and wasm client apps. It won't work with the iOS or Android client apps because they won't have access to your localhost. If you want to use an app server with the Android or iOS apps you'll need to deploy the AppServer project to a real app server.

Notice that there is no configuration of the data access layer in this scenario. That's because the data access layer is only invoked on the app server, not on the client. This is a more "normal" scenario, and in this case the client-side head projects would not reference the DataAccess project at all, so that code wouldn't be deployed to the client devices.

UI Implementation

I'm not going to walk through the UI implementation in great detail. If you know XAML it is pretty straightforward, and if you don't know XAML there are tons of resources out there about how UWP apps work.

The really cool thing, though, is how Uno manages to provide a largely consistent experience for UWP-style XAML across four different platforms. In particular, I found building this example quite enjoyable because I could use standard debugging against the UWP head, and then just run the code in the wasm head. Usually that means the wasm UI "just works", and that's wonderful!

ℹ In the cases where I had difficulties, the Uno team is active on the Uno Gitter channel (like a public Teams or Slack), and between the team and the community I got over any hurdles very rapidly.

Loading List of People

The MainPage displays a list of people in the database, and has a button to add a new person. It also has a couple text output lines to show status and errors. I don't claim that this is a nice or pretty user interface, but it demonstrates important concepts 😊

The important bit is in the RefreshData method, where the CSLA data portal is used to retrieve the list of people:

    private async Task RefreshData()
    {
      this.InfoText.Text = "Loading ...";
      try
      {
        DataContext = await DataPortal.FetchAsync<PersonList>();
        this.InfoText.Text = "Loaded";
      }
      catch (Exception ex)
      {
        OutputText.Text = ex.ToString();
      }
    }

This demonstrates a key feature of CSLA: location transparency. The data portal call will work regardless of whether the data portal was configured to run the app server code in-process on the client, or remotely on a server. Even better, though this example app uses HTTP as a transport, you could configure the data portal to use gRPC or RabbitMQ or other network transports, and the code here in the UI wouldn't be affected at all!
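
For example, switching this app to the gRPC channel described in another post on this blog (via the Csla.Channels.Grpc package) would just be a client configuration change along these lines - RefreshData itself wouldn't change at all (the URL here is a placeholder):

      CslaConfiguration.Configure().
        DataPortal().
          DefaultProxy(typeof(Csla.Channels.Grpc.GrpcProxy), "https://localhost:5001");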

ℹ When editing XAML or codebehind a page you might find that you get all sorts of Intellisense errors. This can be resolved by making sure the Project dropdown in the upper-left of the code editor is set to UnoExample.UWP. It'll often default to some other project, and that causes the errors.

For example, if the Project dropdown is set to UnoExample.Droid, the code editor will be all confused.

Editing a PersonEdit Object

The EditPerson page is a typical forms-over-data scenario. As the page loads it is data bound to a new or existing domain object:

    public int PersonId { get; set; } = -1;

    protected override async void OnNavigatedTo(NavigationEventArgs e)
    {
      if (e.Parameter != null)
        PersonId = (int)e.Parameter;

      PersonEdit person;
      this.InfoText.Text = "Loading ...";
      if (PersonId > -1)
        person = await DataPortal.FetchAsync<PersonEdit>(PersonId);
      else
        person = await DataPortal.CreateAsync<PersonEdit>();
      DataContext = person;
      this.InfoText.Text = "Loaded";
    }

The user is then able to edit the Name property, which has rules associated with it from the business library. Details about those rules (and other metastate about the domain object and individual properties) can be displayed to the user via data binding. The Csla.Xaml.PropertyInfo type provides data binding with access to all sorts of metastate about a specific property, and that is used in the XAML:

    <TextBox Text="{Binding Name, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
    <csla:PropertyInfo x:Name="NameInfo" Property="{Binding Name, Mode=TwoWay}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=Value}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=IsValid}" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=InformationText}" Foreground="Blue" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=WarningText}" Foreground="DarkOrange" />
    <TextBlock Text="{Binding ElementName=NameInfo, Path=ErrorText}" Foreground="Red" />

Again, I don't claim to be a UX designer, but this does demonstrate some of the capabilities available to a UI developer given the rich metastate provided by CSLA.

Once the user has edited the values on the page, they can click the button to save the person. That also relies on CSLA to provide location transparent code:

    private async void SavePerson(object sender, RoutedEventArgs e)
    {
      try
      {
        var person = (PersonEdit)DataContext;
        await person.SaveAsync();
        var rootFrame = Window.Current.Content as Frame;
        rootFrame.Navigate(typeof(MainPage));
      }
      catch (Exception ex)
      {
        OutputText.Text = ex.ToString();
      }
    }

The SaveAsync method uses the data portal to have the DAL insert or update (or delete) the data associated with the domain object. In this case, once the object's data has been saved the user is navigated to the list of people.

Conclusion

I am very excited about WebAssembly and the ability to run native .NET code in any modern browser. The Uno Platform offers a powerful UI framework for building apps that run native on mobile devices, in Windows, and in any modern browser.

CSLA .NET has always been about providing a home for your business logic, allowing you to write your logic once and to then leverage it on any platform or environment supported by .NET. Thanks to .NET running in WebAssembly, this means that you can take your business logic directly into any modern browser on any device or platform.

Android | CSLA .NET | iOS | UWP | WebAssembly
Wednesday, September 4, 2019 10:07:18 AM (Central Standard Time, UTC-06:00)
 Monday, September 2, 2019

I'm excited about two things in our industry right now: containers (specifically Kubernetes) and WebAssembly (specifically Blazor and Uno).

CSLA .NET version 4.11 got a lot of new and exciting features to support container and Kubernetes scenarios, and there are some more coming in CSLA version 5 as well.

But over the past few days I've been building a new Csla.Blazor package to provide some basic UI support when using CSLA 5 with Blazor - specifically client-side Blazor, which is the really exciting part, though this should all work fine with server-side Blazor as well.

Application Context Manager

Part of this is a context manager that helps simplify configuration and context management within the Blazor client environment. Specifically:

  1. The HttpProxy is set to use text-based serialization, because Blazor (wasm) doesn't currently support passing binary data via HttpClient
  2. The User property is maintained in a static field, just like in all other smart client scenarios

Configuring a Blazor Client App

Blazor relies on the same standard Startup class as ASP.NET Core for configuring a client app. CSLA supports this model via an AddCsla method and the fluent CslaConfiguration system. As a result, basic configuration looks like this:

using Csla;
using Csla.Configuration;
using Microsoft.AspNetCore.Components.Builder;
using Microsoft.Extensions.DependencyInjection;

namespace BlazorExample.Client
{
  public class Startup
  {
    public void ConfigureServices(IServiceCollection services)
    {
      services.AddCsla();
      services.AddTransient(typeof(IDataPortal<>), typeof(DataPortal<>));
      services.AddTransient(typeof(Csla.Blazor.ViewModel<>), typeof(Csla.Blazor.ViewModel<>));
    }

    public void Configure(IComponentsApplicationBuilder app)
    {
      app.AddComponent<App>("app");

      CslaConfiguration.Configure().
        ContextManager(typeof(Csla.Blazor.ApplicationContextManager)).
        DataPortal().
          DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy), "/api/DataPortal");
    }
  }
}

Blazor defaults to providing HttpClient as a service, and this code adds mappings for IDataPortal<T> and ViewModel<T>.

Notice that it also configures the app to use the ApplicationContextManager designed to support Blazor.

The HttpProxy data portal channel will gain access to the environment's HttpClient via dependency injection.

Data Portal Server Needs to Use Text

As noted above, the HttpClient implementation currently used by .NET in wasm can't transfer binary data. As a result both client and server need to be configured to use text-based data transfer (basically Base64 encoded binary data). This is automatic on the Blazor client, but the data portal server controller needs to use text as well. Here's the controller code from the server's endpoint:

  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
    public DataPortalController()
    {
      UseTextSerialization = true;
    }
  }

If your data portal needs to support Blazor and non-wasm clients, you'll need two controllers, one for wasm clients and one for everything else.
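
Concretely, that means something like this, mirroring the controllers in the UnoExample app server:

  // Binary endpoint for non-wasm clients
  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
  }

  // Text (Base64) endpoint for Blazor wasm clients
  [Route("api/[controller]")]
  [ApiController]
  public class DataPortalTextController : Csla.Server.Hosts.HttpPortalController
  {
    public DataPortalTextController()
    {
      UseTextSerialization = true;
    }
  }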

ViewModel Type

The new Csla.Blazor.ViewModel type provides basic support for creating a Razor Page that binds to a business domain object via the viewmodel.

As with all the previous XAML-based viewmodel types, this one exposes the domain object via a Model property, because the CSLA-based domain object already fully supports data binding. It would be a waste of code (to write, debug, test, and maintain) to duplicate all the properties from a CSLA-based domain class in a viewmodel.

Also, like the previous XAML-based viewmodel types, this one supports some basic verbs/operations that are likely to be triggered by the UI. Specifically the create/fetch and save operations.

Finally, the viewmodel type exposes a set of metastate methods designed to allow the Razor Page to easily understand and bind to the state of the business object. For example, is the object currently saveable? What are the information/warning/error validation messages for a given property? Is a property currently running any async business rules?

You can use all these metastate values to create a rich UI, much like in XAML, with no code. The Blazor data binding model, combined with the new ViewModel, typically provides everything necessary.

To use this type in a page, make sure to add it to the services in Startup as shown earlier. Then inject it into the page:

@inject Csla.Blazor.ViewModel<PersonEdit> vm

And in the @code block call the RefreshAsync method:

@code {
  protected override async Task OnInitializedAsync()
  {
    await vm.RefreshAsync();
  }
}

The RefreshAsync method has various default behaviors around how it knows to fetch or create an instance of the business domain type:

  1. If the domain type is read-only it always does a fetch
  2. If the domain type is editable and no criteria parameters are provided it does a create
  3. If the domain type is editable and criteria is provided it does a fetch

You can override this behavior, but these defaults work well in many cases.
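
Putting this together, a minimal Razor Page bound to a PersonEdit business object might look something like the following sketch. It assumes the viewmodel's Model and SaveAsync members, and a PersonEdit class like the one shown in my other recent posts:

@page "/personedit"
@inject Csla.Blazor.ViewModel<PersonEdit> vm

@if (vm.Model != null)
{
  <input @bind="vm.Model.Name" />
  <button disabled="@(!vm.Model.IsSavable)" @onclick="SavePerson">Save</button>
}

@code {
  protected override async Task OnInitializedAsync()
  {
    await vm.RefreshAsync();
  }

  private async Task SavePerson()
  {
    await vm.SaveAsync();
  }
}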

BlazorExample Sample

You can look at the Samples/BlazorExample sample to see how this comes together. That sample is the basic Blazor start template, plus the ability to add/edit person objects, and get a list of people in the "database" (a mock in-memory data store).

Monday, September 2, 2019 10:41:13 PM (Central Standard Time, UTC-06:00)
 Thursday, August 22, 2019

I recently posted about the new gRPC data portal channel coming in CSLA 5.

I've also been working on a data portal channel based on using RabbitMQ as the underlying transport.

Now this might seem odd, because the CSLA .NET data portal is essentially a "synchronous" model, much like HTTP. The caller sends a message to the server and waits for a response, which is one of:

  1. A response message indicating some result (success or failure)
  2. An exception due to the transport failing
  3. A timeout due to the server not responding

This makes sense with gRPC and HTTP, because they both follow that bi-directional communication model. But (by themselves) queues don't. Queues are "fire and forget", providing a one-way message protocol.

However, it has been a common practice for decades to use queues for bi-directional messaging through the use of a reply queue.

In this model a caller (logical client-side code) sends requests to the data portal by sending a message to the data portal server's queue.

The data portal server processes those messages exactly as though they came in via HTTP or gRPC. The calls are routed to your business code, which can do whatever it wants on the server (typically talk to a database). When your business code is done, the response is sent back to each caller's respective reply queue.

This seems pretty intuitive and straightforward. The various request/response pairs are coordinated using something called a correlation id, which is just a unique value for each original request. Also, each request includes the name of its reply queue, making it easy to respond to the original caller.
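
To make the pattern concrete, here's a rough sketch of request/reply with a correlation id using the RabbitMQ.Client package directly. To be clear, this is just an illustration of the underlying messaging pattern, not CSLA's actual data portal implementation:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class RequestReplySketch
{
  static void Main()
  {
    var factory = new ConnectionFactory { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
      // The caller declares a server-named, exclusive reply queue
      var replyQueue = channel.QueueDeclare().QueueName;
      var correlationId = Guid.NewGuid().ToString();

      // Listen for the reply, matching on the correlation id
      var consumer = new EventingBasicConsumer(channel);
      consumer.Received += (sender, ea) =>
      {
        if (ea.BasicProperties.CorrelationId == correlationId)
          Console.WriteLine("Reply: " + Encoding.UTF8.GetString(ea.Body));
      };
      channel.BasicConsume(replyQueue, autoAck: true, consumer: consumer);

      // Send the request to the server's queue, tagged with the
      // correlation id and the reply queue name
      var props = channel.CreateBasicProperties();
      props.CorrelationId = correlationId;
      props.ReplyTo = replyQueue;
      channel.BasicPublish(exchange: "", routingKey: "rmqserver",
        basicProperties: props, body: Encoding.UTF8.GetBytes("request"));

      Console.ReadLine();
    }
  }
}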

The data portal server can handle many inbound requests at the same time, because they are all uniquely identified via correlation id and reply queue. In fact there are some amazing benefits to this approach:

  1. If the data portal server crashes and comes back up it'll pick up where it left off - a valuable attribute in an environment such as Kubernetes
  2. Multiple instances of the data portal server can run at the same time to spread the workload across multiple servers - useful in traditional data centers and in Kubernetes
  3. Fault tolerance can be achieved by configuring RabbitMQ itself to run in a redundant clustered environment

It is also the case that the caller might be something like a web server. So a given caller might send multiple concurrent requests to the data portal. And that's fine, because each request has a unique correlation id, allowing replies from the data portal server to be mapped back to the original requester.

The one primary limitation is that if a caller crashes then its "client-side" state is lost. This is an inherent part of the bi-directional, caller-driven model used by the data portal.

You can think of it as being no different from an HTTP caller (e.g. a browser) shutting down after making a request to a web server and before the server responds. The server may complete its work, but even if the user opens a new browser window they'll never get the response from the server.

The same thing is true in this implementation of the data portal using RabbitMQ. So it has total parity with HTTP or gRPC in this regard.

The great thing is how CSLA abstracts the use of RabbitMQ, just like it does for HTTP, gRPC, and any other network transport.

Identifying the RabbitMQ Service

Everything assumes you have a RabbitMQ instance running. It might be a single node or a cluster; either way RabbitMQ has an IP address and a port. The data portal also requires that you provide a name for the data portal server queue, and you can optionally manually name the reply queues.

To make this fit within the URL-based model for other transports, CSLA relies on a URI for the RabbitMQ service. It looks like this:

rabbitmq://servername/queuename

And optionally on the client, if you want to manually specify the reply queue name:

rabbitmq://servername/queuename?reply=replyqueuename

In advanced scenarios you can use more of the URI scheme:

rabbitmq://username:password@servername:port/queuename

Think of this like a URL for an HTTP or gRPC endpoint.

Implementing a Client

On the client all that's needed is:

  1. Reference the Csla.Channels.RabbitMq NuGet package (CSLA v5.0.0-R19082201 or higher)
  2. Configure the data portal to use the new channel:
  CslaConfiguration.Configure().
    DataPortal().
      DefaultProxy(typeof(Csla.Channels.RabbitMq.RabbitMqProxy), "rabbitmq://localhost/rmqserver");

This configures the data portal to use the RabbitMQ channel, and to find the server using the provided URI.

Implementing the Server

Unlike with HTTP and gRPC where the server is probably hosted in ASP.NET Core, RabbitMQ servers are usually implemented as a console app. This is ideal for hosting in lightweight containers in Docker or Kubernetes, as there's no need for the overhead of ASP.NET.

  1. Create a console app (.NET Core 2.0 or higher)
  2. Create an instance of the data portal host
  3. Tell the data portal to start listening for requests

Here's a complete implementation:

using System;
using System.Threading.Tasks;

namespace rmqserver
{
  class Program
  {
    static async Task Main(string[] args)
    {
      Console.WriteLine("Start listener; ctl-c to exit");
      var host = new Csla.Channels.RabbitMq.RabbitMqPortal("rabbitmq://localhost/rmqserver");
      host.StartListening();

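      // Wait on an event that is never set, so the listener
      // keeps running until the process is killed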
      await new Csla.Reflection.AsyncManualResetEvent().WaitAsync();
    }
  }
}

Shared Business Logic

Of course the centerpiece of CSLA .NET is the idea of shared business logic in a common assembly. So any solution would contain the client code as shown above, the server, and a .NET Standard 2.0 Class Library that contains all the business classes that encapsulate business logic.

Both the client and server projects must reference the business class library assembly. That business assembly needs to be available to both client and server code. The data portal takes care of the rest.

In that shared assembly you might have a simple type like this:

using Csla;
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;

namespace ClassLibrary1
{
  [Serializable]
  public class PersonEdit : BusinessBase<PersonEdit>
  {
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id
    {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
    }

    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(nameof(Name));
    [Required]
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    [Create]
    private void Create(int id)
    {
      using (BypassPropertyChecks)
        Id = id;
    }

    [Fetch]
    private async Task Fetch(int id)
    {
      // TODO: get object's data
    }

    [Insert]
    private async Task Insert()
    {
      // TODO: insert object's data
    }

    [Update]
    private async Task Update()
    {
      // TODO: update object's data
    }

    [DeleteSelf]
    private async Task DeleteSelf()
    {
      await Delete(ReadProperty(IdProperty));
    }

    [Delete]
    private async Task Delete(int id)
    {
      // TODO: delete object's data
    }
  }
}

The client can interact with this type via the data portal. For example:

  var obj = await DataPortal.CreateAsync<PersonEdit>(42);
  obj.Name = "Arnold";
  if (obj.IsSavable)
  {
    await obj.SaveAndMergeAsync();
  }

And that's it. The data portal takes care of relaying that call to the server (in this case via RabbitMQ). The server creates an instance of PersonEdit and invokes the method marked with the Create attribute so the object can invoke a data access layer (DAL) to initialize itself or do whatever is necessary.

In CSLA 5 those create/fetch/insert/update/delete methods can all accept parameters that are provided via dependency injection, but that's a topic for another blog post. Keep in mind that DI is the appropriate way to gain access to the DAL, and that any interaction with databases is encapsulated within the DAL.
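
As a quick preview though: assuming a DAL abstraction named IPersonDal with a GetAsync method (placeholder names), the Fetch method above might look like this:

    [Fetch]
    private async Task Fetch(int id, [Inject] IPersonDal dal)
    {
      // IPersonDal and GetAsync are hypothetical names for your DAL
      var data = await dal.GetAsync(id);
      using (BypassPropertyChecks)
      {
        Id = data.Id;
        Name = data.Name;
      }
      BusinessRules.CheckRules();
    }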

Thursday, August 22, 2019 9:31:26 PM (Central Standard Time, UTC-06:00)
 Wednesday, August 21, 2019

The new .NET Core 3 release includes support for the gRPC protocol. This is an efficient binary protocol for making network calls, and so is something that CSLA .NET should obviously support.

CSLA already has an extensible channel-based model for network communication via the data portal. Over the years there have been numerous channels, including:

  • .NET Remoting (obsolete)
  • asmx services (obsolete)
  • WCF (of limited value in modern .NET)
  • Http

I'm sure there have been others as well. The current recommended channel is via Http (using the HttpProxy type), as it best supports performance, routing, and various other features native to HTTP and to the data portal channel implementation.

CSLA .NET version 5.0.0 will include a new gRPC channel. Like all the other channels, this is a drop-in replacement for your existing channel.

⚠ This requires CSLA .NET version 5.0.0-R19082107 or higher

Client Configuration

On the client it requires a new NuGet package reference and a configuration change.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. On app startup, configure the data portal as shown here:
      CslaConfiguration.Configure().
        DataPortal().DefaultProxy(typeof(Csla.Channels.Grpc.GrpcProxy), "https://localhost:5001");

This configures the data portal to use the new GrpcProxy and provides the URL to the service endpoint. Obviously you need to provide a valid URL.

Server Configuration

On the server it requires a new NuGet package reference and a bit of code in Startup.cs to set up the service endpoint.

⚠ This requires an ASP.NET Core 3.0 project.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. In the ConfigureServices method you must configure gRPC: services.AddGrpc();
  3. In the Configure method add the data portal endpoint:
      app.UseRouting();

      app.UseEndpoints(endpoints =>
      {
        endpoints.MapGrpcService<Csla.Channels.Grpc.GrpcPortal>();
      });

General Notes

As usual, both client and server need to reference the same business library assembly, which should be a .NET Standard 2.0 library that references the Csla NuGet package. This assembly contains implementations of all your business domain classes based on the CSLA .NET base classes.

The gRPC data portal channel uses MobileFormatter to serialize and deserialize all object graphs, and so your business classes need to use modern CSLA coding conventions so they work with that serializer.

All the version and routing features added to the Http data portal channel in CSLA version 4.9.0 are also supported in this new gRPC channel, allowing it to take full advantage of container orchestration environments such as Kubernetes.

Also, as with the Http channel, the GrpcProxy and GrpcPortal types have virtual methods you can optionally override to implement compression on the data stream, and (on the client) to support advanced configuration scenarios when creating the underlying HttpClient and gRPC client objects.

Wednesday, August 21, 2019 1:37:26 PM (Central Standard Time, UTC-06:00)
 Tuesday, June 25, 2019

Containers and Kubernetes (k8s) are useful for building and deploying distributed systems in general. This includes service-based architectures (SOA, microservices) as well as n-tier client/server endpoints.

I do think container-based runtimes and service-based architecture go hand-in-hand. However, a lot of the benefits of container-based runtimes apply just as effectively to a well-architected n-tier client/server application as well.

By “well architected” I mean an n-tier app that has:

  1. Good separation of concerns between interface, interface control, business, data access, and data storage layers
  2. “Chunky” communication between client and server
    1. A limited number of server endpoints - the n-tier services have been well-considered and are cohesive
    2. Effective use of things like the unit of work pattern to minimize calls over the network by bundling “multiple calls” into a single unit of work (single call)
  3. Efficient use of data transfer - no blind use of ORM tools where extraneous data flows over the network just because it “made things easier to implement”
    1. Focus on decoupling over reuse (two sides of the same coin, where reuse leads to coupling and coupling is extremely bad)

In such an n-tier app, the client is often quite smart (whether mobile, Windows, Mac, WebAssembly, or even TypeScript), and the overall solution is architected to enable this smart client to efficiently interact with n-tier endpoints (really a type of service) to leverage server-side behaviors.

These n-tier endpoints are not designed for use by other apps. They are not “open”. This is arguably a good thing, because it means they can (and should) use much more efficient binary serialization protocols, as compared to the abysmally inefficient JSON or XML protocols used by “open” services.

At the end of the day, the key is that nothing stops you from hosting an n-tier endpoint/service in a container. Well, nothing beyond what might stop you from hosting any code in a container.

In other words, if you build (or update) your n-tier endpoint code to follow cloud-native best practices (such as embracing 12factor design and avoiding the fallacies of distributed computing) - just like you must with microservice implementations - your endpoint code can take advantage of container-based runtimes very effectively.

Now a lot of folks tend to look at any n-tier app and think “monolith”. Which can be true, but doesn’t have to be true. People can create good or bad n-tier solutions, just like they can create good or bad microservice solutions. These architectures aren’t silver bullets.

Look at MVC - a great design pattern - unless you put your business logic in the controller. And LOTS of people write business logic in their controllers. Horrible! Totally defeats the purpose of the design pattern. But it is expedient, so people do it.

What I’m saying, is that if you’ve done a good job designing your n-tier endpoints to be cohesive around business behaviors, and you can make them (at least mostly) 12factor-compliant, you can get container-based runtime benefits such as:

  1. Scaling/Elasticity - quickly spin up/down workers based on active load
  2. Functional grouping - have certain endpoints run on designated server nodes based on CPU, IO, or other workload requirements
  3. Manageability - the same management features that draw people to k8s for microservices are available for n-tier endpoints as well
  4. Resiliency - auto-failover of k8s pods applies to n-tier endpoints just as effectively as microservices
  5. Infrastructure abstraction - k8s is basically the same (from a dev or code perspective) regardless of whether it is in your datacenter, or in Azure, or AWS

I’ll confess that I’m a little biased. I’ve spent many years talking about good n-tier client server architecture, and have over 22 years of experience maintaining the open source CSLA framework based on such an architecture.

The most recent versions of CSLA have some key features that allow folks to truly exploit container-based runtimes. Usually with little or no change to any existing code.

My key point is this: container-based runtimes offer fantastic benefits to organizations, for both service-based and n-tier client/server architectures.

Tuesday, June 25, 2019 1:47:26 PM (Central Standard Time, UTC-06:00)
 Friday, April 5, 2019

One thing I’ve observed in my career is something I call the “pit of success”. People (often including me when I was younger) write quick-and-dirty software because what we’re doing is a “simple project” or “for just a couple users” or a one-off or a stopgap or a host of other diminutive descriptions.

What so often happens is that the software works well - it is successful. And months later you get a panicked phone call in the middle of the night because your simple app for a couple users that was only a stop-gap until the “real solution came online” is now failing for some users in Singapore.

You ask how it has users in Singapore, the couple users you wrote it for were in Chicago? The answer: oh, people loved it so much we rolled it out globally and it is used by a few hundred users.

OF COURSE IT FAILS, because you wrote it as a one-off (no architecture or thoughtful implementation) for a couple users. You tell them that fixing the issues requires a complete re-architecture and reimplementation. And that it'll take a team of 4 people 9 months. They are shocked, because you wrote this thing initially in 3 weeks.

This is often a career limiting move, bad news all around.

Now if you’d put a little more thought into the original architecture and implementation, perhaps using some basic separation of concerns, a little DDD or real OOD (not just using an ORM) the original system may well have scaled globally to a hundred users.

Even if you do have to enhance it to support the fact that you fell into the pit of success, at least the software is maintainable and can be enhanced without a complete rewrite.

This "pit of success" concept was one of the major drivers behind the design of CSLA .NET and the data portal. If you follow the architecture prescribed by CSLA you'll have clear separation of concerns:

  1. Interface
  2. Interface control
  3. Business logic
  4. Data access
  5. Data storage

And you'll be able to deploy your initial app as a 1- or 2-tier quick-and-easy thing for those couple users. Better yet, when that emergency call comes in the night, you can just:

  1. Stand up an app server
  2. Change configuration on all clients to use the app server

And now your quick app for a couple users is capable of global scaling for hundreds of users. No code changes. Just an app server and a configuration change.

(and that just scratches the surface - with a little more work you could stand up a Kubernetes cluster instead of just an app server and support tens of thousands of users)
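
With CSLA's fluent configuration API, that client-side change might look something like this sketch (the URL is a placeholder):

  // 1- or 2-tier: no proxy configured, so the data portal
  // (and your DAL) runs in-process on the client
  CslaConfiguration.Configure();

  // 3-tier: point the data portal at the new app server;
  // no other client code changes
  CslaConfiguration.Configure().
    DataPortal().
      DefaultProxy(typeof(Csla.DataPortalClient.HttpProxy),
        "https://myappserver/api/dataportal");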

So instead of a career limiting move, you are a hero. You get a raise, extra vacation, and the undying adoration of your users 😃

Friday, April 5, 2019 10:38:52 AM (Central Standard Time, UTC-06:00)

How can a .NET developer remain relevant in the industry?

Over my career I've noticed that all technologies require developers to work to stay relevant. Sometimes that means updating your skills within the technology, sometimes it means shifting to a whole new technology (which is much harder).

Fortunately for .NET developers, the .NET platform is advancing rapidly, keeping up with new software models around containers, cloud-native, and cross-platform client development.

First it is important to recognize that the .NET Framework is not the same as .NET Core. The .NET Framework is effectively now in maintenance mode, and all innovation is occurring in the open source .NET Core now and into the future. So step one to remaining relevant is to understand .NET Core (and the closely related .NET Standard).

Effectively, you should plan for .NET Framework 4.8 to become stable and essentially unchanging for decades to come, while all new features and capabilities are built into .NET Core.

Right now, you should be working to have as much of your code as possible target .NET Standard 2.0, because that makes your code compatible with .NET Framework, .NET Core, and mono. See Migrating from .NET to .NET Standard.

Second, if you are a client-side developer (Windows Forms, WPF, Xamarin) you need to watch .NET Core 3, which is slated to support Windows Forms and WPF. This will require migration of existing apps, but is a way forward for Windows client developers. Xamarin is a cross-platform client technology, and in this space you should learn Xamarin.Forms because it lets you write a single app that can run on iOS, Android, Mac, Linux desktop, and Windows.

Third, if you are a client-side developer (web or smart client) you should be watching WebAssembly. Right now .NET has experimental support for WebAssembly via the open source mono runtime. There is also a wasm UI framework called Blazor, and another XAML-based UI framework called the Uno Platform. This is an evolving space, but in my opinion WebAssembly has great promise and is something I'm watching closely.

Fourth, if you are a server-side developer it is important to understand the major industry trends around containers and container orchestration. Kubernetes, Microsoft Azure, Amazon AWS, and others all have support for containers, and containers are rapidly becoming the defacto deployment model for server-side code. Fortunately .NET Core and ASP.NET both have very good support for cloud-native development.

Finally, that’s all from a technology perspective. In a more general sense, regardless of technology, modern developers need good written and verbal communication skills, the ability to understand business and user requirements, at least a basic understanding of agile workflows, and a good understanding of devops.

I follow my own guidance, by the way. CSLA .NET 4.10 (the current version) targets .NET Standard 2.0, supports WebAssembly, and has a bunch of cool new features to help you leverage container-based cloud-native server environments.

Friday, April 5, 2019 10:19:15 AM (Central Standard Time, UTC-06:00)