Rockford Lhotka's Blog


 Thursday, 23 August 2018

Git can be confusing, or at least intimidating, particularly if you end up working on a project that relies on a pull request (PR) model, and even more so if forks are involved.

This is pretty common when working on GitHub open source projects. Rarely is anyone allowed to directly update the master branch of the primary repository (repo). The way changes get into master is by submitting a PR.

In a GitHub scenario any developer is usually interacting with three repos: the primary repo, their personal fork of it, and the clone on their dev workstation.

A fork is created using the GitHub web interface, and it is basically a virtual "copy" of the primary repo in the developer's GitHub workspace. That fork is then cloned to the developer's workstation.

In many corporate environments everyone works in the same repo, but the only way to update master (or dev or a shared branch) is via a PR.

In a corporate scenario developers often interact with just two repos: the primary repo and the clone on their dev workstation.

The developer clones the primary repo to their workstation.

Whether from a GitHub fork or a corporate repo, cloning looks something like this (at the command line):

$ git clone https://github.com/rockfordlhotka/csla.git

This creates a copy of the cloud-based repo on the dev workstation. It also creates a connection (called a remote) to the cloud repo. By default this remote is named "origin".

Whether originally from a GitHub fork or a corporate repo, the developer does their work against the clone, what I'm calling the Dev workstation repo in these diagrams.

First though, if you are using the GitHub model where you have the primary repo, a fork, and a clone, then you'll need to add an upstream repo to your dev workstation repo. Something like this:

$ git remote add MarimerLLC https://github.com/MarimerLLC/csla.git

This basically creates a (read-only) connection between your dev workstation repo and the primary repo, in addition to the existing connection to your fork. In my case I've named the upstream (primary) repo "MarimerLLC".

This is important, because you are very likely to need to refresh your dev workstation repo from the primary repo from time to time.

Again, developers do their work against the dev workstation repo, and they should do that work in a branch other than master. Most work should be done in a feature branch, usually based on some work item in VSTS, GitHub, Jira, or whatever you are using for project and issue management.

Back to creating a branch in the dev workstation repo. Personally I name my branches with the issue number, a dash, and a word or two that reminds me what I'm working on in this branch.

$ git fetch MarimerLLC
$ git checkout -b 123-work MarimerLLC/master

This is where things get a little tricky.

First, the git fetch command makes sure my dev workstation repo has the latest changes from the primary repo. You might think I'd want the latest from my fork, but in most cases what I really want is the latest from the primary repo, because that's where changes from other developers might have been merged - and I want their changes!

The git checkout command creates a new branch named "123-work" based on MarimerLLC/master - that is, on the real master branch from the primary repo, which I just made sure is current by fetching from the cloud.

This means my working directory on my computer is now using the 123-work branch, and that branch is identical to master from the primary repo. What a great starting point for any new work.

Now the developer does any work necessary. Editing, adding, removing files, etc.

One note on moving or renaming files: if you want to keep a file's history intact, it is best to use git to make the change.

$ git mv OldFile.cs NewFile.cs

At any point while you are doing your work you can commit your changes to the dev workstation repo. This isn't a "backup", because it is on your computer. But it is a snapshot of your work, and you can always roll back to earlier snapshots. So it isn't a bad idea to commit after you've done some work, especially if you are about to take any risks with other changes!

Personally I often use a Windows shell add-in called TortoiseGit to do my local commits, because I like the GUI experience integrated into the Windows Explorer tool. Other people like different GUI tools, and some like the command line.

At the command line a "commit" is really a two-part process.

$ git add .
$ git commit -m '#123 My comment here'

The git add command adds any changes you've made into the local git index. Though it says "add", this adds all move/rename/delete/edit/add operations you've done to any files.

The git commit command actually commits the changes you just added, so they become part of the permanent record within your dev workstation repo. Note my use of the -m switch to add a comment (including the issue number) about this commit. I think this is critical! Not only does it help you and your colleagues, but putting the issue number as a tag allows tools like GitHub and VSTS to hyperlink to the issue details.

OK, so now my changes are committed to my dev workstation repo, and I'm ready to push them up into the cloud.

If I'm using GitHub and a fork then I'll push to my personal fork. If I'm directly using a corporate repo I'll push to the corporate repo. Keep in mind though, that I'm pushing my feature branch, not master!

$ git push origin

This will push my current branch (123-work) to origin, which is the cloud-based repo I cloned to create my dev workstation repo.

GitHub with a fork:

Corporate:

The 123-work branch in the cloud is a copy of that branch in my dev workstation repo. There are a couple of immediate benefits to having it in the cloud:

  1. It is backed up to a server
  2. It is (typically) visible to other developers on my team

I'll often push even non-working code into the cloud to enable collaboration with other people. At least in GitHub and VSTS, my team members can view my branch and we can work together to solve problems I might be facing. Very powerful!

(even better, but more advanced than I want to get in this post, they can actually pull my branch down onto their workstation, make changes, and create a PR so I can merge their changes back into my working branch)

At this point my work is both on my workstation and in the cloud. Now I can create a pull request (PR) if I'm ready for my work to be merged into the primary master.

BUT FIRST, I need to make sure my 123-work branch is current with any changes that might have been made to the primary master while I've been working locally. Other developers (or even me) may have submitted a PR to master in the meantime, so master may have changed.

This is where terms like "rebase" come into play. But I'm going to skip the rebase concept for now and show a simple merge approach:

$ git pull MarimerLLC master

The git pull command fetches any changes in the MarimerLLC primary repo, and then merges the master branch into my local working branch (123-work). If the merge can be done automatically it'll just happen. If not, I'll get a list of files that I need to edit to resolve conflicts. The files will contain both my changes and any changes from the cloud, and I'll need to edit them in Visual Studio or some other editor to resolve the conflicts.
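
For example, a conflicted file will contain marker sections something like this (the labels after the markers vary depending on exactly what was merged):

<<<<<<< HEAD
my change from the 123-work branch
=======
the conflicting change pulled in from MarimerLLC/master
>>>>>>> MarimerLLC/master

Everything between the markers needs to be edited down to the final content you want, and the marker lines themselves removed.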

Once any conflicts are resolved I can move forward. If git was able to complete the merge automatically there is nothing more to do, but if I had to resolve conflicts I need to add the resolved files and commit the merge:

$ git add .
$ git commit -m 'Merge upstream changes from MarimerLLC/master'

It is critical at this point that you make sure the code compiles and that your unit tests run locally! If so, proceed. If not, fix any issues, then proceed.

Push your latest changes into the cloud.

$ git push origin

With the latest code in the cloud you can create a PR. A PR is created using the web UI of GitHub, VSTS, or whatever cloud tool you are using. The PR simply requests that the code from your branch be merged into the primary master branch.

In GitHub with a fork the PR sort of looks like this:

In a corporate setting it looks like this:

In many cases submitting a PR will trigger a continuous integration (CI) build. In the case of CSLA I use AppVeyor, and of course VSTS has great build tooling. I can't imagine working on a project where a PR doesn't trigger a CI build and automatic run of unit tests.

The great thing about a CI build at this point is that you can tell that your PR builds and your unit tests pass before merging it into master. This isn't 100% proof of no issues, but it sure helps!

It is really important to understand that there is an ongoing link from the 123-work branch in the cloud to the PR. If I change anything in the 123-work branch in the cloud, that changes the PR.

The upside to this is that GitHub and VSTS have really good web UI tools for code reviews and commenting on code in a PR. And the developer can just go change their 123-work branch on the dev workstation to respond to any comments, then

  1. git add
  2. git commit
  3. git push origin

as shown above to get those changes into the cloud-based 123-work branch, thus updating the PR.

Assuming any changes requested to the PR have been made and the CI build and unit tests pass, the PR can be accepted. This is done through the web UI of GitHub or VSTS. The result is that the 123-work branch is merged into master in the primary repo.

At this point the 123-work branch can (and should) be deleted from the cloud and the dev workstation repo. This branch no longer has value because it has been merged into master. Don't worry about losing history or anything, that won't happen. Getting rid of feature branches once merged is necessary to keep the cloud and local repos all tidy.

The web UI can be used to delete a branch in the cloud. To delete the branch from your dev workstation repo you need to move out of that branch, then delete it.

$ git checkout master
$ git branch -D 123-work

Now you are ready to repeat this process from the top based on the next work item in the backlog.

Git
Thursday, 23 August 2018 16:00:41 (Central Standard Time, UTC-06:00)

Does anyone understand how System.Data.SqlClient assemblies get pulled into projects?

I have a netstandard 2.0 project where I reference System.Data.SqlClient. I then reference/use that assembly in a Xamarin project. This seems to work, but it creates a compile-time warning in the Xamarin project:

The assembly 'System.Data.SqlClient.dll' was loaded from a different 
  path than the provided path

provided path: /Users/user135287/Library/Caches/Xamarin/mtbs/builds/
  UI.iOS/4a61fb5d59d8c2875723f6d1e7f44ce3/bin/iPhoneSimulator/Debug/
  System.Data.SqlClient.dll

actual path: /Library/Frameworks/Xamarin.iOS.framework/Versions/
  11.6.1.4/lib/mono/Xamarin.iOS/Facades/System.Data.SqlClient.dll

I don't think the warning actually causes any issues - but (like a lot of people) I dislike warnings during my builds. Sadly, I don't know how to get rid of this particular warning.

I guess I also don't know if it has anything to do with my Class Library project using System.Data.SqlClient, or maybe this is just a weird thing with Xamarin iOS?

Thursday, 23 August 2018 15:15:31 (Central Standard Time, UTC-06:00)
 Wednesday, 27 June 2018

A while back I blogged about how to edit a collection of items in ASP.NET MVC.

These days I've been starting to use Razor Pages, and I wanted to solve the same problem with the newer technology.

In my case I'm also making sure the new CSLA .NET 4.7.200 CslaModelBinder type works well in this scenario, among others.

To this end I wrote a CslaModelBinderDemo app.

Most of the interesting parts are in the Pages/MyList directory.

Though this sample uses CSLA, the same concepts should apply to any model binder and collection.

My goal is to be able to easily add, edit, and remove items in a collection. I was able to implement the edit and remove operations on a single grid-like page.

I chose to do the add operation on a separate page. I first implemented it on the same page, but in that implementation I ran into complications with business rules that make a default/empty new object invalid. By doing the add operation on its own page there's no issue with business rules.

Domain Model

Before building the presentation layer I created the business domain layer (model) using CSLA. These are just two types: a MyList editable collection, and a MyItem editable child type for the objects in the collection.

The MyItem type is a little interesting, because it implements both root and child data portal behaviors. This is because the type is used as a child when in a MyList collection, but is used as a standalone root object by the page implementing the add operation. In CSLA parlance this is called a "switchable object".

Configuring the model binder

In the Razor Pages project it is necessary to configure the app to use the correct model binder for CSLA types. The default model binders for MVC and now .NET Core all assume model objects are dumb DTO/entity types - public read/write properties, no business rules, etc. Very much not the sort of model you get when using CSLA.

The new CslaModelBinder for AspNetCore fills the same role as this type has in previous ASP.NET MVC versions, but AspNetCore has a different binding model under the hood, so this is a totally new implementation.

To use this model binder add code in Startup.cs in the ConfigureServices method:

      services.AddMvc(config =>
        config.ModelBinderProviders.Insert(0, new Csla.Web.Mvc.CslaModelBinderProvider(CreateInstanceAsync, CreateChild))
        ).SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

An app can have numerous model binders. The model binder providers indicate which types a binder should handle. So the CslaModelBinderProvider ensures that the CslaModelBinder is used for any editable business object types (basically BusinessBase or BusinessListBase subclasses).

Notice that two parameters are provided to CslaModelBinderProvider: something to create root objects, and something to create child objects.

These are optional. If you don't provide them, CslaModelBinder will directly create instances of the appropriate types. But if you want to have some control over how the instances are created then you need to provide these parameters (and related implementations).

Root and Child instance creators

In my case I want to make sure that when my root collection is instantiated, it contains all existing data.

Remember that the model binder is invoked on page postback, when the data is flowing from the browser back into the Razor Page on the server. All the collection data is in the postback, but it also exists in the database.

Basically what we're doing in this scenario is merging the changed data from the browser into the data from the database. I could maintain the collection in some sort of Session store, but in this app I'm choosing to load it from the database each time:

    private async Task<object> CreateInstanceAsync(Type type)
    {
      object result;
      if (type.Equals(typeof(Pages.MyList.MyList)))
        result = await Csla.DataPortal.FetchAsync<Pages.MyList.MyList>();
      else
        result = Csla.Reflection.MethodCaller.CreateInstance(type);
      return result;
    }

Of course the collection contains child objects, and the postback provides an array of data, with each row in the array corresponding to an object that exists in the collection.

On postback, step 1 is that the root collection gets created (via the FetchAsync call), and then each row in the postback array needs to be mapped into an existing (or new) child object in the collection.

The CreateChild method grabs the Id value for the current row from the postback and uses that value to find the existing child object in the collection. If that child exists it is returned to CslaModelBinder for binding. If it isn't in the collection then a new instance of the type is created so that child can be bound and added to the collection.

    private object CreateChild(System.Collections.IList parent, Type type, Dictionary<string, string> values)
    {
      object result = null;
      if (type.Equals(typeof(Pages.MyList.MyItem)))
      {
        var list = (Pages.MyList.MyList)parent;
        var idText = values["Id"];
        int id = string.IsNullOrWhiteSpace(idText) ? -1 : int.Parse(values["Id"]);
        result = list.Where(r => r.Id == id).FirstOrDefault();
        if (result == null)
          result = Csla.Reflection.MethodCaller.CreateInstance(type);
      }
      else
      {
        result = Csla.Reflection.MethodCaller.CreateInstance(type);
      }
      return result;
    }

The result is that CslaModelBinder "creates" a new collection, but really it gets a pre-loaded instance with current data. Then it "creates" a new child object for each row of data in the postback, but really it gets pre-existing instances of each child object with existing data, and then the postback data is used to set each property on the object.

The beauty here is that if the postback value is the same as the value already in the child object's property, then CSLA will ignore the "new" value. But if the values are different then the child object's IsDirty property will be true so it will be saved to the database.
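
As a rough illustration of that behavior (a minimal sketch, assuming the MyList and MyItem types from this sample with standard CSLA managed properties, running inside an async method with System.Linq in scope):

var list = await Csla.DataPortal.FetchAsync<MyList>();
var item = list.First();
// Nothing has changed yet, so item.IsDirty is false
item.Name = item.Name;         // same value: CSLA ignores the assignment, IsDirty stays false
item.Name = "A new name";      // different value: IsDirty becomes true
list = await list.SaveAsync(); // only the changed children are persisted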

Adding a new child to the collection

It is certainly possible to add a new child object to the collection like I did in the previous ASP.NET MVC blog post. The drawback to that approach is that this new child may have business rules that complicate matters if it is created "blank" and added to the list.

So in this case I decided a better overall experience might be to have the user add an item via a create page, and do edit/remove operations on the index page.

The Create.cshtml page is perhaps the simplest scenario. The Razor was created by scaffolding. Nothing in this view is unique to this problem space or CSLA. It is just a standard create page.

The Create.cshtml.cs code behind the page is a little different from code you might find for Entity Framework, because I'm using CSLA domain objects. This just means that the OnGet method uses the data portal to retrieve the domain object.

    public async Task<IActionResult> OnGet()
    {
      MyItem = await Csla.DataPortal.CreateAsync<MyItem>();
      return Page();
    }

And the OnPostAsync method calls the SaveAsync method to save the domain object.

    public async Task<IActionResult> OnPostAsync()
    {
      if (!ModelState.IsValid)
      {
        return Page();
      }

      MyItem = await MyItem.SaveAsync();

      return RedirectToPage("./Index");
    }

Finally, the MyItem property is a standard data bound Razor Pages property.

    [BindProperty]
    public MyItem MyItem { get; set; }

The important thing to understand is that MyItem is a subclass of BusinessBase and so the CslaModelBinderProvider will direct data binding to use CslaModelBinder to do the binding for this object. Because CslaModelBinder understands how to correctly bind to CSLA types, everything works as expected.

Editing and removing items in the collection

Now we get to the fun part: creating a page that displays the collection's contents and allows the user to edit multiple items, mark items for deletion, and then click a button to commit the changes.

Interestingly enough, the Index.cshtml.cs code isn't complex. This is because most of the work is handled by CslaModelBinder and the two methods we already implemented in Startup.cs. This code just gets the domain object in OnGetAsync and saves it in OnPostAsync.

    [BindProperty]
    public MyList DataList { get; set; }

    public async Task OnGetAsync()
    {
      DataList = await Csla.DataPortal.FetchAsync<MyList>();
    }

    public async Task<IActionResult> OnPostAsync()
    {
      foreach (var item in DataList.Where(r => r.Remove).ToList())
        DataList.Remove(item);
      DataList = await DataList.SaveAsync();
      return RedirectToPage("Index");
    }

Notice how the Remove property is used to identify the child objects that are to be removed from the collection. Because this is a CSLA collection, this code just needs to remove these items, and when SaveAsync is called to persist the domain object's data those items will be deleted, and any changed data will be updated or inserted as necessary.

The Index.cshtml page is a bit different from a standard page, in that it needs to display the input fields to the user, and make sure everything is properly connected to each item in the collection such that a postback can form all the data into an array.

The key part is the for loop that creates those UI elements in a table.

  @for (int i = 0; i < Model.DataList.Count; i++)
  {
    <tr>
      <td>
        <input type="hidden" asp-for="DataList[i].Id" />
        <input asp-for="DataList[i].Name" class="form-control" />
        <span asp-validation-for="DataList[i].Name" class="text-danger"></span>
      </td>
      <td>
        <input asp-for="DataList[i].City" class="form-control" />
        <span asp-validation-for="DataList[i].City" class="text-danger"></span>
      </td>
      <td>
        <input asp-for="DataList[i].Remove" type="checkbox" />
        <label class="control-label">Select</label>
      </td>
    </tr>
  }

Instead of a foreach loop, this uses an index to go through each item in the collection, allowing the use of asp-for to create each UI control.

Make special note of the hidden element containing the Id property. Although this isn't displayed to the user, the value needs to round-trip so it is available to the server as part of the postback, or the CreateChild method implemented earlier wouldn't be able to reconcile existing child object instances with the data in the postback array.

Summary

Quick and easy editing of a collection is a very common experience users expect from apps. Although the standard CRUD scaffolding implements all the right behaviors, it is tedious for a user to edit several rows of data when each row requires navigating through its own set of pages. The approach in this post doesn't solve every UX need, but when quick editing of multiple rows is required, this is a good answer.

Thanks to Razor Pages data binding, implementing this approach is not difficult.

Wednesday, 27 June 2018 16:27:06 (Central Standard Time, UTC-06:00)
 Monday, 18 June 2018

In my microservices presentations at conferences I talk about APIs like this. The talks go into more depth on the background, but these are the high-level points of that section.

Since 1996, with the advent of MTS, Jaguar, and EJB, a lot of people have created public service APIs with endpoints like this pseudo-code:

int MyService(int x, double y)

That is not a service; that is RPC (remote procedure call) modeling. It is horrible. But people understand it, and the technologies have supported it forever (going back decades, and rolling forward to today). So LOTS of people create "services" that expose that sort of endpoint. Horrible!!

A better endpoint would be this:

Response MyService(Request r)

At least in this case the Request and Response concepts are abstract, and can be thought of as message definitions rather than types. Hardly anybody actually thinks of them that way, but they should.

With this approach you can at least apply the VB6 COM rules for evolving an API (which is to say you can add new stuff, but can't change or remove any existing stuff) without breaking clients.
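
As a simple illustration (these type and member names are invented for this example, not taken from any real service), the additive evolution rule looks like this:

public class Request
{
  public string CustomerId { get; set; }

  // Added later: a new optional member. Existing members are never changed or
  // removed, so clients built against the original contract keep working.
  public string Region { get; set; }
}

public class Response
{
  public bool Succeeded { get; set; }
  public string Message { get; set; }
}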

However, that is still a two-way synchronous API definition, so achieving things like fault tolerance, scaling, and load balancing is overly complex and WAY overly expensive.

So the correct API endpoint is this:

void MyService(Request r)

In this case the service is a one-way call that can easily be made async (queued). That reinforces the mental model that Request is a message definition. It also makes it extremely easy and cost-effective to get fault tolerance, scaling, and load balancing, because the software architecture directly enables those concepts.
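
A minimal sketch of that shape in C# (the in-memory queue is purely illustrative; a real system would hand the message to a durable queue or message broker, and Request is the message type sketched above):

using System.Collections.Concurrent;

public class MyServiceHost
{
  // Incoming messages are queued rather than processed inline, so the caller
  // never blocks waiting on a result; worker code drains the queue separately.
  private readonly ConcurrentQueue<Request> _queue = new ConcurrentQueue<Request>();

  // One-way endpoint: accept the message and return nothing
  public void MyService(Request r)
  {
    _queue.Enqueue(r);
  }
}

Because callers only hand off messages, the processing side can be scaled out, restarted, or load balanced without the callers ever noticing.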

Monday, 18 June 2018 09:54:09 (Central Standard Time, UTC-06:00)
 Monday, 11 June 2018

Windows Server is a wonderful server operating system. However, I think it is closing in on END OF LINE (with a nod toward Tron fans).

Why do I say this? Here's my train of thought.

  1. .NET Core runs on either Windows or Linux interchangeably
  2. Linux servers are cheaper to run than Windows Servers (especially in public clouds)
  3. Docker is the future of deployment
    1. Linux containers are more mature and generally better than Windows containers
    2. Linux containers are cheaper to run
    3. Azure runs Linux and Windows Server, and Microsoft seems to care more about you using Azure than which OS you use on Azure
  4. If you are writing new server code, why wouldn't you write it in .NET Core?
  5. If you are writing .NET Core code, why wouldn't you run in (cheaper) Linux containers on (cheaper) Linux hosts?

Now I get it. You say that you have tons of full .NET 1.x, 2.x, 3.x, or 4.x code. That stuff can only run on Windows, not Linux. So obviously Windows Server isn't EOL.

I agree. It isn't yet. But neither is the green-screen AS/400 software my auto mechanic uses to file tickets when I bring my car in to get the oil changed. Has that software been updated in the past 20 years? Probably not. Does it still work? Yes, clearly. Is it the vibrant present or future of software? Hahahahahahahaa NO!

When I say Windows Server is headed toward EOL I mean it is headed toward the same place as the AS/400, the VAX, and other legacy platforms. It'll continue to run legacy software for decades, until it eventually becomes cost-effective to rewrite the software running on those servers into the then-current technologies.

But if I were starting a new project today, you'd have to come up with some terribly compelling reasons why I wouldn't

  1. Write it in .NET Core (really in netstandard)
  2. Deploy it via Linux Docker containers
  3. Use a Linux Docker host

That's not to say there might not be some compelling, if short-term, arguments, such as:

  1. Your IT staff only knows Windows (a career limiting move (CLM) for them!!)
  2. Your IT infrastructure is centered around Windows deployment (Docker and Kubernetes will eat you for dinner, sorry)
  3. Your IT infrastructure is centered around Windows management (valid for a while, but also a CLM)
  4. You value that Windows Server can run both Linux and Windows Docker containers (valid argument imo, for the host)

To reiterate, as a .NET developer I feel comfortable saying that the future of server-side code is .NET Standard, .NET Core, and the ability to run my code on Linux or Windows equally. And I feel comfortable saying that Docker is the best server-side deployment scenario I've yet seen in my 30+ year career.

So I guess at the end of the day, the future of Windows Server rests entirely in the hands of IT Pros, who'll be using some host OS to run my Linux Docker containers with my .NET Core apps.

Either Windows Server or Linux will offer the better overall value proposition - which is to say that one of them will be cheaper to run on a per-hour basis in a cloud. Today that's Linux. Barring change, Windows Server is headed toward END OF LINE.

Monday, 11 June 2018 21:05:44 (Central Standard Time, UTC-06:00)
 Tuesday, 05 June 2018

Hello all!

I'm coordinating a codeathon on Saturday, June 23, to work on the Mobile Kids Id Kit app.

This project is sponsored by the Humanitarian Toolbox and Missing Children Minnesota, and provides you with a literal opportunity to make the world a better place for children and parents.

The app is partially complete, and has been created with Xamarin Forms. There are a number of important backlog items remaining to be completed, and we need to do the work to get the apps into the Apple, Google, and Microsoft stores.

If you are looking for a way to use your Xamarin (or ASP.NET Core) skills to make the world a better place, this is your chance! Sign up here.

Tuesday, 05 June 2018 15:45:52 (Central Standard Time, UTC-06:00)
 Monday, 14 May 2018

At Build 2018 Satya Nadella announced a new "AI for Accessibility" program. Having had a few days to reflect since then, there are a couple of ways I think about this.

One is personal. A little over 2 years ago I had major surgery, which carried about a 15% chance of leaving me partially paralyzed. Fortunately the surgery went well and paralysis didn’t happen, but it got me thinking about the importance of technology and software as an equalizer in life. Something like partial paralysis often ends people’s lives as they know it. With technology though, there’s the very real possibility of people with severe medical conditions living life at a level they never could without those technologies.

The other way I think about this is from the perspective of Magenic. We build custom software for our clients. Sometimes that software is fairly run-of-the-mill business software. Sometimes it is part of a solution that has direct impact on making people’s lives better, in big or small ways. When you get to work on a project that makes people’s lives better, that’s amazingly rewarding!

Every time I’ve had the opportunity to talk to Satya I’ve been impressed by his thoughtfulness and candor. As a result, when I hear him talk about ethical and responsible computing and AI, I believe he’s sincerely committed to that goal. And I very much appreciate that perspective on his part.

Again, veering a bit into my personal life, I read a lot of science fiction, speculative fiction, and cyberpunk. A great deal of that literature deals with the impacts of unfettered technology and AI; consequences both miraculous and terrifying. I also don’t dismiss the (non-fiction) concept of a potential “singularity”, where technology (via transhumanism or AI) results in a whole new class of being that is beyond simple humanity. Yeah, I know, I might have gone off the deep end there, but I think there’s a very real probability that augmented humans and/or AI will transform the world in ways we can’t currently comprehend.

Assuming any of this comes to pass, we can’t overstate the need to approach AI and human augmentation in an ethical and thoughtful manner. Our goal must be to improve the state of mankind through the application of these technologies.

The fact that Microsoft is publicly having these conversations, and is putting money behind their words, is an important step in the right direction.

Monday, 14 May 2018 09:23:38 (Central Standard Time, UTC-06:00)
 Friday, 20 April 2018

On Saturday April 21, 2018 I'm giving a talk at TCCC about WebAssembly (my current favorite topic).

Friday, 20 April 2018 15:14:44 (Central Standard Time, UTC-06:00)
 Sunday, 08 April 2018

Overview

As you've probably noticed from my recent blogging, I'm very excited about the potential of WebAssembly, Blazor, and Ooui. A not-insignificant part of my enthusiasm is because the CSLA .NET value proposition is best when a true smart client is involved, and these technologies offer a compelling story about having rich smart client code running on Windows, Mac, Linux, and every other platform via the browser.

This weekend I decided to see if I could get CSLA running in Blazor. I'm pleased to say that the experiment has been a success!

You can see my log of experiences and how to get the code working via this GitHub issue: https://github.com/MarimerLLC/csla/issues/829

Issues

At first glance it would appear that CSLA should already "just work" because you can reference a netstandard 2.0 assembly from Blazor, and the CSLA-Core-NS NuGet package is netstandard 2.0. However, it is important to remember that Blazor is experimental, and it is running on an experimental implementation of mono for wasm. So not everything quite works as one might hope. In particular, I ran into some issues:

  1. System.Linq.Expressions is unavailable in Blazor, which is how CSLA (and Newtonsoft.Json) avoid use of reflection in many cases (https://github.com/aspnet/Blazor/issues/513)
  2. The DataContractSerializer doesn't work "out of the box" in Blazor (https://github.com/aspnet/Blazor/issues/511)
  3. The Blazor solution template for a .NET Core host with Blazor client doesn't work if you reference one version of Csla.dll on the server, and a different version of Csla.dll on the client - so you have to use the same Csla.dll in both projects (https://github.com/aspnet/Blazor/issues/508)
  4. The HttpClient type isn't fully implemented in mono-wasm, and it only supports passing string data, not a byte array
  5. You can't just create an instance of HttpClient, you need to use the instance provided via injection into each Blazor page

To address these issues I've created a new Csla.Wasm project that builds Csla.dll specifically to work in Blazor.

Issue 1 wasn't so bad, because CSLA used to use reflection on some platforms where System.Linq.Expressions wasn't available. I was able to use compiler directives to use that older code for Csla.Wasm, thus eliminating any use of Linq. There's a performance hit of course, but the upside is that things work at all!

Issue 2 was a bit more complex. It turns out there is a workaround to get the DCS working in Blazor (see issue 511), but before learning about that I used Newtonsoft.Json as a workaround. Fortunately this only impacts the MobileList type in CSLA.

Now keep in mind that Newtonsoft.Json doesn't universally work in Blazor either, because when it serializes complex types it also uses System.Linq.Expressions and thus fails. But it is capable of serializing primitive types such as a List<string>, and that's exactly the behavior I required.

Issue 3 is kind of a PITA, but for an experiment I'm ok with referencing the wasm implementation of Csla.dll on the server. Sure, it uses reflection instead of Linq, but this is an experiment and I'll live with a small performance hit. Remember that the wasm version of Csla.dll targets netstandard 2.0, so it can run nearly anywhere - just with the minor changes needed to make it work on mono-wasm.

Issue 4 required tweaking the data portal slightly. Well, the right answer is to create a new proxy/host channel for the data portal, but for this experiment I directly tweaked the HttpProxy type in CSLA - and that'll need to be corrected at some point. Really no change to the actual data portal should be required at all.

Issue 5 required tweaking the CSLA HttpProxy type to make it possible for the UI code to provide the data portal with an instance of the HttpClient object to use. This isn't a bad change overall, because I could see how this would be useful in other scenarios as well.

The BlazorExample project

The end result is a working Blazor sample, and you can see the code here: https://github.com/rockfordlhotka/csla/tree/blazor/Samples/BlazorExample

This solution is mostly the Microsoft template.

  • BlazorExample.Client is the Blazor client app that runs in the browser
  • BlazorExample.Server is an ASP.NET Core app running on the server, from which the client app is deployed, and it also hosts the CSLA data portal endpoint
  • BlazorExample.Shared is a netstandard 2.0 class library referenced by both client and server, so any code here is available to both

Code shared by client and server

In BlazorExample.Shared you'll find a Person class - just a simple CSLA business domain class:

using System;
using System.Collections.Generic;
using System.Text;
using Csla;

namespace BlazorExample.Shared
{
  [Serializable]
  public class Person : BusinessBase<Person>
  {
    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(c => c.Name);
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    private static int _count;

    private void DataPortal_Fetch(string name)
    {
      using (BypassPropertyChecks)
      {
        _count++;
        Name = name + _count.ToString();
      }
    }
  }
}

This type is available to the client and server, enabling the normal CSLA mobile object behaviors via the data portal. Anyone using CSLA over the years should see how this is familiar and makes sense.

Also notice that there's nothing unique about this code; it is exactly what you'd write for Windows Forms, ASP.NET, Xamarin, WPF, UWP, etc. That is one of the key benefits of CSLA: reuse your business classes across every platform where you can run .NET code.

The Blazor client app

In the client project there's a Program.cs file with the app's startup code. Here's where I configure the data portal and ensure there's a serializable principal object available:

Csla.DataPortal.ProxyTypeName =
  typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName;
Csla.DataPortalClient.HttpProxy.DefaultUrl = 
  "/api/DataPortal";
Csla.ApplicationContext.User = 
  new Csla.Security.UnauthenticatedPrincipal();

This is standard CSLA initialization code that you'll find in nearly any modern app. Same as WPF, UWP, Xamarin, etc.

I chose to do my UI experiments in the Pages/Counter.cshtml page.

The real highlight here, from a CSLA perspective, is the LoadPerson method; a handler for the "Load person" button:

async void LoadPerson()
{
  try
  {
    // Provide injected HttpClient to data portal proxy
    Csla.DataPortalClient.HttpProxy.SetHttpClient(Http);
    // Get person object from server
    person = await Csla.DataPortal.FetchAsync<BlazorExample.Shared.Person>("Fred");
  }
  catch (Exception ex)
  {
    errorText = ex.Message + ":: " + ex.ToString();
  }
  StateHasChanged();
}

The unique thing here is where the SetHttpClient method is called to provide the data portal proxy with access to the HttpClient object injected at the top of the page:

@inject HttpClient Http

This particular HttpClient instance has been initialized by Blazor, so it has all the correct settings to talk easily to the deployment web server, which is also where I hosted the data portal endpoint.

The page also makes use of Blazor data binding. In particular, there's a person field available to the Razor code:

BlazorExample.Shared.Person person = new BlazorExample.Shared.Person();

And then in the Razor "html" this is used to display the business object's Name property:

<p>Name: @person.Name</p>

Because the LoadPerson method is async, it is necessary to tell Blazor's data binding to refresh the UI when the data has been retrieved. That call to StateHasChanged at the bottom of the method is what triggers the data binding UI refresh.

The ASP.NET Core web/app server

The server project has a couple unique things.

I had to work around the fact that a byte array can't be passed over the network from Blazor. So there's a modification to the CSLA HttpProxy class (client-side) to pass base64 encoded data to/from the server. For example:

//httpRequest.Content = new ByteArrayContent(serialized);
httpRequest.Content = new StringContent(System.Convert.ToBase64String(serialized));

Then in the server project there's a custom HttpPortalController class, copied from CSLA and also tweaked to work with base64 encoded data. For example:

string requestString;
using (var reader = new StreamReader(requestStream))
  requestString = await reader.ReadToEndAsync();
var requestArray = System.Convert.FromBase64String(requestString);
var requestBuffer = new MemoryStream(requestArray);

This controller is then exposed as an API endpoint via the DataPortalController class in the Controllers folder:

using Microsoft.AspNetCore.Mvc;

namespace BlazorExample.Server.Controllers
{
  [Route("api/[controller]")]
  public class DataPortalController : Csla.Server.Hosts.HttpPortalController
  {
  }
}

This is no different from hosting the data portal in ASP.NET Core (or ASP.NET MVC) in any other setting - except that it is using that custom controller base class that works against base64 strings instead of byte arrays like normal.

Because the BlazorExample.Shared assembly is referenced by the server project, the data portal automatically has access to the same Person type that's being used by the client, so again, the normal CSLA mobile object concept just works as expected.

Summary

I estimate I spent around 20 hours fighting through the various issues listed in this blog post. As per normal, most of the solutions weren't that hard in the end, but isolating the problems, researching possible solutions, testing the various solutions, and settling on the answer - that takes time and persistence.

Also - the support from the Blazor community on gitter is really great! And the team itself via GitHub - also really great!

One comment on this - there's no debugger support in Blazor right now, hence my tweet in the middle of my work.

That did make things a lot more tedious than normal modern development. It was like a throwback to 1990 or something!

The end result though, totally worth the effort! It is so cool to see normal CSLA code running in Blazor, data bound to a UI, and interacting with an app server via the data portal!

Sunday, 08 April 2018 13:23:57 (Central Standard Time, UTC-06:00)