Rockford Lhotka

 Thursday, August 22, 2019

I recently posted about the new gRPC data portal channel coming in CSLA 5.

I've also been working on a data portal channel based on using RabbitMQ as the underlying transport.

Now this might seem odd, because the CSLA .NET data portal is essentially a "synchronous" model, much like HTTP. The caller sends a message to the server and waits for one of:

  1. A response message indicating some result (success or failure)
  2. An exception due to the transport failing
  3. A timeout due to the server not responding

This makes sense with gRPC and HTTP, because they both follow that bi-directional communication model. But (by themselves) queues don't. Queues are "fire and forget", providing a one-way message protocol.

However, it has been a common practice for decades to use queues for bi-directional messaging through the use of a reply queue.

In this model a caller (logical client-side code) sends a request to the data portal by posting a message to the data portal server's queue.

The data portal server processes those messages exactly as though they came in via HTTP or gRPC. The calls are routed to your business code, which can do whatever it wants on the server (typically talk to a database). When your business code is done, the response is sent back to each caller's respective reply queue.

This seems pretty intuitive and straightforward. The various request/response pairs are coordinated using something called a correlation id, which is just a unique value for each original request. Also, each request includes the name of its reply queue, making it easy to respond to the original caller.
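
To make the pattern concrete, here is a minimal sketch of the correlation id/reply queue technique using the RabbitMQ.Client package directly. This illustrates the underlying messaging pattern only, not CSLA's internal implementation; the queue name rmqserver matches the examples later in this post.

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class RpcPatternSketch
{
  static void Main()
  {
    var factory = new ConnectionFactory { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
      // each caller gets its own server-named, auto-delete reply queue
      var replyQueue = channel.QueueDeclare().QueueName;

      // tag the request with a unique correlation id and the reply queue name
      var props = channel.CreateBasicProperties();
      props.CorrelationId = Guid.NewGuid().ToString();
      props.ReplyTo = replyQueue;

      // send the request to the data portal server's queue
      channel.BasicPublish(
        exchange: "",
        routingKey: "rmqserver",
        basicProperties: props,
        body: Encoding.UTF8.GetBytes("request payload"));

      // watch the reply queue, matching responses by correlation id
      var consumer = new EventingBasicConsumer(channel);
      consumer.Received += (sender, e) =>
      {
        if (e.BasicProperties.CorrelationId == props.CorrelationId)
          Console.WriteLine("Reply: " + Encoding.UTF8.GetString(e.Body));
      };
      channel.BasicConsume(replyQueue, autoAck: true, consumer: consumer);

      Console.ReadLine(); // wait for the reply to arrive
    }
  }
}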

The data portal server can handle many inbound requests at the same time, because they are all uniquely identified via correlation id and reply queue. In fact there are some amazing benefits to this approach:

  1. If the data portal server crashes and comes back up it'll pick up where it left off - a valuable attribute in an environment such as Kubernetes
  2. Multiple instances of the data portal server can run at the same time to spread the workload across multiple servers - useful in traditional data centers and in Kubernetes
  3. Fault tolerance can be achieved by configuring RabbitMQ itself to run in a redundant clustered environment

It is also the case that the caller might be something like a web server. So a given caller might send multiple concurrent requests to the data portal. And that's fine, because each request has a unique correlation id, allowing replies from the data portal server to be mapped back to the original requester.

The one primary limitation is that if a caller crashes, its "client-side" state is lost. This is an inherent part of the bi-directional, caller-driven model used by the data portal.

You can think of it as being no different from an HTTP caller (e.g. a browser) shutting down after making a request to a web server and before the server responds. The server may complete its work, but even if the user opens a new browser window they'll never get the response from the server.

The same thing is true in this implementation of the data portal using RabbitMQ. So it has total parity with HTTP or gRPC in this regard.

The great thing is how CSLA abstracts the use of RabbitMQ, just like it does for HTTP, gRPC, and any other network transport.

Identifying the RabbitMQ Service

Everything here assumes you have a RabbitMQ instance running. It might be a single node or a cluster; either way RabbitMQ has an IP address and a port. The data portal also requires that you provide a name for the data portal server queue, and you can optionally name the reply queues yourself.

To make this fit within the URL-based model for other transports, CSLA relies on a URI for the RabbitMQ service. It looks like this:

rabbitmq://servername/queuename

And optionally on the client, if you want to manually specify the reply queue name:

rabbitmq://servername/queuename?reply=replyqueuename

In advanced scenarios you can use more of the URI scheme:

rabbitmq://username:password@servername:port/queuename

Think of this like a URL for an HTTP or gRPC endpoint.
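
As a rough illustration of how those parts break down, System.Uri can decompose such an address (this is just an illustration, not CSLA's actual parsing code):

  var uri = new Uri("rabbitmq://username:password@servername:5672/queuename?reply=replyqueuename");
  Console.WriteLine(uri.Host);         // servername
  Console.WriteLine(uri.Port);         // 5672
  Console.WriteLine(uri.UserInfo);     // username:password
  Console.WriteLine(uri.AbsolutePath); // /queuename
  Console.WriteLine(uri.Query);        // ?reply=replyqueuename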

Implementing a Client

On the client all that's needed is:

  1. Reference the Csla.Channels.RabbitMq NuGet package (CSLA v5.0.0-R19082201 or higher)
  2. Configure the data portal to use the new channel:
  CslaConfiguration.Configure().
    DataPortal().
      DefaultProxy(typeof(Csla.Channels.RabbitMq.RabbitMqProxy), "rabbitmq://localhost/rmqserver");

This configures the data portal to use the RabbitMQ channel, and to find the server using the provided URI.

Implementing the Server

Unlike HTTP and gRPC, where the server is probably hosted in ASP.NET Core, a RabbitMQ server is usually implemented as a console app. This is ideal for hosting in lightweight containers in Docker or Kubernetes, as there's no need for the overhead of ASP.NET.

  1. Create a console app (.NET Core 2.0 or higher)
  2. Create an instance of the data portal host
  3. Tell the data portal to start listening for requests

Here's a complete implementation:

using System;
using System.Threading.Tasks;

namespace rmqserver
{
  class Program
  {
    static async Task Main(string[] args)
    {
      Console.WriteLine("Start listener; ctl-c to exit");
      var host = new Csla.Channels.RabbitMq.RabbitMqPortal("rabbitmq://localhost/rmqserver");
      host.StartListening();

      // block here indefinitely so the console app keeps listening until the process is terminated
      await new Csla.Reflection.AsyncManualResetEvent().WaitAsync();
    }
  }
}

Shared Business Logic

Of course the centerpiece of CSLA .NET is the idea of shared business logic in a common assembly. So any solution would contain the client code as shown above, the server, and a .NET Standard 2.0 Class Library that contains all the business classes that encapsulate business logic.

Both the client and server projects must reference the business class library assembly, making the business logic available to both sets of code. The data portal takes care of the rest.

In that shared assembly you might have a simple type like this:

using Csla;
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;

namespace ClassLibrary1
{
  [Serializable]
  public class PersonEdit : BusinessBase<PersonEdit>
  {
    public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(nameof(Id));
    public int Id
    {
      get { return GetProperty(IdProperty); }
      set { SetProperty(IdProperty, value); }
    }

    public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(nameof(Name));
    [Required]
    public string Name
    {
      get { return GetProperty(NameProperty); }
      set { SetProperty(NameProperty, value); }
    }

    [Create]
    private void Create(int id)
    {
      using (BypassPropertyChecks)
        Id = id;
    }

    [Fetch]
    private async Task Fetch(int id)
    {
      // TODO: get object's data
    }

    [Insert]
    private async Task Insert()
    {
      // TODO: insert object's data
    }

    [Update]
    private async Task Update()
    {
      // TODO: update object's data
    }

    [DeleteSelf]
    private async Task DeleteSelf()
    {
      await Delete(ReadProperty(IdProperty));
    }

    [Delete]
    private async Task Delete(int id)
    {
      // TODO: delete object's data
    }
  }
}

The client can interact with this type via the data portal. For example:

  var obj = await DataPortal.CreateAsync<PersonEdit>(42);
  obj.Name = "Arnold";
  if (obj.IsSaveable)
  {
    await obj.SaveAndMergeAsync();
  }

And that's it. The data portal takes care of relaying that call to the server (in this case via RabbitMQ). The server creates an instance of PersonEdit and invokes the method marked with the Create attribute so the object can invoke a data access layer (DAL) to initialize itself or do whatever is necessary.

In CSLA 5 those create/fetch/insert/update/delete methods can all accept parameters that are provided via dependency injection, but that's a topic for another blog post. Keep in mind that DI is the appropriate way to gain access to the DAL, and that any interaction with databases is encapsulated within the DAL.
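
As a quick teaser, here's a hedged sketch of what such a method might look like. IPersonDal and its GetPersonAsync method are hypothetical DAL types; CSLA 5 marks injected parameters with the Inject attribute:

  [Fetch]
  private async Task Fetch(int id, [Inject] IPersonDal dal)
  {
    // dal is resolved and supplied by the data portal via DI
    var data = await dal.GetPersonAsync(id);
    using (BypassPropertyChecks)
    {
      Id = data.Id;
      Name = data.Name;
    }
  }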

Thursday, August 22, 2019 9:31:26 PM (Central Standard Time, UTC-06:00)

Wednesday, August 21, 2019

The new .NET Core 3 release includes support for the gRPC protocol. This is an efficient binary protocol for making network calls, and so is something that CSLA .NET should obviously support.

CSLA already has an extensible channel-based model for network communication via the data portal. Over the years there have been numerous channels, including:

  • .NET Remoting (obsolete)
  • asmx services (obsolete)
  • WCF (of limited value in modern .NET)
  • Http

I'm sure there have been others as well. The current recommended channel is via Http (using the HttpProxy type), as it best supports performance, routing, and various other features native to HTTP and to the data portal channel implementation.

CSLA .NET version 5.0.0 will include a new gRPC channel. Like all the other channels, this is a drop-in replacement for your existing channel.

⚠ This requires CSLA .NET version 5.0.0-R19082107 or higher

Client Configuration

On the client it requires a new NuGet package reference and a configuration change.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. On app startup, configure the data portal as shown here:
      CslaConfiguration.Configure().
        DataPortal().DefaultProxy(typeof(Csla.Channels.Grpc.GrpcProxy), "https://localhost:5001");

This configures the data portal to use the new GrpcProxy and provides the URL to the service endpoint. Obviously you need to provide a valid URL.

Server Configuration

On the server it requires a new NuGet package reference and a bit of code in Startup.cs to set up the service endpoint.

⚠ This requires an ASP.NET Core 3.0 project.

  1. Reference the new Csla.Channels.Grpc NuGet package
  2. In the ConfigureServices method you must configure gRPC: services.AddGrpc();
  3. In the Configure method add the data portal endpoint:
      app.UseRouting();

      app.UseEndpoints(endpoints =>
      {
        endpoints.MapGrpcService<Csla.Channels.Grpc.GrpcPortal>();
      });
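
Putting those pieces together, a minimal Startup class might look like this (a sketch; a real app will register its own additional services and middleware):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
  public void ConfigureServices(IServiceCollection services)
  {
    // enable gRPC services
    services.AddGrpc();
  }

  public void Configure(IApplicationBuilder app)
  {
    app.UseRouting();

    // expose the CSLA data portal as a gRPC endpoint
    app.UseEndpoints(endpoints =>
    {
      endpoints.MapGrpcService<Csla.Channels.Grpc.GrpcPortal>();
    });
  }
}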

General Notes

As usual, both client and server need to reference the same business library assembly, which should be a .NET Standard 2.0 library that references the Csla NuGet package. This assembly contains implementations of all your business domain classes based on the CSLA .NET base classes.

The gRPC data portal channel uses MobileFormatter to serialize and deserialize all object graphs, and so your business classes need to use modern CSLA coding conventions so they work with that serializer.

All the version and routing features added to the Http data portal channel in CSLA version 4.9.0 are also supported in this new gRPC channel, allowing it to take full advantage of container orchestration environments such as Kubernetes.

Also, as with the Http channel, the GrpcProxy and GrpcPortal types have virtual methods you can optionally override to implement compression on the data stream, and (on the client) to support advanced configuration scenarios when creating the underlying HttpClient and gRPC client objects.

Wednesday, August 21, 2019 1:37:26 PM (Central Standard Time, UTC-06:00)

Tuesday, June 25, 2019

Containers and Kubernetes (k8s) are useful for building and deploying distributed systems in general. This includes service-based architectures (SOA, microservices) as well as n-tier client/server endpoints.

I do think container-based runtimes and service-based architecture go hand-in-hand. However, a lot of the benefits of container-based runtimes apply just as effectively to a well-architected n-tier client/server application as well.

By “well architected” I mean an n-tier app that has:

  1. Good separation of concerns between interface, interface control, business, data access, and data storage layers
  2. “Chunky” communication between client and server
    1. A limited number of server endpoints - the n-tier services have been well-considered and are cohesive
    2. Effective use of things like the unit of work pattern to minimize calls over the network by bundling “multiple calls” into a single unit of work (single call)
  3. Efficient use of data transfer - no blind use of ORM tools where extraneous data flows over the network just because it “made things easier to implement”
    1. Focus on decoupling over reuse (two sides of the same coin, where reuse leads to coupling and coupling is extremely bad)

In such an n-tier app, the client is often quite smart (whether mobile, Windows, Mac, WebAssembly, or even TypeScript), and the overall solution is architected to enable this smart client to efficiently interact with n-tier endpoints (really a type of service) to leverage server-side behaviors.

These n-tier endpoints are not designed for use by other apps. They are not “open”. This is arguably a good thing, because it means they can (and should) use much more efficient binary serialization protocols, as compared to the abysmally inefficient JSON or XML protocols used by “open” services.

At the end of the day, the key is that nothing stops you from hosting an n-tier endpoint/service in a container. Well, nothing beyond what might stop you from hosting any code in a container.

In other words, if you build (or update) your n-tier endpoint code to follow cloud-native best practices (such as embracing 12factor design and avoiding the fallacies of distributed computing) - just like you must with microservice implementations - your endpoint code can take advantage of container-based runtimes very effectively.

Now a lot of folks tend to look at any n-tier app and think “monolith”. Which can be true, but doesn’t have to be true. People can create good or bad n-tier solutions, just like they can create good or bad microservice solutions. These architectures aren’t silver bullets.

Look at MVC - a great design pattern - unless you put your business logic in the controller. And LOTS of people write business logic in their controllers. Horrible! Totally defeats the purpose of the design pattern. But it is expedient, so people do it.

What I’m saying is that if you’ve done a good job designing your n-tier endpoints to be cohesive around business behaviors, and you can make them (at least mostly) 12factor-compliant, you can get container-based runtime benefits such as:

  1. Scaling/Elasticity - quickly spin up/down workers based on active load
  2. Functional grouping - have certain endpoints run on designated server nodes based on CPU, IO, or other workload requirements
  3. Manageability - the same management features that draw people to k8s for microservices are available for n-tier endpoints as well
  4. Resiliency - auto-fail over of k8s pods applies to n-tier endpoints just as effectively as microservices
  5. Infrastructure abstraction - k8s is basically the same (from a dev or code perspective) regardless of whether it is in your datacenter, or in Azure, or AWS

I’ll confess that I’m a little biased. I’ve spent many years talking about good n-tier client server architecture, and have over 22 years of experience maintaining the open source CSLA framework based on such an architecture.

The most recent versions of CSLA have some key features that allow folks to truly exploit container-based runtimes. Usually with little or no change to any existing code.

My key point is this: container-based runtimes offer fantastic benefits to organizations, for both service-based and n-tier client/server architectures.

Tuesday, June 25, 2019 1:47:26 PM (Central Standard Time, UTC-06:00)

Friday, April 5, 2019

How can a .NET developer remain relevant in the industry?

Over my career I've noticed that all technologies require developers to work to stay relevant. Sometimes that means updating your skills within the technology, sometimes it means shifting to a whole new technology (which is much harder).

Fortunately for .NET developers, the .NET platform is advancing rapidly, keeping up with new software models around containers, cloud-native, and cross-platform client development.

First it is important to recognize that the .NET Framework is not the same as .NET Core. The .NET Framework is effectively now in maintenance mode, and all innovation is occurring in the open source .NET Core now and into the future. So step one to remaining relevant is to understand .NET Core (and the closely related .NET Standard).

Effectively, you should plan for .NET Framework 4.8 to become stable and essentially unchanging for decades to come, while all new features and capabilities are built into .NET Core.

Right now, you should be working to have as much of your code as possible target .NET Standard 2.0, because that makes your code compatible with .NET Framework, .NET Core, and mono. See Migrating from .NET to .NET Standard.

Second, if you are a client-side developer (Windows Forms, WPF, Xamarin) you need to watch .NET Core 3, which is slated to support Windows Forms and WPF. This will require migration of existing apps, but is a way forward for Windows client developers. Xamarin is a cross-platform client technology, and in this space you should learn Xamarin.Forms because it lets you write a single app that can run on iOS, Android, Mac, Linux desktop, and Windows.

Third, if you are a client-side developer (web or smart client) you should be watching WebAssembly. Right now .NET has experimental support for WebAssembly via the open source mono runtime, along with a wasm UI framework called Blazor and another XAML-based UI framework called the Uno Platform. This is an evolving space, but in my opinion WebAssembly has great promise and is something I’m watching closely.

Fourth, if you are a server-side developer it is important to understand the major industry trends around containers and container orchestration. Kubernetes, Microsoft Azure, Amazon AWS, and others all have support for containers, and containers are rapidly becoming the de facto deployment model for server-side code. Fortunately .NET Core and ASP.NET both have very good support for cloud-native development.

Finally, that’s all from a technology perspective. In a more general sense, regardless of technology, modern developers need good written and verbal communication skills, the ability to understand business and user requirements, at least a basic understanding of agile workflows, and a good understanding of devops.

I follow my own guidance, by the way. CSLA .NET 4.10 (the current version) targets .NET Standard 2.0, supports WebAssembly, and has a bunch of cool new features to help you leverage container-based cloud-native server environments.

Friday, April 5, 2019 10:19:15 AM (Central Standard Time, UTC-06:00)

Wednesday, December 12, 2018

CSLA .NET 4.9.0 includes some exciting new features, primarily focused on server-side capabilities for containers, .NET Core, and ASP.NET Core. There are a number of data portal enhancements, as well as powerful new configuration options based around the .NET Core configuration subsystem.

Data Portal Enhancements

Many of the enhancements are focused on enabling powerful cloud/container based scenarios with the data portal. Most notably, the data portal now supports two different types of routing to enable the use of multiple server instances. It also has the option to track recent activity with the intent of supporting a basic health dashboard, the ability to force a client "offline", and support for the .NET Core IoC/DI model.

Client-side data portal routing

One common request is that it would be nice if a subset of data portal requests could be routed to some server endpoint other than the default. This idea supports a number of scenarios, including security, logging, multi-tenant server farms, and I'm sure many others.

When you configure the client-side data portal on a web server, mobile device, PC, Mac, or Linux desktop you can now provide mappings so specific business domain types (root object types) cause the data portal to call server-side endpoints other than the default. This can be done on a per-type basis, or by applying the DataPortalServerResource attribute to a root domain type.

As the client app starts up, the data portal must be configured with mappings from domain types or DataPortalServerResource attribute values to specific server endpoint definitions. For example:

      // set up default data portal for most types
      ApplicationContext.DataPortalProxy = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName;
      ApplicationContext.DataPortalUrlString = "https://default.example.com/dataportal";

      // add mapping for DataPortalServerResource attribute
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        (int)ServerResources.SpecializedAlgorithm,
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://specialized.example.com/dataportal" });

      // add mapping for specific business type
      Csla.DataPortalClient.DataPortalProxyFactory.AddDescriptor(
        typeof(MyImportantType),
        new Csla.DataPortalClient.DataPortalProxyDescriptor
        { ProxyTypeName = typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, DataPortalUrl = "https://important.example.com/dataportal" });

Notice that the server-side data portal endpoint is defined not just by a URL, but also by the proxy type, so you can use different network transport technologies to access different data portal servers.
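
For the attribute-based mapping, the DataPortalServerResource attribute goes on the root domain type. Here's a sketch, assuming the attribute constructor accepts the numeric resource id and that ServerResources is an enum defined in your own code:

  [Serializable]
  [DataPortalServerResource((int)ServerResources.SpecializedAlgorithm)]
  public class SpecializedTask : BusinessBase<SpecializedTask>
  {
    // requests for this type route to the endpoint mapped for SpecializedAlgorithm
  }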

Server-side data portal routing

One of the most common issues we all face when hosting server-side endpoints is dealing with client-side app versioning and having server endpoints that support clients running older software. This is particularly important when deploying mobile apps through the Apple/Google/Microsoft stores, when you can't directly control the pace of rollout for client apps.

Additionally, in a clustered server-side environment such as Kubernetes or Cloud Foundry it is likely that you'll want to organize your containers based on various characteristics, such as CPU consumption, memory requirements, or the need for specialized hardware.

The HTTP data portal (Csla.Server.Hosts.HttpPortalController) now supports a solution for both scenarios via server-side data portal routing. It is now possible to set up a public gateway server that hosts a data portal endpoint, which in turn routes all calls to other data portal endpoints within your server environment. Typically these other endpoints are not public, forcing all client apps to route all data portal requests through that gateway router endpoint.

Note that using technologies like Kubernetes it is entirely realistic for the gateway router to be composed of multiple container instances to provide scaling and fault tolerance.

To minimize overhead, the data portal router does not deserialize the inbound data stream from the client. Instead, it uses a header value to determine how to route the request to a worker node where the data stream is deserialized and processed. That header value is created by the client-side data portal by combining two concepts.

First is an optional application version value. If this value is set by the client-side app it is then passed through to the server-side data portal where the value is used to route requests to a worker node running a corresponding version of the server-side app. This allows you to leave versionA nodes running while also running versionB nodes, and once all client devices have migrated off versionA then you can shut down those versionA nodes with no disruption to your users.

Second is a DataPortalServerRoutingTag attribute you can apply to root domain types. This attribute provides a routing tag value that is also passed to the server-side data portal. The intent of this attribute is that you can specify that certain root domain types have characteristics that should be used for server-side routing. For example, you might tag domain types that you know will require a lot of CPU or memory on the server so they are routed to Kubernetes pods that are hosted on fast or large physical servers (Kubernetes nodes), while all other requests go to pods running on normal sized Kubernetes nodes.

Another example would be where the server-side processing requires access to specialized hardware. In that case your Kubernetes nodes would have a taint indicating that they have that hardware, and the DataPortalServerRoutingTag is used to tell the data portal router to send requests for that root domain type only to server-side data portal instances running on those nodes.
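
For example, a root domain type with heavy server-side resource needs might be tagged like this (a sketch; the tag string is whatever value you've mapped in your gateway router):

  [Serializable]
  [DataPortalServerRoutingTag("highcpu")]
  public class RenderJob : BusinessBase<RenderJob>
  {
    // requests for this type carry the "highcpu" routing tag to the router
  }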

What makes this data portal feature a little complex is that the data portal router uses both the version and routing tag to route calls. This is obviously required to support both scenarios at the same time, but it does mean you need to define your routes such that they consider both version and routing tag elements.

The client-side data portal passes a routing tag to the server with the following format:

  • null - neither routing tag nor version are specified
  • routingTag-versionTag - both routing tag and version are specified
  • routingTag- - a routing tag is specified, but no version
  • -versionTag - a version is specified, but no routing tag

Your server-side data portal gateway router is configured on startup with code like this (in the data portal controller class):

  public class DataPortalController : HttpPortalController
  {
    public DataPortalController()
    {
      RoutingTagUrls.Add("routingTag-versionTag", "https://serviceName:36123/api/DataPortal");
    }
  }

You can specify as many routing tag/version keys as necessary to handle routing to the data portal endpoints running in your server environment.

If a key can't be found, or if the target URL is localhost, then the request will be processed directly on the data portal router instance. Of course you may choose not to deploy any of your business DLLs to the router instance, in which case all such requests will fail, returning an exception to the client.

Offline mode

A common feature request, especially for mobile devices and laptops, is for the application to force itself into "offline mode" where all data portal calls are directed only to the local data portal. The client-side data portal now supports this via the Csla.ApplicationContext.IsOffline property.

If you set IsOffline to true the client-side data portal will immediately route all data portal requests to the local app rather than a remote server. It is as if all your data portal methods had the RunLocal attribute applied.

Clearly you need to build your DataPortal_XYZ methods to behave properly when they are running on the client (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Client) vs the server (Csla.ApplicationContext.ExecutionLocation == ExecutionLocations.Server). Typically you'd have your code delegate to a DAL provider specific to the client or server based on execution location using the encapsulated invocation model as described in Using CSLA 4: Data Access.
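
A minimal sketch of how those pieces fit together (the DAL routing shown is illustrative):

  // force all data portal calls to run locally, as if RunLocal were applied
  Csla.ApplicationContext.IsOffline = true;

  // then, inside a DataPortal_XYZ method, delegate by execution location
  if (Csla.ApplicationContext.ExecutionLocation ==
      Csla.ApplicationContext.ExecutionLocations.Client)
  {
    // resolve and call the client-side (offline) DAL provider
  }
  else
  {
    // resolve and call the server-side DAL provider
  }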

Save and Merge

CSLA has included the GraphMerger type for some time now. This type allows you to merge the results of a SaveAsync call back into the existing domain object graph. The BusinessBase and BusinessListBase types now implement a simpler SaveAndMergeAsync method you can use to save a root domain object such that the results are automatically merged into the object graph.
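
A quick comparison of the two approaches (SaveAsync returns a new object graph, while SaveAndMergeAsync merges the result back into the existing one):

  // before: adopt the new graph returned by SaveAsync, rebinding any UI references
  obj = await obj.SaveAsync();

  // now: the save result is merged into the existing graph, so existing
  // (e.g. data-bound) references to obj remain valid
  await obj.SaveAndMergeAsync();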

DataPortalFactory in ASP.NET Core dependency injection

ASP.NET Core includes a dependency injection model based on IServiceCollection, with dependencies defined in the ConfigureServices method of the Startup.cs file. CSLA .NET now supports that scenario via an AddCsla extension method for IServiceCollection.

    public void ConfigureServices(IServiceCollection services)
    {
      // ...
      services.AddCsla();
    }

The only service that is defined is a new IDataPortalFactory, implemented by DataPortalFactory. The factory is a singleton, and is used to create instances of the data portal for use in a page. For example, in a Razor Page you might have code like this:

  public class EditModel : PageModel
  {
    private IDataPortal<Customer> dataPortal;

    public EditModel(IDataPortalFactory factory)
    {
      dataPortal = factory.GetPortal<Customer>();
    }
    
    // ...
  }
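
From there, handler methods added to that same EditModel class can use the data portal instance to retrieve objects. A sketch, assuming IDataPortal<T> exposes FetchAsync and that Customer has a corresponding fetch operation accepting an int:

    public Customer Customer { get; private set; }

    public async Task OnGetAsync(int id)
    {
      // fetch via the injected data portal instance
      Customer = await dataPortal.FetchAsync(id);
    }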

Configuration Enhancements

An important design pattern for success in both DevOps and container-based server deployment is to store config in the environment. The .NET Core configuration subsystem supports this pattern, and now CSLA .NET does as well. Not only that, but we now support an optional fluent configuration model if you want to set configuration in code (common in Xamarin apps for example), and integration with the .NET Core and ASP.NET Core configuration models.

.NET Core configuration subsystem integration

The .NET Core configuration subsystem supports modern configuration concepts, while still supporting legacy concepts like config files. It is also extensible, allowing providers to load configuration values from many different sources.

The base type used in .NET Core configuration is a ConfigurationBuilder. An instance of this type is provided to you by ASP.NET Core, or you can create your own instance for use in console apps. CSLA .NET provides an extension method to integrate into this configuration model. For example, in a console app you might write code like this:

  var config = new ConfigurationBuilder()
     .AddJsonFile("appsettings.coresettings.test.json")
     .Build()
     .ConfigureCsla();

In ASP.NET Core you'd simply use the ConfigurationBuilder instance already available to invoke the ConfigureCsla method.

The result of the ConfigureCsla method is that the configuration values are loaded for use by CSLA .NET based on any configuration sources defined by the ConfigurationBuilder. In many cases that includes a config file, environment values, and command line parameters. It might also include secrets loaded from Azure, Docker, or whatever environment is hosting your code. That's all outside the scope of CSLA itself - those are features of .NET Core. The point is that the ConfigureCsla method takes the results of the Build method and maps the config values for use by CSLA .NET.

Note that nothing we've done here has any impact on .NET Framework code. All your existing .NET 4.x code will continue to function against web.config/app.config files.

Fluent configuration API

Although it is best to follow the 12 Factor guidance around config, there are times when you just need to set config values through code. This is particularly true in Xamarin and WebAssembly scenarios, where the configuration systems are weak or non-existent.

In the past you've been able to directly set property values on Csla.ApplicationContext, various data portal types, and elsewhere in CSLA. That's confusing and inconsistent, and is a source of bugs in many people's code.

There's a new fluent API for configuration you can use when you need to set configuration through code. The entry point for this API is Csla.Configuration.CslaConfiguration, which implements Csla.Configuration.ICslaConfiguration. The model uses extension methods for sub-configuration, extending the ICslaConfiguration type.

Basic configuration is done like this:

      new Csla.Configuration.CslaConfiguration()
        .PropertyChangedMode(Csla.ApplicationContext.PropertyChangedModes.Windows)
        .PropertyInfoFactory("a,b")
        .RuleSet("abc")
        .UseReflectionFallback(false)
        .SettingsChanged();

Data portal configuration can be chained like this:

      new Csla.Configuration.CslaConfiguration()
        .DataPortal().AuthenticationType("custom")
        .DataPortal().AutoCloneOnUpdate(false)
        .DataPortal().ActivatorType(typeof(TestActivator).AssemblyQualifiedName)
        .DataPortal().ProxyFactoryType("abc")
        .DataPortal().DataPortalReturnObjectOnException(true)
        .DataPortal().DefaultProxy(typeof(HttpProxy), "https://myserver.com/api/DataPortal")
        .DataPortal().ExceptionInspectorType("abc")
        .DataPortal().FactoryLoaderType("abc")
        .DataPortal().InterceptorType("abc")
        .DataPortal().ServerAuthorizationProviderType("abc")
        .SettingsChanged();

Client-side data portal descriptors for client-side data portal routing (see above) can be configured like this:

      new CslaConfiguration()
        .DataPortal().ProxyDescriptors(new List<Tuple<string, string, string>>
        {
          Tuple.Create(typeof(TestType).AssemblyQualifiedName, 
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName, 
                       "https://example.com/test"),
          Tuple.Create(((int)ServerResources.SpecializedAlgorithm).ToString(),
                       typeof(Csla.DataPortalClient.HttpProxy).AssemblyQualifiedName,
                       "https://example.com/test")
        });

Configuration of Csla.Data options:

      new CslaConfiguration()
        .Data().DefaultTransactionIsolationLevel(Csla.TransactionIsolationLevel.RepeatableRead)
        .Data().DefaultTransactionTimeoutInSeconds(123);

Security options:

      new CslaConfiguration()
        .Security().PrincipalCacheMaxCacheSize(123);

You should consider using this new fluent API instead of the older, fragmented configuration options whenever you need to set config values. CSLA .NET itself continues to rely on Csla.ApplicationContext to read config values for use throughout the code.

Conclusion

CSLA .NET version 4.9 is very exciting for server-side developers, or people using a remote data portal where versioning or advanced routing scenarios are important. Between the data portal enhancements and more modern configuration techniques available in this version, your life should be improved as a CSLA .NET developer.

Wednesday, December 12, 2018 1:38:51 PM (Central Standard Time, UTC-06:00)