Rockford Lhotka

 Friday, April 7, 2017

Trying to figure out the core "meat" behind blockchain is really difficult. I'm going to try to strip away all the hype, and all the references to specific use cases, to get down to the actual technology at play here.

(This is my second blockchain post; in my previous post I compare blockchain today to XML in the year 2000.)

There are, of course, tons of articles about blockchain out there. Nearly all of them talk about the technology only in the context of specific use cases, most commonly bitcoin and distributed ledgers.

But blockchain the technology is neither a currency nor a distributed ledger: it is a tool used to implement those two types of application or use case.

There's precious little content out there about the technology involved, at least at the level of the content you can find for things like SQL Server, MongoDB, and other types of data store. And the thing is, blockchain is basically just a specialized type of data store that offers a defined set of behaviors. Different in the specifics from the behaviors of an RDBMS or a document database, but comparable at a conceptual level.

I suspect the lack of "consumable" technical information is because blockchain is very immature at the moment. Blockchain seems to be where RDBMS technology was around 1990. It exists, and uber-geeks are playing with it, and lots of business people see $$$ in their future with it. But it will take years for the technology to settle down and become tractable for mainstream DBA or app dev people.

Today what you can find are discussions about extremely low-level mathematics, cryptography, and computer science. Which is fine, that's also what you find if you dig deep enough into Oracle's database, SQL Server, and lots of other building-block technologies on top of which nearly everything we do is created.

In other words, only hard-core database geeks really care about how blockchain is implemented - just like only hard-core geeks really care about how an RDBMS is implemented. Obviously we need a small number of people to live and breathe that deep technology, so the rest of us can build cool stuff using that technology.

So what is blockchain? From what I can gather, it is this: a distributed, immutable, persistent, append-only linked list.

Breaking that down a bit:

  1. A linked list where each node contains data
  2. Immutable
    1. Each new node is cryptographically linked to the previous node
    2. The list and the data in each node are therefore immutable; tampering breaks the cryptography
  3. Append-only
    1. New nodes can be added to the list, though existing nodes can't be altered
  4. Persistent
    1. Hence it is a data store - the list and nodes of data are persisted
  5. Distributed
    1. Copies of the list exist on many physical devices/servers
    2. Failure of 1+ physical devices has no impact on the integrity of the data
    3. The physical devices form a type of networked cluster and work together
    4. New nodes are only appended to the list if some quorum of physical devices agree with the cryptography and validity of the node via consistent algorithms running on all devices
      1. This is why blockchain is often described as a "trusted third party", because the cluster is self-policing

Terminology-wise, where I say "node" you can read "block". And where I say "data" a lot of the literature uses the term "transaction" or "block of transactions". But from what I've been able to discover, the underlying technology at play here doesn't really care if each block contains "transactions" or other arbitrary blobs of data.
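Setting aside distribution and consensus for a moment, the immutable, append-only linked list described above can be sketched in a few lines. This is a hypothetical, single-machine illustration only; the names and structure here are mine, not drawn from any real blockchain implementation:

```python
import hashlib
import json

def block_hash(index, prev_hash, data):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "prev": prev_hash, "data": data},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, data):
    """Append-only: each new block links to the hash of the current tail."""
    index = len(chain)
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"index": index, "prev": prev_hash, "data": data,
                  "hash": block_hash(index, prev_hash, data)})

def is_valid(chain):
    """Tampering with any block's data breaks every later link."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["index"], prev_hash, block["data"]):
            return False
    return True

chain = []
append(chain, "genesis")
append(chain, {"from": "alice", "to": "bob", "amount": 5})
print(is_valid(chain))         # True
chain[1]["data"] = "tampered"  # alter an existing block...
print(is_valid(chain))         # False - the cryptography is broken
```

The distributed part is what the sketch leaves out: in a real system many machines each hold a copy of this list, and a quorum of them must agree on a new block's validity before it is appended anywhere.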

What we build on top of this technology then becomes the question. Thus far what we've seen are distributed ledgers for cryptocurrencies (e.g. bitcoin) and proof of concept ledgers for banking or other financial scenarios.

Maybe that's all this is good for - and if so it is clearly still very valuable. But I strongly suspect that, as a low level foundational technology, blockchain will ultimately be used for other things as well.

I'm also convinced that blockchain is almost at the top of the hype cycle and is about to take a big plunge as people figure out what it can and can't actually do.

Finally, I believe that blockchain, assuming there's money to be made in the technology, will become part of established platforms such as Azure, AWS, and GCP. And there might be some other niche players left, but the majority of the many, many blockchain tech providers out there today will ultimately be purchased by the big players or will just vanish.

Friday, April 7, 2017 10:02:32 AM (Central Standard Time, UTC-06:00)
 Wednesday, April 5, 2017

It seems to me that blockchain today is where XML was in its earliest days: a low-level building block on which people are constructing massive hopes and dreams, mentally bypassing the enormous amount of work necessary to get from there to those goals. What I mean by this is perhaps best illustrated by a work environment I was in just prior to XML coming on the scene.

The business was in the bio-chemical agriculture sector, so they dealt with all the major chemical manufacturers/providers in the world. They'd been part of an industry working group composed of these manufacturers and various competitors for many years at that point. The purpose of the working group was to develop a standard way of describing the products, components, parts, and other aspects of the various "products" being manufactured, purchased, resold, and applied to farm fields.

You'll note that I used the word "product" twice, and put it in quotes. This is because, after all those years, the working group never did figure out a common definition for the word "product".

One more relevant detail: everyone had agreed to transfer data via COBOL-defined file structures. I suppose that dated back to when they traded reels of magnetic tape, but it carried forward to transferring files via ftp, and subsequently the web.

Along comes XML, offering (to some) a miracle solution. Of course XML only solved the part of the problem that these people had already solved, which was how to devise a common data transfer language. Was XML better than COBOL headers? Probably. Did it solve the actual problem of what the word "product" meant? Not at all.

I think blockchain is in the same position today. It is a distributed, append-only, reliable database. It doesn't define what goes in the database, just that whatever you put in there can't be altered or removed. So in that regard it is a lot like XML, which defined an encoding structure for data, but didn't define any semantics around that data.

The concept of XML then, and blockchain today, is enough to inspire people's imaginations in some amazing ways.

Given amazing amounts of work (mostly not technical work either, but at a business level) over many, many years, XML became something useful via various XML languages (e.g. XAML). And a lot of the low-level technical benefits of XML have been superseded by JSON, but that's really kind of irrelevant, since all the hard work of devising standardized data definitions applies to JSON as well as XML.

I won't be at all surprised if blockchain follows the same general path. We're at the start of years and years of hard non-technical work to devise ways to use the concept of a distributed, append-only, reliable database. Along the way the underlying technology will become standardized and will merge into existing platforms like .NET, Java, AWS, Azure, etc. And I won't be surprised if some better technical solution is discovered along the way (as JSON was), but that better technical solution probably won't really matter, because the hard work is in figuring out the business-level models and data structures necessary to make use of this underlying database concept.

Wednesday, April 5, 2017 11:40:44 AM (Central Standard Time, UTC-06:00)
 Thursday, March 30, 2017

I really like the new VS 2017 tooling.

However, it has some real problems – it is far from stable for netstandard.

I have a netstandard class library project. Here are issues I’m facing.

  1. Every time I edit the compiler directives in the Build tab in project properties it adds YET ANOTHER set of compiler constants for RELEASE;NETSTANDARD1_6 – those duplicates add up fast!
  2. The output path acts really odd – it always insists on appending something like \netstandard1.5\ to the end of the output path, even if the path already ends with \netstandard1.5\ – in NO case can I get it to use the path I actually want! This should act like normal projects imo – not arbitrarily appending crap to my path!
  3. I have one netstandard class library referencing another via a project reference and this doesn’t seem to be working at all – none of my types from the first class library project seem available in the second
  4. The Add References dialog doesn’t show existing references to Shared Projects – the only way to know that the reference is already there is to look at the csproj file in a text editor
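For what it's worth, issue 1 is visible directly in the csproj file. This is a hypothetical sketch of what the accumulated constants can end up looking like in an SDK-style project file (the exact property group layout will vary from project to project):

```xml
<!-- Each edit in the Build tab appends another copy of the constants
     instead of replacing them: -->
<PropertyGroup Condition="'$(Configuration)'=='Release'">
  <DefineConstants>RELEASE;NETSTANDARD1_6;RELEASE;NETSTANDARD1_6</DefineConstants>
</PropertyGroup>
<!-- The manual workaround is to edit the csproj in a text editor and
     collapse the list back to a single set:
     <DefineConstants>RELEASE;NETSTANDARD1_6</DefineConstants> -->
```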

We’re going in (what I think) is a good direction with the tooling, but right now it is hard/impossible to integrate netstandard projects into a normal workflow because the tooling is pretty buggy.

Thursday, March 30, 2017 4:39:30 PM (Central Standard Time, UTC-06:00)
 Monday, February 20, 2017

According to this article, the smartphone boom is over, and the "next big thing" isn't really here yet. I would argue that's good. We need a breather to catch up with all the changes from the past several years.

In a sense, there have been two periods in my career that were really fun from the perspective of solving business problems (as opposed to other points that were equally fun from the perspective of learning new tech).

One was a couple years before and after 1990, when the minicomputer ecosystem was generally stable (HP 3000, Unix, VAX were common options). The other period was the six years when VB6 was dominant, while .NET was still nascent, VB had matured, and Windows was the de facto target for all client software.

In both those cases there was a 5-6 year window when the platforms were slow-changing, the dev tools were mature, and disruption was around the fringes, not in the mainstream. From a "learn new tech" perspective those were probably pretty boring periods of time. But from a "solve big business problems" perspective they were amazing periods of time, because everyone felt pretty comfortable using the platforms/tools at hand to actually do something useful for end users.

The iPad turned the world on its ear, and we're just now back to a point where it is clear that the platform is .NET/Java on the server and Angular on the client (regardless of the client OS). The server tooling has been fine for years, but I think we can see real stability for client development in the near future - whew!

So if the chaos we've been suffering through for the past several years (decade?) is coming to an end, and there's no clear "next big thing", then with any luck we'll find ourselves in a nice period of actual productivity for a little while. And I think that'd be refreshing.

Monday, February 20, 2017 1:29:39 PM (Central Standard Time, UTC-06:00)
 Wednesday, February 8, 2017

The concept of identity with Microsoft services is a mess, something I probably don't have to tell any Microsoft developer.

Some services can only be attached to a personal Microsoft Account (MSA), and other services can only be used from an AD account. For example, MSDN can only be directly associated with an MSA, while Office 365 can only be associated with an AD account. Some things, like VSTS, can be associated with either one depending on the scenario.

I used to have the following:

  • MSA with my MSDN associated
  • Magenic AD account
  • MSA with nothing attached (I created this long ago and forgot about it)

That was a total pain when I started using O365 and an AD-linked VSTS site with my AD account, because Microsoft couldn't automatically distinguish between my AD and MSA accounts; both used the same email address. As a result, every time I tried to log into one of these sites they'd ask if this was a personal or work/school account.

Fortunately you can rename an MSA to a different email address. I renamed my (essentially unused) account to a dummy email address so now I really just have two identities:

  • MSA with my MSDN associated
  • Magenic AD account

This way Microsoft automatically knows that when I use my AD login that it is a work/school account and I don't have to mess with that confusion.

There's still the issue of having MSDN attached to an MSA, and also needing to have some connection from my AD account to my MSDN subscription. This is required because we have VSTS sites associated with Magenic's AD, so I need to log in with my AD account, but still need to ensure VSTS knows I'm a valid MSDN user.

Here's info on how to link your work account to your MSDN/MSA account.

At the end of the day, if I'd never created that MSA account (many years ago) my life would have been much simpler to start. Fortunately the solution to that problem is to rename the MSA email to something else and remove the confusion between AD and MSA.

The MSDN linking makes sense, given that you need an MSA for MSDN, and many of us need corporate AD credentials for all our work sites.

Wednesday, February 8, 2017 12:51:54 PM (Central Standard Time, UTC-06:00)
 Thursday, January 5, 2017

I was reading an HBR article about Why Being Unpredictable Is a Bad Strategy and all I could think about was the Windows 8 debacle.

Leading up to the development and release of Windows 8, Microsoft switched from an open and predictable model to a very closed and secretive model. Sure, they'd waffled back and forth in the years prior to Windows 8, but it wasn't until that point in their history that they went "entirely dark" about something as important as Windows itself.

Personally I think they were copying Apple, because at that point in time Apple was ascendant with the iPad and Microsoft was worried. The thing is, a secretive model works for Apple because nobody relies on their long-term vision for stability. Their target is consumers, who like fun stuff and care little if things break every couple of years.

Microsoft's primary customer base are small, medium, and large enterprises who spend millions or billions on IT. They don't like fun, they like predictable roadmaps that minimize cost and risk. The last thing a business wants is a version of Windows that comes out of the blue and breaks all their software, or requires complete retraining of their entire user base.

Worse yet, Microsoft not only increased risk for all of its business customers with Windows 8, they totally cut off all avenues for feedback and improvement of the product until after it was released, when it was too late to address the numerous major issues with the new OS.

Fortunately Windows 10 has been a whole other story. Microsoft not only returned to their original open communication model, they've actually become more open than they've ever been in their history. And it shows, in that the business world now has a predictable roadmap, and Windows has never been so closely shaped by real-world customer feedback.

The result is that Windows 10 adoption is proceeding at a rapid pace, and Microsoft is (ever so slowly) rebuilding trust with its customers.

Thursday, January 5, 2017 4:04:55 PM (Central Standard Time, UTC-06:00)
 Tuesday, January 3, 2017

We're having a conversation on Magenic's internal forum where we're discussing the current JavaScript community reaction to all these frameworks. Some people in the industry are looking at the chaos of frameworks and libraries and deciding to just write everything in vanilla js - eschewing the use of external dependencies.

And I get that, I really do. However, I'm also an advocate of using frameworks - which shouldn't be surprising coming from the author of CSLA .NET.

Many years ago I spoke at a Java conference (they were trying to expand into the .NET space too).

At lunch I listened to a conversation between some other folks at the table; they were discussing the use of Spring (which was fairly new at the time).

Their conclusion was that although Spring did a ton of useful and powerful things, it was too big/complex, so they'd rather not use it and instead solve all those problems themselves.

I see the same thing all the time with CSLA .NET. People look at it and see something that is big and complex, and think "those problems can't be that hard to solve," so they end up rewriting (usually poorly) large parts of CSLA.

I say "usually poorly" because their job isn't to create a well-tested and reusable framework. Their job is to solve some business problem. So they solve some subset of each problem that Spring or CSLA solves in-depth, and then wonder why their resulting app is unreliable, or performs badly, or whatever.

As the author of a widely used OSS framework, my job is to create a framework that solves and abstracts away key problems that business developers would otherwise encounter. Because of this, I'm able to solve those problems in a broader and deeper way than a business developer, whose goal is to put as little effort as possible into solving the lower-level problem, because it is just a distraction from solving the actual business problem.

So yeah, I do understand that some of these frameworks, like Angular, Spring, CSLA .NET, etc., are complex, and they have their own learning curve. But they exist because they solve a bunch of lower-level, non-business-related problems that you would otherwise have to solve yourself. The time you spend solving those problems provides zero business value, and ultimately adds to the long-term maintenance cost of your resulting business software.

There's no perfect answer here, to be sure. But for my part, I like to think that the massive amounts of time and energy spent by framework authors to truly understand and solve those hard non-business problems is time well spent, allowing business developers to stay more focused on solving the problems they are actually paid to address.

Tuesday, January 3, 2017 10:53:41 AM (Central Standard Time, UTC-06:00)
 Wednesday, December 28, 2016

In a previous blog post I related a coding standards horror story from early in my career. A couple commenters asked for part 2 of the story, mostly to see how my boss, Mark, dealt with the chaos we found in the company who acquired us.

There are two things I think are fortunate that relate to the story.

First, they bought our company because they wanted our software and because they wanted Mark. It is quite possible that nobody else in the world combined his understanding of the vertical industry with his software dev chops, so he had a lot of personal value.

Second, before the acquisition I'd been tasked with writing tooling to enable globalization support for our software. Keep in mind that this was VT terminal style software, and all the human readable text shown anywhere on the screen came from text literals or strings generated by our code. The concept of a resx file like we have in Windows didn't (to our knowledge) exist, and certainly wasn't used in our code. Coming right out of university, the concept of lexical parsing and building compilers was fresh in my mind, so my solution was to write a relatively simplistic parser that found all the text in the code and externalized it into what today we'd call a resource file.
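The core idea behind that parser is simple to sketch. Here's a loose, hypothetical illustration in Python (the original predated Python and worked on VT-era source, so every name here is invented): find the string literals, swap each one for a resource lookup, and collect the extracted text into a resource table:

```python
import re

# Naive literal matcher: no escape sequences, no multi-line strings.
STRING_RE = re.compile(r'"([^"]*)"')

def externalize(source_lines):
    """Replace each string literal with a resource lookup and collect
    the extracted text into a resource table (what today we'd call a
    resource file)."""
    resources = {}
    rewritten = []
    for line in source_lines:
        def replace(match):
            key = f"STR_{len(resources):04d}"
            resources[key] = match.group(1)
            return f'resource("{key}")'
        rewritten.append(STRING_RE.sub(replace, line))
    return rewritten, resources

code = ['print("Enter customer name:")', 'label = "Total due"']
new_code, res = externalize(code)
# new_code[0] == 'print(resource("STR_0000"))'
# res == {"STR_0000": "Enter customer name:", "STR_0001": "Total due"}
```

A real version needs a proper lexer to handle escapes, comments, and multi-line constructs, which is exactly where a university compiler design course comes in handy.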

That project was one of the most fun things I've probably ever done. And one of the few times in my career where a university compiler design course directly applied to anything in the real world.

Because Mark was so well regarded by the new company, he ended up in charge of the entire new development team. As such, he had the authority to impose his coding standards on the whole group, including the team of chaos-embracing developers. Not that they were happy, but this was the late 1980s and jobs weren't plentiful; my recollection is that they grumbled about it, and about the fact that it was a "damn Yankee" imposing his will on the righteous people of The South. But they went along with the change.

However, that still left massive amounts of pre-existing code that was essentially unreadable and unmaintainable. To resolve that, Mark took my parser as a base and wrote tools (I don't remember if it was one tool, or one per coding "style") that automatically reformatted all that existing code into Mark's style. That was followed by a ton of manual work to fix edge cases and retest the code to make sure it worked. In total I think this process took around 2 months, certainly not longer.

I wasn't directly involved in any of that fix-up process, as I had been assigned to do the biggest project yet in my young career: building support for an entire new product line in a related vertical to the focus of our original software. Talk about a rush for someone just a year out of university!

Wednesday, December 28, 2016 2:05:51 PM (Central Standard Time, UTC-06:00)