.net, Code Quality, Code Tips, Practices

Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate types that have meaning to your domain as well-known types. Rather than relying on technical or primitive types, you formalize these as types you use throughout your codebase. This gives you a few benefits, such as readability, and potentially also compile-time type checking and errors. It also provides you with a way to adhere to the principle of least surprise. In addition, the encapsulation is a great opportunity to deal with cross-cutting concerns, for instance values that are subject to compliance requirements such as GDPR, or security concerns where you want to encrypt values while in motion.

Throughout the years at the different places where we've used these, we've evolved this from a very simple implementation to a more elaborate one. Both of these implementations aim at making it easy to deal with equality, and the latter also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily, with C# 9 we got records, which let us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}
With records we don't have to deal with equality nor comparability; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);

A full implementation can be found here – an implementation using it here.

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within a concrete implementation you could also provide the other way, going from the underlying type to the specific one. This has some benefits, but also some downsides: if you want the compiler to catch errors, implicit conversions work against you, since all your ConceptAs<Guid> implementations would effectively become interchangeable.
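As a minimal sketch, such an operator added to the base record could look like this:

public record ConceptAs<T>
{
    // ... constructor and Value property as above ...

    // Lets a ConceptAs<T> be used anywhere the underlying type is expected
    public static implicit operator T(ConceptAs<T> concept) => concept.Value;
}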


When going across the wire with JSON, for instance, you probably don't want the full construct with a { "value": <actual value> } – nor do you when storing it in a database. In C#, most serializers support the notion of converting to and from the target type. For Newtonsoft.Json these are called JsonConverters – an example can be found here; for MongoDB, as an example, you can find an example of a serializer here.
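As a minimal sketch – assuming the SocialSecurityNumber record from above – a Newtonsoft.Json converter could look along these lines:

using System;
using Newtonsoft.Json;

public class SocialSecurityNumberConverter : JsonConverter<SocialSecurityNumber>
{
    // Serialize only the underlying value, not the { "value": ... } construct
    public override void WriteJson(JsonWriter writer, SocialSecurityNumber value, JsonSerializer serializer)
        => writer.WriteValue(value.Value);

    // Recreate the concept from the raw underlying value
    public override SocialSecurityNumber ReadJson(JsonReader reader, Type objectType,
        SocialSecurityNumber existingValue, bool hasExistingValue, JsonSerializer serializer)
        => new((string)reader.Value);
}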


I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you would then avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);


Legacy C# gRPC package + M1

I recently upgraded to a new MacBook with the M1 CPU in it. In one of the projects I'm working on at work we have a third-party dependency that is still using the legacy gRPC package. Since we've started using .NET 6, which fully supports the M1 processor, you get a runtime error both when running natively on the M1 and through the Rosetta translation. This is because the package does not include the osx-arm64 version of the native .dylib needed for it to work. I decided to package up a NuGet package that includes only this binary, so that you can add the regular package plus this new one on top and make it work on M1 CPUs. You can find the package here and the repository here.


In addition to your Grpc package reference, just add a reference to this package in your .csproj file:

  <PackageReference Include="Grpc" Version="2.39.1" />
  <PackageReference Include="Contrib.Grpc.Core.M1" Version="2.39.1" />

If you're leveraging another package that implicitly pulls this in, you might need to explicitly include a package reference to the Grpc package anyway – provided your library works with the version this package is built for.


Although this package now exists, the future of gRPC and C# lies with a new implementation that does not need a native library; read more here. Anyone building anything new should go for the new package and hopefully over time all existing solutions will be migrated as well.

.net, C#

C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like the _ViewImports.cshtml file in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should apply to all of them, you'd have to copy them around by default.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let's say you have a file called GlobalUsings.cs at the root of your solution looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you’d simply open the .csproj file of the project and add the following:

  <ItemGroup>
    <Compile Include="../GlobalUsings.cs" /> <!-- Assuming your file sits one level up -->
  </ItemGroup>

This will then include this reusable file for the compiler.

.net, C#

Machine Specifications – .NET Core

I've been working on a particular project, mostly in the design phase – but leading up to implementation I quickly hit a snag; my favorite framework and tools for running my tests – or rather, specs – are not in the .NET Core space yet. After kicking and screaming for the most part, I decided to do something about it and contribute something back, after having been using the excellent Machine.Specifications Specification by Example framework and accompanying tools for years.

The codebase was not really able to build directly on top of .NET Core – and I started looking at forking it and just #ifdefing my way through the changes. This would be the normal way of contributing in the open-source space. Unfortunately, it quickly got out of hand – there are simply too many differences for me to work fast enough and achieve my own goals right now. So, although not a decision I took lightly, I decided to just copy across into a completely new repository the things needed to be able to run it on .NET Core. It now lives here.

Since .NET Core is still in flux, and after the announcement of DNX being killed off and replaced by a new .NET CLI tool called dotnet, I decided for now to just do the simplest thing possible and not implement a command or a test framework extension. This will likely change as the tools mature over time. I'm focused on my own feedback loop right now.

Anywho, the conclusion I've come to is that for now my test/spec projects will be regular console apps with a single line of code executing all the specs in the assembly. This is far from ideal, but a starting point so I can carry on. The next logical step is to look at improving this experience with something that runs the specs affected by a change, either in the unit under test or the spec itself. If you want a living example, please have a look here.

Basically – the needed bits are NuGet packages that you can find here, here and here.
The first package does include a reference to the others. But right now the tooling is too flaky to predict whether or not IntelliSense actually works using things like OmniSharp with VS Code or similar, so I have been explicitly taking all three dependencies.

The next thing you need is to have a Program with a Main method that actually runs it by calling the AssemblyRunner that I’ve put in for now.
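A minimal sketch of such an entry point – assuming the AssemblyRunner exposes a Run method taking the assembly (the exact API may differ):

using System.Reflection;

public class Program
{
    public static void Main()
    {
        // Runs all specs found in this assembly, reporting to the console
        new AssemblyRunner().Run(typeof(Program).GetTypeInfo().Assembly);
    }
}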


Once you have this you can do a dotnet run and the output will be in the console.


.NET Core Version

An important thing to notice is that I've chosen to be right there on the bleeding edge of things, taking dependencies on packages and runtime versions from the MyGet feeds. The reason behind this is that some of the things I'm using only exist in the latest bits. Scott Hanselman has a great writeup on where we are today with .NET Core.


Well, I'm not yet knee-deep into this and not focusing my effort on this project. I'll be building what I need, but of course I'm totally open to anyone wanting to contribute. If I were to share my own vision of the natural next steps, I'd love the first one to be an auto-watching CLI tool that runs the appropriate tests according to the files being changed. I would not go all in and do a full analysis of call stacks to figure out what is changing, but rather take an approximation approach – similar to what we put in place for Forseti, our test runner project for JavaScript. The approximation was basically based on configurable conventions mapping a relationship between the systems under test and the tests – or specs, as I prefer to call them. After that I can see integration with VS Code – which is my favorite editor at the moment. Something similar to WallabyJS would be very cool.

.net, JavaScript, MVVM

Slashing away the hash bang in single-page applications in Bifrost

One of the things that has been discussed the last couple of years for single-page applications is how to deal with routing, since rendering and composition in those kinds of applications are dealt with on the client. Many have been pointing to using the hash (#) as a technique, since it makes it possible to change history in the browser without post-backs, even for browsers that do not have the new HTML5 history API. This works fine for applications and is very easy to respond to in the client, but the URLs become unfamiliar and look a bit weird for deep-linking. In order for search crawlers to crawl content properly, a specification exists that also states the inclusion of a ! (bang), yielding a #! (hash bang) as the separator for the specific route.

In Bifrost we have been working quite a bit the last six months to get a model that we believe in for single-page applications. Many models rely partially on the server for rendering, but compose the rendered parts in the client. With Bifrost we wanted a different model: we wanted everything to be based on regular static HTML files sitting on the server, without any rendering load on the server – just serve the files as-is, and compose the application from those files. One of the challenges we wanted to crack was to have regular URLs without any # or #!, even for HTML4 browsers – or at least in any anchors linking inside the app. For this to work, the server needs to deal with incoming requests for routes that have no meaning to any server code. Another motivation was to not have to do anything specific for any routes on the server, meaning that you wouldn't need to configure anything for new routes – everything should be done only once, on the client.

The problem

The nature of a single-page application is that it basically has a start page, and all requests should go to this page. This page represents the composition of the application; it has enough information and scripts on it to be able to render the remainder of the app based on any URL coming in. To accomplish this, the server must be able to take any URL coming in and pretty much ignore it – unless of course there is something configured or a file exists at the specific URL.


Our solution

Bifrost is for now built for .NET, and more specifically ASP.NET, so we had to dig into that platform to figure out a way to deal with this. What we discovered is a part of the request pipeline in ASP.NET that we could hook into to rewrite the path during a request. (The implementation can be found here.)
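A minimal sketch of the general idea – not Bifrost's actual implementation – using an IHttpModule that rewrites any request for a non-existing file to the single page:

using System.IO;
using System.Web;

// Rewrites any route the server doesn't know to the start page,
// so the client-side router can handle it
public class SinglePageModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;

            // Let actual files (scripts, styles, images) be served as-is
            if (!File.Exists(context.Server.MapPath(context.Request.Path)))
            {
                context.RewritePath("~/index.html");
            }
        };
    }

    public void Dispose() { }
}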

URL Flow


What about the client?

Another challenge is to deal with history without post-backs. The way we've built everything, you as a developer or web designer do not have to think about whether or not this is a Bifrost app or just a regular HTML app: you create your anchor tags as usual, no hash-bangs, nothing special – just have your full paths sit there. This also makes it work great with search engines; they will be able to crawl your content, get the proper deep-links and index you.

But we still need to hook into the browser so we don't do a post-back to the server for any URL changes. The way we've chosen to deal with this is to hook into the body and deal with it through event bubbling. Any click event occurring inside the document will be captured, and if an anchor tag happens to be the source, we pull out the URL and rewrite the history inside the browser. Since we're using Balupton's History.js, we get support for HTML4 browsers as well. History.js will use the hash in those scenarios where it can't rewrite the path, but your URLs do not have to change – it will just use it internally, and it will all be abstracted away.



.net, Bifrost, C#, CQRS, JavaScript, Patterns, Practices

CQRS in ASP.net MVC with Bifrost

If you're in the .NET space and you're doing web development, chances are you're on the ASP.NET stack, and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.NET MVC stack, and we quickly realised we needed to build something for the frontend part of the application to facilitate the underlying backend built around the CQRS principles. This post will talk a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product and the price of course and what not, but also a simple button saying “Add to cart” – basically you want to add the product to your shopping cart.


Sure enough, it is possible to solve this without anything special – you have your Model that represents the product with the details, price being a complex thing that we need to figure out depending on whether or not you have configured it to show VAT, and also whether you're part of a price list – but something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that we can, with a simple ASP.NET MVC form, actually get populated properly:
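For example, something like this (a hypothetical sketch; names are illustrative):

@using (Html.BeginForm("AddItemToCart", "Cart"))
{
    @* Field names match the command's properties, so the default
       model binder can bind the posted form to AddItemToCart *@
    @Html.Hidden("ProductId", Model.ProductId)
    <input type="text" name="Quantity" value="1" />
    <input type="submit" value="Add to cart" />
}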


A regular MvcForm with hidden input elements for the properties on the command that are not visible, and of course any input from the user as regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.NET MVC should be able to deserialize the FORM into a command.


Now here comes the real issue with the above-mentioned approach: validation. Validation is tied to the model; you can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is tied to one model, but the things you really want to validate are the commands. You want to validate before commands are executed, basically because after they are handled and events are published, the truth has been written and it's too late to validate anything coming back on any models.

So, how can one fix this? You could come up with an elaborate ModelBinder scheme that modifies model state and what not, but that seemed very complicated – at least we thought so, of course after trying it out. We came up with something we call a CommandForm. Basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, giving you a new model within the using scope with all the MVC goodies in a limited scope, including the ability to do client-side validation.

So now you get the following:
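Along these lines – a hypothetical sketch, as names and the exact Bifrost API may differ:

@using (var form = Html.BeginCommandForm<AddItemToCart>())
{
    @* The helpers inside this scope are bound to the command type *@
    @form.Html.HiddenFor(c => c.ProductId)
    @form.Html.TextBoxFor(c => c.Quantity)
    <input type="submit" value="Add to cart" />
}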


Now you get a form that contains a new HtmlHelper for the command type given in the generic parameter, and within the form you'll also find the Command, should you need to set values on it before you add a hidden field.

This now gives you a model context within a view that is isolated; you can post that form alone without having to think about the model defined for the view – which really should be a read-only model anyway.

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.


Another thing that we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, each just a part of the overall composition that makes up the larger scope. We defined a feature to contain all the artifacts that build it up: the view, controller, any JavaScript, any CSS files, images, everything. We isolate them by having it all close together in a folder or namespace for the tier you're working on. For the frontend we had a Features folder at the root of the ASP.NET MVC site, and within it every feature sat in its own folder with its respective artifacts. Moving down to the backend, we reflected the structure in every component; for instance, we had a component called Domain, and within it you'd find the same structure. This way all the developers know exactly where to go and do work; it just makes things a lot simpler.

Anyway, in order to accomplish this, one needs to do a couple of things. The first is to collapse the structure that the MVC template creates for your project, so that you don't have the Controllers, Views and Models folders but a Features folder, with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder by permitting things like JavaScript files sitting alongside the view files, so you need to add the following within the <system.web> tag in your Web.config file:
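A minimal sketch of such a configuration, using the standard handlers (the exact snippet may differ):

<httpHandlers>
  <!-- Serve scripts and styles that sit alongside the views -->
  <add path="*.js" verb="GET" type="System.Web.StaticFileHandler" />
  <add path="*.css" verb="GET" type="System.Web.StaticFileHandler" />
  <!-- Keep everything else in the Features folder blocked -->
  <add path="*" verb="*" type="System.Web.HttpNotFoundHandler" />
</httpHandlers>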


Then you need to relocate the view location formats for the view engines in ASP.NET MVC:
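A hypothetical sketch of the idea, here with the Razor view engine (the actual implementation is the one linked below):

using System.Web.Mvc;

// In Application_Start: make the view engine look in ~/Features instead of ~/Views
var engine = new RazorViewEngine
{
    ViewLocationFormats = new[] { "~/Features/{1}/{0}.cshtml", "~/Features/Shared/{0}.cshtml" },
    MasterLocationFormats = new[] { "~/Features/{1}/{0}.cshtml", "~/Features/Shared/{0}.cshtml" },
    PartialViewLocationFormats = new[] { "~/Features/{1}/{0}.cshtml", "~/Features/Shared/{0}.cshtml" }
};

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(engine);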


(Can be found here)

It will then find all your views in the Features folder, and you have your new structure. The only drawback, if you see it as one, is that tooling like Visual Studio's built-in “Add View” in the context menus stops functioning, but I would argue that the developer productivity gained through a proper structure means you really don't miss it that much. I guess you can get this back somehow with tools like ReSharper, but personally I didn't bother.


ASP.NET MVC provides a lot of goodness when it comes to doing things with great separation in the web space for .NET developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and keep the code clean. Sure, it's not perfect, but what is – it's way better than anything we'd seen. This is something we enjoyed quite a bit in our own little CQRS journey. We did try quite a few things; some of them worked out fine – like the CommandForm – and some didn't. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did however reach at a point: ASP.NET MVC and Bifrost with its interpretation of CQRS is a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.NET MVC, which is a focused frontend pipeline. So any security, validation and more is something we started building into Bifrost, and the need for ASP.NET MVC became less and less important. And when we started down the journey of creating single-page applications with HTML and JavaScript as the only things you need, you really don't need it at all. The connection between the client and server is then web requests and JSON, and you need something like WebApi or similar – in fact we even created our own simple thing in Bifrost to accommodate that. But all this is for another post.

The MVC part of Bifrost can be found here, and Bifrost's official page is under construction here and the source here.

.net, 3D, Balder, C#, Cloud, Community, JavaScript, Personal, Silverlight

GeekRider – the goal, technical perspective

As I briefly mentioned earlier, I am embarking on a project which is going to demand a lot from me physically, but also from a technical perspective. I have a lot of things on my plate: during daytime I'm 100% engaged with work at clients, and nighttime is when I have to squeeze in a lot of activities. For one, I have two kids that need my attention – and I have a golden rule of engaging with them from the time I get back from work till they're in bed. This leaves some 2-4 hours per day to do all the things I do. I therefore have to be smart with my time and make the most of it. Adding things to the schedule is hard, and if I add something, it in general must have a synergy with something already there.

In my schedule I have a couple of open-source projects that I focus a lot of my energy on: Balder, Bifrost and Forseti – so pretty much anything I put in must relate to these in some fashion. GeekRider arose concretely from this need for synergy. I need to focus more on physical exercise, and brought in GeekRider with the synergy of pushing development on the open-source projects I'm involved in forward. Balder will hopefully serve the purpose of 3D visualization, bringing forward a few features that I want in that project. As a general web platform, I could have gone for anything already out there, but I wanted to push forward features in Bifrost; I therefore decided to build the site from scratch on top of it, and also push into the cloud by hosting it on AppHarbor. Since the site will become very JavaScript intensive, and I pretty much get allergic reactions when I don't write tests or BDD-style specifications for my code, the last project will also get some love: Forseti. The reasoning behind that project is that most test runners out there have many moving parts in the form of dependencies to get up and running, and they're also very focused on running things in a browser. Forseti is aiming at something very different: a headless runner for JavaScript tests based on Env.js, by default not using any browsers to execute the tests/specs.

One of the goals for Bifrost is to make it easier for developers to create rich web-based applications, promoting good software development practices. Today, the RIA space is rapidly changing, for the most part moving away from plugin technologies such as Flash or Silverlight and focusing more on the open standards found in HTML, CSS and JavaScript/EcmaScript.

From a frontend development perspective, Bifrost is taking on this latter part. Traditionally, one would compose the resulting web page that is handed over to the client on the server. Multiple solutions exist for doing so; specifically in the .NET space, ASP.NET and its derivatives are the most popular ones. Rendering, as this is often referred to, adds an extra load onto the server – not only is the server responsible for dealing with the request from the user, whether it is getting data or performing an action, but it also has to transform the result into something the client can show. On top of all this, it has to deal with security. This is a very proven pattern, but in my opinion not the pattern we want to be using moving forward, and therefore Bifrost will focus on a different one. Sure, Bifrost will not only be compatible with, but also support out of the box, the traditional route – for now in an opinionated fashion by only supporting ASP.NET MVC. The technique that Bifrost will be focusing on is single-page applications, where you basically hand the “rendering” over to the client and let the client compose the page by swapping elements in and out at runtime. This is in fact nothing new; ever since AJAX became the big thing, we've pretty much been doing this – but only for parts at a time, and even letting parts of our page be swapped out for new versions rendered dynamically by the server.

Bifrost will have a composition technique that is based, like most things in the framework, on conventions. The focus will be on Features, and one can point to a feature simply by adding a <div/> tag and giving it the attribute data-feature="[name of feature]". Based on the configurable convention, Bifrost will find the necessary files representing the feature. Looking at the page from GeekRider as it is at the time of writing this post, we'll have the following.
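As a minimal illustration (the feature name here is hypothetical):

<!DOCTYPE html>
<html>
    <body>
        <!-- Bifrost finds the files for the "ride-statistics" feature
             by convention and composes them into this element -->
        <div data-feature="ride-statistics"></div>
    </body>
</html>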


So, back on track. Now that we have this, what is the next logical step? Up till now, Bifrost has been very focused on server-side rendering, sporting an extension for ASP.NET MVC and taking advantage of that stack. That is about to change – or should I say, the fact that it has been the only way to use Bifrost is about to change. A set of REST endpoints will be exposed from Bifrost, enabling any client to interact with the framework. From a web developer perspective, this is not good enough on its own; we're therefore working on bringing in a JavaScript library that will integrate nicely with all this.

In addition to the goals summarized thus far, I've also got another goal for me personally: I want to become more productive with tools other than what I'm used to. I recently bought a MacBook Air, an impressive piece of hardware – but it doesn't sport the same specs as my MacBook Pro or my iMac, so I've decided not to put any virtualization software on it to run Windows. This means I have to start using tools other than Microsoft's Visual Studio for my development. For .NET development I'm using MonoDevelop for now, and for general HTML, JavaScript and CSS development I'm using TextMate. My long-term goal is to be using TextMate for everything.

Summarizing, GeekRider will be the proof of concept for features added to Balder and Bifrost – driving forward new thoughts and ideas. I will try to blog about the progress as much as my schedule permits. This means I should keep myself from playing around or doing unnecessary stuff.


.net, 3D, Balder, C#, Silverlight

Balder – where is it, and where is it going?

In 2007 I started something called Balder, a 3D engine for the web using Silverlight. Back then, with Silverlight 1.1 Alpha and later 2.0, there really weren't that many options for making it especially feature-rich, nor fast. My first goal when I started this whole thing was basically to achieve 3D rendering on the web across multiple platforms (Windows + Mac, later Moonlight on Linux) and have a declarative programming model for 3D. This is still the motivation for Balder: to be cross-platform on the web and declarative in its nature. I've never had the idea of competing with the big engines out there, both free and commercial; this has partly been a research project for me to learn Silverlight properly from the ground up, and also to maintain a foot within the graphics programming industry, which I still hold dear. Even though my day-to-day job is something completely different these days, I've always had side projects that kept me somewhat close to what was happening in the games industry that I left behind some 10 years ago.

Back in 2007, with SL 1.1 and 2.0, you basically had to either build on top of the built-in primitives, which were super slow, or do magic like creating a realtime PNG encoder, drawing pixels manually in code and handing the generated PNG over to an Image control inside Silverlight.

With Silverlight 3 came the WriteableBitmap, which made it possible to skip the PNG step; Balder started to pick up performance, and features became a lot easier to create. This story continued through Silverlight 4, and Balder has received many a makeover of both its rendering pipeline and its internal architecture as I've learnt more and more about how to do things in the best way inside Silverlight. With the announcement of Silverlight 5 came along a low-level interface for drawing 3D utilizing the GPU sitting inside your graphics adapter. Finally Balder could shine with great performance and still maintain its declarative, Silverlighty way of doing things.

For SL5, that was the gold-plated story – the reality is quite different. Balder needed yet another architectural makeover in order to achieve hardware rendering with Silverlight. The main problem is that rendering now has to be done on a specific rendering event that sits on a completely different thread than the regular UI thread inside Silverlight. This poses quite a few problems when Balder is built to be declarative and have all the binding capabilities that Silverlight offers.

Most of the architectural change has been done – but it is far from finished. There are still some holes that need to be filled internally in Balder, basically to do with code smell. At one point in time, development of Balder went forward too fast, and code quality was suppressed in favor of the number of features per day that could be implemented. I know, I'm not proud, but it was the reality for a period when things needed to get done, and done fast. Another aspect of Balder development has been the lack of being able to properly test things with unit tests all the way. Quite a few times I've made the effort to retrofit tests without succeeding all the way. Silverlight is basically too hard to test if you want to be lightweight with your tests. Sure, there are things out there that mock out the runtime, and you can structure most of your code so that you don't have dependencies on the Silverlight runtime and just run the tests on the desktop framework. I've tried all the techniques out there but never been happy with the flow. This is an ongoing thing.

Are we there yet?
Phew… So, where are we at? Balder has a default branch which is still at the SL4 level where I left off over a year ago, and there is a parallel branch for the SL5 parts and all the refactorings and changes that had to be done to bring Balder to SL5. There are some bugs and quirks in it, and development is not going as fast as I would have hoped. There are a few reasons for that. One is the lack of tests and the absence of a good story for retrofitting them; I've started doing it with MSpec, writing things in a specification way instead, which made it a lot easier, but there is still a lot of work to be done there. The second reason things are going slowly is that Microsoft has yet to come up with a good cross-platform story for the 3D bits – in fact, thus far there is none; it only works on Windows for now. I've been probing them for an answer, but haven't gotten one yet. To be honest, this halted my motivation to move forward at the same pace for a while, until last week when I had a breakthrough in rendering that increases performance quite a bit, and also the quality of the software rendering, which will then serve as the fallback solution for Mac.

Deferred software rendering

In addition, the latest versions of the most used browsers on Mac support WebGL, which is something that can be used directly from Silverlight as well. After getting the software rendering fallback done, I'll start looking at the WebGL approach as a second fallback scenario for those browsers that support it.

Another aspect of the SL5 codebase is that Microsoft changed security between the beta released at MIX and the version released at Build, which led to in-browser shaders not being allowed to do loops. Balder supported 5 light sources with the pixel shaders I wrote for the beta version, but can only do one with the latest version of SL5. In order to fully support an arbitrary number of lights I'll have to move over to deferred rendering, and that has quite a few implications on how Balder works as well. But it's a job I've started, and I will make it the default rendering method across the board for now.

But all that being said, there are some critical issues that need to be solved – issues that really make it hard for me as a developer to get the velocity I want – so I will be going back and forth researching and bringing back the code quality I need to feel comfortable.


So, what about the tag-line of Balder and devices?
A couple of years ago I saw the opportunity to bring Balder onto more devices, and started optimizing the codebase and extension points to be able to bring it to things like Windows Phone 7, iOS, Android and others. This is something I'd still love to do, but will not focus on for quite a while. There is still too much work on the Silverlight side to justify focusing on devices just yet. There is a version of Balder for WP7, but not for the latest Mango, and to be honest, WP7 is in fact the hardest of these devices to get any proper rendering on, since one is not allowed to write shaders for that device.


As you might understand, Balder is still active – just not at the same pace as before, and I'm hoping to pick this up a little bit moving forward. There is some code rot that needs fixing, code quality that needs increasing and things like that holding back development a bit, but also technical challenges with the platform. Since I'm relentless about the cross-platform part and am not willing to budge on it, I will focus my energy on getting that working, and hopefully working well. If this were a commercial product, I would probably not go to the lengths I am to get the cross-platform parts working, but one has to remember that my original motivation for creating Balder in the first place was cross-platform – take that out, and I would personally lose the biggest motivation I've had for the project.

I've established a Trello board where people can see what I'm focusing on these days for Balder.


.net, C#

MVVM in Windows Forms

Not everyone has the luxury of jumping on new technologies; it can be for historical reasons, no time to learn, deployment reasons – or flat out that you simply don't like anything new 🙂.

Anyway, I was doing a code review on a project last week which was written in Windows Forms. The developers had some pain, Windows Forms being one of them – and a constant feeling that they were barking up the wrong tree when using it. It's been a couple of years since my last Windows Forms project, and I must admit I remember the feeling of constantly banging my head against the wall without getting the code quality I wanted when doing UI.

One of the things I saw that the project could benefit from was unit testing – to help them fight regression problems and get a constant quality indicator on the project. Having done a lot of WPF and Silverlight over the years and left Windows Forms behind, it just felt bad not having a proper UI pattern; I'm loving MVVM and the way it works – the simplicity of it. So I decided to do a spike: how could one implement MVVM in Windows Forms and get all the niceness of testability and full separation?

Keep in mind, I haven't done a full implementation of a framework or anything, just the basics to get started. The one thing I wanted was for the view (Form or UserControl) to be able to observe changes from a ViewModel. Windows Forms does in fact recognize the interfaces we are so familiar with from WPF and Silverlight (INotifyPropertyChanged and INotifyCollectionChanged), so it was just a matter of figuring out how to get the ViewModel easily accessible in the Windows Forms designer. I came up with something called ViewModelBindingSource, a very simple component that can be dragged onto the designer surface:

public class ViewModelBindingSource : BindingSource
{
    private object _viewModel;
    private Type _viewModelType;

    public ViewModelBindingSource(IContainer container)
        : base(container)
    {
    }

    [Category("MVVM"), DefaultValue((string)null), AttributeProvider(typeof(IListSource))]
    public Type ViewModel
    {
        get { return _viewModelType; }
        set
        {
            _viewModelType = value;
            _viewModel = Activator.CreateInstance(_viewModelType);
        }
    }

    public override object this[int index]
    {
        get { return _viewModel; }
        set { base[index] = value; }
    }
}
The next thing I wanted to accomplish was the ability to hook up Commands for Buttons and such, supporting the ICommand interface found in WPF. I came up with a CommandExtenderProvider:

[ProvideProperty("Command", typeof(Button))]
public class CommandExtenderProvider : Component, IExtenderProvider
{
    private readonly Dictionary<Button, string> _buttonsWithCommands;
    private readonly List<Button> _uninitializedButtons;
    private BindingSource _viewModelSource;

    public CommandExtenderProvider()
    {
        _buttonsWithCommands = new Dictionary<Button, string>();
        _uninitializedButtons = new List<Button>();
    }

    public bool CanExtend(object extendee)
    {
        return extendee is Button;
    }

    public BindingSource ViewModelSource
    {
        get { return _viewModelSource; }
        set
        {
            _viewModelSource = value;

            // Buttons added before the source was set can now be wired up
            InitializeUninitializedButtons();
        }
    }

    private void InitializeUninitializedButtons()
    {
        foreach (var button in _uninitializedButtons)
        {
            var commandName = _buttonsWithCommands[button];
            SetCommand(button, commandName);
        }
    }

    [Editor(typeof(CommandEditor), typeof(UITypeEditor))]
    public string GetCommand(Button button)
    {
        if (!_buttonsWithCommands.ContainsKey(button))
            return null;

        var command = _buttonsWithCommands[button];
        return command;
    }

    public void SetCommand(Button button, string commandName)
    {
        _buttonsWithCommands[button] = commandName;

        if (null == ViewModelSource || null == ViewModelSource.Current)
        {
            // Wire up later, once the ViewModelSource is available
            _uninitializedButtons.Add(button);
            return;
        }

        var property = ViewModelSource.Current.GetType().GetProperty(commandName);
        if (null != property)
        {
            var command = property.GetValue(ViewModelSource.Current, null) as ICommand;
            if (null != command)
            {
                button.Enabled = command.CanExecute(null);
                command.CanExecuteChanged += (s, e) => button.Enabled = command.CanExecute(null);
                button.Click += (s, e) =>
                {
                    if (command.CanExecute(null))
                        command.Execute(null);
                };
            }
        }
    }
}

In the designer, you’ll get two new components you can then drag onto your Windows Forms design surface:

Simply start by dragging in the ViewModelBindingSource and look at its Properties. Under the MVVM category, you'll find the property ViewModel – a dropdown where you can select the ViewModel.

Then drag onto the design surface a TextBox that you want databound to a property in the ViewModel, and look in the Properties window for the TextBox. There you'll find your ViewModelBindingSource; expanding it will show all the properties available in the ViewModel:

For commands, we need the CommandExtenderProvider. That too can be dragged onto the surface. It has a property for selecting the ViewModelBindingSource:

Now you can add a button to the surface and go to its properties; you'll find a property called Command, where you can select the command you want executed during Click.
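For completeness, a minimal sketch of a ViewModel that could back this setup – DelegateCommand here is a hypothetical ICommand helper, not part of the components above:

using System;
using System.ComponentModel;
using System.Windows.Input;

public class PersonViewModel : INotifyPropertyChanged
{
    string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Name"));
        }
    }

    // Bound to a Button through the CommandExtenderProvider
    public ICommand Save { get; private set; }

    public PersonViewModel()
    {
        Save = new DelegateCommand(() => { /* persist Name somewhere */ });
    }
}

// Minimal ICommand helper – illustrative only
public class DelegateCommand : ICommand
{
    readonly Action _action;

    public DelegateCommand(Action action) { _action = action; }

    public event EventHandler CanExecuteChanged { add { } remove { } }

    public bool CanExecute(object parameter) { return true; }

    public void Execute(object parameter) { _action(); }
}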

There are still a bunch of things to be desired in order for this to fully support all aspects of MVVM. But it proves it's possible to think in this way, even though your technology is not necessarily state of the art.

Hope anyone finds this interesting.

The source code can be found here.


.net, Silverlight

Silverlight 4 “bug” in release mode in Safari / Chrome on OSX

Yesterday I released a new version of the Sample Browser for Balder – but some of the samples just didn't work in Safari or Chrome on OSX, while working in Firefox and Opera, and in all browsers on Windows (including Safari and Chrome).

It turns out there is an accuracy issue with the System.Math.Log() method. It produces different results on OSX in Safari and Chrome when the code is compiled in release mode. When compiled in debug, it produces the correct result.

The method in Balder in question:

By casting the Log() result to float you get past the problem.
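A minimal illustration of the shape of the fix (not the actual Balder method):

public static class LogWorkaround
{
    // Math.Log() in release mode on OSX (Safari/Chrome) can give slightly
    // different results; forcing the result down to float evens this out
    public static float StableLog(double value)
    {
        return (float)System.Math.Log(value);
    }
}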