.net, Code Quality, Code Tips, Practices

Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate types that have meaning to your domain as well-known types. Rather than relying on technical types or even primitive types, you formalize these as types you use throughout your codebase. This gives you a few benefits, such as readability, and potentially also compile-time type checking and errors. It also gives you a way to adhere to the principle of least surprise. It's also a great opportunity to use the encapsulation to deal with cross-cutting concerns, for instance values that are subject to compliance such as GDPR, or security concerns where you want to encrypt values while in motion etc.

Throughout the years at the different places where we've used these, we've evolved the implementation from a very simple one to a more elaborate one. Both implementations aim at making it easy to deal with equality, and the latter also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily, with C# 9 we got records, which let us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}

With a record we don't have to deal with equality nor comparison; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);

A full implementation can be found here – an implementation using it here.

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within a concrete implementation you could also provide the other way, going from the underlying type to the specific concept. This has some benefits, but also a downside: if you provide conversions in both directions everywhere, all your ConceptAs&lt;Guid&gt; implementations become interchangeable with a raw Guid, and the compiler can no longer catch it when you pass the wrong one.
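As a sketch, both directions could look like this, building on the record from above:

public record ConceptAs&lt;T&gt;
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }

    // From the concept to the underlying type
    public static implicit operator T(ConceptAs&lt;T&gt; concept) => concept.Value;
}

// Going the other way has to live on the concrete type
public record SocialSecurityNumber(string value) : ConceptAs&lt;string&gt;(value)
{
    public static implicit operator SocialSecurityNumber(string value) => new(value);
}

Many keep only the to-primitive conversion, for exactly the reason above.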

Serialization

When going across the wire with JSON, for instance, or when storing it in a database, you probably don't want the full construct with a { value: &lt;actual value&gt; }. In C#, most serializers support the notion of converters to and from the target type. For Newtonsoft.JSON these are called JsonConverter – an example can be found here; for MongoDB, as an example, you can find an example of a serializer here.
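To illustrate the idea, a minimal Newtonsoft.JSON converter could look something like this – a sketch, not the linked implementation:

using System;
using Newtonsoft.Json;

// Serializes any ConceptAs&lt;T&gt; as its underlying value, and reconstructs
// the concrete concept type when deserializing
public class ConceptAsJsonConverter&lt;T&gt; : JsonConverter
{
    public override bool CanConvert(Type objectType) =>
        typeof(ConceptAs&lt;T&gt;).IsAssignableFrom(objectType);

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) =>
        serializer.Serialize(writer, ((ConceptAs&lt;T&gt;)value).Value);

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var value = serializer.Deserialize&lt;T&gt;(reader);
        return Activator.CreateInstance(objectType, value);
    }
}

With this, decorating SocialSecurityNumber with [JsonConverter(typeof(ConceptAsJsonConverter&lt;string&gt;))] would make it serialize as a plain string.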

Summary

I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you would then avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);

Code Tips, Practices

Specifications in xUnit

TL;DR

You can find a full implementation with sample here.

Testing

I wrote my first unit test in 1996. Back then we didn't have much tooling and basically just had executables that ran automated test batteries, but it wasn't until Dan North introduced the concept of Behavior-Driven Development in 2006 that it truly clicked into place for me. Writing tests – or rather specifications that specify the behavior of a part of the system or a unit – made much more sense to me. With Machine.Specifications (MSpec for short) it became easier and more concise to express your specifications, as you can see from this post comparing an NUnit approach with MSpec.

The biggest problem MSpec had, and still has IMO, is its lack of adoption and community. This results in a lack of contributors giving it the proper TLC it deserves, which ultimately leads to a lack of a good, consistent tooling experience. The latter has been a problem ever since it was introduced, and throughout the years the integrated experience in code editors or IDEs has been lacking, or buggy at best. Sure, running it from the terminal has always worked – but to me that stops me in my tracks, as I'm a sucker for feedback loops and love being in the flow.

xUnit FTW

This is where xUnit comes in. With a broader adoption and community, the tooling experience across platforms, editors and IDEs is much more consistent.

I set out to get the best of both worlds and wanted to see if I could get close to the MSpec conciseness and still get the tooling love. Before I got sucked into the not-invented-here syndrome, I researched whether there were already solutions out there. I found a few posts on the topic and found the Observation sample in the xUnit samples repo to be the most interesting one. But I couldn't get it to work with the tooling experience in my current setup (.NET 6 preview + VSCode on my Mac).

From this I set out to create a thin wrapper that you can find as a Gist here. The Gist contains a base class that enables the expressive features of MSpec, a similar wrapper for testing exceptions, and also extension methods mimicking the Should*() extension methods that MSpec provides.

By example

Let's take the example from the MSpec readme:

class When_authenticating_an_admin_user
{
    static SecurityService subject;
    static UserToken user_token;

    Establish context = () => 
        subject = new SecurityService();

    Because of = () =>
        user_token = subject.Authenticate("username", "password");

    It should_indicate_the_users_role = () =>
        user_token.Role.ShouldEqual(Roles.Admin);

    It should_have_a_unique_session_id = () =>
        user_token.SessionId.ShouldNotBeNull();
}

With my solution we can transform this quite easily, maintaining structure, flow and conciseness, taking full advantage of C# expression-bodied members (lambdas):

class When_authenticating_an_admin_user : Specification
{
    SecurityService subject;
    UserToken user_token;

    void Establish() =>
        subject = new SecurityService();

    void Because() =>
        user_token = subject.Authenticate("username", "password");

    [Fact] void should_indicate_the_users_role() =>
        user_token.Role.ShouldEqual(Roles.Admin);

    [Fact] void should_have_a_unique_session_id() =>
        user_token.SessionId.ShouldNotBeNull();
}

Since this is pretty much just standard xUnit, you can leverage all the features and attributes.

Catching exceptions

With the Gist you'll find a type called Catch. Its purpose is to provide a way to capture exceptions from method calls, so you can assert whether the exception occurred or not. Below is an example of its usage, which also shows one of the extension methods provided in the Gist – ShouldBeOfExactType&lt;&gt;().

class When_authenticating_a_null_user : Specification
{
    SecurityService subject;
    Exception result;

    void Establish() =>
        subject = new SecurityService();

    void Because() =>
        result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}
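For reference, the essence of Catch is small enough to show here – a sketch; the Gist version may differ in details:

using System;

public static class Catch
{
    // Runs the action and returns the exception it threw, or null if none was thrown
    public static Exception Exception(Action action)
    {
        try
        {
            action();
            return null;
        }
        catch (Exception ex)
        {
            return ex;
        }
    }
}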

Contexts

With this approach one ends up being very specific about the behaviors of a system/unit. This leads to multiple classes specifying different aspects of the same behavior in different contexts, or different behaviors of the system/unit. To avoid having to do the setup and teardown within each of these classes, I like to reuse contexts by leveraging inheritance. In addition, I tend to put the reused contexts in a folder/namespace called given, yielding a more readable result.

Following the previous examples, we now have two specifications, both requiring a context where no user is authenticated. By adding a file in the given folder of this unit, and adding a namespace segment of given as well, we can encapsulate the context as follows:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
        subject = new SecurityService();
}

From this we can simplify our specifications by removing the establish part:

class When_authenticating_a_null_user : given.no_user_authenticated
{
    Exception result;

    void Because() =>
        result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}

The Gist supports multiple levels of inheritance recursively and will run all the lifecycle methods such as Establish from the lowest level in the hierarchy chain and up the hierarchy (e.g. no_user_authenticated -> when_authenticating_a_null_user).
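To illustrate how that can work – a minimal sketch of such a base class, relying on xUnit creating a new instance per test (so Establish/Because run before each [Fact], and Destroy after). The actual Gist may differ in details:

using System;
using System.Collections.Generic;
using System.Reflection;

public abstract class Specification : IDisposable
{
    protected Specification()
    {
        InvokeAll("Establish", baseFirst: true);
        InvokeAll("Because", baseFirst: true);
    }

    public void Dispose() => InvokeAll("Destroy", baseFirst: false);

    void InvokeAll(string methodName, bool baseFirst)
    {
        // Collect the inheritance chain, excluding Specification itself
        var chain = new List&lt;Type&gt;();
        for (var type = GetType(); type != null && type != typeof(Specification); type = type.BaseType)
            chain.Add(type);
        if (baseFirst) chain.Reverse();

        // Invoke the private lifecycle method declared at each level, if any
        foreach (var type in chain)
        {
            var method = type.GetMethod(methodName,
                BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.DeclaredOnly);
            method?.Invoke(this, null);
        }
    }
}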

Teardown

In addition to Establish, there is its counterpart: Destroy. This is where you'd typically clean up anything needing to be cleaned up – typically global state that was mutated. Take our context for instance, and assume the SecurityService implements IDisposable:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
        subject = new SecurityService();

    void Destroy() => subject.Dispose();
}

Added benefit

One of the problems with the MSpec approach is that it's all based on statics, since it is using delegates as "keywords". Some of the runners have problems with this and async models, causing havoc and non-deterministic test results. Since xUnit is instance based, this problem goes away and every instance of a specification runs in isolation.

Summary

This is probably just yet another solution to this, and I've probably overlooked implementations out there – if that's the case, please leave me a comment; I would love to not have to maintain this myself 🙂. It has helped me get a tighter feedback loop, as I can now run or debug tests in the context of where my cursor is in VSCode with a keyboard shortcut, and see the result for that specification only. My biggest hope for the future is that we get a tooling experience in VSCode similar to what Wallaby is doing for JS/TS testing. Windows devs using full Visual Studio also have the live unit testing feature. With .NET 6 and the hot reload feature I'm very optimistic about tooling going in this direction, so we can shave the feedback loop down even more.

Practices

MCTS: Microsoft Silverlight 4 Development – Short book review

I've had the pleasure of getting early hands-on access to the MCTS certification guide for Silverlight 4, and figured I'd do a short review of it.


The purpose of the book is to give you a guide to what you are expected to know to become a certified Silverlight developer. Personally I haven't done the certification, so I can't say whether it is sufficient or not, but I would imagine that the certification itself has been used as a guide for what is needed.

Silverlight 4 is a pretty large subject on its own, and quite ambitious to cover in just one book while at the same time providing guidance towards a certification. I feel that this book really accomplishes this in a good manner: it is straight to the point, but at the same time doesn't leave out any explanations. I even picked up some basic knowledge I was not aware of myself.

Being a guide for certification, it does just that, so it's not going to go outside of that promise – which probably makes it feel a little lightweight on real-world scenarios, and it stays true to the original Microsoft messaging. It mentions MVVM to some extent, and it could probably have delved into that subject a bit more, in my opinion – though I totally understand why it isn't brought up further. Being very focused on code quality, testability and such, I feel that it is important to get this message straight, even though it might not be 100% relevant to a certification.

All in all I found the book to be as close to a page-turner as these kinds of books can be. I learned some basics myself and brushed up on my own knowledge of Silverlight while reading it, and I'm pretty confident it will serve as a good guide for a developer wanting to certify themselves as a Silverlight developer.

.net, Bifrost, C#, CQRS, JavaScript, Patterns, Practices

CQRS in ASP.net MVC with Bifrost

If you're in the .net space and you're doing web development, chances are you're on the ASP.net stack, and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.net MVC stack, and quickly realized we needed to build something for the frontend part of the application to facilitate the underlying backend built around the CQRS principles. This post talks a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product and the price of course and what not, but also a simple button saying "Add to cart" – basically you want to add the product to your shopping cart.

[Diagram: the "Add to cart" flow]

Sure enough, it is possible to solve this without anything special. You have your Model that represents the product with the details – price being a complex thing we need to figure out depending on whether or not you have configured it to show VAT, and also whether you're part of a price list – but something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that we can actually get populated properly with a simple ASP.net MVC form:

[Screenshot: an ASP.net MVC form posting the AddItemToCart command]
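The original screenshot is lost; a plausible reconstruction, assuming an AddItemToCart command with ProductId and Quantity properties (the names are guesses):

@using (Html.BeginForm("AddItemToCart", "Cart"))
{
    @Html.Hidden("ProductId", Model.ProductId)
    @Html.Hidden("Quantity", 1)
    &lt;input type="submit" value="Add to cart" /&gt;
}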

A regular MvcForm with hidden input elements for the properties on the command that are not visible; any input from the user goes in regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.net MVC is able to deserialize the FORM into a command.

Validation

Now comes the real issue with the above mentioned approach: validation. Validation is tied into the model. You can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is bound to one model, while the things you really want to validate are the commands. You want to validate before commands are executed, basically because after they are handled and events are published, the truth has been written and it's too late to validate anything coming back on any models. So, how can one fix this? You could come up with an elaborate ModelBinder scheme that modifies model state and what not, but that seemed very complicated – at least we thought so, after trying it out. Instead we came up with something we call a CommandForm. Basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, giving you a new model within the using scope, with all the MVC goodies in that limited scope – including the ability to do client-side validation.

So now you get the following:

[Screenshot: the BeginCommandForm version of the same form]
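Again the screenshot is lost; based on the description below, the shape was roughly this (the member names are my guesses, not necessarily Bifrost's actual API):

@using (var form = Html.BeginCommandForm&lt;AddItemToCart&gt;())
{
    @* form.Command is available here for presetting values *@
    @form.Html.HiddenFor(c => c.ProductId)
    &lt;input type="submit" value="Add to cart" /&gt;
}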

You now get a form that contains a new HtmlHelper for the command type given in the first generic parameter, and within the form you'll also find the Command itself, if you need to set values on it before you add a hidden field.

This gives you an isolated model context within the view; you can post that form alone without having to think about the model defined for the view – which really should be a read-only model anyway.

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.

Features

Another thing we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, each just part of the overall composition. We defined a feature to contain all the artefacts that make up the feature: the view, controller, any javascript, any CSS files, images, everything. We isolate them by keeping it all close together in a folder or namespace for the tier you're working on. For the frontend we had a Features folder at the root of the ASP.net MVC site, and within it every feature sat in its own folder with its respective artefacts. Moving down to the backend, we reflected the structure in every component; for instance we had a component called Domain, and within it you'd find the same structure. This way all the developers would know exactly where to go and do work – it just makes things a lot simpler. In order to accomplish this, one needs to do a couple of things. The first is to collapse the structure that the MVC templates create for your project, so that you don't have the Controllers, Views and Models folders, but a Features folder with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder, permitting things like javascript files to sit alongside the view files, so you need to add the following within the &lt;system.web&gt; tag in your Web.config file:

[Screenshot: Web.config handler registration]
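The screenshot with the exact snippet is gone; the usual way of doing this is to register the static file handler for the content types you want served, along these lines (an approximation, not necessarily the original configuration):

&lt;system.web&gt;
  &lt;httpHandlers&gt;
    &lt;add path="*.js" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="true" /&gt;
    &lt;add path="*.css" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="true" /&gt;
  &lt;/httpHandlers&gt;
&lt;/system.web&gt;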

Then you need to relocate the view and master location formats for the view engines in ASP.net MVC:

[Screenshot: view engine location format configuration]

(Can be found here)
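The screenshot is gone here as well; relocating the locations typically amounts to replacing the view engine's location formats, roughly like this (a sketch assuming the Razor view engine):

public class FeatureViewEngine : RazorViewEngine
{
    public FeatureViewEngine()
    {
        // {1} is the controller/feature name, {0} is the view name
        ViewLocationFormats = new[]
        {
            "~/Features/{1}/{0}.cshtml",
            "~/Features/Shared/{0}.cshtml"
        };
        MasterLocationFormats = ViewLocationFormats;
        PartialViewLocationFormats = ViewLocationFormats;
    }
}

// In Application_Start:
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewEngine());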

It will then find all your views in the Features folder, and you now have the new structure. The only drawback, if you see it as one, is that tooling like Visual Studio's built-in "Add View" in the context menus stops functioning, but I would argue that the developer productivity gained through a proper structure means you really don't miss it that much. I guess you can get this back somehow with tools like ReSharper, but personally I didn't bother.

Conclusion

ASP.net MVC provides a lot of goodness when it comes to doing things with great separation in the Web space for .net developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and make the code clean. Sure, it's not perfect, but what is – it's way better than anything we'd seen. This is something we enjoyed quite a bit in our own little CQRS journey. We did try quite a few things; some of them worked out fine – like the CommandForm – and some didn't. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did however reach at a point: ASP.net MVC and Bifrost, with its interpretation of CQRS, is a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.net MVC, which is a focused frontend pipeline. So security, validation and more were things we started building into Bifrost, and ASP.net MVC became less and less important. And when we started down the journey of creating Single Page Applications, with HTML and JavaScript as the only things you need, you really don't need it at all. The connection between the client and server is then Web requests and JSON, and you need something like WebApi or similar – in fact we created our own simple thing in Bifrost to accommodate that. But all this is for another post.

The MVC part of Bifrost can be found here, and Bifrost's official page is under construction here and the source here.

C#, CQRS, Patterns, Practices

CQRS : The awakening

Ever have that feeling where all of a sudden you completely get something, like an awakening? You might have been doing something you thought you were doing right, but all of a sudden you get this eureka moment that not only tells you you've been doing things wrong, but also raises your awareness to a whole new level. This is what we experienced on the first project where we applied CQRS. In fact we had quite a few of these moments; this post is an attempt to describe what we went through.

The background

Before we can start, I think it is important to give a bit of background on what we were used to doing. Like most people in the development community, we tried following the right people on Twitter, read blogs, read books and in general just tried to keep up the pace and learn new things. Some of us had, over the years, really fallen in love with the concepts of TDD, BDD, clean code, agile, domain-driven design, the S.O.L.I.D principles and what not. We tried to write our software according to the principles we loved. We were pretty convinced we were doing a great job at it as well. Even though we felt at times there was something wrong with a few things, we carried on.

The way we used to do things was to basically have a relational database sitting at the core, with pretty much just tables and relations and the odd view every now and then. On top of that we would model objects that represented those tables and relationships, add more business logic to these objects, and then slam a stamp on them and call them our domain model. This domain model would only have one representation of every type, so that we didn't have to repeat ourselves. From this we modeled repositories that enabled us to get the domain objects, change them, update them and every now and then delete them; good old CRUD operations. We would even throw in some services to explicitly model functionality that was outside the responsibility of the repository, to capture business logic that was more involved. From my experience working on different projects and what I've seen doing consultancy, this is not way off from what people tend to do. Of course there is more to it: you have your factories, your managers and what not, and you might even have different representations of your domain model for the view, where you need to map back and forth between the different layers. Sure. More involved, but you get the picture.

Relationships

Often you find yourself writing these relationships between things, and it is just so logical that there is a certain relation between two entities – they belong together in a symbiosis somehow. Take for instance a shopping cart: it is just sitting there screaming for content in the form of some items, there is no other way around it. We just get a craving for a model similar to this:

[Diagram: Cart with a collection of CartItems]

Looks innocent enough and really makes sense. You might even throw in some read-only properties that give you convenience for counting the number of items, summarizing all the items and so forth. And then you realize – hang on, that CartItem is not referring to any product.

[Diagram: Cart, CartItems and an associated Product]
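The diagrams haven't survived; reconstructed from the surrounding text, the model looked roughly like this:

using System;
using System.Collections.Generic;
using System.Linq;

public class Cart
{
    public List&lt;CartItem&gt; Items { get; set; }

    // Convenience – but forces every CartItem to be fetched
    public int Quantity => Items.Sum(i => i.Quantity);
}

public class CartItem
{
    public Product Product { get; set; }   // the One-To-One the text mentions
    public int Quantity { get; set; }
}

public class Product
{
    public string Name { get; set; }
}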

So now we have first of all a One-To-Many relationship, then a One-To-One relationship. Whenever we want to just check how many items we have in our cart, we have to pull in this entire thing. Wait, you might be saying now: with a modern ORM we can specify how it is supposed to fetch things, on demand – typically what is referred to as lazy loading. Sure, we could do that for Product, and probably also for the CartItem, but the second you ask for something like quantity it would still need to fetch all the CartItems. This adds quite a bit of complexity to your solution. All of a sudden you have a situation where your application is governed by how the objects are configured for the ORM and how they should be fetched. Potentially you could end up with 3 round-trips to the database if the model is used in a particular part of your Cart visualization that is showing the product names: once for getting the Cart from the database, once for getting the cart items it is enumerating, and once for getting the products. What happens if we throw price into the mix? The product can't have a price directly, because there might be discounts relevant for the customer – so yet another object, just to display something that seems very simple and straightforward.

Now you might be saying: why don't we just flatten this all out in a database view? My response to that: exactly. That is exactly what we should be doing – but not in the database. This is at the core of the benefits CQRS can give you: the ability to flatten things out, especially if you buy into the EventSourcing story of storing state as a series of fine-grained events. Sure, you don't need EventSourcing, but it allows for greater flexibility when you want to fully take advantage of the ability to flatten things out after something has already happened. More on this in a bit.

So, views and flattening things out – that was our first moment of clarity. We really didn't need any relationships; in fact we didn't even need a Cart object. We needed something called a CartSummary – a very different beast, and so much simpler in its nature.

[Diagram: the flattened CartSummary object]
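The diagram is gone; in essence it was a flat, read-only object along these lines (properties illustrative):

using System;

public class CartSummary
{
    public Guid CartId { get; set; }
    public int NumberOfItems { get; set; }
    public decimal Total { get; set; }
}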

So what happened here?

Instead of having to go to the database and get all those details every time we wanted the summary, we had this simple object. Whenever the user performed an action that said add something to the cart, or remove something from the cart, we would have 2 subscribers for these events. One subscriber would be in charge of updating the underlying CartItem, and another one would be in charge of updating the CartSummary by selecting into the CartItems and setting the values accordingly. Instead of taking the hit every time you need the detail, we now take the hit only when actions are performed. My claim is that most applications out there have a ratio between reads and executes that is probably around 9 to 1, so why haven't we been optimizing for this scenario before?
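A sketch of the two subscribers being described – the event and member names are made up for illustration:

using System;

public class ItemAddedToCart
{
    public Guid CartId { get; set; }
    public Guid ProductId { get; set; }
    public int Quantity { get; set; }
}

public class CartItemsView
{
    public void Handle(ItemAddedToCart @event)
    {
        // Insert or update the CartItem row for (CartId, ProductId)
    }
}

public class CartSummaryView
{
    public void Handle(ItemAddedToCart @event)
    {
        // Select into the CartItems for the cart and update
        // NumberOfItems and Total on the CartSummary accordingly
    }
}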

Anyways, clearly the objects became simpler and more focused – all of a sudden our objects were doing one thing, and one thing only. We were empowered with the ability to really apply the single responsibility principle to our objects.

I mentioned EventSourcing: a technique for storing the events sequentially as they happen, so that we can replay them at any given point in time. This is a very powerful tool. Imagine the flattening we just did, and let's say you want to introduce another object to the mix and want the historic state to affect that object. With event sourcing it is then just a matter of replaying those events for the event subscribers dealing with the new object, and all of a sudden you have your new optimized state. An example of that, in the cart context, could be something like a realtime report showing all the carts historically aggregated; you could have objects representing only living carts – and more. The best part: these objects are lightweight, and you start seeing possibilities for storing them in very different manners.

Concepts

One of the things we also started realizing was that, as a consequence, we had a lot of IDs hanging around, either integers representing identity, or GUIDs. The properties on our Commands and Events didn't describe themselves very well: a standard value type sitting there didn't say what it was, it was basically just an integer or a GUID. So we started modeling these things, which we ended up calling Concepts. These were cross-cutting, and something we kept in its own project that could be reused all through; it turned out to be the one thing that was shared between the different parts. The types we introduced were basic reusable things, like StoreId, representing a Store, but also more complex things like a PhoneNumber instead of a raw string. EMailAddress is another example of just that. All of a sudden it was so much easier to model all aspects of the system, and we got ourselves a vocabulary – one of the key elements of domain-driven design. You couldn't go wrong. Another benefit we got from this, purely technically, was that validation was all of a sudden falling into place as the cross-cutting concern it is. Alongside some of these concepts we put validators. We used FluentValidation for this, as it let us put validation outside of the thing we were validating, giving us great flexibility in how we applied validation – a whole concept for a different post that I'm hoping to get to: discovery mechanisms based on configurable conventions and hierarchies. Anyhow, with the concepts and validation of concepts, we got DRY in what we consider a very good way.
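To make it concrete – a sketch of a concept and its validator using FluentValidation, in the spirit of the ConceptAs record from the first post (types and rules illustrative):

using FluentValidation;

public record EMailAddress(string value) : ConceptAs&lt;string&gt;(value);

// Validation lives outside the thing being validated
public class EMailAddressValidator : AbstractValidator&lt;EMailAddress&gt;
{
    public EMailAddressValidator()
    {
        RuleFor(e => e.Value).NotEmpty().EmailAddress();
    }
}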

LINQ madness

Once you learn something and you really like it, it is hard to change how you do things. This was something we ran into quite a bit, but one incident that comes to mind was specifically tied to LINQ. We had our Cart and CartItems and just wanted to model the CartSummary, but wanted to solve it through querying. Basically we wanted to specify for an object like that how it could aggregate things, totally disregarding that our platform could at this point flatten things out when things changed, without having to aggregate every time. We had a full day of bending things around LINQ; we wanted to add some business rules to some of the objects we used in the view to change the outcome. To do this, we wanted to specify things using the specification pattern and turn it into LINQ expressions we could execute against the datasource. Sure, it might have seemed a good idea if you didn't have CQRS in mind. But we did have CQRS, and we wanted to be pure. At this point I remember having a feeling that we were just creating a really fancy ORM – not really doing anything differently, rather just massively overkilling our existing way of doing things. This was the day we discovered that it is not really about querying – the Q in CQRS is not about complex ad-hoc queries. We went back and changed our code so that we flattened things when events were raised. We could then just access everything by keys, and that would be it. This concept can go all the way for just about everything: as long as you have a key, you can go get things. This realization led to us not calling things queries – we changed it to view, lacking a better name. But at least it was closer to what things were; we weren't going to execute complex queries, but get things by a key.

GUID is the key 

In order for CQRS to be applied, you need to start thinking very differently about how your objects get their identification assigned. In our opinion, it's all about being able to generate the Id already on the client. By doing that, things get so much easier. All of a sudden the client knows the identity of the thing it is creating, meaning that it can already at that point – even though the thing might not be ready due to eventual consistency – use the Id for other purposes related to it. Basically, we can make a few assumptions in the client and create better user experiences. We used a GUID throughout the system for everything that identified items.

Data, what data…

One of the things we're taught when working with the more classic approaches explained earlier is to be very data focused. We model the data, and then we create, update and delete it. We're taught to work with state. But sit down with a user and ask her to describe her needs in the application you're building for her: unless she is a power user tainted by developers, she will state her needs as behaviors. How often have you heard "I need a button here, so that…" – and we immediately stop paying attention, because we're so annoyed that the user couldn't speak in more abstract terms than concretely about the button, and we only pick up the stuff that comes after "so that", which is the state bit. Creating software is not about building state; it's about exposing behaviors to the user that might end up as state. This is a very core principle in CQRS: Commands. You model your commands to expose the behavior that is needed, and you also capture the intent. With a command you represent the What and potentially the Why – you put it in the name of the Command: ChangeAddressBecauseOfMove – now we have the What and the Why.
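For illustration, such a command could be as simple as this (property names made up):

using System;

// The name carries both the What and the Why
public class ChangeAddressBecauseOfMove
{
    public Guid PersonId { get; set; }
    public string NewStreet { get; set; }
    public string NewCity { get; set; }
}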

This is something we realized more and more throughout the project – it really is all about the behaviors. Once we got that, it was so much easier picking up what the users wanted from the system. All of a sudden the button the user wanted is not that problematic a way of expressing the needs, especially when you start doing things top-down and can actually work with the user on prototypes, drawings and similar – things they relate to. A user does not relate to a SQL table or relationship; they relate to the things they do – like clicking a button.

From this realization came a bunch of other eurekas throughout the project. The biggest and most important was that we should step away from the idea that state can only be held in one place. Let's face it, a SQL server is good at many things but not everything – just like any technology, nothing is great at everything. We started realizing that we could persist our state in whatever form was most suitable for its use. For one particular feature we put the data as a JSON object model directly onto a CDN and let the client download it when the feature was loaded. With GZIP compression on the server, that request turned out to average 40KB, totally taking the load away from the Web server and the SQL server for something that would have been a pretty complex query. Instead we generated the JSON file whenever the underlying data changed – and it did not change very often – so browser caching would also be a big benefit for returning customers.

We started realizing we could pretty much go any way with this. A lot of the time, things are close to static – they hardly change. Why are we then dynamically generating things like Web pages? Why aren't we just generating them when things change, putting them on a CDN or on the file system of the Web server that hosts the solution, and sending them directly back to the client? You might be saying at this point that we have things like caching of web pages for exactly this, but I would argue: why would you add that complexity to your solution, and that pressure onto your hardware in terms of the extra memory needed, for all of your servers in the farm, to be able to process this? All that for something that is solved much more easily: just render it when things change.

With all this, we stopped thinking about data – we focused on the behavior. The data – or state, as I would rather call it – could then be represented in any way; in fact, how you represent it might even change over time.

The bounded contexts

One of the things Eric Evans mentions in his book on DDD is the concept of bounded contexts – a concept that is in a bit of conflict with the idea of DRYing things up and the whole idea of having one model to rule them all. The idea is to have context-aware representations of the concepts found in your domain. Take for instance a Product in an e-commerce setting. Someone in charge of acquiring products for the store, to have them available in the shop, is interested in certain aspects of the product: who the vendor is, who is importing that particular product, and of course the price you as a store can buy it for – so you can figure out what your margins can potentially be on the product. Once the product hits the warehouse, none of those properties are important – the warehouse only cares about how the product can be stored; they don't even care about the brand. Things like dimensions and weight are key information to the warehouse. Once the user sees the product in the web-shop, they are interested in the retail price, the description, other customers' feedback, reviews and a whole lot more. Why model this as one product? Chances are the people in your organization might not even be referring to it as a product, at least not in all departments.

This was a huge realization for us; the minute we discovered this truth, we stopped completely thinking in any relational manner. Sure, we did have an existing database to take care of, because the project was greenfield built on top of existing data that was shared between the old and the new. But our mindset changed. We started modeling things with the lingo of each context, and stopped trying to apply the same model across the board – a sort of compromise model that would suit all needs.

Features

Before Greg Young and Udi Dahan agreed upon the concept of business components, we were well on our way with what we called a feature – but it serves the same purpose. The idea is to isolate features within a bounded context; a feature can not be shared between bounded contexts. It was also a very important realization that gave us a recognizable structure throughout the application and the different components it was built of. Naming was consistent, and you isolated things within the feature itself – so no Interfaces namespace that turns out to be a dumping ground for all the interfaces in the project, or anything like that. In every tier you would find the feature's name, and all artifacts related to the feature in that tier would be inside that namespace/folder. That way we kept related things close; it was more maintainable, and it even made bringing new people into the project a lot easier. For the ASP.net MVC part, the front-end tier, we even went ahead and relocated where views and controllers are normally found: we put them together – in fact we put everything related to a feature inside the same folder, be it controllers, views, javascripts, images and so forth. It was just a matter of configuring things correctly, and we could maintain a consistency throughout the project that would prove to be of great value.

Conclusion

As with any software project, this is how we experienced things – these are our conclusions, our interpretations of patterns and ways of doing things. We cannot go out and say that our way is the best, but it was the closest thing to best for our team. The conclusions we reached and the realizations we had made our work so much easier. For those of us who were part of these realizations, it gave a push in the general direction of writing better code. We feel that we are now empowered with the knowledge to not compromise on code quality and still be able to deliver on time – something we did again and again on the project. All deadlines we had were met, partly because of the platform we built, but also because of the way we worked in general. The more our mindset changed, the faster we were able to deliver, and we were focusing on the right stuff: the business value. It gave us a way to speak to users and really translate what they were looking for – not the data, but the behaviors.

Bifrost, C#, CQRS, MVVM, Patterns, Practices

CQRS applied : a summary

Every now and then in a software career you get a chance to write something from scratch and try out new things: a proper greenfield project. I've had that luck a couple of times, most recently on a project that proved to be a complete game-changer for me personally. Game-changer in the sense that I gained knowledge I am pretty sure I will treasure, if not for the rest of my career, at least for quite a few years moving forward. The knowledge I am talking about can be linked back to applying CQRS, but it is not CQRS in itself that is the knowledge – it's the concepts that tag along with it, and the gained knowledge of how one can write code that is maintainable in the long run. It's also about the things we discovered during the project: smart ways to work, smart code we wrote, techniques we applied in order to meet requirements, add the needed business value, and at the same time deliver on time with more than was asked for.

This is a more in-depth post than the talk I did @ NDC 2011

… from the top …

For the last couple of years, until March this year, I had the pleasure of being hired by Komplett Group, the largest e-commerce operation in Norway. At first I was assigned tasks maintaining the existing solution and was part of the on-premise team doing just that. As a consultant, that is very often what you find yourself doing – unless you're hired in for a particular role, like I have been in the past: system architect. I helped establish some basic architectural principles at that time, applying a few principles like IOC and other parts of our favorite acronym, S.O.L.I.D. I remember feeling a bit in awe of just being there; they had a solution that could pretty much take on any number of clients and still be snappy, and it never went down. I've learned to respect systems like that, even though it requires a lot of work – not necessarily development work; a lot of the time it is IT or DevOps who help keep systems alive. Anyhow, after a few months, back in 2009, I was asked by the department manager if I wanted to lead a small team on a particular project: an administration tool for editing order details. With my background as a team lead, and also as a department manager, I kind of missed that role a bit and jumped at it. It was to be a stand-alone tool, accessible from the other tools they had, but we were given pretty much carte blanche when it came to how we did it – whatever technology within the .net space we wanted. We settled on ASP.net MVC, Silverlight for some parts, WCF for exposing services for the Silverlight parts, and NHibernate at the heart as the ORM for our domain.

Part of the project was also to try out Scrum. Having had quite a bit of experience with everything ranging from eXtreme Programming to MSF Agile and later Scrum, that excited me as well, so we applied it too.

Halfway through the project we started having problems. Our domain was the one thing we shared with the others, and we started running into nightmare after nightmare because we worked under the one-model-to-rule-them-all idea – which is really hard to actually get to work properly, and looking back I realize that most projects I've been on have suffered from this. We ran into issues where, for our purpose, we needed some things in *-to-many relationships to be eagerly fetched, which had consequences we could not anticipate in other systems using the same model. But we managed to come up with compromises both systems could live with – still, no eureka yet, just brushing up against the problems a lot of projects meet without seeing that the approach was wrong. A bit after this we started brushing up against something that really got us excited: Commands. We didn't really know about CQRS at this point, but coming from working with Silverlight and WPF, we knew the concept of modeling behavior through commands. The reason we needed these commands was that we needed to perform actions on objects over a long period of time – potentially days – and at the end commit the changes. We came up with something we called a CommandChain: a chain of commands that we appended to and persisted. Commands represented behavior and, for the most part, modified state on entities when executed. We came up with a tool where we could debug these chains, and inspect which Command was causing problems and which were not.

[Screenshot: the CommandChain debugging tool]
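A rough sketch of the CommandChain idea as described – not actual code from the project:

using System;
using System.Collections.Generic;

public interface ICommand
{
    void Execute();
}

// Commands are appended over a long period of time, persisted along the
// way, and executed against the entities when the chain is committed
public class CommandChain
{
    readonly List&lt;ICommand&gt; _commands = new List&lt;ICommand&gt;();

    public void Append(ICommand command) => _commands.Add(command);

    public void Commit()
    {
        foreach (var command in _commands)
            command.Execute();
    }
}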

All in all, we were quite pleased with the project; we had done a lot of new things, applied TDD in a behavioral style, and started exploring new corners of a universe we had yet to realize the extent of. We delivered not too badly on time – not perfectly, but close enough.

The turning point

After yet another 6 months or so, there was initial talk about the need to expose functionality from the web-shop to other systems used internally. A few design meetings and meetings with management led to a new project. The scope turned out to be not only exposing some services, but also a new web-shop frontend targeted and optimized for smartphone devices. The project was initiated from a technical perspective, not from a specific business need. From a technical perspective, the existing codebase had reached a point where it was hard to maintain, and something new needed to replace it to regain velocity and control over the software. It was to be a complete greenfield project: totally throw things overboard and basically work with the existing database, but add enough flexibility that even that could be thrown out the door if one ever wanted to. Early on I was vocal about them needing an architect to be able to deliver this project; I pointed in a couple of directions to internal resources they had – but people pointed back at me, and I soon found myself as the system architect for the project.

Requirements

When dealing with e-commerce at this level, there are quite a few challenges. Let's look at a few numbers: at the time I got off the project, the product catalog held about 13,000 products, an order shipped every 21 seconds – in 2011 that amounted to 1,454,776 orders – and there were ~30,000 living sessions at any given time. Sure, it's not Amazon, but for our neck of the woods it's substantial. These numbers are of course averages; come busy times like Christmas, they spike and the pressure is really on for that period in particular.

Decisions, decisions, decisions…

Before we started production, back in November 2010, we needed to get a few things straight: architecture, core values for the project, process, and getting everyone on board with the decisions. We decided early on that we were going to learn all about CQRS, as it seemed to fit nicely with the requirements – especially for performance – and we also required of ourselves a rich domain model that really expressed all aspects of the system. We decided that we wanted to drive it all out applying BDD, and we wanted to drive the project forward using Scrum and really be true to the process, not make our own version of Scrum. A dedicated product owner was assigned to the project, with the responsibility for the backlog: making sure we refined as needed, planned as needed and executed on it.

Adding the business value

As I mentioned, this project came out of a technical need, not a concrete business need. We had the product owner role in place, and he needed to fill the backlog with concrete business value. This was not an easy task, basically because the organization as a whole probably didn't see the need for the project. In their defense, they had a perfectly fine solution already – not entirely optimal for smaller screens like a smartphone, but manageable. The different store owners that normally provided the needs for the backlog were in desperate need of new features on the existing solution, rather than this new thing targeting a platform they didn't see much business value in. Combine that with the fact that the organization had been in migration mode – all developer resources partly or close to full-time tied down to migration work resulting from mergers and acquisitions – and the organization had gotten used to not getting things done anyway. All this didn't exactly create the most optimal environment for getting real business value into the project. Something we really wanted.

Early on we realized that the project could not be realized if we had user stories that were technical in nature. The first couple of months we did have quite a few technical user stories, and statistically these failed on estimation. We didn't have any business value to relate them back to, and in many cases they ended up as over-engineering, way out of proportion, as we as developers got creative and failed at doing our job: adding business value. So we came to the conclusion that no technical user stories were allowed – ever. Something that I still today think was one of the wisest decisions we made on the project. It helped us get back to and focus on why we were writing code every day: to add business value. Even though this project was spawned by the developers, there was clearly business value to guide us through. The approach became: let's pretend we're writing an e-commerce solution for the first time. This turned out to be a good decision; it helped us be naïve in our implementations, keeping in line with a core principle of agile processes: the simplest thing that could possibly work. Our product owner was then left with the challenge of dragging the business value out of the business. He did a great job of doing that, and at the same time got the business to realize the change of platform that was in reality taking place. Something that became evident further down the line: we were in fact not building an e-commerce front-end for smartphones, but an entire new platform. More on that later.

YES, we did create a framework

One of the realizations we had early on was that we needed to standardize quite a few things. If you're going to do that many new things and have a halfway chance of getting everyone with you and feeling productive in the new environment, you need a basis that people can work with. Back in 2008 I started a project called Bifrost – you can read more here. We looked at it and decided it was a good starting point for what we wanted to achieve. We also wanted the framework to be open source. The philosophy was to create a generic framework to be the infrastructure sitting at the core of the application we were building. It would abstract away all the nitty-gritty details of any underlying infrastructure, but also serve as the framework that promoted CQRS and the practices we wanted. It was to be a framework that guided and assisted you, and very clearly stayed out of your way. I'm not going to go in depth on the framework, as there are more posts related to it specifically, in the making and already out there.

CRUDing our way through CQRS

Well on our way, we had quite a few things we really couldn't wrap our heads around. Coming from a very CRUD-centric world, the thought of decoupling things in the way CQRS prescribes was really hard – and at the same time, there was potential for duplication in the code. I remember being completely freaked out at the beginning of the project. All my neural cells were screaming "NO! STOP!" – but we had to move on and get smarter, get past the hurdles, learn. At first we really made a mess of things, just because we were building on assumptions – the assumption that CQRS is similar to doing regular old CRUD with what we used to know as a domain model. It is far from it, and we had a true eureka at one point where we realized something important. We had been working hard an entire day on how to represent some queries in a good way, so that they would be optimal in the code but also execute optimally – and it hit us like a ton of bricks after leaving work that day: we were doing everything wrong. We even came up with a mantra: "if a problem seems complicated, chances are we're doing it wrong". That was the turning point that helped us write code that was simpler, more testable, more focused and faster, and we picked up pace in the project like I've never experienced before.

From that point we had our mantra, and it really served as a guiding star. Whenever we ran into things we didn't have an answer to straight away and started finding advanced solutions to the challenges, we applied the mantra and went back to rethink things.

Tooling

Early in the project we realized we needed a tool both for visualizing the events being generated and for republishing events. We came up with a tool built in Silverlight, using the pivot control from Microsoft for the visualization.

[Screenshot: Mimir, the event visualization and republishing tool]

The real benefits

Looking back at what we did and trying to find the concrete benefits, I must say we gained a serious amount of knowledge in a few areas. The thing CQRS specifically gave us was the ability to model our domain properly. We achieved the separation we wanted between the behavior of the application and the things the behaviors caused changes to – the data on the other side. It helped us achieve greater flexibility and easier maintenance. Since we decided not only to apply CQRS but also to build a reusable framework sitting at the bottom, we achieved a pattern of working that made it really easy to get started with development, and a recognizable structure that made it easy to know where to put things once the core principles had been explained to you.

I think by far the biggest benefit we achieved was the insight into how we should be developing software. Keeping things simple has huge benefits. Decouple things, and stay true to single responsibility in every sense of the word single.

Another huge realization I had – something I have been saying throughout my career, but which really got reinforced with this project – is that concrete technology doesn't really matter. Sure, things will end up as a certain concrete technology, but stop thinking concretely when designing the system. Get down to the actual business needs, model them, and let the concrete technology fall into place later. With this approach you gain another useful possibility: doing top-down development. Start with the user interface and move your way down. Keep the feedback loop as tight as possible with the business. Don't do more than is needed. This approach is something I know I will miss the most in future projects. A tight feedback loop is where the gold is hidden.

Where did we screw up?

This project must come across as a fairly peachy story. And sure, it was by far, in my experience, the project with the best code-base, the most structured one, the one I personally learned the most from, and also the one project in my career where we really managed to be on schedule – in fact, for a couple of the releases we delivered more business value than was asked for. But it came at a price. One of the things we struggled with early on was spreading the knowledge across the entire team and getting everyone excited about the architecture, the new way of working and so forth. Personally, I didn't realize how invested people were in their existing solution, and in the existing way of doing things. I, as the architect, should have seen this before we got started. Not realizing it became a growing problem in the group: there was a divide between the people buying into the entire story and those who didn't, or didn't quite get it. My theory is that we should have given the most invested members of the group time for mourning – time to bury their friend through many years: the old project. We should have realized at the beginning that we were in fact building for the future and would replace the existing solution, and this should have been the official line. Instead it kind of organically became the official line. We did, at the beginning, do training in all the new techniques and gave people time to learn – basically didn't give them any tasks for a few weeks and just pointed them in the general direction of things they should look at. What I think we failed on was that we didn't point out that these things were not optional; these new ideas were in fact mandatory knowledge. We should have been much clearer about this and been vocal about the expectations. Another thing I would have done a bit differently: involve more people in the framework part of things. At the risk of stepping on toes, I think it is not wrong of me to say that I was the framework guy. For the most part, I ended up working on the framework. Don't get me wrong, I love doing that kind of work – but I think the experience and the design decisions got lost in translation, and not everyone in the group understood why things were done as they were.

Conclusion

The project and the opportunity given to the team were awesome, and I really appreciate the trust that was placed in me to lead the way. The pace we had and the stuff we did amount to the coolest project I've ever worked on – and I am happy to admit it: I miss the project. Had it not been for a great opportunity that was given to me, I would have loved to stay on. We had ups and downs, as with any software project, but overall I am wildly impressed with our accomplishments as a team, and with the end result.

Ohh… By the way. The end result can be found here.
