
Bifrost roadmap first half 2017 (new name)

This type of post is the first of its kind, which is funny enough seeing that Bifrost has been in development since late 2008. Recently development has moved forward quite a bit and I figured it was time to jot down what's cooking and what the plan is for the next six months – or perhaps longer.

First of all, it's hard to commit to any real dates – so the roadmap is more a "this is the order in which we're going to develop", rather than a budget of time.

We’ve also set up a community standup that we do every so often – not on a fixed schedule, but rather when we feel we have something new to talk about. You can find it here.

1.1.3

One of the things we never really had the need for was to scale Bifrost out. This release focuses on bringing that back. At the beginning of the project we had a naïve way of scaling out – basically supporting a two-node scale-out, with no consideration for partitioning or for actually checking whether events had been processed or not. With this release we're revisiting this whole thing and at the same time setting up for success moving forward.

One of the legacies we've been dragging behind us is that all events were identified by their CLR types, and maintaining the different event processors was linked to this – making it fragile if one were to move things around. This is being fixed by identifying events by the application structure rather than the CLR structure the events live in. This will become convention based and configurable. With this we will enable RabbitMQ as the first supported scale-out mechanism. The first implementation will not include all the partitioning, but will enable us to move forward and get that in place quite easily. It will also set up for a more successful way of storing events in an event store. All of this is in the middle of development right now.

In addition there are minor details related to the build pipeline and automating everything. It's a sound investment getting all versioning and build details automated. This is also related to the automatic building and deployment of documentation, which is crucial for the future of the project. We'll also get an Azure Table Storage event store in place for this release, which should be fairly straightforward.

1.1.4

Code quality has been set as the focus for this release: re-enabling tools like NDepend and static code analysis.

1.1.5

The theme of this version is to get the Web frontend sorted. Bifrost has a "legacy" ES5 implementation of all its JavaScript. In addition it is very coupled to Knockout, making it hard to use things like Angular, Aurelia or React. The purpose of this release is to decouple the things that Bifrost brings to the table: proxy generation and frontend helpers such as regions, operations and more. We'll also start the work of modernizing the code to ES2015 and newer by using BabelJS, and move away from Forseti, our proprietary JavaScript test runner, to more commonly used runners.

In-between minor releases

From this point to the next major it is a bit fuzzy. In fact, we might prioritize pushing the 2.0.0 version rather than do anything in between. We've defined versions 1.2.0 and 1.3.0 with issues we want to deal with, but we might decide to move these to 2.0.0 instead. The sooner we get to 2.0.0, the better in many ways.

2.0.0

Version 2.0.0 is, as the version number indicates, about breaking changes. The first major breaking change: a new name. The project will transition over to being called Dolittle, matching the GitHub organization we already have for it. Besides this, the biggest breaking change is that it will be broken up into a bunch of smaller projects – all separated and decoupled. We will try to version them independently – meaning they will take on a life of their own. Of course, this is a very different strategy than before – so it might not be a good idea and we might need to change it. But for now, that's the idea, and we might keep major releases in sync.

The brand Dolittle is something I've had since 1997, and I own domains such as dolittle.com, dolittle.io and more related to it. These will be activated and become the landing page for the project.


Concepts and more

With Bifrost we’re aligning ourselves more and more with being a platform for doing Domain Driven Design. Introducing more and more artefacts from the building blocks as we go along. When we set out to build Bifrost, we decided early on to be true to not be building anything into it that we didn’t need in a real world scenario. This was after we had started falling into the pattern of what if of software development. We started imagining problems and had to deal with them way before they had actually happened. With the risk of generalising; a fairly common scenario amongst dirty minded tech people. It stems from experience, knowing that there will always be something that can go wrong. Sure, there always is. I digress, I think this could probably be a blogpost on its own. The point being, we were heading down this path and for some reason got jolted back to reality and we started focusing on implementing only the things we needed and rather actually go back and remove things that came out of the “what if game”. On this new path we wanted to also stay focused on implementing things that were aligned with DDD and keep a close eye on the user.

Concepts

With the philosophy of CQRS at heart, built with SOLID care, we keep a very close eye on being very specific in our modelling. Things that are used in one part of the system are not automatically reused somewhere else just for the sake of DRYness. We don't believe in DRYing up properties, and we favor composition over inheritance. Logic is still kept in one place only, on the command side of the fence. With all these principles at hand, we were missing something that would link it all back together and make things look and feel consistent.

Let’s look at a scenario; say I want to update the address of a person. A command could be something like the following:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

In Bifrost you’d then have a CommandHandler to deal with this and then an AggregateRoot that would probably look like the following:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid personId) : base(personId) {}
   public void UpdateAddress(string street, string city, string postalCode, string country)
   {
      // Apply an event reflecting the change - the event is shown below
      Apply(new AddressUpdatedForPerson {
         PersonId = Id,
         Street = street,
         City = city,
         PostalCode = postalCode,
         Country = country
      });
   }
}

The aggregate would then apply an event that looks like the following:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

An event subscriber would pick this up and update a read model that might look like the following:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

Those were the artefacts we would typically be dealing with: command, aggregate root, event and read model. For simplicity, these look pretty much the same – but they don't have to, and in fact, most of the time they don't. Let's address something here: we're losing out on a modelling potential. Take the Guid representing the unique identifier for the person. This is in fact part of the domain vocabulary, something we lose by just making it a Guid directly.

In Bifrost we have something called ConceptAs that we can use to represent this domain concept. This is a base class that we recognize throughout the system and deal with properly during serialisation between the different out-of-process places it might go.

using System;
using Bifrost.Concepts;

public class Person : ConceptAs<Guid>
{
   public static implicit operator Person(Guid personId)
   {
      return new Person() { Value = personId };
   }
}

What this does is wrap up the primitive, giving us a type that represents the domain concept. One modelling technique we applied when doing this was to stop referring to it as an id; we started calling it the noun it represents. For us, this actually became the abstract noun – it doesn't hold any properties for what it represents, only the notion of it. But code-wise, this looks very good and readable.

In the ConceptAs base class we have an implicit operator that is capable of converting from the new type to the primitive. Unfortunately C# does not allow the implicit operator going the other way to be declared in the base class, so that has to be explicitly implemented, as the Person concept above does. With these operators we can move back and forth between the primitive and the concept. This comes in very handy when dealing with events. We decided to drop the concepts in the events. The reason for this is that versioning becomes very hard when changing a concept, something you could decide to do. It could also make serialization more complex than you'd hope for with some serializers. Our conclusion is that we keep the events very simple and use primitives, but everywhere else the concept is used.
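To make that concrete, here is a minimal sketch of what such a base class could look like – the actual Bifrost type carries more than this, serialization support among other things:

using System;

public class ConceptAs<T>
{
   public T Value { get; set; }

   // Implicit conversion from the concept to the underlying primitive;
   // the conversion the other way has to be declared on each concrete
   // concept, as shown with Person above
   public static implicit operator T(ConceptAs<T> concept)
   {
      return concept.Value;
   }
}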

The way we structure our source, we basically have a domain project with our commands, command handlers and aggregates. Then we have a project for our read side, and in between these two sits a project holding the domain events. With this model we don't get a coupling between the domain and the read side, which is one of our primary objectives. The concepts, on the other hand, are going to be reused between the two. We therefore always have a concepts project where we keep our concepts.

Our typical project structure:

[Image: typical solution structure – separate Domain, Events, Read and Concepts projects]

So, now that we have our first concept, what did it do? It replaced the Guid reference throughout, introducing some clarity in our models. But the benefit we stumbled upon with this: we now have something to attach cross-cutting concerns to. With the type of pipelines we have in Bifrost, we can now start doing things based on the type being used in the different artefacts. Take the command for instance: we can now introduce input validation or business rules for the concept that are applied automatically wherever it is used. Our support for FluentValidation has a BusinessValidator type that can be used for this:

using Bifrost.FluentValidation;
using FluentValidation;

public class PersonBusinessValidator : BusinessValidator<Person>
{
   public PersonBusinessValidator()
   {
      RuleFor(p => p.Value)
         .Must(… a method/lambda checking whether the person exists …)
         .WithMessage("The person does not exist");
   }
}

As long as you don’t create a specific business validator for the command, this would be automatically picked up. But if you were to create a specific validator for the command you could point it to this validator as a rule for the person property.

The exact same thing can also be done with an input validator, which would then generate the proper metadata for the client and execute the validator on the client before the server.
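A corresponding input validator would follow the same pattern; the InputValidator base class name here is an assumption mirroring BusinessValidator:

using Bifrost.FluentValidation;
using FluentValidation;

public class PersonInputValidator : InputValidator<Person>
{
   public PersonInputValidator()
   {
      // A simple presence check - the kind of rule that gets turned
      // into client-side validation metadata
      RuleFor(p => p.Value)
         .NotEmpty()
         .WithMessage("A person must be specified");
   }
}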

It also opens up for other cross-cutting concerns, security for instance.

Value Objects

A second type of object with the same importance in expressing the domain, and opening up for solving things in a cross-cutting manner, is the value object. This is a type of object that actually holds information: attributes that have value. They are useless on their own, but they are often used in the domain and also on the read side. Their uniqueness is based on all the fields in them. We find these in any domain all the time; they are typically things like money, phone numbers or, in our case, an address. Those are just the off-the-top-of-my-head value objects; you'll find them in many forms. Let's tackle address:

using System;
using Bifrost.Concepts;

public class Address : Value
{
   public string Street { get; set; }
   public string City { get; set; }
   public string Postal { get; set; }
   public string Country { get; set; }
}

 

The Value base class implements IEquatable and deals with the property comparisons for uniqueness.
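A minimal sketch of how that could be implemented – Bifrost's actual implementation differs in its details:

using System;
using System.Linq;

public abstract class Value : IEquatable<Value>
{
   public bool Equals(Value other)
   {
      if (other == null || other.GetType() != GetType()) return false;

      // Two values are equal when all their properties are equal
      return GetType().GetProperties().All(property =>
         object.Equals(property.GetValue(this, null), property.GetValue(other, null)));
   }

   public override bool Equals(object obj)
   {
      return Equals(obj as Value);
   }

   public override int GetHashCode()
   {
      // Combine the hash codes of all property values
      return GetType().GetProperties()
         .Select(property => property.GetValue(this, null))
         .Aggregate(17, (hash, value) => hash * 31 + (value == null ? 0 : value.GetHashCode()));
   }
}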

With the value object you get the same opportunities as with the concept for input and business validation, and yet another opportunity for dealing with cross-cutting concerns.

If we summarize the sample before with these new building blocks, we would get:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Our event:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

As you can see, we keep it as it was, with the properties all in the event.

Our AggregateRoot:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid person) : base(person) {}

   public void UpdateAddress(Address address)
   {
      Apply(new AddressUpdatedForPerson {
         PersonId = Id,
         Street = address.Street,
         City = address.City,
         PostalCode = address.Postal,
         Country = address.Country
      });
   }
}

The read model would then be:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Conclusion

For someone more familiar with traditional N-tier architecture and modelling an EDM, rather than separating things out like this, this probably raises a few eyebrows and questions. I can totally relate to that; before starting the Bifrost journey I would have done exactly the same thing. It seems like a lot of artefacts hanging around here, but every one of these serves a specific purpose and is really focused. Our experience with this is that we model things more explicitly and reflect what we want in our model much better. Besides, you stop having things in your domain that can be ambiguous, and removing ambiguity is a primary objective of DDD. DDD is all about the modelling and how we reach a ubiquitous language – a language that represents the domain, a language we all speak. From this perspective we've found domain concepts, and value objects to go along with them, to be very useful. With them in place as types, we found it very easy to retrofit the cross-cutting concerns we wanted in our solution without having to change any business logic. When you look at what's involved in doing it, it's just worth it. The few lines of code representing it will pay back tenfold in clarity and opportunities.


Bifrost and Proxy generation

One of the things we consider to be among the most successful things we've added to Bifrost is the bridge between the client and the server in Web solutions. Earlier this year we realized that we wanted to be much more consistent between the code written in our "backend" and our "frontend", bridging the gap between the two. Out of this realization came generation of proxy objects for artifacts written in C# that we want to have exposed in our JavaScript code. If you're a node.js developer you're probably asking yourself: WHY? Well, we don't have the luxury of writing it all in JavaScript right now – though it would be interesting to leverage what we know now and build a similar platform on top of node.js, or for the Ruby world for that matter, but that's for a different post. One aspect of our motivation for doing this was also that we find types to be very helpful. And yes, JavaScript is a dynamic language, but it's not typeless – so we wanted the same usefulness that types have been giving our backend code in the frontend as well. The types represent a certain level of metadata, and we leverage the types all through our system.

Anywho, the principle was simple: use .net reflection for the types we wanted represented in JavaScript and generate pretty much an exact copy of those types in corresponding namespaces in the client. Namespaces, although different between different aspects of the system, come together with a convention mechanism built into Bifrost – also a post of its own that should be written :) – enough with the digressions.

Basically, in the Core library we ended up introducing a CodeGeneration namespace, which holds the JavaScript constructs needed to generate the proxies.

[Image: the CodeGeneration namespace structure in the Core library]

There are two key elements in this structure; CodeWriter and LanguageElement – the latter looking like this:

public interface ILanguageElement
{
    ILanguageElement Parent { get; set; }
    void AddChild(ILanguageElement element);
    void Write(ICodeWriter writer);
}

Almost everything sitting inside the JavaScript namespace is a language element of some kind – some of them a bit more than just a simple language element, such as the Observable type we have, which is a specialized element for KnockoutJS. Each element has the responsibility of writing itself out; it knows what it should look like. But elements aren't responsible for things like ending an expression with semi-colons or similar. They are focused on their little piece of the puzzle, and the generator does the rest, making sure to a certain level that the output is legal JavaScript.

The next part is, as mentioned, the CodeWriter:

public interface ICodeWriter
{
    void Indent();
    void Unindent();
    void WriteWithIndentation(string format, params object[] args);
    void Write(string format, params object[] args);
    void NewLine();
}

A very simple interface, basically just dealing with indentation, writing and adding new lines.
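To illustrate how the two play together, here is a hypothetical language element – not an actual Bifrost type – that writes a string literal:

using System.Collections.Generic;

public class StringLiteral : ILanguageElement
{
    readonly string _value;
    readonly List<ILanguageElement> _children = new List<ILanguageElement>();

    public StringLiteral(string value)
    {
        _value = value;
    }

    public ILanguageElement Parent { get; set; }

    public void AddChild(ILanguageElement element)
    {
        _children.Add(element);
    }

    public void Write(ICodeWriter writer)
    {
        // Writes only the literal itself - ending the expression with a
        // semi-colon or similar is the generator's job
        writer.Write("\"{0}\"", _value);
    }
}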

In addition to the core framework for building the structure, we've added quite a few helper methods in the form of extension methods to generate common scenarios much more easily – and at the same time provide a more fluent interface for putting it all together without having to have .Add() methods all over the place.

So if we dissect the code for generating the proxies for what we call queries in Bifrost (queries run against a datasource, typically a database):

public string Generate()
{
    var typesByNamespace = _typeDiscoverer.FindMultiple<IReadModel>().GroupBy(t => t.Namespace);
    var result = new StringBuilder();

    Namespace currentNamespace;
    Namespace globalRead = _codeGenerator.Namespace(Namespaces.READ);

    foreach (var @namespace in typesByNamespace)
    {
        if (_configuration.NamespaceMapper.CanResolveToClient(@namespace.Key))
            currentNamespace = _codeGenerator.Namespace(_configuration.NamespaceMapper.GetClientNamespaceFrom(@namespace.Key));
        else
            currentNamespace = globalRead;

        foreach (var type in @namespace)
        {
            var name = type.Name.ToCamelCase();
            currentNamespace.Content.Assign(name)
            .WithType(t =>
                t.WithSuper("Bifrost.read.ReadModel")
                        .Function
                            .Body
                                .Variant("self", v =>; v.WithThis())
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .WithPropertiesFrom(type, typeof(IReadModel)));
            currentNamespace.Content.Assign("readModelOf" + name.ToPascalCase())
                .WithType(t =>
                    t.WithSuper("Bifrost.read.ReadModelOf")
                        .Function
                            .Body
                                .Variant("self", v => v.WithThis())
                                .Property("name", p => p.WithString(name))
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .Property("readModelType", p => p.WithLiteral(currentNamespace.Name+"." + name))
                                .WithReadModelConvenienceFunctions(type));
        }

        if (currentNamespace != globalRead)
            result.Append(_codeGenerator.GenerateFrom(currentNamespace));
    }

    result.Append(_codeGenerator.GenerateFrom(globalRead));
    return result.ToString();
}

That's all the code needed to get the proxies for all the read models (the same approach is used for implementations of an interface called IQueryFor<>). It uses a subsystem in Bifrost called TypeDiscoverer that deals with all the types in the running system.

Retrofitting behavior, after the fact..

Another discovery we've made is that we keep demanding more and more from our proxies – after they showed up, we grew fond of them right away and just want more info in them. For instance, in Bifrost we have Commands representing the behavior of the system using Bifrost; commands are therefore the main source of interaction with the system for users, and we secure them and apply validation to them. Previously we instantiated a command in the client and asked the server for validation metadata for the command to get it applied. With the latest and greatest, all this information is now available on the proxy – which is a very natural place to have it. Validation and security are knockout extensions that can extend observable properties, and our commands are full of observable properties. So we introduced a way to extend observable properties on commands, with an interface for anyone wanting to add an extension to these properties:

public interface ICanExtendCommandProperty
{
    void Extend(Type commandType, string propertyName, Observable observable);
}

These are automatically discovered, as with just about anything in Bifrost, and hooked up.

The end result for a command with the validation extension is something like this:

Bifrost.namespace("Bifrost.QuickStart.Features.Employees", {
    registerEmployee : Bifrost.commands.Command.extend(function() {
        var self = this;
        this.name = "registerEmployee";
        this.generatedFrom = "Bifrost.QuickStart.Domain.HumanResources.Employees.RegisterEmployee";
        this.socialSecurityNumber = ko.observable().extend({
            validation : {
                "required": {
                    "message":"'{PropertyName}' must not be empty."
                }
            }
        });
        this.firstName = ko.observable();
        this.lastName = ko.observable();
    })
});

Conclusion
As I started with in this post, this has proven to be one of the most helpful things we've put into Bifrost. It didn't come without controversy, though; we were met with some skepticism when we first started talking about it, even with claims such as "… it would not add any value …". Our conclusion is very, very different: it really has added true value. It enables us to get from the backend into the frontend much faster, more precisely and with higher consistency than before. It has increased the quality of what we're doing when delivering business value. This again is just something that helps the developers focus on delivering the most important thing: business value!


Bifrost up on Nuget

We are super excited: we finally managed to get Bifrost up on Nuget. We will be publishing packages as soon as we have changes, new features and such. We'll get back to you on how we're going to deal with versioning and what our strategy for continuously deploying to Nuget will be.

With our push to Nuget we added a QuickStart package that one can use to get up and running quickly. All you need to do after adding the package is to compile and run, and you'll have a simple sample that shows how Bifrost is set up and how you can get started writing your features.
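From the Package Manager Console that boils down to something like the following – the exact package ids here are from memory, so treat them as assumptions:

Install-Package Bifrost
Install-Package Bifrost.QuickStart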


Bifrost license change

The license for Bifrost used to be shared between Dolittle and Komplett as a joint venture that began a couple of years ago. As our focus and investment is moving more and more into Bifrost, we have agreed with Komplett that Dolittle is taking ownership of the license and the project. With this we also want to simplify the licensing, so we're moving to a standard MIT license without any special clause like we used to have – plain vanilla.

So, what does this mean if you’re using Bifrost?

Well, nothing actually. It means that it's a simpler model – there is one party that holds the copyright, no special clauses, a well-known and well-used license.

 


CQRS in ASP.net MVC with Bifrost

If you’re in the .net space and you’re doing web development, chances are you’re on the ASP.net stack and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.net MVC stack and quickly realised we needed to build something for the frontend part of the application to be able to facilitate the underlying backend built around the CQRS principles. This post will talk a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product, the price of course, and what not – but also a simple button saying "Add to cart". Basically, you want to add the product to your shopping cart.

[Image: the product page flow with an "Add to cart" button]

Sure enough, it is possible to solve this without anything special. You have your Model that represents the product with the details – price being a complex thing that we need to figure out depending on whether or not you've configured it to show VAT, and on whether you're part of a price list – but something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that, with a simple ASP.net MVC form, we can actually get populated properly:

[Image: markup for a regular MVC form posting the AddItemToCart command]

A regular MvcForm with hidden input elements for the properties on the command that are not visible; any input from the user goes in regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.net MVC is able to deserialize the FORM into a command.
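For illustration, such a form might look something like this in a Razor view – the controller name and command properties here are made up for the example:

@using (Html.BeginForm("AddItemToCart", "Cart"))
{
    @* Hidden field for a command property the user doesn't edit *@
    @Html.Hidden("ProductId", Model.ProductId)

    @* Regular input for a user-supplied value *@
    @Html.TextBox("Quantity", 1)

    <input type="submit" value="Add to cart" />
}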

Validation

Now here comes the real issue with the above approach: validation. Validation is tied to the model – you can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is identifying one model, while the things you really want to validate are the commands. You want to validate before commands are executed, basically because after they are handled and events are published, the truth has been written and it's too late to validate anything coming back on any models. So, how can one fix this? You could come up with an elaborate ModelBinder that modified the model state and what not, but that seemed very complicated – at least we thought so, after trying it out, of course. Instead we came up with something we call a CommandForm: basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, giving you a new model within the using scope with all the MVC goodies in a limited scope, including the ability to do client-side validation.

So now you get the following :

[Image: code sample using the BeginCommandForm helper]

Now you get a form that contains a new HtmlHelper for the command type given in the first generic parameter, and within the form you'll also find the Command itself, in case you need to set values on it before you add a hidden field.

This gives you a model context within the view that is isolated, and you can post that form alone without having to think about the model defined for the view – which really should be a read-only model anyway.

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.
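Pieced together from the description above, using it would look something along these lines – the member names on the form object (Command, Html) are assumptions:

@using (var form = Html.BeginCommandForm<AddItemToCart>())
{
    @* Set values on the command before rendering hidden fields for them *@
    @{ form.Command.ProductId = Model.ProductId; }

    @form.Html.HiddenFor(c => c.ProductId)
    @form.Html.TextBoxFor(c => c.Quantity)

    <input type="submit" value="Add to cart" />
}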

Features

Another thing that we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, just parts of the overall composition that make up the larger scope. We defined a feature to contain all the artefacts that build it up: the view, controller, any JavaScript, any CSS files, images, everything. We isolate them by having it all close together in a folder or namespace for the tier you're working on. For the frontend we had a Features folder at the root of the ASP.net MVC site, and within it every feature was sitting in its own folder with its respective artefacts. Moving down to the backend we reflected the structure in every component; for instance we had a component called Domain, and within it you'd find the same structure. This way all the developers would know exactly where to go and do work – it just makes things a lot simpler. Anyway, in order to accomplish this, one needs to do a couple of things. The first thing is to collapse the structure that the MVC template creates for your project, so that you don't have the Controllers, Views and Models folders, but a Features folder with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder, permitting things like JavaScript files sitting alongside the view files, so you need to add the following within the <system.web> tag in your Web.config file:

[Image: Web.config snippet allowing static content to be served from the Features folder]

Then you need to relocate the view location formats for the view engines in ASP.net MVC:

[Image: code relocating the view location formats to the Features folder]

(Can be found here)
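A sketch of what that relocation could look like, typically in Application_Start – assuming Razor only here; the snippet the post originally showed may differ:

using System.Web.Mvc;

// Make the view engine look for views in the Features folder
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine
{
    ViewLocationFormats = new[]
    {
        "~/Features/{1}/{0}.cshtml",
        "~/Features/Shared/{0}.cshtml"
    },
    PartialViewLocationFormats = new[]
    {
        "~/Features/{1}/{0}.cshtml",
        "~/Features/Shared/{0}.cshtml"
    },
    MasterLocationFormats = new[]
    {
        "~/Features/{1}/{0}.cshtml",
        "~/Features/Shared/{0}.cshtml"
    }
});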

It will then find all your views in the Features folder, and you should now have a new structure. The only drawback, if you see it as one, is that tooling like Visual Studio's built-in "Add View" in the context menus and such stops functioning. But I would argue that the developer productivity gained through a proper structure means you really don't miss it that much. I guess you can get this back somehow with tools like Resharper, but personally I didn't bother.

Conclusion

ASP.net MVC provides a lot of goodness when it comes to doing things with great separation in the Web space for .net developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and make the code clean. Sure, it's not perfect, but what is – it's way better than anything we've seen. This is something that we enjoyed quite a bit in our own little CQRS journey. We did try quite a few things; some of them worked out fine – like the CommandForm – and some didn't. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did reach at a point, however: ASP.net MVC, and Bifrost with its interpretation of CQRS, is a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.net MVC – which is a focused frontend pipeline. So any security, validation and more is something we started building into Bifrost, and the need for ASP.net MVC became less and less important. And when we started down the journey of creating Single Page Applications, with HTML and JavaScript as the only things you need, you really don't need it at all. The connection between the client and server would then be Web requests and JSON, and you need something like WebApi or similar – in fact, we even created our own simple thing in Bifrost to accommodate that. But all this is for another post.

The MVC part of Bifrost can be found here, and Bifrost's official page is under construction here, with the source here.


CQRS applied : a summary

Every now and then in a software career you get a chance to write something from scratch and try out new things: a proper greenfield project. I've had that luck a couple of times, most recently with a project that proved to be a complete game-changer for me personally. Game-changer in the sense that I gained knowledge I am pretty sure I will treasure, if not for the rest of my career, at least for quite a few years moving forward. The knowledge I am talking about can be linked back to applying CQRS, but it is not CQRS in itself that is the knowledge – it's the concepts that tag along with it, and the gained knowledge of how one can write code that is maintainable in the long run. It's also about the things we discovered during the project: smart ways to work, smart code we wrote, techniques we applied in order to meet requirements, add the needed business value, and at the same time deliver on time with more than was asked for.

This is a more in-depth post than the talk I did @ NDC 2011

… from the top …

For the last couple of years, until March this year, I had the pleasure of being hired by Komplett Group, the largest e-commerce company in Norway. At first I was assigned tasks to maintain the existing solution and was part of the on-premise team doing just that. As a consultant, that is very often what you find yourself doing – unless you're hired in for a particular role, like I've been in the past: system architect. I helped establish some basic architectural principles at that time, applying a few principles like IoC and other parts of our favorite acronym, S.O.L.I.D. I remember feeling a bit in awe of just being there – they had a solution that could pretty much take on any number of clients and still be snappy, and they never went down. I've learned to respect systems like that, even though it requires a lot of work – not necessarily development work; a lot of the time it's IT or DevOps helping keep systems alive. Anyhow, after a few months, back in 2009, I was asked by the department manager if I wanted to lead a small team on a particular project: an administration tool for editing order details. With my background as a team lead and, earlier, as a department manager myself, I kind of missed that role a bit and jumped at it. It was to be a stand-alone tool, accessible from the other tools they had, but we were given pretty much carte blanche when it came to how we did it – whatever technology within the .net space we wanted. We settled on ASP.net MVC, Silverlight for some parts, WCF for exposing services for the Silverlight parts, and nHibernate at the heart as the ORM for our domain.

Part of the project was also to try out Scrum. Having had quite a bit of experience with everything ranging from eXtreme Programming to MSF Agile and later Scrum, that excited me as well, so we applied it too.

Half-way through the project we started having problems. Our domain was the one thing we shared with the others, and we started running into nightmare after nightmare because we worked under the one-model-to-rule-them-all idea – which is really hard to actually get to work properly, and looking back I realize that most projects I've been on have suffered from this. We ran into issues where, for our purpose, we needed some things in *-to-many relationships to be eagerly fetched, which had consequences we could not anticipate in other systems using the same model. But we managed to come up with compromises both systems could live with. Still, we weren't seeing the eureka – just brushing up against the problems a lot of projects meet without seeing that the approach was wrong. A bit after this we started brushing up against something that really got us excited: Commands. We didn't really know about CQRS at this point, but coming from working with Silverlight and WPF, we knew the concept of modeling behavior through commands. The reason we needed these commands was that we needed to perform actions on objects over a long period of time – potentially days – and at the end commit the changes. We came up with something we called a CommandChain: a chain of commands that we appended to and persisted. Commands represented behavior and, when executed, for the most part modified state on entities. We built a tool where we could debug these chains and inspect which Command was causing problems.

[Image: the CommandChain debugging tool]

All in all, we were quite pleased with the project; we had done a lot of new things, applied TDD in a behavioral style, and started exploring new corners of a universe whose extent we had yet to realize. We delivered not too badly on time – not perfectly, but close enough.

The turning point

After yet another six months or so, there were initial talks about the need to expose functionality from the web-shop to other systems used internally. A few design meetings and meetings with management led to a new project. The scope turned out to be not only exposing some services, but also a new web-shop frontend targeted and optimized for smartphone devices. The project was initiated from a technical perspective, not from a specific business need. From a technical perspective, the existing codebase had reached a point where it was hard to maintain, and something new needed to replace it to regain velocity and control over the software. It was to be a complete greenfield project: totally throw things overboard and basically work with the existing database, but add enough flexibility that even that could be thrown out the door if one ever wanted to. Early on I was vocal about them needing an architect to be able to deliver this project. I pointed in a couple of directions to internal resources they had – but people pointed back at me, and I soon found myself as the system architect for the project.

Requirements

When dealing with e-commerce at this level, there are quite a few challenges. Let's look at a few numbers: at the time I got off the project, the product catalog held about 13,000 products; an order was shipped every 21 seconds, which in 2011 amounted to 1,454,776 orders; and there were ~30,000 live sessions at any given time. Sure, it's not Amazon, but for our neck of the woods it's substantial. These numbers are averages, of course; come busy times like Christmas, they spike and the pressure is really on.

Decisions, decisions, decisions…

Before we started production, back in November 2010, we needed to get a few things straight: architecture, core values for the project, process, and getting everyone on board with the decisions. We decided early on that we were going to learn all about CQRS, as it seemed to fit nicely with the requirements – especially for performance – and we also required of ourselves a rich domain model that really expressed all aspects of the system. We also decided that we wanted to drive it all out applying BDD, and we wanted to drive the project forward using Scrum and really be true to the process, not make our own version of it. A dedicated product owner was assigned to the project with the responsibility for the backlog, making sure that we refined as needed, planned as needed and executed on it.

Adding the business value

As I mentioned, this project came out of a technical need, not a concrete business need. We had the product owner role in place, and he needed to fill the backlog with concrete business value. This was not an easy task, basically because the organization as a whole probably didn't see the need for the project. In their defense, they had a perfectly fine solution already – not entirely optimal for smaller screens like a smartphone, but manageable. The different store owners that normally provided the needs for the backlog were in desperate need of new features on the existing solution, rather than this new thing targeting a platform they didn't see much business value in. Combine that with the fact that the organization had been in migration mode – with all developer resources partly or, in periods, close to full-time tied down to work related to migrating systems resulting from mergers and acquisitions – and the organization had gotten used to not getting things done anyway. All this didn't exactly create the most optimal environment for getting real business value into the project, something we really wanted.

Early on we realized that the project could not be realized if we had user stories that were technical in nature. The first couple of months we did have quite a few technical user stories, and statistically these failed on estimation. We didn't have any business value to relate them back to, and in many cases they ended up as over-engineering, way out of proportion, as we developers got creative and failed at doing our job: adding business value. So we came to the conclusion that no technical user stories were allowed – ever. Something I still today think was one of the wisest decisions we made on the project. It helped us get back to focusing on why we were writing code every day: to add business value.

Even though this project was a spawn of the developers, there was clearly business value to guide us through. The approach became: let's pretend we're writing an e-commerce solution for the first time. This turned out to be a good decision; it helped us be naïve in our implementations, keeping in line with a core principle of agile processes – the simplest thing that could possibly work. Our product owner was then left with the challenge of dragging the business value out of the business. He did a great job of doing that, and at the same time getting the business to realize the change of platform that was in reality taking place. Something that became evident further down the line: we were in fact not building an e-commerce front-end for smartphones, but an entire new platform. More on that later.

YES, we did create a framework

One of the realizations we had early on was that we needed to standardize quite a few things. If you're going to do that many new things and have a half-way chance of getting everyone with you and feeling productive in the new environment, you need a basis that people can build on. Back in 2008 I started a project called Bifrost – you can read more here. We looked at it and decided it was a good starting point for what we wanted to achieve. We also wanted the framework to be open-sourced. The philosophy was to create a generic framework to be the infrastructure sitting at the core of the application we were building. It would abstract away all the nitty-gritty details of any underlying infrastructure, but also serve as the framework that promoted CQRS and the practices we wanted. It was to be a framework that guided and assisted you, and very clearly not one that got in your way. I'm not going to go in-depth on the framework, as there are more posts related to it specifically, both in the making and already out there.

CRUDing our way through CQRS

Well on our way, we had quite a few things we really couldn't wrap our heads around. Coming from a very CRUD-centric world, the thought of decoupling things in the way CQRS prescribes was really hard, and at the same time there was potential for duplication in the code. I remember being completely freaked out at the beginning of the project. All my neural cells were screaming "NO! STOP!" – but we had to move on and get smarter, get past the hurdles, learn. At first we really made a mess of things, just because we were building on assumptions – the assumption that CQRS is similar to doing regular old CRUD with what we used to know as a domain model. It was far from it, and we had a true eureka moment at one point when we realized something important. We had been working hard an entire day on how to represent some queries in a good way, so that they would be optimal in the code but also execute optimally – and it hit us like a ton of bricks after leaving work that day: we were doing everything wrong. We even came up with a mantra: "if a problem seems complicated, chances are we're doing it wrong". That was the turning point that helped us write code that was simpler, more testable, more focused and faster, and we picked up pace in the project like I've never experienced before.

From that point our mantra really proved to be a guiding star. Whenever we ran into things we didn't have an answer to straight away and started finding advanced solutions to the challenges, we applied the mantra and went back to rethink things.

Tooling

Early in the project we realized we needed a tool for visualizing the events being generated, but also for republishing events. We came up with a tool built in Silverlight, using the pivot control from Microsoft for the visualization.

[Image: Mimir, the event visualization and republishing tool]

The real benefits

Looking back at what we did and trying to find the concrete benefits, I must say we gained a serious amount of knowledge in a few areas. The thing CQRS specifically gave us was the ability to model our domain properly. We achieved the separation we wanted between the behavior of the application and the things the behaviors caused changes to – the data on the other side. It helped us achieve greater flexibility and easier maintenance. Since we decided not only to apply CQRS, but also to build a reusable framework sitting at the bottom, we achieved a certain pattern of working that made it really easy to get started with development, and a recognizable structure that made it easy to know where to put things once the core principles had been explained to you.

I think by far the biggest benefit we achieved was the insight into how we should be developing software. Keeping things simple really has huge benefits. Decouple things; stay true to single responsibility in every sense of the word single.

Another huge realization I had – something I have been saying throughout my career, but that really got reinforced with this project – is that concrete technology doesn't really matter. Sure, things will end up as a certain concrete technology, but stop thinking concretely when designing the system. Try to get down to the actual business needs, model them, and let the concrete technology fall into place later. With this approach you gain another useful possibility: doing top-down development. Start with the user interface and move your way down. Keep the feedback loop as tight as possible with the business. Don't do more than is needed. This approach is something I know I will miss the most in future projects. A tight feedback loop is where the gold is hidden.

Where did we screw up?

This project must come across as a fairly peachy story. And sure, it was by far, in my experience, the project with the best code-base, the most structured one, the one I personally learned the most from, and also the one project in my career where we really managed to be on schedule – in fact, for a couple of the releases we delivered more business value than was asked for. But it came at a price.

One of the things we struggled with early on was spreading the knowledge across the entire team and getting everyone excited about the architecture, the new way of working and so forth. Personally I didn't realize how invested people were in their existing solution, and in the existing way of doing things. I, as the architect, should have seen this before we got started. Not realizing it grew into a problem in the group: a divide between the people buying into the entire story and those who didn't, or didn't quite get it. My theory is that we should have given the most invested members of the group time for mourning – time to bury their friend of many years, the old project. We should have realized from the start that we were in fact building for the future and would replace the existing solution, and this should have been the official line. Instead it kind of organically became the official line.

We did do training in all the new techniques at the beginning and gave people time to learn – basically didn't give them any tasks for a few weeks and just pointed them in the general direction of things they should look at. What I think we failed on was not pointing out that these things were not optional; these new ideas were in fact mandatory knowledge. We should have been much clearer about this and been vocal about the expectations. Another thing I would have done a bit differently: involve more people in the framework part of things. At the risk of stepping on toes, I think it is not wrong of me to say that I was the framework guy. For the most part, I ended up working on the framework. Don't get me wrong, I love doing that kind of work – but I think the experience and the design decisions got lost in translation, and not everyone in the group understood why things were done as they were.

Conclusion

The project and the opportunity given to the team were awesome, and I really appreciate the trust that was placed in me to lead the way. The pace we had and the stuff we did make this, so far, the coolest project I've ever worked on – and I am happy to admit it: I miss the project. Had it not been for a great opportunity that came my way, I would have loved to stay on. We had ups and downs, as with any software project, but overall I am wildly impressed with our accomplishments as a team, and with the end result.

Ohh… By the way. The end result can be found here.


Philosophy of Bifrost

Back in 2008 I started as a consultant, after having worked at different ISVs since the start of my career back in 1994. In the beginning my employer sent me on short contracts to get the consultant life under my skin. Moving around from client to client like that, I realized something: I was rinsing and repeating a lot of mundane tasks – things I quickly realized I really didn't want to be repeating. This is probably a realization most consultants have, but nevertheless I felt I had to do something about it. Out came the idea of Bifrost: an open-source library that I would be able to reuse at clients, if the client permits.

The philosophy
At the core of Bifrost and its philosophy lies a theory that although different domains in general have different needs, the abstract concepts that sit as pillars supporting a domain are the same. Bifrost would therefore be the provider of these abstractions. The abstractions should be very lightweight and focused, and just support the concepts Bifrost is promoting.

The thing Bifrost aims at doing is to make things simpler, both on the backend as well as the frontend. There are so many things out there that we're all repeating; Bifrost aims to either take away those tasks or expose APIs that make them a lot easier to accomplish. I will not go into details about all the aspects of Bifrost in this post, as we're constantly on the move and have incorporated quite a lot in just the last six months.

With that being said, you're probably already thinking: geez, that must be a bloated framework. No! It is not. The reason it's not bloated, in my opinion, is that the different APIs are really focused and not generalized to support every scenario out there. Bifrost is opinionated, and will remain so. It is not necessarily a one-size-fits-all. If you want to apply it, you will have to adjust to the philosophy behind it. This does not mean we're not open to suggestions, improvements and so forth – but it means we're keeping an eye on the road and we don't want it to blow up.

One aspect that was really important was to follow good development practices: creating highly flexible, maintainable and highly testable code to achieve the best possible quality.


The Evolution
Late 2010, when working for Komplett ASA, we revitalized the project as a joint venture between Komplett and my own company, DoLittle Studios, re-focusing some of the effort and changing some of the core principles applied to it. For one, we wanted it to be more focused around separation, and more specifically to implement and support CQRS as the preferred backend solution. We had already done an internal project using Commands to express behaviors in the system, but didn't do the entire CQRS stack at that point; rather, we had the commands chained up and replayed whenever we wanted to achieve a state, leaving events out of the equation. What was great about that is that we got a chance to dive into the concept and get our hands dirty without applying the entire stack – get some experience, basically.

CQRS
Command Query Responsibility Segregation, coined by Greg Young a few years back, was something we saw quite a lot of benefit in applying. We now see a bunch of different benefits from doing CQRS: the road leading up to it, what followed, and the product we ended up with forced us to learn so much. Everything became much clearer when it came to identifying the different concerns and responsibilities in the code. The basics of CQRS state that you keep your read side optimized for that purpose only, while execution is behavioral in nature and expresses, in a verbose and explicit way, what is to happen to the system; the read side is then flattened or specialized as a consequence of whatever behavior has been applied.

Once we had it applied we started realizing the power of the concepts. We started seeing that applications are not about data, but rather have a very rich domain that expresses the behavior – and the data might not even exist at all; it might be as crazy as statically generated HTML files for our Web views, or at least statically generated JSON files we could pull directly from a CDN into our JavaScript code. It basically provided us with scalability and flexibility, and fueled us as developers with a set of mindsets that are really powerful.

MVVM
Back when I originally started the project, I always kept an open eye on bridging the gap between the backend and the frontend. A pattern I've been loving for a few years now is Model View ViewModel. Modern Web applications are doing more and more on the client using JavaScript; combine that with the growing popularity of single-page web applications, and MVVM seems to be a perfect fit. Bifrost has been built on top of Knockout JS, extending and formalizing a few things.

Much much more..
There are quite a few more things related to Bifrost, but I'm not going to take on the task of going through it all in this post – you'll find it on the official site. The site is really a work in progress; some of the elements of Bifrost are documented, but most are not at this moment in time. Stay tuned and we'll get more documentation up and running. We're also focusing on API documentation that goes into detail.

Our conclusion

Although we jump-started the framework again wanting to focus on the CQRS parts, we quickly realized that Bifrost was not just a CQRS library; it was something else. Its place in life is to facilitate any line-of-business application development. We see great opportunities to simplify a lot of the everyday developer's life, and this is also something we would love to hear from you about. Don't hesitate to engage in a conversation with us at our GitHub site or our Google Forum.

 

