Bifrost roadmap first half 2017 (new name)

This is the first post of its kind, which is funny enough seeing that Bifrost has been in development since late 2008. Recently development has moved forward quite a bit and I figured it was time to jot down what’s cooking and what the plan is for the next six months – or perhaps longer.

First of all, it’s hard to commit to any real dates – so the roadmap is more a “this is the order in which we’re going to develop”, rather than a budget of time.

We’ve also set up a community standup that we do every so often – not on a fixed schedule, but rather when we feel we have something new to talk about. You can find it here.

1.1.3

One of the things we never really had the need for was to scale Bifrost out. This release focuses on bringing that back. At the beginning of the project we had a naïve way of scaling out – basically supporting a two-node scale-out, with no consideration for partitioning or actually checking whether events had been processed or not. With this release we’re revisiting this whole thing and at the same time setting up for success moving forward.

One of the legacies we’ve been dragging along is that all events were identified by their CLR types, and maintaining the different event processors was linked to this – making it fragile if one were to move things around. This is being fixed by identifying the application structure rather than the CLR structure in which the events exist. This will become convention based and configurable. With this we will enable RabbitMQ as the first supported scale-out mechanism. The first implementation will not include all the partitioning, but will enable us to move forward and get that in place quite easily. It will also set up for a more successful way of storing events in an event store. All of this is in the middle of development right now.

In addition there are minor details related to the build pipeline and automating everything. It’s a sound investment getting all versioning and build details automated. This is also related to the automatic building and deployment of documentation, which is crucial for the future of the project. We’ll also get an Azure Table Storage event store in place for this release, which should be fairly straightforward.

1.1.4

Code quality has been set as the focus for this release: re-enabling things like NDepend and static code analysis.

1.1.5

The theme of this version is to get the Web frontend sorted. Bifrost has a “legacy” ES5 implementation of all its JavaScript. In addition it is very coupled to Knockout, making it hard to use things like Angular, Aurelia or React. The purpose of this release is to decouple the things that Bifrost brings to the table: proxy generation and frontend helpers such as regions, operations and more. We will also start modernizing the code to ES2015 and newer using BabelJS, and move away from Forseti, our proprietary JavaScript test runner, to more commonly used runners.

In-between minor releases

From this point to the next major version, it is a bit fuzzy. In fact, we might prioritize pushing the 2.0.0 version rather than doing anything in between. We’ve defined versions 1.2.0 and 1.3.0 with issues we want to deal with, but might decide to move these to 2.0.0 instead. The sooner we get to 2.0.0, the better in many ways.

2.0.0

Version 2.0.0 is, as indicated, about breaking changes. The first major breaking change: a new name. The project will transition to being called Dolittle, matching the GitHub organization we already have for it. Besides this, the biggest breaking change is that it will be broken up into a bunch of smaller projects – all separated and decoupled. We will try to version them independently – meaning they will take on a life of their own. Of course, this is a very different strategy than before – so it might not be a good idea and we might need to change it. But for now, that’s the idea, and we might keep major releases in sync.

The brand Dolittle is something I’ve had since 1997, and I own domains such as dolittle.com, dolittle.io and more related to it. These will be activated and become the landing page for the project.

Concepts and more

With Bifrost we’re aligning ourselves more and more with being a platform for doing Domain Driven Design, introducing more and more artefacts from the building blocks as we go along. When we set out to build Bifrost, we decided early on not to build anything into it that we didn’t need in a real-world scenario. This was after we had started falling into the “what if” pattern of software development: we started imagining problems and dealt with them way before they had actually happened. At the risk of generalising, this is fairly common amongst seasoned tech people; it stems from experience, knowing that there will always be something that can go wrong. Sure, there always is. I digress – I think this could probably be a blog post on its own. The point being, we were heading down this path and for some reason got jolted back to reality: we started focusing on implementing only the things we needed, and actually went back and removed things that came out of the “what if” game. On this new path we also wanted to stay focused on implementing things that were aligned with DDD and keep a close eye on the user.

Concepts

With the philosophy of CQRS at heart, built with SOLID care, we keep a very close eye on being very specific in our modelling. Things that are used in one part of the system are not automatically reused somewhere else just for the sake of DRYness. We don’t believe in DRYing up properties, and we favor composition over inheritance. Logic is still kept in only one place: on the command side of the fence. With all these principles at hand, we were missing something that would link it all back together and make things look and feel consistent.

Let’s look at a scenario; say I want to update the address of a person. A command could be something like the following:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

In Bifrost you’d then have a CommandHandler to deal with this and then an AggregateRoot that would probably look like the following:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid personId) : base(personId) {}
   public void UpdateAddress(string street, string city, string postalCode, string country)
   {
      // Apply an event
   }
}

The aggregate would then apply an event that looks like the following:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

An event subscriber would pick this up and update a read model that might look like the following:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

Those are the artefacts we would typically be dealing with: command, aggregate root, event and read model. For simplicity, these look pretty much the same – but they don’t have to, and in fact, most of the time they don’t. Let’s address something here. We’re losing out on potential in the modelling. Take the Guid representing the unique identifier for the person: this is in fact part of the domain vocabulary, and we lose it by just making it a Guid directly.

In Bifrost we have something called ConceptAs that we can use to represent this domain concept. This is a base class that we recognize throughout the system and deal with properly during serialisation between the different out-of-process places it might go.

using System;
using Bifrost.Concepts;

public class Person : ConceptAs<Guid>
{
   public static implicit operator Person(Guid personId)
   {
      return new Person() { Value = personId };
   }
}

What this does is wrap up the primitive, giving us a type that represents the domain concept. One modelling technique we applied when doing this was to stop referring to it as an id; we started calling it the noun it represents. For us, this actually became the abstract noun: it doesn’t hold any properties for what it represents, only the notion of it. But code-wise, this looks very good and readable.

In the ConceptAs base class we have an implicit operator that is capable of converting from the new type to the primitive. Unfortunately C# does not allow the implicit operator going the other way to live in the base class, so that one has to be explicitly implemented, as above. With these operators we can move back and forth between the primitive and the concept. This comes in very handy when dealing with events. We decided to drop the concepts in the events. The reason for this is that versioning becomes very hard when changing a concept, something you could decide to do. It could also make serialization more complex than you’d hope for with some serializers. Our conclusion is that we keep the events very simple and use primitives, but everywhere else the concept is used.
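
In practice, that gives conversions like these – a small usage sketch:

var personId = Guid.NewGuid();
Person person = personId;   // the implicit operator declared on the concept itself
Guid primitive = person;    // the implicit operator inherited from ConceptAs<Guid>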

The way we structure our source, we basically have a domain project with our commands, command handlers and aggregates. Then we have a project for our read side, and in between these two sits a project holding the domain events. With this model we don’t get a coupling between the domain and the read side, which is one of our primary objectives. The concepts, on the other hand, are going to be reused between the two. We therefore always have a concepts project where we keep our concepts.

Our typical project structure:

[Screenshot: typical solution structure with Concepts, Domain, Events and Read projects]

So, now that we have our first concept, what did it do? It replaced the Guid reference throughout, introducing some clarity in our models. But the benefit we stumbled upon with this: we now have something to hang cross-cutting concerns on. With the type of pipelines we have in Bifrost, we can now start doing things based on the type being used in the different artefacts. Take the command for instance: we can now introduce input validation or business rules for it that are applied automatically whenever it is used. Our support for FluentValidation has a BusinessValidator type that can be used for this:

using Bifrost.FluentValidation;
using FluentValidation;

public class PersonBusinessValidator : BusinessValidator<Person>
{
   public PersonBusinessValidator()
   {
      RuleFor(p => p.Value)
         .Must(… a method/lambda for checking if a person exists …)
         .WithMessage("The person does not exist");
   }
}

As long as you don’t create a specific business validator for the command, this would be automatically picked up. But if you were to create a specific validator for the command you could point it to this validator as a rule for the person property.
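
As a sketch of that last point – assuming the command holds the Person concept, as it will later in this post – a command-specific validator could compose the concept’s validator using FluentValidation’s SetValidator:

using Bifrost.FluentValidation;
using FluentValidation;

public class UpdateAddressForPersonBusinessValidator : BusinessValidator<UpdateAddressForPerson>
{
   public UpdateAddressForPersonBusinessValidator()
   {
      // reuse the concept's validator as a rule for the Person property
      RuleFor(c => c.Person).SetValidator(new PersonBusinessValidator());
   }
}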

The exact same thing can also be used for an input validator, which would then generate the proper metadata for the client and execute the validator on the client before the server.

It opens up for other cross-cutting concerns as well, security for instance.

Value Objects

A second type of object, with the same importance in expressing the domain and the same opening for solving things in a cross-cutting manner, is the value object. This is a type of object that actually holds information: attributes that have value. They are useless on their own, but often used in the domain and also on the read side. Their uniqueness is based on all the fields in them. We find these in any domain all the time; they are typically things like money, phone numbers or, in our case, an address. Those are just the off-the-top-of-my-head value objects, but you’ll find them in many forms. Let’s tackle address:

using System;
using Bifrost.Concepts;

public class Address : Value
{
   public string Street { get; set; }
   public string City { get; set; }
   public string Postal { get; set; }
   public string Country { get; set; }
}

The Value base class implements IEquatable and deals with the property comparisons for uniqueness.
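
Meaning equality is by value – a small usage sketch:

var home = new Address { Street = "Mainstreet 1", City = "Oslo", Postal = "0001", Country = "Norway" };
var copy = new Address { Street = "Mainstreet 1", City = "Oslo", Postal = "0001", Country = "Norway" };
var same = home.Equals(copy); // true – every property is compared, not the references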

With the value object you get the same opportunities as with the concept for input and business validation, and yet another opportunity for dealing with cross-cutting concerns.

If we summarize the sample before with these new building blocks, we would get:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Our event:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

As you can see, we keep it as it was, with the properties all in the event.

Our AggregateRoot:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid person) : base(person) {}

   public void UpdateAddress(Address address)
   {
      Apply(new AddressUpdatedForPerson {
         PersonId = Id,
         Street = address.Street,
         City = address.City,
         PostalCode = address.Postal,
         Country = address.Country
      });
   }
}

The readmodel then would be:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Conclusion

For someone more familiar with traditional N-tier architecture, modelling your EDM rather than separating things out like this, this probably raises a few eyebrows and questions. I can totally relate to it; before starting the Bifrost journey I would have done exactly the same thing. It seems like a lot of artefacts hanging around here, but every one of them serves a specific purpose and is really focused. Our experience with this is that we model things more explicitly and reflect what we want in our model much better. Besides, you stop having things in your domain that can be ambiguous, which speaks to the primary objective of DDD. DDD is all about the modelling and how we reach a ubiquitous language – a language that represents the domain, a language we all speak. From this perspective we’ve found domain concepts, and value objects to go along with them, to be very useful. With them in place as types, we found it very easy to go back and retrofit the cross-cutting concerns we wanted in our solution without having to change any business logic. When you look at what’s involved in doing it, it’s just worth it. The few lines of code representing it will pay back tenfold in clarity and opportunities.

Singling things out…

I was at a client a while back and was given the task of assessing their code and architecture and providing a report of whatever findings I had. The first thing that jumped out was that things did more than one thing; the second was things concerning themselves with things they should not concern themselves with. When reading my report, the client recognised what it said and wanted me to join in and show how to rectify things; in good spirit, I accepted the challenge.

We approached one specific class and I remember breaking it down into what its responsibilities were, fleshing it all out into multiple classes with good names telling exactly what each was supposed to do. When we were done after a couple of hours, the developer I sat with was very surprised and said something to the effect of “.. so, single responsibility really means only one thing..”. Yes, it actually does. A class / type should have the responsibility of doing only one thing; likewise a method within that class should only do one thing. Then you might be asking: does that mean classes with only one method in them? No, it means that the class should have one subject at a time, and each and every method should only do work related to that subject, with each method having the responsibility of solving one problem. Still, you might be wondering how to identify this.

Let’s start with a simple example.

Say you have a system where you have Employees, and each and every one of these has a WorkPosition related to it, enabling a person to have different positions with the same employer (Yes, it is a real business case. 🙂 ).

The employee could be something like this:

public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

A fairly simple class representing an aggregate root.

Then we go and add this other thing to our domain: a WorkPosition, a value type that refers to the aggregate:

public class WorkPosition
{
    public int Id { get; set; }
    public int EmployeeId { get; set; }
    public double PositionSize { get; set; }
}

One could argue these represent entities that you might want to get from a database, so you might want to call them entities and DRY up your code, since they both have an Id:

public class Entity
{
    public int Id { get; set; }
}

public class Employee : Entity
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class WorkPosition : Entity
{
    public int EmployeeId { get; set; }
    public double PositionSize { get; set; }
}

Nice, now we’ve dried it all up and the Id property can be reused.

The Id property is a classic pattern, but in domain driven design you would probably not use an integer as the Id, but rather a natural key that describes the aggregate better. For an Employee this could be the person’s social security number. This means that an integer won’t do; we need something that can tackle the needs of a social security number. For simplicity in this post I’ll stick to primitives and use a string; normally I would introduce a domain concept for this – a type representing the concept instead of using generic primitives – more on that in another post.

We want to introduce this, but we’d love to keep the Entity base class so we can stick common things into it, things like auditing maybe. But now we are changing the type of what identifies an Employee, and it’s not the same as for a WorkPosition; C# generics to the rescue:

public class Entity<T>
{
    public T Id { get; set; }
    public DateTime ModifiedAt { get; set; }
    public string ModifiedBy { get; set; }
}

public class Employee : Entity<string>
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class WorkPosition : Entity<int>
{
    public string EmployeeId { get; set; }
    public double PositionSize { get; set; }
}

Great, now we have a generic approach to it and get auditing all in one go.

We’ve just created a nightmare waiting to happen. We’ve made something generic and lost a lot from the domain, just to save typing one property: Id (yeah I know, there is some auditing there – I’ll get back to that soon). We’ve completely lost what the intent of the Employee’s identification really is, which was a social security number. At the very least the name of the property should reflect that; Id doesn’t say anything about what kind of identification it is. A better solution would be going back to the original code and just making it a little bit more explicit:

public class Employee
{
    public string SocialSecurityNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class WorkPosition
{
    public int Id { get; set; }
    public string SocialSecurityNumber { get; set; }
    public double PositionSize { get; set; }
}

That made it a lot more readable; we can see what is expected, and in fact we’ve also decoupled Employee and WorkPosition in one go – they weren’t coupled directly before, but the property named EmployeeId made a logical coupling between the two, which we might not need.

Another aspect of this would be bounded contexts: different representations of domain entities depending on the context they are in. In many cases an entity would even have different things that identify it, depending on the context you’re in. Then Id would be a really bad name for a property, and having a generic representation of it would just make the whole thing hard to read and understand. Normally what you would have is an underlying identifier that is shared amongst them, but you wouldn’t necessarily expose it in the different bounded contexts. Instead you would introduce a context map that maps from the concepts in the different bounded contexts to the underlying one.

Back to auditing – don’t we want to have that? Sure. But let’s think about that for a second. Auditing sounds like something you’d love to have for anything in a system, or in a particular bounded context; one could argue it’s cross-cutting. It is a concern of every entity, but it is probably not something you need to show all over the place; in fact I’ll put forward the statement that showing auditing information on any dialog is probably an edge case. So we probably don’t need it on the entities themselves, but rather make sure that the information gets updated correctly in the database. This could be something we do directly in the database with triggers or similar, or by making sure everything goes through a well-defined common data context that can append this information. Then, for the edge cases where you need the auditing information, model only that, plus an auditing service of some kind that can get that information for the entities you need.
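
As a sketch of that idea – the types here are hypothetical, not Bifrost artifacts – a common data context could stamp audit information centrally, keeping it off the entities themselves:

using System;

public interface IHaveAuditInfo
{
    DateTime ModifiedAt { get; set; }
    string ModifiedBy { get; set; }
}

public class AuditedDataContext
{
    readonly string _currentUser;

    public AuditedDataContext(string currentUser)
    {
        _currentUser = currentUser;
    }

    public void Save(object entity)
    {
        // stamp audit information in one well-defined place
        var audited = entity as IHaveAuditInfo;
        if (audited != null)
        {
            audited.ModifiedAt = DateTime.UtcNow;
            audited.ModifiedBy = _currentUser;
        }

        // ... persist the entity through the underlying storage
    }
}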

Fess up

Ok. So I have sinned. I’ve broken the Single Responsibility Principle myself many times, and I will guarantee you that I will break it in the future as well. In fact, let me show you code I wrote that I came across in Bifrost a few weeks back that got the hairs on my back rising, a type called EventStore:

public class EventStore : IEventStore
{
    public EventStore(IEventRepository eventRepository,
                      IEventStoreChangeManager eventStoreChangeManager,
                      IEventSubscriptionManager eventSubscriptionManager,
                      ILocalizer localizer)
    {
        // Non vital code for this sample...
    }

    public void Save(UncommittedEventStream eventsToSave)
    {
        using(_localizer.BeginScope())
        {
            _repository.Insert(eventsToSave);
            _eventSubscriptionManager.Process(eventsToSave);
            _eventStoreChangeManager.NotifyChanges(this, eventsToSave);
        }
    }
}

I’ve stripped away some parts that aren’t vital for this post, but the original class was some 60 lines. Looking at the little code above tells me a few things:

  • It’s doing more than one thing, without having a name reflecting that it should do that
  • The EventStore sounds like something that holds events, similar to a repository – but it deals with other things as well, so…
  • The API is wrong; it takes something that is uncommitted and it saves it – normally you’d expect a system to take something uncommitted and commit it

The solution to this particular problem was very simple. EventStore needed to do just what it promises in its name; all the other stuff is coordination, and by the looks of it, it is coordinating streams of uncommitted events. So I introduced just that: an UncommittedEventStreamCoordinator with a Commit() method on it. Its job is to coordinate the stuff we see above, and the EventStore can then take on the real responsibility of dealing with storing things. In fact, I realised that the EventRepository could go at this point; I had tried to solve it all in a generic manner, and specialised EventStores for the different databases / storage types we support would be a lot better and not a lot of work to actually do.
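
Roughly along these lines – a sketch only, with the coordinator’s interface shape assumed from the code above:

public interface IUncommittedEventStreamCoordinator
{
    void Commit(UncommittedEventStream eventStream);
}

public class UncommittedEventStreamCoordinator : IUncommittedEventStreamCoordinator
{
    readonly IEventStore _eventStore;
    readonly IEventSubscriptionManager _eventSubscriptionManager;
    readonly IEventStoreChangeManager _eventStoreChangeManager;

    public UncommittedEventStreamCoordinator(
        IEventStore eventStore,
        IEventSubscriptionManager eventSubscriptionManager,
        IEventStoreChangeManager eventStoreChangeManager)
    {
        _eventStore = eventStore;
        _eventSubscriptionManager = eventSubscriptionManager;
        _eventStoreChangeManager = eventStoreChangeManager;
    }

    public void Commit(UncommittedEventStream eventStream)
    {
        // the event store now only stores; the coordination lives here
        _eventStore.Save(eventStream);
        _eventSubscriptionManager.Process(eventStream);
        _eventStoreChangeManager.NotifyChanges(_eventStore, eventStream);
    }
}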

Another thing the refactoring gave us was the ability to turn off saving of events, but still get things published. By binding the IEventStore that we have to an implementation called NullEventStore, we don’t have to change any code, but it won’t save. We can also introduce the ability for the EventStore itself to be asynchronous; we can create something like an AsyncEventStore that just chains back to the original EventStore, but does so asynchronously. All in all it adds more flexibility and readability.
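
For illustration, the null implementation is about as small as it sounds – a sketch:

public class NullEventStore : IEventStore
{
    public void Save(UncommittedEventStream eventsToSave)
    {
        // intentionally empty – nothing is persisted, but subscribers
        // still get the events through the coordinator
    }
}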

Behaviors to the rescue

All good, you might think, but how do you really figure out when to separate things out? I’ll be the first to admit, it can be hard sometimes, and it’s not something one can necessarily identify within a heartbeat – sometimes it is a process getting to the result. But I would argue that thinking in behaviors makes it simpler, for the simple fact that you can’t do two behaviors at the same time. Thinking from a testing perspective and going for BDD-style testing with Gherkin (given, when, then) also gives this away more clearly: the second you write a specification that has an “and” in it, you’re doing two things.

Why care?

It took a while getting to the why part. It is important to have the right motivation for wanting to do this. The Single Responsibility Principle and Separation of Concerns are principles that have saved me time and time again. They have helped me decouple my code and get my applications cleaner. Responsibility separated out gives you a great opportunity to get to Lego pieces that you can stitch together and really get to a better DRY model. It also makes testing easier: testing interaction between systems, with more focused and simpler tests. Separating out responsibility also produces, in my opinion, simpler code in the systems, and simpler means easier to read. Sure, it leaves a trail of more files / types, but with proper naming and good structure it shouldn’t be a problem – rather a bonus. It is very different from procedural code; you can’t read a system start to finish – but I would argue you probably don’t want to do that, as it is in my opinion practically impossible to keep an entire system in your head these days. Instead one should practice focusing on smaller bits and make them specialised: great at doing one thing. It’s so much better being great at one thing than doing a half-good job at many things.

CQRS in ASP.net MVC with Bifrost

If you’re in the .net space and you’re doing web development, chances are you’re on the ASP.net stack and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.net MVC stack and quickly realised we needed to build something for the frontend part of the application to be able to facilitate the underlying backend built around the CQRS principles. This post will talk a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product, the price of course and whatnot, but also a simple button saying “Add to cart” – basically you want to add the product to your shopping cart.

[Diagram: the product page flow with its “Add to cart” button]

Sure enough, it is possible to solve this without anything special – you have your Model that represents the product with the details, price being a complex thing that we need to figure out depending on whether or not you have configured it to show VAT, and also whether you’re part of a price list – but something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that we can actually get populated properly with a simple ASP.net MVC form:

[Screenshot: a regular Html.BeginForm with hidden inputs for the command’s properties]

A regular MvcForm with hidden input elements for the properties on the command that are not visible, while any input from the user comes through regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.net MVC should be able to deserialize the FORM into a command.
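
The original screenshot is lost; a sketch of roughly what such a form looks like – Razor syntax, with the controller, action and view model properties assumed:

@using (Html.BeginForm("AddItemToCart", "Cart"))
{
    @Html.Hidden("ProductId", Model.ProductId)
    <input type="text" name="Quantity" value="1" />
    <button type="submit">Add to cart</button>
}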

Validation

Now here come the real issues with the above-mentioned approach: validation. Validation is tied into the model; you can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is identifying one model, but the things you really want to validate are the commands. You want to validate before commands are executed, basically because after they are handled and events are published, the truth has been written and it’s too late to validate anything coming back on any models. So, how can one fix this? You could come up with an elaborate ModelBinder model that basically modifies model state and what not, but that seemed very complicated – at least we thought so, of course after trying it out. We came up with something we call a CommandForm: basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, giving you a new model within the using scope along with all the MVC goodies in a limited scope, including the ability to do client-side validation.

So now you get the following:

[Screenshot: the same form expressed with Html.BeginCommandForm]

Now you get a form that contains a new HtmlHelper for the command type given in the first generic parameter, and within the form you’ll also find the Command, if you need to set values on it before you add a hidden field.

This now gives you a model context within a view that is isolated, and you can post that form alone without having to think about the model defined for the view, which really should be a read-only model anyway.
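
Again the screenshot is lost, but based on the description above it took roughly this shape – the member names are assumed from the prose:

@using (var form = Html.BeginCommandForm<AddItemToCart>())
{
    // the form exposes the command instance, so values can be set
    // before rendering them as hidden fields
    form.Command.ProductId = Model.ProductId;

    @form.Html.HiddenFor(c => c.ProductId)
    @form.Html.TextBoxFor(c => c.Quantity)
    <button type="submit">Add to cart</button>
}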

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.

Features

Another thing that we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, just parts of the overall composition that composed the larger scope. We defined a feature to contain all the artefacts that make up the feature: the view, the controller, any JavaScript, any CSS files, images, everything. We isolate them by having it all close together in a folder or namespace for the tier you’re working on. For the frontend we had a Features folder at the root of the ASP.net MVC site, and within it every feature sat in its own folder with its respective artefacts. Then moving down to the backend we reflected the structure in every component; for instance we had a component called Domain, and within it you’d find the same structure. This way all the developers would know exactly where to go and do work; it just makes things a lot simpler.

Anyways, in order to accomplish this, one needs to do a couple of things. The first thing you need to do is collapse the structure that the MVC templates create for your project, so that you don’t have the Controllers, Views and Models folders, but a Features folder with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder by permitting things like JavaScript files sitting alongside the view files, so you need to add the following within the <system.web> tag in your Web.config file:

[Screenshot: Web.config addition allowing static content to be served from the Features folder]

Then you need to relocate the view location formats for the view engines in ASP.net MVC:

[Screenshot: registering view location formats pointing into the Features folder]

(Can be found here)
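
In code, that relocation looks roughly like this – a sketch with the paths assumed, typically run at application start:

using System.Linq;
using System.Web.Mvc;

public static class FeatureViewLocations
{
    public static void Configure()
    {
        var razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().First();

        // resolve views from ~/Features/{controller}/{view} instead of ~/Views
        razorEngine.ViewLocationFormats = new[]
        {
            "~/Features/{1}/{0}.cshtml",
            "~/Features/Shared/{0}.cshtml"
        };
        razorEngine.PartialViewLocationFormats = razorEngine.ViewLocationFormats;
        razorEngine.MasterLocationFormats = razorEngine.ViewLocationFormats;
    }
}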

It will then find all your views in the Features folder, and you should now have a new structure. The only drawback, if you see it as one, is that tooling like Visual Studio’s built-in “Add View” in the context menus and such stops functioning, but I would argue that the developer productivity gained through a proper structure means you really don’t miss it that much. I guess you can get it back somehow with tools like ReSharper, but personally I didn’t bother.

Conclusion

ASP.net MVC provides a lot of goodness when it comes to doing things with great separation in the Web space for .net developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and make the code clean. Sure, it’s not perfect, but what is – it’s way better than anything we’d seen. This is something that we enjoyed quite a bit in our own little CQRS journey; we did try quite a few things, some of them worked out fine – like the CommandForm – and some didn’t. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did however reach at one point: ASP.net MVC, and Bifrost with its interpretation of CQRS, are a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.net MVC, which is a focused frontend pipeline. So any security, validation and more is something that we started building into Bifrost, and the need for ASP.net MVC became less and less important. When we started down the journey of creating Single Page Applications with HTML and JavaScript as the only things you need, you really don’t need it at all. The connection between the client and server would then be Web requests and JSON, and you need something like WebApi or similar; in fact we created our own simple thing in Bifrost to accommodate even that. But all this is for another post.

The MVC part of Bifrost can be found here, Bifrost’s official page is under construction here, and the source is here.

CQRS : The awakening

Ever have that feeling when all of a sudden you completely get something, like an awakening? You might have been doing something you thought you were doing right, but all of a sudden you get this eureka moment that not only tells you that you’ve been doing things wrong, but also raises your awareness to a whole new level. This is what we experienced on the first project where we applied CQRS. In fact we had quite a few of these moments, and this post is about trying to describe what we went through.

The background

Before we can start, I think it is important to give a bit of background on what we were used to doing. Like most people in the development community, we tried following the right people on Twitter, read blogs, read books and in general just tried to keep up the pace and learn new things. Some of us had over the years really fallen in love with the concepts of TDD, BDD, clean code, agile, domain driven design, the S.O.L.I.D principles and what not. We tried to write our software according to the principles we loved, and we were pretty convinced we were doing a great job at it as well. Even though we felt at times there was something wrong with a few things, we carried on.

The way we used to do things was to basically have a relational database sitting at the core, with pretty much just tables and relations and the odd view every now and then. On top of that we would model objects that represented those tables and relationships, add more business logic to these objects, and then slam a stamp on them and call them our domain model. This domain model would only have one representation of every type, so that we didn’t have to repeat ourselves. From this we modeled repositories that enabled us to get the domain objects, change them, update them and every now and then delete them; good old CRUD operations. We would even throw in some services to explicitly model functionality that was outside the responsibility of the repository, capturing business logic that was more involved. From my experience working on different projects and what I’ve seen doing consultancy, this is not way off what people tend to do. Of course there is more to it: you have your factories, your managers and what not, and you might even have different representations of your domain model for the view, where you need to map back and forth between the different layers. Sure. More involved, but you get the picture.

Relationships

Often you find yourself writing these relationships between things, and it is just so logical that there is a certain relation between two entities – they belong together in a symbiosis somehow. Take for instance a shopping cart: it is just sitting there screaming for content in the form of some items, there is no other way around it. We just get a craving for a model similar to this:

[Diagram: a Cart holding a collection of CartItems]

Looks innocent enough and really makes sense. You might even throw in some read-only properties that give you convenience for counting the number of items, summing up all the items and so forth. And then you realize: hang on – that CartItem is not referring to any product.

[Diagram: CartItem extended with a reference to Product]

So now we have first a One-To-Many relationship, and then a One-To-One relationship. Whenever we want to just check how many items we have in our cart, we have to pull in this entire thing. Wait, you might be saying now: with a modern ORM we can specify how it is supposed to fetch things, on demand – typically what is referred to as lazy loading. Sure, we could do that for Product, and probably also for the CartItem, but the second you ask for something like quantity it would still need to fetch all the CartItems. This adds quite a bit of complexity to your solution. All of a sudden you have a situation where your application is governed by how the objects are configured for the ORM and how they should be fetched. Potentially you could end up with 3 round-trips to the database if the model is used in a part of your Cart visualization that shows the product names: once for getting the Cart from the database, once for getting the cart items it is enumerating, and once for getting the products. What happens if we throw price into the mix? The product can’t have a price directly, because there might be discounts that are relevant for the customer, so yet another object just to display something that seems very simple and straightforward.

Now you might be saying: why don’t we just flatten this all out in a database view? My response to that: exactly. That is exactly what we should be doing, but not in the database. This is at the core of the benefits CQRS can give you: the ability to flatten things out – especially if you buy into the EventSourcing story, storing state as a series of fine-grained events. Sure, you don’t need EventSourcing, but it allows for greater flexibility when you want to fully take advantage of the ability to flatten things out after something has already happened. More on this in a bit.

So, views and flattening things out. That was our first moment of clarity. We really didn’t need any relationships; in fact we didn’t even need a Cart object. We needed something called CartSummary – a very different beast, and so much simpler in its nature.

[Diagram: the flattened CartSummary object]

So what happened here?

Instead of having to go to the database and get all those details every time we wanted the summary, as described earlier, we had this simple object. Whenever the user performed an action that added something to the cart, or removed something from the cart, we would have two subscribers for these events. One subscriber would be in charge of updating the underlying CartItem, and another would be in charge of updating the CartSummary by selecting into the CartItems and setting the values accordingly. Instead of taking the hit every time you need the detail, we now take the hit only when actions are performed. My claim is that most applications out there have a ratio between reads and executes that is probably around 9 to 1, so why haven’t we been optimizing for this scenario before?
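
A sketch of what this looked like conceptually – all names are assumed, and an in-memory dictionary stands in for real storage:

using System;
using System.Collections.Generic;

// the flattened read model – cheap to fetch by key
public class CartSummary
{
    public Guid CartId { get; set; }
    public int NumberOfItems { get; set; }
    public decimal Total { get; set; }
}

// one of the events being reacted to (properties assumed)
public class ItemAddedToCart
{
    public Guid CartId { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

// the second of the two subscribers, keeping the summary up to date;
// wiring into the event infrastructure is omitted
public class CartSummaryUpdater
{
    readonly Dictionary<Guid, CartSummary> _summaries = new Dictionary<Guid, CartSummary>();

    public void Handle(ItemAddedToCart @event)
    {
        CartSummary summary;
        if (!_summaries.TryGetValue(@event.CartId, out summary))
            _summaries[@event.CartId] = summary = new CartSummary { CartId = @event.CartId };

        summary.NumberOfItems += @event.Quantity;
        summary.Total += @event.Price * @event.Quantity;
    }
}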

Anyways, clearly the objects became simpler and more focused – all of a sudden our objects were doing one thing, and one thing only. We were empowered with the ability to really apply the single responsibility principle to our objects.

I mentioned EventSourcing: a technique for storing events sequentially as they happen so that we can replay them at any given point in time. This is a very powerful tool. Imagine the flattening we just did, and let’s say you want to introduce another object to the mix and want the historic state to affect that object. With event sourcing it is just a matter of replaying those events for the event subscribers dealing with the new object, and all of a sudden you have your new optimized state. An example of that, in the cart context, could be something like a realtime report showing all the carts historically aggregated; you could have objects representing only living carts – and more. The best part: these objects are lightweight, and you start seeing possibilities for storing them in very different ways.

Concepts

One of the things we also started realizing was that, as a consequence, we had a lot of IDs hanging around, either as integers representing the identity or as GUIDs. Neither our Commands nor our Events were reflecting their properties very well: a standard value type sitting there didn’t say what it was, it was basically just an integer or a GUID. So we started modeling these things that we ended up calling Concepts. These were cross-cutting and something we kept in its own project that could be reused all through; it turned out to be the one thing that was shared between things. The types we introduced were basic reusable things, like StoreId, representing a Store, but also more complex things like a PhoneNumber instead of holding a string. EMailAddress is another example of just that, and all of a sudden it was so much easier to model all aspects of the system, and we got ourselves a vocabulary – one of the key elements of domain driven design. You couldn’t go wrong. Another benefit we got from this, purely technically, was that validation was all of a sudden falling into place as the cross-cutting concern it is. Alongside some of these concepts we put validators; we used FluentValidation for this, as it let us put validation outside of the thing we were validating, giving us great flexibility in how we applied validation – a whole concept for a different post that I’m hoping to get to: discovery mechanisms based on configurable conventions and hierarchies. Anyhow, with the concepts and validation of concepts, we got DRY in what we consider a very good way.

LINQ madness

Once you learn something and you really like it, it is hard to change how you do things. This was something we ran into quite a bit, but one incident that comes to mind was specifically tied to LINQ. We had our Cart and CartItems and just wanted to model the CartSummary, but wanted to solve it through querying. Basically we wanted to be able to specify for an object like that how it could aggregate things, totally disregarding that our platform could at this point flatten things out when they changed, without having to aggregate every time. We had a full day of bending things around LINQ; we wanted to add some business rules to some of the objects we used in the view to change the outcome. To do this, we wanted to specify things using the specification pattern and turn it into LINQ expressions that we could execute against the datasource. Sure, it might have seemed like a good idea if you didn’t have CQRS in mind. But we did have CQRS, and we wanted to be pure. At this point I remember having a feeling that we were just creating a really fancy ORM – not really doing anything differently, rather just massively overkilling our existing ways of doing things. That day we discovered that it is not really about querying – the Q in CQRS is not about querying. We went back and changed our code so that we flattened things when events were raised. We could then just access everything by keys, and that would be it. And this concept can go all the way for just about everything: as long as you have a key, you can go get things. This realization led to us not calling things queries – we changed the name to view, lacking a better one. But at least it was closer to what things were; we weren’t going to execute complex queries, but get things by a key.

GUID is the key 

In order for CQRS to be applied, you need to start thinking very differently about how your objects get their identification assigned. In our opinion it’s all about being able to generate the Id already from the client. By doing that, things get so much easier. All of a sudden the client knows the identity of the thing it is creating, meaning that it can already at that point – even though the thing might not be ready due to eventual consistency – use the Id for other purposes related to it. Basically, we can start making a few assumptions in the client and create better user experiences. We used a GUID throughout the system for everything that was used to identify items.

Data, what data…

One of the things we’re taught when working with more classic approaches, like the one explained earlier, is to be very data focused. We model the data and then we create, update and delete it. We’re taught to work with state. But sit down with a user and ask them to describe their needs in the application you’re building for them: unless they are a power user tainted by developers, they will probably state their needs as behaviors. How often have you heard “I need a button here, so that…”? And we immediately stop paying attention, because we’re so annoyed that the user couldn’t speak in more abstract terms than concretely about the button, and we only pick up the stuff that comes after “so that”, which is the state bit. Creating software is not about building state; it’s about exposing behaviors to the user that might end up as state. This is a very core principle in CQRS: Commands. You model your commands to expose the behavior that is needed, and you also capture the intent. With a command you represent the What and potentially the Why – you put it in the name of the Command: ChangeAddressBecauseOfMove – now we have the What and the Why.
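
To make that concrete, a sketch of such a command – the properties are assumed for illustration:

using System;

// the name captures both the What (change address) and the Why (a move)
public class ChangeAddressBecauseOfMove
{
    public Guid PersonId { get; set; }
    public string NewStreet { get; set; }
    public string NewCity { get; set; }
    public DateTime MovingDate { get; set; }
}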

This is something we realized more and more throughout the project – it really is all about the behaviors. Once we got that, it was so much easier picking up what the users wanted from the system. All of a sudden the button that the user wanted is not that problematic a way of expressing needs, especially when you start doing things top-down and can actually work with the user on prototypes, drawings and similar – things that they relate to. A user does not relate to a SQL table or relationship; they relate to the things they do – like clicking a button.

From this realization came a bunch of other eurekas throughout the project. The biggest and most important was that we should step away from the idea that state can only be held in one place. Let’s face it, a SQL server is good at many things but not everything; just like any technology, nothing is great at everything. We started realizing that we could persist our state in whatever form was most suitable for its use. For one particular feature we put the data as a JSON object model directly onto a CDN and let the client download it when the feature was loaded. With GZIP compression on the server, that request turned out to average 40KB, totally taking the load away from the Web server and the SQL server for something that would have been a pretty complex query. Instead we generated the JSON file whenever the underlying data changed, and it did not change very often – so browser caching would also be a big benefit for returning customers.

We started realizing that we could pretty much go any way with this. A lot of the time, things are close to static – they hardly change. Why are we then dynamically generating things like Web pages? Why aren’t we just generating them when things change, putting them on a CDN or on the file system of the Web server that hosts the solution, and just sending them directly back to the client? You might be saying at this point that we have things like caching of web pages for this, but I would argue: why would you add that complexity to your solution, and why would you add that pressure onto your hardware in terms of the extra memory needed for all the servers in the farm to process this? All that for something that is solved far more easily: just render it when things change.

With all this, we stopped thinking about data – we focused on the behavior. The data – or state, as I would rather call it – could then be represented in any way; in fact, how you represent it might even change over time.

The bounded contexts

One of the things that Eric Evans mentions in his book on DDD is the concept of bounded contexts, a concept that is in a bit of conflict with the idea of DRYing things up and the whole idea of having one model to rule them all. The idea is to have context-aware representations of the concepts found in your domain. Take for instance a Product in an e-commerce setting. Someone in charge of acquiring products for the store, to have them available in the shop, is interested in certain aspects of the product: who the vendor is, who is importing that particular product, and of course the price the store can buy it for – so you can figure out what your margins on the product can potentially be. Once the product hits the warehouse, none of those properties are important – the warehouse only cares about how the product can be stored; they don’t even care about the brand. Things like dimensions and weight are key information to the warehouse. Once the user sees the product in the web-shop, they are interested in the retail price, the description, other customers’ feedback, reviews and a whole lot more. Why model this as one product? Chances are that the people in your organization might not even be referring to it as a product, at least not in all departments.

This was a huge realization for us; the minute we discovered this truth, we stopped completely with thinking in any relational manner. Sure, we did have an existing database to take care of, because the project was greenfield built on top of existing data that was shared between the old and the new. But our mindset changed. We started modeling things with the lingo of each context, and stopped trying to apply the same model across the board – a sort of compromise model that would suit all needs.

Features

Before Greg Young and Udi Dahan agreed upon the concept of business components, we were well on our way with what we called a feature – but it serves the same purpose. The idea is to isolate features within a bounded context; a feature can not be shared between bounded contexts. It was also a very important realization that gave us a recognizable structure throughout the application and the different components it was built on. Naming was consistent, and you isolated things within the feature itself – so no Interfaces namespace that turns out to be a dumping ground for all the interfaces in the project, or something like that. In all the tiers you would find the feature’s name, and all artifacts related to the feature in that tier would be inside that namespace / folder. That way we kept related things close, it was more maintainable, and even bringing new people into the project became a lot easier. For the ASP.net MVC part, the front-end tier, we even went ahead and relocated where views and controllers are normally found; we put them together – in fact we put everything related to a feature inside the same folder, be it controllers, views, JavaScript files, images and so forth. It was just a matter of configuring things correctly, and we could maintain a consistency throughout the project that would prove to be of great value.

Conclusion

As with any software project, this is how we experienced things – these are our conclusions, our interpretations of patterns and ways of doing things. We cannot go out and say that our way is the best, but it was the closest thing to best for our team. The conclusions we reached and the realizations we had made our work so much easier. For those who were part of these realizations and conclusions, it gave a push in the general direction of writing better code. We feel that we are now empowered with the knowledge to not compromise on code quality and still be able to deliver on time, something we did again and again on the project. All the deadlines we had were met, partly because of the platform we built, but also because of the way we worked in general. The more our mindset changed, the faster we were able to deliver, and we were focusing on the right stuff: the business value. It gave us a way to speak to users and really translate it into what they were looking for – not the data, but the behaviors.

CQRS applied : a summary

Every now and then in a software career you get a chance to write something from scratch and try out new things; a proper greenfield project. I’ve had that luck a couple of times, most recently on a project that proved to be a complete game-changer for me personally. A game-changer in the sense that I gained knowledge that I am pretty sure I will treasure, if not for the rest of my career, at least for quite a few years moving forward. The knowledge I am talking about can be linked back to applying CQRS, but it is not CQRS in itself that is the knowledge; it’s the concepts that tag along with it and the gained understanding of how one can write code that is maintainable in the long run. It’s also about the things we discovered during the project, smart ways to work, smart code we wrote – techniques we applied in order to meet requirements, add the needed business value, and at the same time deliver on time with more than was asked for.

This is a more in-depth post than the talk I did @ NDC 2011

… from the top …

For the last couple of years, till March this year, I had the pleasure of being hired by Komplett Group, the largest e-commerce company in Norway. At first I was assigned tasks maintaining the existing solution and was part of the on-premise team doing just that. As a consultant, that is very often what you find yourself doing – unless you’re hired in for a particular role, like I’ve been in the past: system architect. I helped establish some basic architectural principles at that time, applying a few principles like IoC and other parts of our favorite acronym, S.O.L.I.D. I remember feeling a bit in awe of just being there; they had a solution that could pretty much take on any number of clients and still be snappy, and they never went down. I’ve learned to respect systems like that, even though it requires a lot of work – not necessarily development work, but a lot of the time IT or DevOps work to keep the systems alive.

Anyhow, after a few months, back in 2009, I was asked by the department manager if I wanted to lead a small team on a particular project, an administration tool for editing order details. With my background as a team lead and also as a department manager, I kind of missed that role a bit and jumped at it. It was to be a stand-alone tool, accessible from the other tools they had, but we were given pretty much carte blanche when it came to how we did it – whatever technology within the .net space we wanted. We settled on ASP.net MVC, Silverlight for some parts, WCF for exposing services for the Silverlight parts, and nHibernate at the heart as the ORM for our domain.

Part of the project was also to try out Scrum. Having had quite a bit of experience with everything ranging from eXtreme Programming to MSF Agile and later Scrum, that excited me as well, so we applied that too.

Half-way through the project we started having problems. Our domain was the one thing we shared with the others, and we started running into nightmare after nightmare because we worked under the one-model-to-rule-them-all idea – which is really hard to actually get to work properly, and looking back I realize that most projects I’ve been on have suffered from this. We ran into issues where, for our purpose, we needed some *-to-many relationships to be eagerly fetched, which had consequences we could not anticipate in other systems that were using the same model. But we managed to come up with compromises both systems could live with – still, we weren’t seeing the eureka, just brushing up against the problems that a lot of projects meet without seeing that the approach was wrong.

A bit after this we brushed up against something that really got us excited: Commands. We didn’t really know about CQRS at this point; the concept of modeling behavior through commands came more from working with Silverlight and WPF. The reason we needed these commands was that we needed to perform actions on objects over a long period of time, potentially days, and at the end commit the changes. We came up with something we called a CommandChain – a chain of commands that we appended to and persisted. Commands represented behavior and, when executed, for the most part modified state on entities. We built a tool where we could debug these chains and inspect which Command was causing problems and which were not.


All in all, we were quite pleased with the project; we had done a lot of new things, applied TDD in a behavioral style, and started exploring corners of a universe whose extent we had yet to realize. We delivered not too badly on time – not perfectly, but close enough.

The turning point

After yet another six months or so, there were initial talks about the need to expose functionality from the web-shop to other systems used internally; a few design meetings and meetings with management led to a new project. The scope turned out to be not only exposing some services, but also a new web-shop frontend targeted and optimized for smartphones. The project was initiated from a technical perspective, not from a specific business need. From that technical perspective, the existing codebase had reached a point where it was hard to maintain, and something new needed to replace it to regain velocity and control over the software. It was to be a complete greenfield project: throw things overboard, basically work with the existing database, but add enough flexibility that even that could be thrown out the door if one ever wanted to. Early on I was vocal about them needing an architect to be able to deliver this project, and I pointed in a couple of directions to internal resources they had – but people pointed back at me, and I soon found myself as the system architect for the project.

Requirements

When dealing with e-commerce at this level, there are quite a few challenges. Let’s look at a few numbers: at the time I got off the project, the product catalog held about 13,000 products, an order was shipped every 21 seconds – in 2011 that amounted to 1,454,776 orders – and there were ~30,000 live sessions at any given time. Sure, it’s not Amazon, but for our neck of the woods it is substantial. These numbers are of course averages; come busy times like Christmas, the load is far more concentrated and the pressure is really on.

Decisions, decisions, decisions…

Before we started production, back in November 2010, we needed to get a few things straight: architecture, core values for the project, process, and getting everyone on board with the decisions. We decided early on that we were going to learn all about CQRS, as it seemed to fit nicely with the requirements – especially for performance – and we also required of ourselves a rich domain model that really expressed all aspects of the system. We decided to drive it all out applying BDD, and to move the project forward using Scrum, being true to the process rather than making our own version of it. A dedicated product owner was assigned to the project, with responsibility for the backlog: making sure we refined as needed, planned as needed, and executed on it.

Adding the business value

As I mentioned, this project came out of a technical need, not a concrete business need. We had the product owner role in place, and he needed to fill the backlog with concrete business value. This was not an easy task, basically because the organization as a whole probably didn’t see the need for the project. In their defense, they already had a perfectly fine solution – not entirely optimal for smaller screens like a smartphone, but manageable. The different store owners, who normally provided the needs for the backlog, were in desperate need of new features in the existing solution rather than this new thing targeting a platform they didn’t see much business value in. Combine that with the fact that the organization had been in migration mode – developer resources had, in periods, been tied down partly or close to full-time with migrating systems resulting from mergers and acquisitions – and the organization had gotten used to not getting things done anyway. All of this didn’t exactly create the most optimal environment for getting real business value into the project – something we really wanted. Early on we realized that the project could not be realized with user stories that were technical in nature. The first couple of months we did have quite a few technical user stories, and statistically these failed on estimation. We had no business value to relate them back to, and in many cases they ended up over-engineered and way out of proportion as we developers got creative and failed at doing our job: adding business value. So we came to a conclusion: no technical user stories allowed – ever. Something I still think was one of the wisest decisions we made on the project. It helped us refocus on why we were writing code every day: to add business value. Even though this project was spawned by the developers, there was clearly business value to guide us through. The approach became: let’s pretend we’re writing an e-commerce solution for the first time. This turned out to be a good decision; it helped us be naïve in our implementations, keeping in line with a core principle of agile processes: the simplest thing that could possibly work. Our product owner was then left with the challenge of dragging the business value out of the business. He did a great job at that, while at the same time getting the business to realize the change of platform that was in reality taking place. It became evident further down the line: we were in fact not building an e-commerce front-end for smartphones, but an entirely new platform. More on that later.

YES, we did create a framework

One of the realizations we had early on was that we needed to standardize quite a few things. If you’re going to do that many new things and have a half-way chance of getting everyone with you and feeling productive in the new environment, you need a basis that people can work with. Back in 2008 I started a project called Bifrost – you can read more here. We looked at it and decided it was a good starting point for what we wanted to achieve, and we wanted the framework to be open-sourced. The philosophy was to create a generic framework to serve as the infrastructure at the core of the application we were building. It would abstract away all the nitty-gritty details of the underlying infrastructure, but also promote CQRS and the practices we wanted. It was to be a framework that guided and assisted you, and very clearly stayed out of your way. I’m not going to go in-depth on the framework here, as there are more posts related to it specifically – both in the making and already out there.

CRUDing our way through CQRS

Well on our way, we had quite a few things we really couldn’t wrap our heads around. Coming from a very CRUD-centric world, the thought of decoupling things the way CQRS prescribes was really hard, and at the same time there was potential for duplication in the code. I remember being completely freaked out at the beginning of the project. All my neural cells were screaming “NO! STOP!” – but we had to move on, get smarter, get past the hurdles, and learn. At first we really made a mess of things, because we were building on assumptions – the assumption that CQRS is similar to regular old CRUD with what we used to know as a domain model. It is far from it, and at one point we had a true eureka: we had been working hard an entire day on how to represent some queries so that they would be optimal in the code but also execute optimally – and it hit us like a ton of bricks after leaving work that day. We were doing everything wrong, and we even came up with a mantra: “if a problem seems complicated, chances are we’re doing it wrong”. That was the turning point that helped us write code that was simpler, more testable, more focused and faster, and we picked up pace in the project like I’ve never experienced before.
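To illustrate the kind of simplicity we landed on, here is a rough sketch of the separation – with hypothetical names, not taken from the actual codebase: commands express intent and are handled by the domain, while queries read straight from a flat, denormalized read model with no domain model in between.

using System;
using System.Collections.Generic;

// Write side: a command expresses intent; its handler applies behavior
// to the domain – it never answers queries
public class AddItemToCart
{
    public Guid CartId { get; set; }
    public Guid ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddItemToCartHandler
{
    public void Handle(AddItemToCart command)
    {
        // Load the cart aggregate and apply the behavior here
    }
}

// Read side: a flat view shaped exactly like what the UI needs
public class CartItemView
{
    public Guid ProductId { get; set; }
    public string ProductName { get; set; }
    public int Quantity { get; set; }
}

public class CartQueries
{
    readonly Dictionary<Guid, List<CartItemView>> _readModel =
        new Dictionary<Guid, List<CartItemView>>();

    // In reality a trivial "SELECT … WHERE CartId = @cartId" against a
    // denormalized table – no joins, no object graph to hydrate
    public IEnumerable<CartItemView> ItemsInCart(Guid cartId)
    {
        List<CartItemView> items;
        return _readModel.TryGetValue(cartId, out items)
            ? items
            : (IEnumerable<CartItemView>)new List<CartItemView>();
    }
}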

From that point on, the mantra really proved to be our guiding star. Whenever we ran into things we didn’t have an answer to straight away and started finding advanced solutions to the challenges, we applied the mantra and went back to rethink things.

Tooling

Early in the project we realized we needed a tool both for visualizing the events being generated and for republishing events. We came up with a tool built in Silverlight, using the Pivot control from Microsoft for the visualization.

[Image: Mimir – the event visualization and republishing tool]
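Conceptually, the republishing part boils down to something like this – a sketch with assumed names, not the tool’s actual code: read the stored events back and push them through the event processors again, for instance to rebuild a read model after fixing a bug.

using System;
using System.Collections.Generic;

public class Event
{
    public Guid Id { get; set; }
    public DateTimeOffset Occurred { get; set; }
}

public interface IEventStore
{
    // All events committed since a given point in time
    IEnumerable<Event> GetEventsSince(DateTimeOffset offset);
}

public interface IEventProcessor
{
    void Process(Event @event);
}

public class EventRepublisher
{
    readonly IEventStore _eventStore;
    readonly IEnumerable<IEventProcessor> _processors;

    public EventRepublisher(IEventStore eventStore, IEnumerable<IEventProcessor> processors)
    {
        _eventStore = eventStore;
        _processors = processors;
    }

    // Replay every stored event through every processor
    public void RepublishSince(DateTimeOffset offset)
    {
        foreach (var @event in _eventStore.GetEventsSince(offset))
            foreach (var processor in _processors)
                processor.Process(@event);
    }
}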

The real benefits

Looking back at what we did and trying to pin down the concrete benefits, I must say we gained a serious amount of knowledge in a few areas. The thing CQRS specifically gave us was the ability to model our domain properly. We achieved the separation we wanted between the behavior of the application and the data those behaviors caused changes to, which gave us greater flexibility and easier maintenance. Since we decided not only to apply CQRS but also to build a reusable framework underneath, we established a pattern of working that made it really easy to get started with development, and a recognizable structure that made it easy to know where to put things once the core principles had been explained to you.

I think by far the biggest benefit we gained was the insight into how we should be developing software. Keeping things simple really has huge benefits. Decouple things, and stay true to single responsibility in every sense of the word “single”.

Another huge realization I had – something I have been saying throughout my career, but which really got reinforced on this project – is that concrete technology doesn’t really matter. Sure, things will end up as a certain concrete technology, but stop thinking concretely when designing the system. Get down to the actual business needs, model them, and let the concrete technology fall into place later. With this approach you gain another useful possibility: top-down development. Start with the user interface and move your way down. Keep the feedback loop with the business as tight as possible, and don’t do more than is needed. This approach is something I know I will miss the most in future projects. A tight feedback loop is where the gold is hidden.

Where did we screw up?

This project must come across as a fairly peachy story. And sure, it was by far, in my experience, the project with the best code-base, the most structured one, the one I personally learned the most from, and also the one project in my career where we really managed to stay on schedule – in fact, for a couple of the releases we delivered more business value than was asked for. But it came at a price. One of the things we struggled with early on was spreading the knowledge across the entire team and getting everyone excited about the architecture, the new way of working, and so forth. Personally, I didn’t realize how invested people were in the existing solution and in the existing way of doing things. I, as the architect, should have seen this before we got started. Not realizing it turned into a growing problem in the group: there was a divide between the people who bought into the entire story and those who didn’t, or didn’t quite get it. My theory is that we should have given the most invested members of the group time for mourning – time to bury their friend of many years: the old project. We should have realized at the beginning that we were in fact building for the future and would replace the existing solution, and this should have been the official line. Instead it kind of organically became the official line. We did do training in all the new techniques at the beginning, and gave people time to learn – we basically didn’t give them any tasks for a few weeks and just pointed them in the general direction of things to look at. What I think we failed at was making it clear that these things were not optional; the new ideas were in fact mandatory knowledge. We should have been much clearer and more vocal about the expectations. Another thing I would have done a bit differently: involve more people in the framework part of things. At the risk of stepping on toes, I think it is not wrong of me to say that I was the framework guy – for the most part, I ended up working on the framework. Don’t get me wrong, I love doing that kind of work, but I think the experience and the design decisions got lost in translation, and not everyone in the group understood why things were done the way they were.

Conclusion

The project and opportunity given to the team was awesome, and I really appreciate the trust placed in me to lead the way. The pace we kept and the stuff we did have so far amounted to the coolest project I’ve ever worked on – and I am happy to admit it: I miss the project. Had it not been for a great opportunity that came my way, I would have loved to stay on. We had ups and downs, as with any software project, but overall I am wildly impressed with our accomplishments as a team and with the end result.

Ohh… by the way: the end result can be found here.
