.net, C#, Code Tips

Tip: Improving async experience with Pulumi

Recently we've been working a lot with Pulumi for automating our cloud environments. We're building out our own management tool and creating Pulumi stack definitions in C#. One thing that quickly became a pain was working with the Inputs and Outputs and running into code that became way too nested, looking a lot like the old TPL or JavaScript Promises with .ContinueWith() or .then().

We’re building our stacks using the Pulumi function:

PulumiFn.Create(async () =>
{
    // Automate things...
});

Within the Action we set up the things we want to automate. One scenario we have is to create a configuration object that contains the connection string for a MongoDB cluster running in Atlas. The generated file is stored in an Azure file share we create with Pulumi.

// Storage is an object being passed along with information about the Azure storage being used.
var getFileShareResult = GetFileShare.Invoke(new()
{
    AccountName = storage.AccountName,
    ResourceGroupName = resourceGroupName,
    ShareName = storage.ShareName
});

// Cluster is an object holding the MongoDB cluster information.
var getClusterResult = GetCluster.Invoke(new()
{
    Name = cluster.Name,
    ProjectId = cluster.ProjectId
});

// Get the values we need to be able to write the connection string
getFileShareResult.Apply(fileShare =>
{
    getClusterResult.Apply(clusterInfo =>
    {
        // Write the file with the connection string
        return clusterInfo;
    });

    return fileShare;
});

In this sample we're just interested in two values, and still it's quite a few lines of code and nested scopes.

To improve on this, we ended up creating a couple of extension methods that help us write regular async/await based code.

public static class OutputExtensionMethods
{
    public static Task<T> GetValue<T>(this Output<T> output) => output.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Output<T> output, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        output.Apply(_ =>
        {
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

And for Input it would be the same:

public static class InputExtensionMethods
{
    public static Task<T> GetValue<T>(this Input<T> input) => input.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Input<T> input, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        input.Apply(_ =>
        {
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

With these we can now simplify the whole thing down to two lines of code:

// Get the values we need to be able to write the connection string
var fileShareResult = await getFileShareResult.GetValue();
var clusterInfo = await getClusterResult.GetValue();

// Write the file with the connection string
...
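
The resolver overload also comes in handy when you're only after a single property of the result. A small sketch – the property names here are illustrative and depend on the actual result types:

// Illustrative only - project single values out of the Outputs while awaiting them
var shareName = await getFileShareResult.GetValue(share => share.Name);
var clusterName = await getClusterResult.GetValue(c => c.Name);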

We’re interested to hear if there are better ways already with the Pulumi SDK or if we’re going about this in the wrong way. Please leave a comment with any input, much appreciated.

Standard
.net, C#, Code Tips

ASP.NET Core 6 – transparent WebSockets

Let's face it; I'm a framework builder – in the sense that I build stuff for other developers to use. The goal when doing so is that the developer using what's built should feel empowered by its capabilities. Developers should have lovable APIs that put them in the pit of success and let them focus on delivering business value. These are the thoughts that go into what we do at work when building reusable components. This post presents some of the reusable components we build.

TL;DR

All the things discussed are documented here. The backend implementation is here and the frontend here; a concrete backend example is here and its frontend counterpart here. I also recommend reading my post on our proxy generation tool for more context.

Introduction

WebSocket support for ASP.NET and ASP.NET Core has been around forever. At its core it is very simple, but at the same time crude and not as elegant or structured, IMO, as your average Controller. We started thinking about how we could simplify this. Sure, there is the SignalR approach – which is a viable option (and I've written a couple of books about it a few years back, here and here). But we wanted something that wouldn't change the programming model too much from a regular Controller.

One of the reasons we wanted to add some sparkling reactiveness into our software was that we are building software that is all focused on CQRS and Event Sourcing. With this we get into an eventual consistency game real quick for the read side. Once an action – or command in our case – is performed, the read side only updates as a consequence of an event being handled. Since we don’t really know when it is done and ready, we want to be able to notify the frontend with any changes as they become available.

Queries

One of the things we do is to encapsulate the result of a query in a well-known structure. Much like GraphQL, which doesn't rely on HTTP error codes alone as the means of communicating success or failure, we want to capture this in a well-known structure that holds the details of whether or not the query was successful; eventually we'll also put validation results, exception messages and such on it. Alongside this, the actual result of the query is kept on it as well. For now it looks like the following:

public record QueryResult(object Data, bool IsSuccess);

You’ll see this type used throughout this post.

Observables

We're very fond of the concept of observables and use Reactive Extensions throughout our solution for different purposes. Our first construct is therefore a special kind of observable we call the ClientObservable. It is the encapsulation we will be using from our Controllers. Its responsibility is to do the heavy lifting of handling the WebSocket "dance" and also expose a clean API for us to push data to it as things change. It also needs to deal with the client closing the connection and clean up after itself.

The basic implementation looks like below:

public class ClientObservable<T> : IClientObservable
{
    readonly ReplaySubject<T> _subject = new();

    public ClientObservable(Action? clientDisconnected = default)
    {
        ClientDisconnected = clientDisconnected;
    }

    public Action? ClientDisconnected { get; set; }

    public void OnNext(T next) => _subject.OnNext(next);

    public async Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions)
    {
        using var webSocket = await context.HttpContext.WebSockets.AcceptWebSocketAsync();
        var subscription = _subject.Subscribe(_ =>
        {
            var queryResult = new QueryResult(_!, true);
            var json = JsonSerializer.Serialize(queryResult, jsonOptions.JsonSerializerOptions);
            var message = Encoding.UTF8.GetBytes(json);

            webSocket.SendAsync(new ArraySegment<byte>(message, 0, message.Length), WebSocketMessageType.Text, true, CancellationToken.None);
        });

        var buffer = new byte[1024 * 4];
        var received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

        while (!received.CloseStatus.HasValue)
        {
            received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        }

        await webSocket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription, CancellationToken.None);
        subscription.Dispose();

        ClientDisconnected?.Invoke();
    }
}

Since the class is generic, there is a non-generic interface that specifies the functionality that will be used by the next building block.

public interface IClientObservable
{
    Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions);

    object GetAsynchronousEnumerator(CancellationToken cancellationToken = default);
}

Action Filters

Our design goal was that Controller actions could simply create and return ClientObservable instances, with some magic added to the mix so they get hooked up properly and automatically.

For this to happen we can leverage Filters in ASP.NET Core. They run within the invocation pipeline of ASP.NET and can wrap themselves around calls and perform tasks. We need a filter that recognizes the IClientObservable return type and makes sure the connection is handled correctly.

public class QueryActionFilter : IAsyncActionFilter
{
    readonly JsonOptions _options;

    public QueryActionFilter(IOptions<JsonOptions> options)
    {
        _options = options.Value;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        if (context.HttpContext.Request.Method == HttpMethod.Get.Method
            && context.ActionDescriptor is ControllerActionDescriptor)
        {
            var result = await next();
            if (result.Result is ObjectResult objectResult)
            {
                switch (objectResult.Value)
                {
                    case IClientObservable clientObservable:
                        {
                            if (context.HttpContext.WebSockets.IsWebSocketRequest)
                            {
                                await clientObservable.HandleConnection(context, _options);
                                result.Result = null;
                            }
                        }
                        break;

                    default:
                        {
                            result.Result = new ObjectResult(new QueryResult(objectResult.Value!, true));
                        }
                        break;
                }
            }
        }
        else
        {
            await next();
        }
    }
}

To put the filter in place, you typically add it during the configuration of your controllers, e.g. in your Startup.cs during ConfigureServices – or using the minimal hosting APIs:

services.AddControllers(_ => _.Filters.Add<QueryActionFilter>());

Client abstraction

We also built a client abstraction in TypeScript to provide a simple way to leverage all of this from the frontend. It is built in layers, starting off with a representation of the connection.

export type DataReceived<TDataType> = (data: TDataType) => void;

export class ObservableQueryConnection<TDataType> {

    private _socket!: WebSocket;
    private _disconnected = false;

    constructor(private readonly _route: string) {
    }

    connect(dataReceived: DataReceived<TDataType>) {
        const secure = document.location.protocol.indexOf('https') === 0;
        const url = `${secure ? 'wss' : 'ws'}://${document.location.host}${this._route}`;
        let timeToWait = 500;
        const timeExponent = 500;
        const retries = 100;
        let currentAttempt = 0;

        const connectSocket = () => {
            const retry = () => {
                currentAttempt++;
                if (currentAttempt > retries) {
                    console.log(`Attempted ${retries} retries for route '${this._route}'. Abandoning.`);
                    return;
                }
                console.log(`Attempting to reconnect for '${this._route}' (#${currentAttempt})`);

                setTimeout(connectSocket, timeToWait);
                timeToWait += (timeExponent * currentAttempt);
            };

            this._socket = new WebSocket(url);
            this._socket.onopen = (ev) => {
                console.log(`Connection for '${this._route}' established`);
                timeToWait = 500;
                currentAttempt = 0;
            };
            this._socket.onclose = (ev) => {
                if (this._disconnected) return;
                console.log(`Unexpected connection closed for route '${this._route}'`);
                retry();
            };
            this._socket.onerror = (error) => {
                console.log(`Error with connection for '${this._route}' - ${error}`);
                retry();
            };
            this._socket.onmessage = (ev) => {
                dataReceived(JSON.parse(ev.data));
            };
        };

        connectSocket();
    }

    disconnect() {
        console.log(`Disconnecting '${this._route}'`);
        this._disconnected = true;
        this._socket?.close();
    }
}

On top of this we then have an ObservableQueryFor construct which leverages the connection and provides a way to subscribe to changes.

export abstract class ObservableQueryFor<TDataType, TArguments = {}> implements IObservableQueryFor<TDataType, TArguments> {
    abstract readonly route: string;
    abstract readonly routeTemplate: Handlebars.TemplateDelegate<any>;

    abstract readonly defaultValue: TDataType;
    abstract readonly requiresArguments: boolean;

    /** @inheritdoc */
    subscribe(callback: OnNextResult, args?: TArguments): ObservableQuerySubscription<TDataType> {
        let actualRoute = this.route;
        if (args && Object.keys(args).length > 0) {
            actualRoute = this.routeTemplate(args);
        }

        const connection = new ObservableQueryConnection<TDataType>(actualRoute);
        const subscriber = new ObservableQuerySubscription(connection);
        connection.connect(callback);
        return subscriber;
    }
}

The subscription being returned:

export class ObservableQuerySubscription<TDataType> {
    constructor(private _connection: ObservableQueryConnection<TDataType>) {
    }

    unsubscribe() {
        this._connection.disconnect();
        this._connection = undefined!;
    }
}

We build our frontends using React and added a wrapper for this to make it even easier:

export function useObservableQuery<TDataType, TQuery extends IObservableQueryFor<TDataType>, TArguments = {}>(query: Constructor<TQuery>, args?: TArguments): [QueryResult<TDataType>] {
    const queryInstance = new query() as TQuery;
    const [result, setResult] = useState<QueryResult<TDataType>>(new QueryResult(queryInstance.defaultValue, true));

    useEffect(() => {
        if (queryInstance.requiresArguments && !args) {
            console.log(`Warning: Query '${query.name}' requires arguments. Will not perform the query.`);
            return;
        }

        const subscription = queryInstance.subscribe(_ => {
            setResult(_ as unknown as QueryResult<TDataType>);
        }, args);

        return () => subscription.unsubscribe();
    }, []);

    return [result];
}

The entire frontend abstraction can be found here.

Usage

To get WebSockets working, we need to add the default ASP.NET Core middleware that handles it (read more here). Basically, in your Startup.cs or your app builder, add the following:

app.UseWebSockets();
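
Pulling the pieces together, a minimal sketch of the wiring with the .NET 6 minimal hosting model could look like the following – assuming the QueryActionFilter shown earlier lives in the same project:

var builder = WebApplication.CreateBuilder(args);

// Register controllers and hook up the query action filter from earlier
builder.Services.AddControllers(_ => _.Filters.Add<QueryActionFilter>());

var app = builder.Build();

// Enable the WebSocket middleware before mapping the endpoints that accept connections
app.UseWebSockets();
app.MapControllers();

app.Run();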

With all of this we can now create a controller that watches a MongoDB collection:

public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(IMongoCollection<DebitAccount> accountsCollection) => _accountsCollection = accountsCollection;

    [HttpGet]
    public ClientObservable<IEnumerable<DebitAccount>> AllAccounts()
    {
        var observable = new ClientObservable<IEnumerable<DebitAccount>>();
        var accounts = _accountsCollection.Find(_ => true).ToList();
        observable.OnNext(accounts);
        var cursor = _accountsCollection.Watch();

        Task.Run(() =>
        {
            while (cursor.MoveNext())
            {
                if (!cursor.Current.Any()) continue;
                observable.OnNext(_accountsCollection.Find(_ => true).ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

Notice the usage of the ClientObservable and how it can be used with anything.
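
To underline that last point, here is a purely illustrative sketch of an action that pushes the server time every second using Reactive Extensions instead of MongoDB – the only contract is calling OnNext when something new is available:

// Assumes a using for System.Reactive.Linq
public class ServerTime : Controller
{
    [HttpGet]
    public ClientObservable<DateTimeOffset> Now()
    {
        var observable = new ClientObservable<DateTimeOffset>();

        // Push the current time every second - any source of change works
        var timer = Observable
            .Interval(TimeSpan.FromSeconds(1))
            .Subscribe(_ => observable.OnNext(DateTimeOffset.UtcNow));

        // Stop the timer when the client goes away
        observable.ClientDisconnected = () => timer.Dispose();

        return observable;
    }
}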

MongoDB simplification – extension

The code in the controller above is typically the kind of thing that gets copy/pasted around, as it is a very common pattern. We figured we would be doing pretty much the same for most of our queries and added convenience methods for MongoDB. They can be found here.

We can therefore package what we had in the controller into an extension API and make it more generalized.

public static class MongoDBCollectionExtensions
{
    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        Expression<Func<TDocument, bool>>? filter,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= _ => true;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        FilterDefinition<TDocument>? filter = null,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= FilterDefinition<TDocument>.Empty;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
            this IMongoCollection<TDocument> collection,
            Func<Task<IAsyncCursor<TDocument>>> findCall)
    {
        var observable = new ClientObservable<IEnumerable<TDocument>>();
        var response = await findCall();
        observable.OnNext(response.ToList());
        var cursor = collection.Watch();

        _ = Task.Run(async () =>
        {
            while (await cursor.MoveNextAsync())
            {
                if (!cursor.Current.Any()) continue;
                var updatedResponse = await findCall();
                observable.OnNext(updatedResponse.ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

With this glue in place, it is now very easy to create a controller that observes a collection and sends any changes to the frontend:

[Route("/api/accounts/debit")]
public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(
        IMongoCollection<DebitAccount> accountsCollection)
    {
        _accountsCollection = accountsCollection;
    }

    [HttpGet]
    public Task<ClientObservable<IEnumerable<DebitAccount>>> AllAccounts()
    {
        return _accountsCollection.Observe();
    }
}

Streaming JSON

A nice addition to ASP.NET Core 6 is the native support for IAsyncEnumerable<T> and streaming of JSON.

One benefit of this is that you can now quite easily support both a WebSocket scenario and regular web requests. On our ClientObservable<T> we can implement the IAsyncEnumerable<T> interface and create our own enumerator that supports this by observing the subject we already have there.

public class ClientObservable<T> : IClientObservable, IAsyncEnumerable<T>
{
    readonly ReplaySubject<T> _subject = new();

    public ClientObservable(Action? clientDisconnected = default)
    {
        ClientDisconnected = clientDisconnected;
    }

    public Action? ClientDisconnected { get; set; }

    public void OnNext(T next) => _subject.OnNext(next);

    public async Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions)
    {
        using var webSocket = await context.HttpContext.WebSockets.AcceptWebSocketAsync();
        var subscription = _subject.Subscribe(_ =>
        {
            var queryResult = new QueryResult(_!, true);
            var json = JsonSerializer.Serialize(queryResult, jsonOptions.JsonSerializerOptions);
            var message = Encoding.UTF8.GetBytes(json);

            webSocket.SendAsync(new ArraySegment<byte>(message, 0, message.Length), WebSocketMessageType.Text, true, CancellationToken.None);
        });

        var buffer = new byte[1024 * 4];
        var received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

        while (!received.CloseStatus.HasValue)
        {
            received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        }

        await webSocket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription, CancellationToken.None);
        subscription.Dispose();

        ClientDisconnected?.Invoke();
    }

    public IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken = default) => new ObservableAsyncEnumerator<T>(_subject, cancellationToken);

    public object GetAsynchronousEnumerator(CancellationToken cancellationToken = default) => GetAsyncEnumerator(cancellationToken);
}

The ObservableAsyncEnumerator<T> returned from GetAsyncEnumerator can be implemented as follows:

public class ObservableAsyncEnumerator<T> : IAsyncEnumerator<T>
{
    readonly IDisposable _subscriber;
    readonly CancellationToken _cancellationToken;
    readonly ConcurrentQueue<T> _items = new();
    TaskCompletionSource _taskCompletionSource = new();

    public ObservableAsyncEnumerator(IObservable<T> observable, CancellationToken cancellationToken)
    {
        Current = default!;
        _subscriber = observable.Subscribe(_ =>
        {
            _items.Enqueue(_);
            if (!_taskCompletionSource.Task.IsCompletedSuccessfully)
            {
                _taskCompletionSource?.SetResult();
            }
        });
        _cancellationToken = cancellationToken;
    }

    public T Current { get; private set; }

    public ValueTask DisposeAsync()
    {
        _subscriber.Dispose();
        return ValueTask.CompletedTask;
    }

    public async ValueTask<bool> MoveNextAsync()
    {
        if (_cancellationToken.IsCancellationRequested) return false;
        await _taskCompletionSource.Task;
        _items.TryDequeue(out var item);
        Current = item!;
        _taskCompletionSource = new();
        if (!_items.IsEmpty)
        {
            _taskCompletionSource.SetResult();
        }

        return true;
    }
}
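
Since ClientObservable<T> now implements IAsyncEnumerable<T>, the same object can in principle be handed to the regular JSON pipeline as well. For reference, this is roughly what the native IAsyncEnumerable<T> streaming in ASP.NET Core 6 looks like on its own – a minimal sketch, independent of the observables above:

public class Counter : Controller
{
    [HttpGet]
    public async IAsyncEnumerable<int> Count()
    {
        for (var i = 0; i < 10; i++)
        {
            // ASP.NET Core 6 serializes the items as one streamed JSON array
            await Task.Delay(100);
            yield return i;
        }
    }
}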

Conclusion

In this post we've touched on an optimization and formalization of reactive Web programming. From the perspective of covering the most common use cases, we feel this approach achieves that. It is not a catch-all solution, but the way we've built it gives you some flexibility in how you use it. It is not locked down to just MongoDB; the ClientObservable is completely agnostic and you can use it for anything – all you need is to be able to observe something and then call the OnNext method on the observable whenever new things appear.

From a user perspective I think we should aim for solutions that do not require the user to hit a refresh button. In order to do that, it needs to be simple for developers to enable it in their solutions. The solution presented here is geared towards that.

If you have any feedback, good or bad, improvements or pitfalls; please leave a comment.

Standard
.net, C#, Code Quality

Avoid code generation if compiler is in error state

One of the things we discovered with the usage of our proxy generator was that when working in the code and adding things like another property to a class/record, we could see the generator running and spitting out files while we typed. For instance, let's say we have the following:

public record DebitAccount(AccountId Id, AccountName Name, PersonId Owner, double Balance);

If I were to now, after a build, start typing a fifth property on this record, it would start spitting out things: first a file without any name, then, as I typed the type, a file for each letter I added – depending on how fast I typed. So if this was a string type, I could be seeing s.ts, st.ts, str.ts and so on.

Turns out this is by design. One of the optimizations done for the dotnet build command is that it keeps the compiler process around. It starts a build server that handles incremental builds as things are happening, and is therefore prepared for when we actually do a dotnet build, making that as fast as it can be.

When doing proxy generation, this is obviously less than optimal. To avoid it, we added a check for whether there are any error diagnostics from the compiler – if so, we do not generate anything.

In our source generator we added a line at the top to avoid this:

public void Execute(GeneratorExecutionContext context)
{
    if (context.Compilation.GetDiagnostics().Any(_ => _.Severity == DiagnosticSeverity.Error)) return;
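
    // ...otherwise carry on with the normal source generation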
}

Standard
.net, C#

C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like the _ViewImports.cshtml file you have in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should be in all of them, you'd by default need to copy these around.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let's say you have a file called GlobalUsings.cs at the root of your solution, looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you’d simply open the .csproj file of the project and add the following:

<ItemGroup>
   <Compile Include="../GlobalUsings.cs"/> <!-- Assuming your file sits one level up -->
</ItemGroup>

This will then include this reusable file for the compiler.

Standard
Bifrost, C#, Cloud, CQRS

Bifrost roadmap first half 2017 (new name)

This type of post is the first of its kind, which is funny enough seeing that Bifrost has been in development since late 2008. Recently development has moved forward quite a bit and I figured it was time to jot down what's cooking and what the plan is for the next six months – or perhaps longer.

First of all, it's hard to commit to any real dates – so the roadmap is more a "this is the order in which we're going to develop" rather than a budget of time.

We’ve also set up a community standup that we do every so often – not on a fixed schedule, but rather when we feel we have something new to talk about. You can find it here.

1.1.3

One of the things we never really had the need for ourselves was to scale Bifrost out. This release focuses on bringing that back. At the beginning of the project we had a naïve way of scaling out – basically supporting a 2-node scale out, with no consideration for partitioning or for actually checking whether events had been processed or not. With this release we're revisiting this whole thing and at the same time setting ourselves up for success moving forward.

One of the legacies we've been dragging behind us is that all events were identified by their CLR types, and maintaining the different event processors was linked to this – making it fragile if one were to move things around. This is being fixed by identifying the application structure rather than the CLR structure the event exists in. This will become convention based and configurable. With this we will enable RabbitMQ as the first supported scale-out mechanism. The first implementation will not include all the partitioning, but will enable us to move forward and get that in place quite easily. It will also set up for a more successful way of storing events in an event store. All of this is in the middle of development right now.

In addition there are minor details related to the build pipeline and automating everything. It's a sound investment getting all versioning and build details automated. This is also related to the automatic building and deployment of documentation, which is crucial for the future of the project. We'll also get an Azure Table Storage event store in place for this release, which should be fairly straightforward.

1.1.4

Code quality has been set as the focus for this release: re-enabling things like NDepend and static code analysis.

1.1.5

The theme of this version is to get the Web frontend sorted. Bifrost has a "legacy" ES5 implementation of all its JavaScript. In addition it is very coupled to Knockout, making it hard to use things like Angular, Aurelia or React. The purpose of this release is to decouple the things that Bifrost brings to the table: proxy generation and frontend helpers such as regions, operations and more. We'll also start the work of modernizing the code to ES2015 and newer by using BabelJS, and move away from Forseti, our proprietary JavaScript test runner, to more commonly used runners.

Inbetween minor releases

From this point to the next major version, things are a bit fuzzy. In fact, we might prioritize pushing the 2.0.0 version rather than doing anything in between. We've defined versions 1.2.0 and 1.3.0 with issues we want to deal with, but we might decide to move these to 2.0.0 instead. The sooner we get to 2.0.0, the better in many ways.

2.0.0

Version 2.0.0 is, as indicated, about breaking changes. The first major breaking change: a new name. The project will transition over to being called Dolittle, matching the GitHub organization we already have for it. Besides this, the biggest breaking change is that it will be broken up into a bunch of smaller projects – all separated and decoupled. We will try to version them independently, meaning they will take on a life of their own. Of course, this is a very different strategy than before – so it might not be a good idea and we might need to change it. But for now, that's the idea, and we might keep major releases in sync.

The brand Dolittle is something I've had since 1997, and I own domains such as dolittle.com, dolittle.io and more related to it. These will be activated and become the landing page for the project.

Standard
.net, C#

Machine Specifications – .NET Core

I've been working on a particular project, mostly in the design phase – but leading up to implementation I quickly hit a snag: my favorite framework and tools for running my tests – or rather, specs – are not in the .NET Core space yet. After kicking and screaming to myself for the most part, I decided to do something about it and contribute something back, after having used the excellent Machine.Specifications Specification by Example framework and accompanying tools for years.

The codebase was not really able to directly build on top of .NET Core – and I started looking at forking it and just #ifdefing my way through the changes. This would be the normal way of contributing in the open source space. Unfortunately, it quickly got out of hand – there are simply too many differences for me to work fast enough and achieve my own goals right now. So, although it was not a decision I took lightly, I decided to just copy the things needed to run it on .NET Core across into a completely new repository. It now lives here.

Since .NET Core is still in flux, and after the announcement of DNX being killed off and replaced by a new .NET CLI tool called dotnet, I decided for now to just do the simplest thing possible and not implement a command or a test framework extension. This will likely change as the tools mature over time. I'm focused on my own feedback loop right now.

Anywho, the conclusion I've come to is that my test/spec projects will, for now, be regular console apps with a single line of code executing all the specs in the assembly. This is far from ideal, but it is a starting point so I can carry on. The next logical step is to look at improving this experience with something that runs the specs affected by a change, either in the unit under test or the spec itself. If you want a living example, please have a look here.

Basically – the needed bits are NuGet packages that you can find here, here and here.
The first package does include a reference to the others, but right now the tooling is too flaky to predict whether or not IntelliSense actually works using things like OmniSharp with VS Code or similar, so I have been explicitly taking all three dependencies.

The next thing you need is to have a Program with a Main method that actually runs it by calling the AssemblyRunner that I’ve put in for now.

(Screenshot: the Program class with its Main method calling the AssemblyRunner.)
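
In place of the screenshot, this is roughly the shape of that Program – note that the AssemblyRunner is the helper from my fork mentioned above, so treat the exact call as an approximation rather than an official Machine.Specifications API:

public class Program
{
    public static void Main(string[] args)
    {
        // Run all the specs found in this assembly.
        // The AssemblyRunner type and its signature come from my fork and may change.
        AssemblyRunner.Run(typeof(Program).GetTypeInfo().Assembly);
    }
}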

Once you have this you can do a dotnet run and the output will be in the console.

(Screenshot: console output from dotnet run showing the spec results.)

.NET Core Version

The important thing to notice is that I've chosen to be right there on the bleeding edge, taking dependencies on packages and runtime versions from the MyGet feeds. The reason behind this is that some of the things I'm using only exist in the latest bits. Scott Hanselman has a great writeup with regards to where we are today with .NET Core.

Future

Well, I'm not yet knee deep into this and I'm not focusing my effort on this project. I'll be building what I need, but I'm of course totally open to anyone wanting to contribute. If I were to say anything about my own vision and the natural next steps, I'd love to see the first one be an auto-watching CLI tool that runs the appropriate tests according to the files being changed. I would not go all in and do a full analysis of call stacks to figure out what is changing, but rather take an approximation approach – similar to what we put in place for our JavaScript test runner project, Forseti. The approximation was basically based on configurable conventions mapping a relationship between the systems under test and the tests – or specs, as I prefer to call them. After that I can see integration with VS Code – which is my favorite editor at the moment. Something similar to WallabyJS would be very cool.

Standard
.net, Bifrost, C#, CQRS, JavaScript, Patterns, Practices

CQRS in ASP.net MVC with Bifrost

If you’re in the .net space and you’re doing web development, chances are you’re on the ASP.net stack and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.net MVC stack and quickly realised we needed to build something for the frontend part of the application to be able to facilitate the underlying backend built around the CQRS principles. This post will talk a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product and the price of course and what not, but also a simple button saying "Add to cart" – basically you want to add the product to your shopping cart.

(Figure: the add-to-cart flow.)

Sure enough, it is possible to solve this without anything special – you have your Model that represents the product with its details, price being a complex thing we need to figure out depending on whether or not you have configured it to show VAT and whether you're part of a price list – but it is something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that we can actually get populated properly with a simple ASP.net MVC form:

(Image: a regular Html.BeginForm() posting the AddItemToCart command.)

A regular MvcForm with hidden input elements for the properties on the command that are not visible, and of course any input from the user as regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.net MVC is able to deserialize the FORM into a command.

Validation

Now here comes the real issue with the above-mentioned approach: validation. Validation is tied to the model; you can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is bound to one model, while the things you really want to validate are the commands. You want to validate before commands are executed, because after they are handled and events are published, the truth has been written and it's too late to validate anything coming back on any models. So, how can one fix this? You could come up with an elaborate ModelBinder scheme that modifies model state and what not, but that seemed very complicated – at least we thought so, of course after trying it out. Instead we came up with something we call a CommandForm. Basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, which gives you a new model within the using scope – and all the MVC goodies in that limited scope, including the ability to do client-side validation.

So now you get the following:

(Image: the Html.BeginCommandForm<AddItemToCart>() version of the same form.)

Now you get a form that contains a new HtmlHelper for the command type given in the first generic parameter, and within the form you'll also find the Command itself, if you need to set values on it before you add a hidden field.

This gives you an isolated model context within the view, and you can post that form alone without having to think about the model defined for the view – which really should be a read-only model anyway.

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.

Features

Another thing that we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, just parts of the overall composition that make up the larger scope. We defined a feature to contain all the artefacts that build it up: the view, the controller, any JavaScript, any CSS files, images, everything. We isolate them by having it all close together in a folder or namespace for the tier you're working on. For the frontend we had a Features folder at the root of the ASP.net MVC site, and within it every feature sat in its own folder with its respective artefacts. Then moving down to the backend we reflected the structure in every component; for instance we had a component called Domain, and within it you'd find the same structure. This way all the developers would know exactly where to go and do work; it just makes things a lot simpler.

Anyways, in order to accomplish this, one needs to do a couple of things. The first thing is to collapse the structure that the MVC templates create for your project, so that you don't have the Controllers, Views and Models folders but a Features folder with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder by permitting things like JavaScript files sitting alongside the view files, so you need to add the following within the <system.web> tag in your Web.config file:

(Image: the Web.config additions that allow serving the static feature content.)

Then you need to relocate the view location formats (and the master and partial equivalents) for the view engines in ASP.net MVC:

(Image: the view engine configuration pointing the location formats at the Features folder.)

(Can be found here)
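
In rough terms – and this is a sketch of the idea rather than the exact code we used – it amounts to replacing the location formats on the Razor view engine so they point at the Features folder, for instance from Application_Start:

// Sketch: point the Razor view engine at the Features folder instead of Views.
// The paths shown are illustrative.
var razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().First();

var featureLocations = new[]
{
    "~/Features/{1}/{0}.cshtml",
    "~/Features/Shared/{0}.cshtml"
};

razorEngine.ViewLocationFormats = featureLocations;
razorEngine.MasterLocationFormats = featureLocations;
razorEngine.PartialViewLocationFormats = featureLocations;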

It will then find all your views in the Features folder, and you now have the new structure. The only drawback, if you see it as one, is that tooling like Visual Studio's built-in "Add View" in the context menus stops functioning; but I would argue that developer productivity is gained through a proper structure, and you really don't miss it that much. I guess you can get this back somehow with tools like ReSharper, but personally I didn't bother.

Conclusion

ASP.net MVC provides a lot of goodness when it comes to doing things with great separation in the Web space for .net developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and keep the code clean. Sure, it's not perfect, but what is – it's way better than anything we've seen. This is something that we enjoyed quite a bit in our own little CQRS journey; we did try quite a few things, some of them worked out fine – like the CommandForm – and some didn't. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did however reach at one point is that ASP.net MVC and Bifrost, with its interpretation of CQRS, are a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.net MVC – which is a focused frontend pipeline. So security, validation and more are things we started building into Bifrost, and the need for ASP.net MVC became less and less important; once we started down the journey of creating Single Page Applications with HTML and JavaScript as the only thing you need, you really don't need it at all. The connection between the client and the server would then be web requests and JSON, and you need something like WebApi or similar – in fact we created our own simple thing in Bifrost to accommodate even that. But all this is for another post.

The MVC part of Bifrost can be found here, Bifrost's official page is under construction here, and the source is here.

Standard
C#, CQRS, Patterns, Practices

CQRS : The awakening

Ever have that feeling when all of a sudden you're completely getting something – like an awakening? You might have been doing something that you thought you were doing right, but all of a sudden you get this eureka moment that not only tells you that you're doing things wrong, but also raises your awareness to a whole new level. This is what we experienced on the first project where we applied CQRS. In fact we had quite a few of these moments; this post is about trying to describe what we went through.

The background

Before we can start, I think it is important to give a bit of background on what we were used to doing. Like most people in the development community, we tried following the right people on Twitter, read blogs, read books and in general just tried to keep up the pace and learn new things. Some of us had, over the years, really fallen in love with the concepts of TDD, BDD, clean code, agile, domain driven design, the S.O.L.I.D principles and what not. We tried to write our software according to the principles we loved, and we were pretty convinced we were doing a great job at it as well. Even though we felt at times there was something wrong with a few things, we carried on.

The way we used to do things was to basically have a relational database sitting at the core, with pretty much just tables and relations and the odd view every now and then. On top of that we would model objects that represented those tables and relationships, add more business logic to these objects, then slam a stamp on them and call them our domain model. This domain model would only have one representation of every type, so that we didn't have to repeat ourselves. From this we modeled repositories that enabled us to get the domain objects, change them, update them and every now and then delete them; good old CRUD operations. We would even throw in some services to explicitly model functionality that was outside the responsibility of the repository, to capture business logic that was more involved. From my experience working on different projects and what I've seen doing consultancy, this is not way off what people tend to do. Of course there is more to it: you have your factories, your managers and what not, and you might even have different representations of your domain model for the view, where you need to map back and forth between the different layers. Sure. More involved, but you get the picture.

Relationships

Often you find yourself writing these relationships between things, and it is just so logical that there is a certain relation between the two entities – they belong together in a symbiosis somehow. Take for instance a shopping cart: it is just sitting there screaming for content in the form of some items, there is no other way around it. We just get a craving for a model similar to this:

(Diagram: a Cart holding a collection of CartItems.)

Looks innocent enough and really makes sense. You might even throw in some read-only properties that give you convenience for counting the number of items, summing up all the items and so forth. And then you realize – hang on, that CartItem is not referring to any product.

(Diagram: the Cart with CartItems, each CartItem now referring to a Product.)
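
In code – a sketch of the shape the diagrams show, not our exact classes – the model ends up looking something like this:

// Sketch of the classic, relationship-heavy model the diagrams illustrate
public class Cart
{
    public int Id { get; set; }
    public ICollection<CartItem> Items { get; set; }   // One-To-Many

    public int NumberOfItems { get { return Items.Sum(i => i.Quantity); } }
}

public class CartItem
{
    public int Quantity { get; set; }
    public Product Product { get; set; }               // One-To-One
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}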

So now we have, first of all, a One-To-Many relationship, and then a One-To-One relationship. Whenever we just want to check how many items we have in our cart, we have to pull in this entire thing. Wait, you might be saying now, with a modern ORM we can specify how it is supposed to fetch things, on demand – typically what is referred to as lazy loading. Sure, we could do that for Product, and probably also for the CartItem, but the second you ask for something like quantity it would still need to fetch all the CartItems. This adds quite a bit of complexity to your solution. All of a sudden you have a situation where your application is governed by how the objects are configured for the ORM and how they should be fetched. Potentially you could end up with 3 round-trips to the database if the model is used in a part of your Cart visualization that shows the product names: once for getting the Cart from the database, once for getting the cart items it is enumerating, and once for getting the products. What happens if we throw price into the mix? The product can't have a price directly, because there might be discounts that are relevant for the customer, so that is yet another object just to display something that seems very simple and straightforward.

Now you might be saying: why don't we just flatten this all out in a database view? My response to that: exactly. That is exactly what we should be doing, but not in the database. This is at the core of the benefits CQRS can give you: the ability to flatten things out – especially if you buy into the EventSourcing story, storing state as a series of fine grained events. Sure, you don't need EventSourcing, but it allows for greater flexibility when you want to fully take advantage of the ability to flatten things out after something has already happened. More on this in a bit.

So, views and flattening things out. That was our first moment of clarity. We really didn't need any relationships; in fact we didn't even need a Cart object, we needed something called CartSummary – a very different beast, and so much simpler in its nature.

(Diagram: the flattened CartSummary object.)

So what happened here?

Instead of having to go to the database and get all those details, as mentioned earlier, every time we wanted to get to the summary, we had this simple object. Whenever the user performed an action that said add something to the cart, or remove something from the cart, we would have two subscribers for these events. One subscriber would be in charge of updating the underlying CartItem, and another would be in charge of updating the CartSummary by selecting into the CartItems and setting the values accordingly. Instead of taking the hit every time you need the detail, we now take the hit only when actions are performed. My claim is that most applications out there have a ratio between reads and executes that is probably around 9 to 1, so why haven't we been optimizing for this scenario before?
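
To make that concrete, here is a small sketch of what such a flattened read side could look like – the event, summary and updater here are made up for illustration:

// Sketch only - the names and shapes are made up for illustration
public class ItemAddedToCart
{
    public Guid CartId { get; set; }
    public int Quantity { get; set; }
    public double Price { get; set; }
}

public class CartSummary
{
    public Guid CartId { get; set; }
    public int NumberOfItems { get; set; }
    public double Total { get; set; }
}

public class CartSummaryUpdater
{
    readonly IDictionary<Guid, CartSummary> _summaries;

    public CartSummaryUpdater(IDictionary<Guid, CartSummary> summaries)
    {
        _summaries = summaries;
    }

    // Subscriber for the event - keeps the flattened summary up to date
    public void Handle(ItemAddedToCart @event)
    {
        var summary = _summaries[@event.CartId];
        summary.NumberOfItems += @event.Quantity;
        summary.Total += @event.Price * @event.Quantity;
    }
}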

Anyways, clearly the objects became simpler and more focused – all of a sudden our objects were doing one thing, and one thing only. We were empowered with the ability to really apply the single responsibility principle for our objects.

I mentioned EventSourcing: a technique for storing the events sequentially as they happen so that we can replay them at any given point in time. This is a very powerful tool. Imagine the flattening we just did, and let's say you want to introduce another object to the mix and you want the historic state to affect that object. With event sourcing it is then just a matter of replaying those events for the event subscribers dealing with the new object, and all of a sudden you have your new optimized state. An example of that, in the cart context, could be something like a realtime report showing all the carts historically aggregated; you could have objects representing only living carts – and more. The best part: these objects are lightweight, and you start seeing possibilities for storing them in a very different manner.

Concepts

One of the things we also started realizing was that, as a consequence, we had a lot of IDs hanging around, either as integers representing the identity or as GUIDs. Neither our Commands nor our Events reflected their properties very well; a standard value type sitting there didn't say what it was, it was basically just an integer or a GUID. So we started modeling these things, which we ended up calling Concepts. These were cross cutting and something we kept in their own project that could be reused all through – it turned out to be the one thing that was shared between things. The types we introduced were basic reusable things, like StoreId, representing a Store, but also more complex things like a PhoneNumber instead of holding a string. EMailAddress is another example of just that, and all of a sudden it was so much easier to model all aspects of the system; we got ourselves a vocabulary – one of the key elements of domain driven design. You couldn't go wrong. Another benefit we got from this, purely technically, was that validation suddenly fell into place as the cross-cutting concern it is. Alongside some of these concepts we put validators; we used FluentValidation for this as it let us put validation outside of the thing we were validating, giving us great flexibility in how we applied validation – a whole concept for a different post that I'm hoping to get to: discovery mechanisms based on configurable conventions and hierarchies. Anyhow, with the concepts and the validation of concepts we got DRY in what we consider a very good way.
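
As an illustration of the idea – the actual base types in Bifrost look different, so treat this as a sketch – a concept wraps a primitive and gets a validator of its own alongside it:

// Sketch of the idea - not the actual Bifrost concept types
public class EMailAddress
{
    public EMailAddress(string value)
    {
        Value = value;
    }

    public string Value { get; private set; }

    public override string ToString()
    {
        return Value;
    }
}

// Validation lives alongside the concept, as the cross-cutting concern it is
public class EMailAddressValidator : AbstractValidator<EMailAddress>
{
    public EMailAddressValidator()
    {
        RuleFor(e => e.Value)
            .NotEmpty()
            .EmailAddress();
    }
}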

LINQ madness

Once you learn something and you really like it, it is hard to change how you do things. This was something we ran into quite a bit, but one incident that comes to mind was specifically tied to LINQ. We had our Cart and CartItems and just wanted to model the CartSummary, but wanted to solve it through querying. Basically we wanted to be able to specify for an object like that how it could aggregate things, totally disregarding that our platform could at this point flatten things out when things changed, without having to aggregate every time. We had a full day of bending things around LINQ, and we wanted to add some business rules to some of the objects we used in the view to change the outcome. To do this, we wanted to specify things using the specification pattern and turn it into LINQ expressions that we could execute against the datasource. Sure, it might have seemed like a good idea if you didn't have CQRS in mind. But we did have CQRS and we wanted to be pure. At this point I remember having the feeling that we were just creating a really fancy ORM – not really doing anything differently, just massively overkilling our existing ways of doing things. That day we discovered that it is not really about querying – the Q in CQRS is not about ad-hoc querying. We went back and changed our code so that we flattened things when events were raised. We could then just access everything by keys and that would be it. And this concept can go all the way for just about everything: as long as you have a key, you can go get things. This realization led to us not calling things queries – we changed it to be called a view, lacking a better name. But at least it was closer to what things were; we weren't going to execute complex queries, but get things by a key.

GUID is the key 

In order for CQRS to be applied, you need to start thinking very differently about how your objects get their identity assigned. In our opinion, it's all about being able to generate the Id already on the client. By doing that, things get so much easier. All of a sudden the client knows the identity of the thing it is creating, meaning that it can already at that point – even though the thing might not be ready due to eventual consistency – use the Id for other purposes related to it. Basically, we can start making a few assumptions in the client and create better user experiences. We used a GUID throughout the system for everything that was used to identify things.

Data, what data…

One of the things we're taught when working with the more classic approaches explained earlier is to be very data focused. We're modeling the data and then we're creating, updating and deleting it. We're taught to work with state. But sit down with users and ask them to describe their needs in the application you're building for them: unless they are power users tainted by developers, they will probably state their needs as behaviors. How often have you heard "I need a button here, so that…", and we immediately stop paying attention because we're so annoyed that the user couldn't speak in more abstract terms than concretely about the button, and we only pick up the stuff that comes after the "so that" – which is the state bit. Creating software is not about building state, it's about exposing behaviors to the user that might end up as state. This is a very core principle in CQRS: Commands. You model your commands to expose the behavior that is needed, and you also capture the intent. With a command you represent the What and potentially the Why – you put it in the name of the Command: ChangeAddressBecauseOfMove – now we have the What and the Why.
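
As a sketch of what such a command could look like – the properties are made up for the example, and the Id is the kind of client-generated GUID described above:

// Sketch only - captures both the What and the Why in its name
public class ChangeAddressBecauseOfMove
{
    public Guid PersonId { get; set; }
    public string NewStreet { get; set; }
    public string NewCity { get; set; }
    public string NewPostalCode { get; set; }
}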

This is something we realized more and more throughout the project – it really is all about the behaviors. Once we got to that, it was so much easier picking up what the users wanted from the system. All of a sudden the button the user asked for is not that problematic a way of expressing the need, especially when you start doing things top-down and can actually work with the user on prototypes, drawings and similar – things they relate to. A user does not relate to a SQL table or relationship; they relate to the things they do – like clicking a button.

From this realization came a bunch of other eurekas throughout the project. The biggest and most important one was that we should step away from the idea that state can only be held in one place. Let's face it, a SQL server is good at many things but not everything; just like any technology, nothing is great at everything. We started realizing that we could persist our state in whatever way was most suitable for its use. For one particular feature we put the data as a JSON object model directly onto a CDN and let the client download it when the feature was loaded. With GZIP compression on the server, that request turned out to be about 40KB on average, totally taking the load away from the Web server and the SQL server for something that would have been a pretty complex query. Instead we generated the JSON file whenever the underlying data changed, and it did not change very often – so browser caching would also be a big benefit for returning customers.

We started realizing that we could pretty much go any way with this. A lot of the time, things are close to static – they hardly change. Why are we then dynamically generating things like Web pages? Why aren't we just generating them when things change, putting them on a CDN or on the file system of the Web server that hosts the solution, and sending them directly back to the client? You might be saying at this point that we have things like caching of web pages for this, but I would argue: why would you add that complexity to your solution, and why would you add that pressure onto your hardware in terms of the extra memory needed for all of the servers in the farm to be able to handle this? All that for something that is solved far more easily: just render it when it changes.

With all this, we stopped thinking about data – we focused on the behavior. The data – or state, as I would rather call it – could then be represented in any way; in fact, how you represent it might even change over time.

The bounded contexts

One of the things Eric Evans mentions in his book on DDD is the concept of bounded contexts. It is a concept that is somewhat in conflict with the idea of DRYing things up and the whole idea of having one model to rule them all. The idea is to have context-aware representations of the concepts found in your domain. Take for instance a Product in an e-commerce setting. The people in charge of acquiring products for the store, so they're available in the shop, are interested in certain aspects of the product. They want to know who the vendor is, who is importing that particular product, and of course the price the store can buy it for – so they can figure out what the margins can potentially be on the product. Once the product hits the warehouse, none of those properties matter – the warehouse only cares about how the product can be stored; they don't even care about the brand. Things like dimensions and weight are key information to the warehouse. Once the user sees the product in the web-shop, they are interested in the retail price, the description, other customers' feedback, reviews and a whole lot more. Why model this as one product? Chances are the people in your organization aren't even referring to it as a product, at least not in all departments.
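To make that concrete, here is roughly what context-specific representations could look like – purely illustrative names and properties, not the actual models from the project:

// Purchasing context - cares about vendors, importers and cost price.
namespace Purchasing
{
    public class Product
    {
        public string Vendor { get; set; }
        public string Importer { get; set; }
        public decimal CostPrice { get; set; }
    }
}

// Warehouse context - the same physical thing, but only storage matters.
namespace Warehouse
{
    public class StorageItem
    {
        public double WeightInKilograms { get; set; }
        public double WidthInCentimeters { get; set; }
        public double HeightInCentimeters { get; set; }
        public double DepthInCentimeters { get; set; }
    }
}

// Web-shop context - what the customer actually sees.
namespace Shop
{
    public class Product
    {
        public string Description { get; set; }
        public decimal RetailPrice { get; set; }
        public double AverageReviewScore { get; set; }
    }
}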

This was a huge realization for us; the minute we discovered this truth we completely stopped thinking in a relational manner. Sure, we did have an existing database to take care of, because the project was greenfield built on top of existing data shared between the old and the new. But our mindset changed. We started modeling things with the lingo of each context, instead of trying to apply the same model across the board – a sort of compromise model that would have to suit all the needs.

Features

Before Greg Young and Udi Dahan agreed upon the concept of business components, we were well on our way with what we called a feature – but it serves the same purpose. The idea is to isolate features within a bounded context, and a feature cannot be shared between bounded contexts. It was also a very important realization that gave us a recognizable structure throughout the application and the different components it was built on. Naming was consistent, and you isolated things within the feature itself – so no Interfaces namespace that turns out to be a dumping ground for all the interfaces in the project, or anything like that. In every tier you would find the feature's name, and all artifacts related to the feature in that tier would live inside that namespace / folder. That way we kept related things close, it was more maintainable, and even bringing new people into the project became a lot easier. For the ASP.net MVC part, the front-end tier, we even went ahead and relocated where views and controllers are normally found; we put them together – in fact we put everything related to a feature inside the same folder, be it controllers, views, JavaScript files, images and so forth. It was just a matter of configuring things correctly, and we could maintain a consistency throughout the project that would prove to be of great value.
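In classic ASP.net MVC, co-locating views with everything else that belongs to a feature is mostly a matter of teaching the view engine where to look. Something along these lines – a sketch of the idea that assumes the feature name matches the controller name, not our actual configuration:

using System.Web.Mvc;

// Resolves views from ~/Features/<feature>/<view>.cshtml
// instead of the default ~/Views/<controller>/<view>.cshtml.
public class FeatureViewEngine : RazorViewEngine
{
    public FeatureViewEngine()
    {
        ViewLocationFormats = new[]
        {
            "~/Features/{1}/{0}.cshtml",
            "~/Features/Shared/{0}.cshtml"
        };
        PartialViewLocationFormats = ViewLocationFormats;
    }
}

// Registered once at startup, e.g. in Application_Start:
// ViewEngines.Engines.Clear();
// ViewEngines.Engines.Add(new FeatureViewEngine());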

Conclusion

As with any software project, this is how we experienced things – these are our conclusions, our interpretations of patterns and ways of doing things. We cannot go out and say that our way is the best, but it was the closest thing to best for our team. The conclusions we reached and the realizations we had made our work so much easier. For those of us who were part of these realizations and conclusions, it gave a push in the general direction of writing better code. We feel that we are now empowered with the knowledge to not compromise on code quality and still deliver on time, something we did again and again on the project. All the deadlines we had were met, partly because of the platform we built, but also because of the way we worked in general. The more our mindset changed, the faster we were able to deliver, and we were focusing on the right things: the business value. It gave us a way to speak to users and really translate what they said into what they were looking for; not the data, but the behaviors.

Standard
Bifrost, C#, CQRS, MVVM, Patterns, Practices

CQRS applied: a summary

Every now and then in a software career you get a chance to write something from scratch and try out new things; a proper greenfield project. I've had that luck a couple of times, most recently on a project that proved to be a complete game-changer for me personally. Game changer in the sense that I gained knowledge I am pretty sure I will treasure, if not for the rest of my career, at least for quite a few years moving forward. The knowledge I am talking about can be linked back to applying CQRS, but it is not CQRS in itself that is the knowledge; it's the concepts that tag along with it and the understanding of how one can write code that is maintainable in the long run. It's also about the things we discovered during the project: smart ways to work, smart code we wrote – techniques we applied in order to meet requirements, add the needed business value, and at the same time deliver on time with more than was asked for.

This is a more in-depth post than the talk I did @ NDC 2011

… from the top …

For the last couple of years, up until March this year, I had the pleasure of being hired by Komplett Group, the largest e-commerce company in Norway. At first I was assigned tasks to maintain the existing solution and was part of the on-premise team doing just that. As a consultant, that is very often what you find yourself doing – unless you're hired in for a particular role, as I have been in the past: system architect. I helped establish some basic architectural principles at that time, applying a few principles like IoC and other parts of our favorite acronym, S.O.L.I.D. I remember feeling a bit in awe of just being there; they had a solution that could pretty much take on any number of clients and still be snappy, and they never went down. I've learned to respect systems like that, even though it requires a lot of work – not necessarily development work; a lot of the time it is IT or DevOps keeping the systems alive. Anyhow, after a few months, back in 2009, I was asked by the department manager if I wanted to lead a small team on a particular project, an administration tool for editing order details. With my background as a team lead and earlier as a department manager myself, I kind of missed that role and jumped at it. It was to be a stand-alone tool, accessible from the other tools they had, but we were given pretty much carte blanche when it came to how we did it, with whatever technology within the .net space we wanted. We settled on ASP.net MVC, Silverlight for some parts, WCF for exposing services for the Silverlight parts, and NHibernate at the heart as the ORM for our domain.

Part of the project was also to try out Scrum. Having had quite a bit of experience with everything ranging from eXtreme Programming to MSF Agile and later Scrum, that excited me as well, so we applied it.

Halfway through the project we started having problems. Our domain model was the one thing we shared with the other systems, and we started running into nightmare after nightmare because we worked under the one-model-to-rule-them-all idea. That is really hard to actually get to work properly, and looking back I realize that most projects I've been on have suffered from this. We ran into issues where, for our purposes, we needed some *-to-many relationships to be eagerly fetched, which had consequences we could not anticipate in other systems using the same model. We managed to come up with compromises both systems could live with – still, we weren't seeing the eureka, just brushing up against the problems a lot of projects meet without seeing that the approach itself was wrong. A bit after this we started brushing up against something that really got us excited: Commands. We didn't really know about CQRS at this point; the idea came more from working with Silverlight and WPF and the concept of modeling behavior through commands. The reason we needed these commands was that we needed to perform actions on objects over a long period of time, potentially days, and at the end commit the changes. We came up with something we called a CommandChain – a chain of commands that we appended to and persisted. Commands represented behavior and, for the most part, modified state on entities when executed. We also built a tool where we could debug these chains and inspect which Command was causing problems and which were not.

[Image: the CommandChain debugging tool]
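Roughly sketched, the CommandChain idea looked something like this – the interface and the persistence are hypothetical here, just to show the shape of it:

using System;
using System.Collections.Generic;

// Hypothetical command abstraction - each command represents a behavior
// that modifies state on entities when executed.
public interface ICommand
{
    void Execute();
}

// A chain of commands that is appended to over a long period of time
// (potentially days), persisted along the way, and committed at the end.
public class CommandChain
{
    readonly List<ICommand> _commands = new List<ICommand>();

    public Guid Id { get; private set; }

    public CommandChain()
    {
        Id = Guid.NewGuid();
    }

    public void Append(ICommand command)
    {
        _commands.Add(command);
        // Persist the chain here so it survives between sessions.
    }

    public void Commit()
    {
        foreach (var command in _commands)
            command.Execute();
    }
}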

All in all, we were quite pleased with the project; we had done a lot of new things, applied TDD in a behavioral style, and started exploring corners of a universe whose extent we had yet to realize. We delivered not too badly on time – not perfectly, but close enough.

The turning point

After yet another 6 months or so, there were initial talks about the need to expose functionality from the web-shop to other systems used internally; a few design meetings and meetings with management led to a new project. The scope turned out to be not only exposing some services, but also a new web-shop frontend targeted and optimized for smartphone devices. The project was initiated from a technical perspective and not from a specific business need. From a technical perspective, the existing codebase had reached a point where it was hard to maintain, and something new needed to replace it to regain velocity and control over the software. It was to be a complete greenfield project: throw things overboard, basically just work with the existing database, but add enough flexibility that even that could be thrown out the door if one ever wanted to. Early on I was vocal about them needing an architect to be able to deliver this project; I pointed in a couple of directions to internal resources they had – but people pointed back to me, and I soon found myself as the system architect for the project.

Requirements

When dealing with e-commerce at this level, there are quite a few challenges. Let's look at a few numbers: at the time I got off the project the product catalog held about 13,000 products, an order was shipped every 21 seconds – in 2011 that amounted to 1,454,776 orders – and there were ~30,000 live sessions at any given time. Sure, it's not Amazon, but for our neck of the woods it's substantial. These numbers are of course averages; come busy times like Christmas, the traffic is far more concentrated and the pressure is really on for that period in particular.

Decisions, decisions, decisions…

Before we started production, back in November 2010, we needed to get a few things straight: architecture, core values for the project, process, and then getting everyone on board with the decisions. We decided early on that we were going to learn all about CQRS, as it seemed to fit nicely with the requirements – especially for performance – and we also required of ourselves a rich domain model that really expressed all aspects of the system. We also decided that we wanted to drive it all out applying BDD, and that we wanted to drive the project forward using Scrum and really be true to the process, not make our own version of Scrum. A dedicated product owner was assigned to the project who would have responsibility for the backlog and make sure we refined as needed, planned as needed and executed on it.

Adding the business value

As I mentioned, this project came out of a technical need, not a concrete business need. We had the product owner role in place, and he needed to fill the backlog with concrete business value. This was not an easy task, basically because the organization as a whole probably didn't see the need for the project. In their defense, they already had a perfectly fine solution – not entirely optimal for smaller screens like a smartphone, but manageable. The different store owners who normally provided the needs for the backlog were in desperate need of new features on the existing solution, rather than this new thing targeting a platform they didn't see much business value in. Combined with the fact that the organization had been in migration mode – developer resources had in periods been tied down, partly or close to full-time, with migrating systems resulting from mergers and acquisitions – the organization had gotten used to not getting things done anyway. All this didn't exactly create the most optimal environment for getting the real business value into the project, something we really wanted. Early on we realized that the project could not succeed if we had user stories that were technical in nature. The first couple of months we did have quite a few technical user stories, and statistically these failed on estimation. We didn't have any business value to relate them directly back to, and in many cases they ended up as over-engineering, way out of proportion, as we developers got creative and failed at doing our job: adding business value. So we came to a conclusion: no technical user stories were allowed – ever. To this day I think that was one of the wisest decisions we made on the project. It helped us get back to focusing on why we were writing code every day: to add business value. Even though this project was spawned by the developers, there was clearly business value to guide us through. The approach became: let's pretend we're writing an e-commerce solution for the first time. This turned out to be a good decision; it helped us be naïve in our implementations, keeping in line with a core principle of agile processes – the simplest thing that could possibly work. Our product owner was then left with the challenge of dragging the business value out of the business; he did a great job of doing that, and at the same time of getting them to realize the need for the change of platform that was in reality taking place. That became evident further down the line: we were in fact not building an e-commerce front-end for smartphones, but an entirely new platform. More on that later.

YES, we did create a framework

One of the realizations we had early on was that we needed to standardize quite a few things. If you're going to do that many new things and have a halfway chance of getting everyone with you and feeling productive in the new environment, you need a basis that people can work with. Back in 2008 I started a project called Bifrost; you can read more about it here. We looked at it and decided it was a good starting point for what we wanted to achieve. We also wanted the framework to be open-sourced. The philosophy was to create a generic framework to be the infrastructure sitting at the core of the application we were building. It would abstract away all the nitty-gritty details of any underlying infrastructure, but also serve as the framework that promoted CQRS and the practices we wanted. It was to be a framework that guided and assisted you, and very clearly stayed out of your way. I'm not going to go in depth on the framework, as there are more posts related to it specifically, both in the making and already out there.

CRUDing our way through CQRS

Well on our way, we had quite a few things we really couldn't wrap our heads around. Coming from a very CRUD-centric world, the thought of decoupling things the way CQRS prescribes was really hard, and at the same time there was potential for duplication in the code. I remember being completely freaked out at the beginning of the project. All my neural cells were screaming "NO! STOP!" – but we had to move on, get smarter, get past the hurdles, learn. At first we really made a mess of things, simply because we were building on assumptions – the assumption that CQRS is similar to doing regular old CRUD with what we used to know as a domain model. It is far from it, and at one point we had a true eureka where we realized something important: we had been working hard an entire day on how to represent some queries so that they would be optimal in the code and also execute optimally – and it hit us like a ton of bricks after leaving work that day. We were doing everything wrong, and we even came up with a mantra: "if a problem seems complicated, chances are we're doing it wrong". That was the turning point that helped us write code that was simpler, more testable, more focused and faster, and we picked up pace in the project like I've never experienced before.
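To give an idea of the kind of simplification we ended up with on the query side: instead of pulling reads through the domain model and the ORM, a query simply returns a flat read model shaped for the page that needs it. A sketch under assumed types – the repository abstraction and the model are invented for the example:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Hypothetical abstraction over whatever the read side is stored in.
public interface IReadModelRepository
{
    IEnumerable<T> Query<T>(Expression<Func<T, bool>> criteria);
}

// A read model shaped for one specific view - no domain logic, no mapping layers.
public class ProductListItem
{
    public int ProductId { get; set; }
    public int CategoryId { get; set; }
    public string Name { get; set; }
    public decimal RetailPrice { get; set; }
    public bool InStock { get; set; }
}

// The query does one thing: fetch exactly the rows the page needs, as stored.
public class ProductsInCategoryQuery
{
    readonly IReadModelRepository _repository;

    public ProductsInCategoryQuery(IReadModelRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<ProductListItem> Execute(int categoryId)
    {
        return _repository.Query<ProductListItem>(item => item.CategoryId == categoryId);
    }
}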

From that point on, the mantra really proved to be a guiding star. Whenever we ran into things we didn't have an answer to straight away and started finding advanced solutions to the challenges, we applied the mantra and went back to rethink things.

Tooling

Early in the project we realized we needed a tool both for visualizing the events being generated and for republishing events. We came up with a tool built in Silverlight, using the pivot control from Microsoft for the visualization.

[Image: Mimir]

The real benefits

Looking back at what we did and trying to pin down the concrete benefits, I must say we have gained a serious amount of knowledge in a few areas. The thing CQRS specifically gave us was the ability to model our domain properly. We achieved the separation we wanted between the behavior of the application on one side and the things the behaviors caused changes to – the data – on the other. It helped us achieve greater flexibility and easier maintenance. Since we decided to not just apply CQRS but also build a reusable framework sitting at the bottom, we achieved a certain pattern of working that made it really easy to get started with development, and a recognizable structure that made it easy to know where to put things once the core principles had been explained to you.

I think by far the biggest benefit we achieved was the insight into how we should be developing software. Keeping things simple really has huge benefits. Decouple things, and stay true to single responsibility in every sense of the word "single".

Another huge realization I had – something I have been saying throughout my career but that really got reinforced with this project – is that concrete technology doesn't really matter. Sure, things will end up as a certain concrete technology, but stop thinking concretely when designing the system. Try to get down to the actual business needs, model them, and let the concrete technology fall into place later. With this approach you gain another useful possibility: doing top-down development. Start with the user interface and move your way down. Keep the feedback loop as tight as possible with the business. Don't do more than is needed. This approach is what I know I will miss the most in future projects. A tight feedback loop is where the gold is hidden.

Where did we screw up?

This project must come across as a fairly peachy story. And sure, it was by far, in my experience, the project with the best code-base, the most structured one, the one I personally learned the most from, and also the one project in my career where we really managed to stay on schedule – in fact, for a couple of the releases we delivered more business value than was asked for. But it came at a price. One of the things we struggled with early on was spreading the knowledge across the entire team and getting everyone excited about the architecture, the new way of working and so forth. Personally I didn't realize how invested people were in their existing solution, and in the existing way of doing things. I, as the architect, should have seen this before we got started. Not realizing it ended up being a growing problem in the group. There was a divide between the people who bought into the entire story and those who didn't, or didn't quite get it. My theory is that we should have given the most invested members of the group time for mourning – time to bury their friend of many years, the old project. We should have realized at the beginning of the project that we were in fact building for the future and would replace the existing solution, and this should have been the official line. Instead it kind of organically became the official line. We did do training in all the new techniques at the beginning, and gave people time to learn – basically didn't give them any tasks for a few weeks and just pointed them in the general direction of things they should look at. What I think we failed at was that we didn't point out that these things were not optional; these new ideas were in fact mandatory knowledge. We should have been much clearer about this and been vocal about the expectations. Another thing I would have done a bit differently: involve more people in the framework part of things. At the risk of stepping on toes, I think it is not wrong of me to say that I was the framework guy. For the most part, I ended up working on the framework. Don't get me wrong, I love doing that kind of work – but I think the experience and the design decisions got lost in translation, and not everyone in the group understood why things were done the way they were.

Conclusion

The project and the opportunity given to the team were awesome, and I really appreciate the trust that was placed in me to lead the way on this project. The pace we had and the stuff we did make this by far the coolest project I've ever worked on – and I am happy to admit it: I miss the project. Had it not been for a great opportunity that came my way, I would have loved to stay on. We had ups and downs, as with any software project, but overall I am wildly impressed with our accomplishments as a team and with the end result.

Oh, and by the way: the end result can be found here.

Standard
.net, 3D, Balder, C#, Cloud, Community, JavaScript, Personal

GeekRider – the goal, technical perspective

As I briefly mentioned earlier, I am embarking on a project which is going to demand a lot from me physically, but also from a technical perspective. I have a lot on my plate: during the daytime I'm 100% engaged with work at clients, so nighttime is when I have to squeeze in a lot of activities. For one, I have two kids that need my attention – and I have a golden rule of engaging with them from the time I get back from work till they're in bed. This leaves some 2–4 hours per day to do all the things I do. I therefore have to be smart with my time and make the most of it. Adding things to the schedule is hard, and if I add something it generally must have a synergy with something already in my schedule. In my schedule I have a couple of open-source projects that I focus a lot of my energy on – Balder, Bifrost and Forseti – so pretty much anything I add must relate to these in some fashion. GeekRider arose concretely from this need for synergy. I need to focus more on physical exercise, and GeekRider brings with it the synergy of pushing development on the open-source projects I'm involved in forward. Balder will hopefully serve the purpose of 3D visualization and bring forward a few features I want to have in that project. For a general web platform I could have gone for anything already out there, but I wanted to push forward features in Bifrost, so I decided to build the site from scratch on top of it, and also push into the cloud by hosting it on AppHarbor. Since the site will become very JavaScript-intensive, and I pretty much get allergic reactions when I don't write tests or BDD-style specifications for my code, the last project also gets some love: Forseti. The reasoning behind that project is that most test runners out there have so many moving parts, in the form of dependencies, to get up and running, and they're also very focused on running things in a browser. Forseti is aiming for something very different: a headless runner for JavaScript tests based on Env.js, not using any browsers by default to execute the tests/specs.

One of the goals for Bifrost is to make it easier for developers to create rich web-based applications while promoting good software development practices. Today the RIA space is rapidly changing, for the most part moving away from plugin technologies such as Flash or Silverlight and focusing more on the open standards found in HTML, CSS and JavaScript/ECMAScript.

From a front-end development perspective, Bifrost is taking on this latter part. Traditionally one would compose the resulting web page that is handed over to the client on the server. Multiple solutions exist for doing so, and specifically in the .net space, ASP.net and its derivatives are the most popular ones. Rendering, as this is often referred to, adds extra load on the server – not only is the server responsible for dealing with the request from the user, whether it is getting data or performing an action, it also has to transform the result into something the client can show. On top of all this, it has to deal with security. This is a very proven pattern, but in my opinion not the pattern we want moving forward, and therefore Bifrost will focus on a different one. Sure, Bifrost will not only be compatible with but also support the traditional route out of the box – for now in an opinionated fashion, by only supporting ASP.net MVC. The technique Bifrost will be focusing on is Single Page Applications, where you basically hand the "rendering" over to the client and let the client compose the page by swapping elements in and out at runtime. This is in fact nothing new; ever since AJAX became the big thing we've pretty much been doing this – but only for parts at a time, and even then letting parts of our page be swapped out for new versions rendered dynamically by the server.

Bifrost will have a composition technique that is based, as most things in the framework, on conventions. The focus will be on Features, and one can point to a feature simply by adding a <div/> tag and giving it the attribute data-feature="[name of feature]". Based on the configurable convention, Bifrost will find the necessary files representing the feature. Looking at the page from GeekRider as it is at the time of writing this post, we'll have the following.

[Image: the feature composition markup on the GeekRider page]

So, back on track. Now that we have this, what is the next logical step? Up till now, Bifrost has been very focused on server-side rendering, sporting an extension for ASP.net MVC and taking advantage of that stack. That is about to change – or should I say, the fact that it has been the only way to use Bifrost is about to change. A set of REST endpoints will be exposed from Bifrost, enabling any client to interact with the framework. From a Web developer's perspective that is not good enough, so we're also working on a JavaScript library that will integrate nicely with all of this.

In addition to the goals summarized thus far, I've also got another goal for myself personally: I want to become more productive with tools other than what I'm used to. I recently bought a MacBook Air, an impressive piece of hardware – but it doesn't sport the same specs as my MacBook Pro or my iMac, so I've decided not to put any virtualization software on it to run Windows. This means I have to start using tools other than Microsoft's Visual Studio for my development. For .net development I'm using MonoDevelop for now, and for general HTML, JavaScript and CSS development I'm using TextMate. My long-term goal is to be using TextMate for everything.

Summarizing, GeekRider will be the proof of concept for features added to Balder and Bifrost – driving forward new thoughts and ideas. I will try to blog about the progress as much as my schedule permits. This means I should keep myself from playing around or doing unnecessary stuff.

 

Standard