.net, C#, Code Tips

Tip: Improving async experience with Pulumi

Recently we’ve been working a lot with Pulumi for automating our cloud environments. We’re building out our own management tool and creating Pulumi stack definitions in C#. One thing that quickly became a pain was working with Inputs and Outputs and running into code that became way too nested, looking a lot like the old TPL or JavaScript Promises with .ContinueWith() or .then().

We’re building our stacks using the Pulumi function:

PulumiFn.Create(async () =>
{
    // Automate things...
});
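For context, this is the inline program model from the Pulumi Automation API. A minimal sketch of how such a program typically gets hooked up and run – the project and stack names here are just placeholders – could look something like this:

using Pulumi.Automation;

var program = PulumiFn.Create(async () =>
{
    // Automate things...
});

// Create or select the stack for this inline program and run an update.
var stackArgs = new InlineProgramArgs("management-tool", "dev", program);
var stack = await LocalWorkspace.CreateOrSelectStackAsync(stackArgs);
await stack.UpAsync();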

Within the delegate we set up the things we want to automate. One scenario we have is to create a configuration object that contains the connection string for a MongoDB cluster running in Atlas. The generated file is stored in an Azure file share we create with Pulumi.

// Storage is an object being passed along with information about the Azure storage being used.
var getFileShareResult = GetFileShare.Invoke(new()
{
    AccountName = storage.AccountName,
    ResourceGroupName = resourceGroupName,
    ShareName = storage.ShareName
});

// Cluster is an object holding the MongoDB cluster information.
var getClusterResult = GetCluster.Invoke(new()
{
    Name = cluster.Name,
    ProjectId = cluster.ProjectId
});

// Get the values we need to be able to write the connection string
getFileShareResult.Apply(fileShare =>
{
    getClusterResult.Apply(clusterInfo =>
    {
        // Write the file with the connection string
        return clusterInfo;
    });

    return fileShare;
});

In this sample we’re just interested in two values, and still it’s quite a few lines of code and nested scopes.

To improve on this, we ended up creating a couple of extension methods that help us write regular async/await based code.

public static class OutputExtensionMethods
{
    public static Task<T> GetValue<T>(this Output<T> output) => output.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Output<T> output, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        output.Apply(_ =>
        {
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

And for Input it would be the same:

public static class InputExtensionMethods
{
    public static Task<T> GetValue<T>(this Input<T> input) => input.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Input<T> input, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        input.Apply(_ =>
        {
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

With these we can now simplify the whole thing down to two lines of code:

// Get the values we need to be able to write the connection string
var fileShareResult = await getFileShareResult.GetValue(_ => _);
var clusterInfo = await getClusterResult.GetValue(_ => _);

// Write the file with the connection string
...

We’re interested to hear if there are better ways already with the Pulumi SDK or if we’re going about this in the wrong way. Please leave a comment with any input, much appreciated.

Standard
.net, C#, Code Tips

ASP.NET Core 6 – transparent WebSockets

Let’s face it; I’m a framework builder. In the sense that I build stuff for other developers to use. The goal when doing so is that the developer using what’s built should feel empowered by its capabilities. Developers should have lovable APIs that put them in the pit of success and let them focus on delivering the business value for their business. These are the thoughts that go into what we do at work when building reusable components. This post presents some of the reusable components we build.

TL;DR

All the things discussed are documented here. The backend implementation is here, the frontend here. A concrete backend example is here and the frontend counterpart here. I also recommend reading my post on our proxy generation tool for more context.

Introduction

WebSocket support for ASP.NET and ASP.NET Core has been around forever. At its core it is very simple, but at the same time crude and not as elegant or structured, IMO, as your average Controller. We started thinking about how we could simplify this. Sure, there is the SignalR approach – which is a viable option (and I’ve written a couple of books about it a few years back, here and here). But we wanted something that wouldn’t change the programming model too much from a regular Controller.

One of the reasons we wanted to add some sparkling reactiveness to our software is that we’re building systems focused on CQRS and Event Sourcing. With this we get into an eventual consistency game real quick on the read side. Once an action – or command in our case – is performed, the read side only updates as a consequence of an event being handled. Since we don’t really know when it is done and ready, we want to be able to notify the frontend with any changes as they become available.

Queries

One of the things we do is encapsulate the result of a query in a well-known structure. Much like GraphQL, which doesn’t rely on HTTP status codes alone as the means of communicating success or failure, we want to capture it in a well-known structure that holds the details of whether or not the query was successful; eventually we’ll also put validation results, exception messages and such on it. Alongside this, the actual result of the query is also kept on it. For now it looks like the following:

public record QueryResult(object Data, bool IsSuccess);

You’ll see this type used throughout this post.

Observables

We’re very fond of the concept of observables and use Reactive Extensions throughout our solution for different purposes. Our first construct is therefore a special kind of observable we call the ClientObservable. It is the encapsulation we will be using from our Controllers. Its responsibility is to do the heavy lifting of handling the WebSocket “dance” and also expose a clean API for us to provide data to it as things change. It also needs to deal with the client closing the connection and cleaning up after itself.

The basic implementation looks like below:

public class ClientObservable<T> : IClientObservable
{
    readonly ReplaySubject<T> _subject = new();

    public ClientObservable(Action? clientDisconnected = default)
    {
        ClientDisconnected = clientDisconnected;
    }

    public Action? ClientDisconnected { get; set; }

    public void OnNext(T next) => _subject.OnNext(next);

    public async Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions)
    {
        // Accept the WebSocket connection for the current request.
        using var webSocket = await context.HttpContext.WebSockets.AcceptWebSocketAsync();

        // Forward every value pushed to the subject to the client as a serialized QueryResult.
        var subscription = _subject.Subscribe(_ =>
        {
            var queryResult = new QueryResult(_!, true);
            var json = JsonSerializer.Serialize(queryResult, jsonOptions.JsonSerializerOptions);
            var message = Encoding.UTF8.GetBytes(json);

            webSocket.SendAsync(new ArraySegment<byte>(message, 0, message.Length), WebSocketMessageType.Text, true, CancellationToken.None);
        });

        // Keep reading until the client closes the connection.
        var buffer = new byte[1024 * 4];
        var received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

        while (!received.CloseStatus.HasValue)
        {
            received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        }

        // Complete the close handshake and clean up.
        await webSocket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription, CancellationToken.None);
        subscription.Dispose();

        ClientDisconnected?.Invoke();
    }
}

Since the class is generic, there is a non-generic interface that specifies the functionality that will be used by the next building block.

public interface IClientObservable
{
    Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions);

    object GetAsynchronousEnumerator(CancellationToken cancellationToken = default);
}

Action Filters

Our design goal was that Controller actions could just create ClientObservable instances and return these and then add some magic to the mix for it to automatically be hooked up properly.

For this to happen we can leverage filters in ASP.NET Core. They run within the invocation pipeline of ASP.NET and can wrap themselves around calls and perform tasks. We need a filter that recognizes the IClientObservable return type and makes sure to handle the connection correctly.

public class QueryActionFilter : IAsyncActionFilter
{
    readonly JsonOptions _options;

    public QueryActionFilter(IOptions<JsonOptions> options)
    {
        _options = options.Value;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        if (context.HttpContext.Request.Method == HttpMethod.Get.Method
            && context.ActionDescriptor is ControllerActionDescriptor)
        {
            var result = await next();
            if (result.Result is ObjectResult objectResult)
            {
                switch (objectResult.Value)
                {
                    case IClientObservable clientObservable:
                        {
                            if (context.HttpContext.WebSockets.IsWebSocketRequest)
                            {
                                await clientObservable.HandleConnection(context, _options);
                                result.Result = null;
                            }
                        }
                        break;

                    default:
                        {
                            result.Result = new ObjectResult(new QueryResult(objectResult.Value!, true));
                        }
                        break;
                }
            }
        }
        else
        {
            await next();
        }
    }
}

With the filter in place, you typically add it during the configuration of your controllers, e.g. in your Startup.cs during ConfigureServices – or when using the minimal hosting model:

services.AddControllers(_ => _.Filters.Add<QueryActionFilter>());
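With the .NET 6 minimal hosting model, the equivalent registration would look something like this (a minimal sketch):

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers(options => options.Filters.Add<QueryActionFilter>());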

Client abstraction

We also built a client abstraction in TypeScript to provide a simple way of leveraging this. It is built in layers, starting off with a representation of the connection.

export type DataReceived<TDataType> = (data: TDataType) => void;

export class ObservableQueryConnection<TDataType> {

    private _socket!: WebSocket;
    private _disconnected = false;

    constructor(private readonly _route: string) {
    }

    connect(dataReceived: DataReceived<TDataType>) {
        const secure = document.location.protocol.indexOf('https') === 0;
        const url = `${secure ? 'wss' : 'ws'}://${document.location.host}${this._route}`;
        let timeToWait = 500;
        const timeExponent = 500;
        const retries = 100;
        let currentAttempt = 0;

        const connectSocket = () => {
            const retry = () => {
                currentAttempt++;
                if (currentAttempt > retries) {
                    console.log(`Attempted ${retries} retries for route '${this._route}'. Abandoning.`);
                    return;
                }
                console.log(`Attempting to reconnect for '${this._route}' (#${currentAttempt})`);

                setTimeout(connectSocket, timeToWait);
                timeToWait += (timeExponent * currentAttempt);
            };

            this._socket = new WebSocket(url);
            this._socket.onopen = (ev) => {
                console.log(`Connection for '${this._route}' established`);
                timeToWait = 500;
                currentAttempt = 0;
            };
            this._socket.onclose = (ev) => {
                if (this._disconnected) return;
                console.log(`Unexpected connection closed for route '${this._route}'`);
                retry();
            };
            this._socket.onerror = (error) => {
                console.log(`Error with connection for '${this._route}' - ${error}`);
                retry();
            };
            this._socket.onmessage = (ev) => {
                dataReceived(JSON.parse(ev.data));
            };
        };

        connectSocket();
    }

    disconnect() {
        console.log(`Disconnecting '${this._route}'`);
        this._disconnected = true;
        this._socket?.close();
    }
}

On top of this we then have an ObservableQueryFor construct which leverages this and provides a way to subscribe to changes.

export abstract class ObservableQueryFor<TDataType, TArguments = {}> implements IObservableQueryFor<TDataType, TArguments> {
    abstract readonly route: string;
    abstract readonly routeTemplate: Handlebars.TemplateDelegate<any>;

    abstract readonly defaultValue: TDataType;
    abstract readonly requiresArguments: boolean;

    /** @inheritdoc */
    subscribe(callback: OnNextResult, args?: TArguments): ObservableQuerySubscription<TDataType> {
        let actualRoute = this.route;
        if (args && Object.keys(args).length > 0) {
            actualRoute = this.routeTemplate(args);
        }

        const connection = new ObservableQueryConnection<TDataType>(actualRoute);
        const subscriber = new ObservableQuerySubscription(connection);
        connection.connect(callback);
        return subscriber;
    }
}

The subscription being returned:

export class ObservableQuerySubscription<TDataType> {
    constructor(private _connection: ObservableQueryConnection<TDataType>) {
    }

    unsubscribe() {
        this._connection.disconnect();
        this._connection = undefined!;
    }
}

We build our frontends using React and added a wrapper for this to make it even easier:

export function useObservableQuery<TDataType, TQuery extends IObservableQueryFor<TDataType>, TArguments = {}>(query: Constructor<TQuery>, args?: TArguments): [QueryResult<TDataType>] {
    const queryInstance = new query() as TQuery;
    const [result, setResult] = useState<QueryResult<TDataType>>(new QueryResult(queryInstance.defaultValue, true));

    useEffect(() => {
        if (queryInstance.requiresArguments && !args) {
            console.log(`Warning: Query '${query.name}' requires arguments. Will not perform the query.`);
            return;
        }

        const subscription = queryInstance.subscribe(_ => {
            setResult(_ as unknown as QueryResult<TDataType>);
        }, args);

        return () => subscription.unsubscribe();
    }, []);

    return [result];
}

The entire frontend abstraction can be found here.

Usage

To get WebSockets working, we need to add the default ASP.NET Core middleware that handles it (read more here). Basically, in your Startup.cs or on your app builder, add the following:

app.UseWebSockets();
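In the minimal hosting model the same thing would look roughly like this – note that UseWebSockets needs to run before the endpoints that accept the connections (a sketch):

var app = builder.Build();
app.UseWebSockets();
app.MapControllers();
app.Run();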

With all of this we can now create a controller that watches a MongoDB collection:

public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(IMongoCollection<DebitAccount> collection) => _accountsCollection = collection;

    [HttpGet]
    public ClientObservable<IEnumerable<DebitAccount>> AllAccounts()
    {
        var observable = new ClientObservable<IEnumerable<DebitAccount>>();
        var accounts = _accountsCollection.Find(_ => true).ToList();
        observable.OnNext(accounts);
        var cursor = _accountsCollection.Watch();

        Task.Run(() =>
        {
            while (cursor.MoveNext())
            {
                if (!cursor.Current.Any()) continue;
                observable.OnNext(_accountsCollection.Find(_ => true).ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

Notice the usage of the ClientObservable and how it can be used with anything.
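To illustrate that the ClientObservable isn’t tied to MongoDB, here is a hypothetical action that just pushes the current time every second – the only contract is calling OnNext and wiring up ClientDisconnected:

[HttpGet("clock")]
public ClientObservable<DateTimeOffset> Clock()
{
    var observable = new ClientObservable<DateTimeOffset>();
    var cancellation = new CancellationTokenSource();

    _ = Task.Run(async () =>
    {
        while (!cancellation.IsCancellationRequested)
        {
            // Push a new value to any connected client.
            observable.OnNext(DateTimeOffset.UtcNow);
            await Task.Delay(1000);
        }
    });

    // Stop the background loop when the client goes away.
    observable.ClientDisconnected = () => cancellation.Cancel();

    return observable;
}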

MongoDB simplification – extension

The code in the controller above is typically the kind of thing that gets copy/pasted around, as it is a very common pattern. We figured we would be doing pretty much the same for most of our queries and added convenience methods for MongoDB. They can be found here.

We can therefore package what we had in the controller into an extension API and make it more general.

public static class MongoDBCollectionExtensions
{
    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        Expression<Func<TDocument, bool>>? filter,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= _ => true;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        FilterDefinition<TDocument>? filter = null,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= FilterDefinition<TDocument>.Empty;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
            this IMongoCollection<TDocument> collection,
            Func<Task<IAsyncCursor<TDocument>>> findCall)
    {
        var observable = new ClientObservable<IEnumerable<TDocument>>();
        var response = await findCall();
        observable.OnNext(response.ToList());
        var cursor = collection.Watch();

        _ = Task.Run(async () =>
        {
            while (await cursor.MoveNextAsync())
            {
                if (!cursor.Current.Any()) continue;
                var latest = await findCall();
                observable.OnNext(latest.ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

With this glue in place, we now have something that makes it very easy to create something that observes a collection and sends any changes to the frontend:

[Route("/api/accounts/debit")]
public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(
        IMongoCollection<DebitAccount> accountsCollection)
    {
        _accountsCollection = accountsCollection;
    }

    [HttpGet]
    public Task<ClientObservable<IEnumerable<DebitAccount>>> AllAccounts()
    {
        return _accountsCollection.Observe();
    }
}

Streaming JSON

A nice addition to ASP.NET Core 6 is the native support for IAsyncEnumerable<T> and streaming of JSON.
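As a small illustration of that framework capability – separate from our abstraction, and with a made-up route – a plain controller action returning IAsyncEnumerable<T> gets streamed as JSON as the values become available:

[HttpGet("numbers")]
public async IAsyncEnumerable<int> Numbers()
{
    for (var i = 0; i < 10; i++)
    {
        // Simulate values arriving over time.
        await Task.Delay(100);
        yield return i;
    }
}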

One benefit of this is that we can now quite easily support both a WebSocket scenario and regular web requests. On our ClientObservable<T> we can implement the IAsyncEnumerable<T> interface and create our own enumerator that supports this by observing the subject we already have there.

public class ClientObservable<T> : IClientObservable, IAsyncEnumerable<T>
{
    readonly ReplaySubject<T> _subject = new();

    public ClientObservable(Action? clientDisconnected = default)
    {
        ClientDisconnected = clientDisconnected;
    }

    public Action? ClientDisconnected { get; set; }

    public void OnNext(T next) => _subject.OnNext(next);

    public async Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions)
    {
        using var webSocket = await context.HttpContext.WebSockets.AcceptWebSocketAsync();
        var subscription = _subject.Subscribe(_ =>
        {
            var queryResult = new QueryResult(_!, true);
            var json = JsonSerializer.Serialize(queryResult, jsonOptions.JsonSerializerOptions);
            var message = Encoding.UTF8.GetBytes(json);

            webSocket.SendAsync(new ArraySegment<byte>(message, 0, message.Length), WebSocketMessageType.Text, true, CancellationToken.None);
        });

        var buffer = new byte[1024 * 4];
        var received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

        while (!received.CloseStatus.HasValue)
        {
            received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        }

        await webSocket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription, CancellationToken.None);
        subscription.Dispose();

        ClientDisconnected?.Invoke();
    }

    public IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken = default) => new ObservableAsyncEnumerator<T>(_subject, cancellationToken);

    public object GetAsynchronousEnumerator(CancellationToken cancellationToken = default) => GetAsyncEnumerator(cancellationToken);
}

The ObservableAsyncEnumerator<T> it returns can be implemented as follows:

public class ObservableAsyncEnumerator<T> : IAsyncEnumerator<T>
{
    readonly IDisposable _subscriber;
    readonly CancellationToken _cancellationToken;
    readonly ConcurrentQueue<T> _items = new();
    TaskCompletionSource _taskCompletionSource = new();

    public ObservableAsyncEnumerator(IObservable<T> observable, CancellationToken cancellationToken)
    {
        Current = default!;
        _subscriber = observable.Subscribe(_ =>
        {
            _items.Enqueue(_);
            if (!_taskCompletionSource.Task.IsCompletedSuccessfully)
            {
                _taskCompletionSource?.SetResult();
            }
        });
        _cancellationToken = cancellationToken;
    }

    public T Current { get; private set; }

    public ValueTask DisposeAsync()
    {
        _subscriber.Dispose();
        return ValueTask.CompletedTask;
    }

    public async ValueTask<bool> MoveNextAsync()
    {
        if (_cancellationToken.IsCancellationRequested) return false;
        await _taskCompletionSource.Task;
        _items.TryDequeue(out var item);
        Current = item!;
        _taskCompletionSource = new();
        if (!_items.IsEmpty)
        {
            _taskCompletionSource.SetResult();
        }

        return true;
    }
}

Conclusion

In this post we’ve touched on an optimization and formalization of reactive Web programming. From the perspective of covering the most common use cases, we feel this approach achieves that. It is not a catch-all solution, but the way we’ve built it gives you some flexibility in how you use it. It is not locked down to MongoDB specifically. The ClientObservable is completely agnostic; you can use it for anything – all you need is to observe something else and call the OnNext method on the observable whenever new things appear.

From a user perspective, I think we should aim for solutions that do not require the user to hit a refresh button. In order to do that, it needs to be simple for developers to enable it in their solutions. The solution presented here is geared towards that.

If you have any feedback, good or bad, improvements or pitfalls; please leave a comment.

Standard
.net, C#, Code Quality

Avoid code generation if compiler is in error state

One of the things we discovered with our proxy generator was what happens while working in the code and adding things like another property to a class/record: while typing, we could see the generator running and spitting out files as we typed. For instance, let’s say we have the following:

public record DebitAccount(AccountId Id, AccountName Name, PersonId Owner, double Balance);

If I were to now, after a build, start typing a fifth property on this record, it would start spitting out files. First a file without any name, then as I typed the type I would get a file for each letter I added – depending on how fast I typed. So if this was a string type, I could be seeing s.ts, st.ts, str.ts and so on.

Turns out this is by design. One of the optimizations done for the dotnet build command is that it keeps the compiler process around. It starts a build server that handles incremental builds as things happen and is therefore prepared for when we actually do a dotnet build, making it as fast as it can be.

When doing proxy generation, this is obviously less than optimal. To avoid it, we added a check for whether there are any error diagnostics from the compiler – if so, we do not generate anything.

In our source generator we added a line at the top to avoid this:

public void Execute(GeneratorExecutionContext context)
{
    // Skip generation entirely if the compilation currently has errors.
    if (context.Compilation.GetDiagnostics().Any(_ => _.Severity == DiagnosticSeverity.Error)) return;

    // ... the actual proxy generation continues here.
}

Standard
.net, Code Quality, Code Tips, Practices

Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate types that have meaning to your domain as well-known types. Rather than relying on technical or even primitive types, you formalize these as types you use throughout your codebase. This provides you with a few benefits, such as readability, and potentially also compile-time type checking and errors. It also provides you with a way to adhere to the principle of least surprise. It’s also a great opportunity to use the encapsulation to deal with cross-cutting concerns, for instance values that are subject to compliance requirements such as GDPR, or security concerns where you want to encrypt while in motion, etc.

Throughout the years, at the different places I’ve been where we’ve used these, we’ve evolved this from a very simple implementation to a more evolved one. Both these implementations aim at making it easy to deal with equality, and the latter one also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily, with C# 9 we got records, which let us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}

With records we don’t have to deal with equality nor comparison; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);
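To illustrate the value equality we get for free from the record (the number below is made up):

var left = new SocialSecurityNumber("01010112345");
var right = new SocialSecurityNumber("01010112345");
Console.WriteLine(left == right); // True – records compare by value, not by reference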

A full implementation can be found here – an implementation using it here.

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within an implementation you could also provide the other direction, going from the underlying type to the specific concept. This has some benefits, but also some downsides: if you want the compiler to catch errors, implicit conversions work against you, since all your ConceptAs<Guid> implementations effectively become interchangeable wherever the underlying type is accepted.
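A sketch of how that could look – the conversion to the underlying type on the base record, and the opposite direction on the concrete concept (elided members as before):

public record ConceptAs<T>
{
    // ... constructor and Value property as before ...

    public static implicit operator T(ConceptAs<T> concept) => concept.Value;
}

public record SocialSecurityNumber(string value) : ConceptAs<string>(value)
{
    public static implicit operator SocialSecurityNumber(string value) => new(value);
}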

Serialization

When going across the wire with JSON, or when storing it in a database, you probably don’t want the full construct with a { value: <actual value> }. In C# most serializers support the notion of converting to and from the target type. For Newtonsoft.Json these are called JsonConverters – an example can be found here; for MongoDB, you can find an example of a serializer here.
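As a rough illustration of the idea – this is not the linked implementation, just a minimal Newtonsoft.Json converter for a single concept type – the value gets written and read as a plain string rather than a nested object:

public class SocialSecurityNumberConverter : JsonConverter<SocialSecurityNumber>
{
    // Write only the underlying value.
    public override void WriteJson(JsonWriter writer, SocialSecurityNumber value, JsonSerializer serializer) =>
        writer.WriteValue(value.Value);

    // Read the plain string back into the concept.
    public override SocialSecurityNumber ReadJson(JsonReader reader, Type objectType, SocialSecurityNumber existingValue, bool hasExistingValue, JsonSerializer serializer) =>
        new((string)reader.Value!);
}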

Summary

I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you would then avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);

Standard
.net

Legacy C# gRPC package + M1

I recently upgraded to a new MacBook with the M1 CPU in it. In one of the projects I’m working on at work, we have a third party dependency that is still using the legacy gRPC package. We’ve started using .NET 6, which supports the M1 processor fully, but you get a runtime error both when running natively on M1 and through the Rosetta translation. This is because the package does not include the OSX ARM64 version of the native .dylib needed for it to work. I decided to package up a NuGet package that includes only this binary, so that you can add the regular package and this new one on top and make it work on M1 CPUs. You can find the package here and the repository here.

Usage

In addition to your Grpc package reference, just add a reference to this package in your .csproj file:

<ItemGroup>
  <PackageReference Include="Grpc" Version="2.39.1" />
  <PackageReference Include="Contrib.Grpc.Core.M1" Version="2.39.1" />
</ItemGroup>

If you’re leveraging another package that implicitly pulls this in, you might need to explicitly include a package reference to the Grpc package anyway – provided your library works with the version this package is built for.

Summary

Although this package now exists, the future of gRPC and C# lies with a new implementation that does not need a native library; read more here. Anyone building anything new should go for the new package and hopefully over time all existing solutions will be migrated as well.

Standard
.net, C#

C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like the _ViewImports.cshtml file in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should apply to all of them, you’d have to copy these around by default.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let’s say you have a file called GlobalUsings.cs at the root of your solution looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you’d simply open the .csproj file of the project and add the following:

<ItemGroup>
   <Compile Include="../GlobalUsings.cs"/> <!-- Assuming your file sits one level up -->
</ItemGroup>

This will then include this reusable file for the compiler.

Standard
.net, C#

Machine Specifications – .NET Core

I’ve been working on a particular project, mostly in the design phase – but leading up to implementation I quickly hit a snag; my favorite framework and tools for running my tests – or rather, specs – are not in the .NET Core space yet. After kicking and screaming to myself for the most part, I decided to do something about it and contribute something back, after having used the excellent Machine.Specifications Specification by Example framework and accompanying tools for years.

The codebase was not really able to build directly on top of .NET Core – and I started looking at forking it and just #ifdefing my way through the changes. This would be the normal way of contributing in the open source space. Unfortunately, it quickly got out of hand – there are simply too many differences for me to work fast enough and achieve my own goals right now. So, although not a decision I took lightly, I decided to just copy the things needed to run it on .NET Core across into a completely new repository. It now lives here.

Since .NET Core is still in flux, and after the announcement of DNX being killed off and replaced by a new .NET CLI tool called dotnet, I decided to for now just do the simplest thing possible and not implement a command or a test framework extension. This will likely change as the tools mature over time. I’m focused on my own feedback loop right now.

Anywho, the conclusion I’ve come to is that my test/spec projects will for now be regular console apps with a single line of code executing all the specs in the assembly. This is far from ideal, but a starting point so I can carry on. The next logical step is to look at improving this experience with something that runs the specs affected by a change, either in the unit under test or in the spec itself. If you want a living example, please have a look here.

Basically – the needed bits are NuGet packages that you can find here, here and here. The first package does include a reference to the others, but right now the tooling is too flaky to predict whether or not IntelliSense actually works using things like OmniSharp with VSCode or similar, so I have been explicitly taking all three dependencies.

The next thing you need is to have a Program with a Main method that actually runs it by calling the AssemblyRunner that I’ve put in for now.
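Something along the lines of the following – note that the exact AssemblyRunner API shown here is hypothetical; the screenshot below shows the actual code:

public class Program
{
    public static void Main()
    {
        // Run all specs found in this assembly (hypothetical runner entry point).
        AssemblyRunner.Run(typeof(Program).GetTypeInfo().Assembly);
    }
}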

[Screenshot: a Program class with a Main method calling the AssemblyRunner – https://03ab57ec0b644567a1e7442a.blob.core.windows.net/wp-content/2016/04/2016-04-16_23-55-21.png]

Once you have this you can do a dotnet run and the output will be in the console.

[Screenshot: console output from dotnet run – https://03ab57ec0b644567a1e7442a.blob.core.windows.net/wp-content/2016/04/2016-04-16_23-40-52.png]

.NET Core Version

An important thing to notice is that I’ve chosen to be right there on the bleeding edge of things, taking dependencies on packages and runtime versions from the MyGet feeds. The reason behind this is that some of the things I’m using only exist in the latest bits. Scott Hanselman has a great writeup with regards to where we are today with .NET Core.

Future

Well, I’m not yet knee deep into this and not focusing my effort on this project. I’ll be building what I need, but of course I’m totally open to anyone wanting to contribute. If I were to say anything about my own vision and the natural next steps I can see right now, I’d love the first step to be an auto-watching CLI tool that runs the appropriate tests according to the files being changed. I would not go all in and do a full analysis of call stacks to figure out what is changing, but rather take an approximation approach to it – similar to what we put in place for our test runner project for JavaScript called Forseti. The approximation was basically based on configurable conventions mapping a relationship between the systems under test and the tests – or specs, as I prefer to refer to them. After that I can see integration with VSCode – which is my favorite editor at the moment. Something similar to WallabyJS would be very cool.

Standard
.net, JavaScript, MVVM

Slashing away the hash bang in single-page applications in Bifrost

One of the things that has been discussed the last couple of years for single-page applications is how to deal with routing, since rendering and composition in those kinds of applications are dealt with on the client. Many have been pointing to using the hash (#) as a technique, since it makes it possible to change history in the browser without post-backing for browsers that do not have the new HTML5 history API. This works fine for applications and is very easy to respond to in the client, but the URLs become unfamiliar and look a bit weird for deep-linking. In order for search crawlers to crawl content properly, a specification also exists that states the inclusion of a ! (bang), yielding a #! (hash bang) as the separator for the specific route.

In Bifrost we have been working quite a bit the last six months to get to a model that we believe in for single-page applications. Many models rely partially on the server for rendering, but compose the rendered parts in the client. With Bifrost, we wanted a different model: we wanted everything to be based on regular static HTML files sitting on the server, without any load on the server for rendering – just serve the files as is. Instead, we wanted to compose the application from files on the server. One of the challenges we wanted to crack was to have regular URLs without any # or #!, even for HTML4 browsers – or at least in any anchors linking inside the app. In order for this to work, you need to deal with requests coming to the server with routes that have no meaning for any server code running. One of the motivations was also to not have to do anything specific for any routes on the server, meaning that you wouldn’t need to configure anything for any new routes you wanted – everything should be done only once on the client.

The problem

The nature of a single-page application is that it basically has a start page, and all requests should go to this page. This page represents the composition of the application; it has enough information and scripts on it to be able to render the remainder of the app based on any URL coming in. In order to accomplish this, the server must be able to take any URL coming in and pretty much ignore it – unless of course there is something configured or a file exists at the specific URL.


Our solution

Bifrost is for now built for .net, and more specifically ASP.net, so we had to dig into that platform to figure out a way to deal with this. What we discovered is a part of the request pipeline in ASP.net that we could hook into and rewrite the path during a request. (The implementation can be found here.)

URL Flow


What about the client?

Another challenge is to deal with history without post-backing. The way we’ve built everything is that you as a developer or web designer do not have to think about whether or not this is a Bifrost app or just a regular HTML app; you just create your anchor tags as usual. Just create your links as before – no hash-bangs, nothing special – have your full paths sit there. This also makes it work great with search engines; they will be able to crawl your content, get the proper deep-links and index you. But we still need to hook into the browser, so we don’t do a post-back to the server for any URL changes. The way we’ve chosen to deal with this is to hook into the body and deal with it through event bubbling. Any click events occurring inside the document will be captured, and if it happens to be an anchor tag that is the source, we pull out the URL and rewrite history inside the browser. Since we’re using Balupton’s History.js, we get support for HTML4 browsers as well. History.js will use the hash in those scenarios where it can’t rewrite the path, but your URLs do not have to change – it will just use it internally and it will all be abstracted away.


Standard
.net, Bifrost, C#, CQRS, JavaScript, Patterns, Practices

CQRS in ASP.net MVC with Bifrost

If you’re in the .net space and you’re doing web development, chances are you’re on the ASP.net stack and you might even be using the MVC platform from Microsoft on top of it. When we started building Bifrost for the initial project we needed it for, we were also on the ASP.net MVC stack and quickly realised we needed to build something for the frontend part of the application to be able to facilitate the underlying backend built around the CQRS principles. This post will talk a little bit about the motivation, what we were trying to solve and what we came up with.

The Flow

Below you see a sample of a flow in the application. This particular sample shows a product page; it has details about the product and the price of course, but also a simple button saying “Add to cart” – basically you want to add the product to your shopping cart.

Flow

Sure enough, it is possible to solve this without anything special – you have your Model that represents the product with the details, price being a complex thing that we need to figure out depending on whether or not you have configured it to show VAT and whether you’re part of a price list – but something that is relatively easy to solve. On the execution side we have a command called AddItemToCart that we can actually get populated properly with a simple ASP.net MVC form:

[Screenshot: an ASP.net MVC form with hidden input elements populating the AddItemToCart command]

A regular MvcForm with hidden input elements for the properties on the command that are not visible; any input from the user comes from regular input fields, such as text boxes and others. Basically, by setting the correct names, the default model binder in ASP.net MVC is able to deserialize the FORM into a command.

Validation

Now here comes the real issue with the above mentioned approach: validation. Validation is tied to the model – you can use any provider you want, the built-in one or things like FluentValidation, which we settled on. But you quickly run into trouble with client-side validation. This is basically because the view is tied to one model, while the things you really want to validate are the commands. You want to validate before commands are executed, basically because after they are handled and events are published, the truth has been written and it’s too late to validate anything coming back on any models. So, how can one fix this? You could come up with an elaborate ModelBinder scheme that modifies model state and what not, but that seemed very complicated – at least we thought so, of course after trying it out. We came up with something we call a CommandForm – so basically, instead of doing BeginForm() as above, we have extensions for the HtmlHelper that create a CommandForm, giving you a new model within the using scope with all the MVC goodies in a limited scope, including the ability to do client-side validation.

So now you get the following:

[Screenshot: BeginCommandForm usage in the view]

Now you get a form that contains a new HtmlHelper for the command type given in the first generic parameter, and within the form you’ll also find the Command itself, should you need to set values on it before adding a hidden field.

This now gives you a model context within a view that is isolated, and you can post that form alone without having to think about the model defined for the view – which really should be a read-only model anyway.

Worth mentioning is that there is also an AJAX version of the same BeginCommandForm(), where you do Ajax.BeginCommandForm(), for those who need that as well.

Features

Another thing we wanted to do, as I mentioned in this post, was the isolation of Features – sort of applications within the application, each just a part of the overall composition that makes up the larger scope. We defined a feature to contain all the artefacts that make up the feature: the view, the controller, any JavaScript, any CSS files, images, everything. We isolate them by having it all close together in a folder or namespace for the tier you’re working on. For the frontend we had a Features folder at the root of the ASP.net MVC site, and within it every feature was sitting in its own folder with its respective artefacts. Moving down to the backend, we reflected the structure in every component; for instance we had a component called Domain, and within it you’d find the same structure. This way all the developers know exactly where to go and do work; it just makes things a lot simpler. Anyway, in order to accomplish this, one needs to do a couple of things. The first thing you need to do is collapse the structure that the MVC templates create for your project, so that you don’t have the Controllers, Views and Models folders, but a Features folder with the Web.config from the Views folder sitting at its root.

Then we need to handle static content properly in the Features folder by permitting things like JavaScript files sitting alongside the view files, so you need to add the following within the <system.web> tag in your Web.config file:

[Screenshot: Web.config handlers configuration for static content in the Features folder]

Then you need to relocate the view and master location formats for the view engines in ASP.net MVC:

[Screenshot: view engine location formats pointing to the Features folder]

(Can be found here)

It will then find all your views in the Features folder. You should now have a new structure. The only drawback, if you see it as one, is that tooling like Visual Studio’s built-in “Add View” context menu stops functioning, but I would argue that the developer productivity gained through a proper structure means you really don’t miss it that much. I guess you can get this back somehow with tools like ReSharper, but personally I didn’t bother.

Conclusion

ASP.net MVC provides a lot of goodness when it comes to doing things with great separation in the Web space for .net developers. It also provides quite a few extension points, and you can really see that the developers at Microsoft who have been working on it have gone out of their way to make it extensible and make the code clean. Sure, it’s not perfect, but what is – it’s way better than anything we’ve seen. This is something we enjoyed quite a bit in our own little CQRS journey. We did try quite a few things; some of them worked out fine – like the CommandForm – and some didn’t. But we were quite happy with the productivity gain we got by adding these helpers, and it also made things a lot more explicit.

One conclusion we did however reach at a point is that ASP.net MVC and Bifrost, with its interpretation of CQRS, are a bit of a strange fit. We basically have a full pipeline, in quite a different manner than ASP.net MVC – which is a focused frontend pipeline. So security, validation and more were things we started building into Bifrost, and the need for ASP.net MVC became less and less important. And when we started down the journey of creating Single Page Applications with HTML and JavaScript as the only thing you need, you really don’t need it at all. The connection between the client and server would then be Web requests and JSON, and you need something like WebApi or similar – in fact we created our own simple thing in Bifrost to accommodate even that. But all this is for another post.

The MVC part of Bifrost can be found here, Bifrost’s official page is under construction here, and the source is here.

Standard
.net, 3D, Balder, C#, Cloud, Community, JavaScript, Personal

GeekRider – the goal, technical perspective

As I briefly mentioned earlier, I am embarking on a project which is going to demand a lot from me physically, but also from a technical perspective. I have a lot of things on my plate; during daytime I’m 100% engaged with work at clients, and nighttime is when I have to squeeze in a lot of activities. For one, I have two kids that need my attention – and I have a golden rule of engaging with them from the time I get back from work till they’re in bed. This leaves some 2-4 hours per day to do all the things I do. I therefore have to be smart with my time and make the most of it. Adding things to the schedule is hard, and if I add something, it in general must have a synergy with something already in my schedule. In my schedule I have a couple of open-source projects that I focus a lot of my energy on: Balder, Bifrost and Forseti, so pretty much anything I put in must relate to these in some fashion. GeekRider arose concretely from this need for synergy. I need to focus more on physical exercise, and brought in GeekRider with the synergy of pushing development on the open-source projects I’m involved in forward. Balder will hopefully serve the purpose of 3D visualization and bring forward a few features that I want to have in that project. As a general web platform, I could have gone for anything already out there, but I wanted to push forward features in Bifrost; I therefore decided to build the site from scratch on top of it and also push into the cloud by hosting it on AppHarbor. Since the site will become very JavaScript intensive, and I pretty much get allergic reactions when I don’t write tests or BDD style specifications for my code, the last project also gets some love: Forseti. The reasoning behind that project is that most test runners out there have too many moving parts in the form of dependencies to get up and running, and they’re also very focused on running things in a browser. Forseti is aiming towards something very different: a headless runner for JavaScript tests based on Env.js, not using any browsers by default to execute the tests/specs.

One of the goals for Bifrost is to make it easier for developers to create rich web based applications, promoting good software development practices. Today, the RIA space is rapidly changing and for the most part moving away from plugin technologies such as Flash or Silverlight and focusing more on the open standards found in HTML, CSS and JavaScript/EcmaScript.

From a frontend development perspective, Bifrost is taking on this latter part. Traditionally one would compose the resulting web page that is handed over to the client on the server. Multiple solutions exist for doing so, and specifically in the .net space, ASP.net and its derivatives are the most popular ones. Rendering, as this is often referred to, adds an extra load onto the server – not only is the server responsible for dealing with the request from the user, whether it is getting data or performing an action, but it also has to transform the result into something the client can show. On top of all this, it has to deal with security. This is a very proven pattern, but in my opinion not the pattern we want to be using moving forward, and therefore Bifrost will focus on a different one. Sure, Bifrost will not only be compatible with but also support the traditional route out of the box – for now in an opinionated fashion by only supporting ASP.net MVC. The technique Bifrost will be focusing on is Single Page Applications, where you basically hand the “rendering” over to the client and let the client compose the page by swapping elements in and out at runtime. This is in fact nothing new; ever since AJAX became the big thing, we’ve pretty much been doing this – but only for parts at a time, and even letting parts of our page be swapped out for new versions rendered dynamically by the server.

Bifrost will have a composition technique that is, like most things in the framework, based on conventions. The focus will be on Features, and one can point to a feature simply by adding a <div/> tag and giving it the attribute data-feature=”[name of feature]”. Based on the configurable convention, Bifrost will find the necessary files representing the feature. Looking at the page from GeekRider as it is at the time of writing this post, we’ll have the following.

[Screenshot: the GeekRider page markup with a data-feature attribute]

So, back on track. Now that we have this, what is the next logical step? Up till now, Bifrost has been very focused on server side rendering, sporting an extension for ASP.net MVC and taking advantage of that stack. That is about to change – or should I say, the fact that it has been the only way to use Bifrost is about to change. A set of REST endpoints will be exposed from Bifrost, enabling any client to interact with the framework. From a Web developer perspective, this is not good enough on its own, so we’re working on bringing in a JavaScript library that will integrate nicely with all this.

In addition to the goals summarized thus far, I’ve also got another goal for me personally: I want to become more productive with tools other than what I’m used to. I recently bought a MacBook Air, an impressive piece of hardware – but it doesn’t sport the same specs as my MacBook Pro or my iMac, so I’ve decided not to put any virtualization software on it to run Windows. This means I have to start using tools other than Microsoft’s Visual Studio for my development. For .net development, I’m for now using MonoDevelop, and for general HTML, JavaScript and CSS development, I’m using TextMate. My long term goal is to be using TextMate for everything.

Summarizing, GeekRider will be the proof of concept for features added to Balder and Bifrost – driving forward with new thoughts and ideas. I will try to blog about the progress as much as my schedule permits. This means I should keep myself from playing around or doing unnecessary stuff.


Standard