
Tip: Improving async experience with Pulumi

Recently we’ve been working a lot with Pulumi for automating our cloud environments. We’re building out our own management tool and creating Pulumi stack definitions in C#. One thing that quickly became a pain was working with Inputs and Outputs, and running into code that became way too nested – looking a lot like the old TPL or JavaScript Promises with .ContinueWith() or .then().

We’re building our stacks using the Pulumi function:

PulumiFn.Create(async () =>
{
    // Automate things...
});

Within the action we set up the things we want to automate. One scenario we have is creating a configuration object that contains the connection string for a MongoDB cluster running in Atlas. The generated file is stored in an Azure file share we create with Pulumi.

// Storage is an object being passed along with information about the Azure storage being used.
var getFileShareResult = GetFileShare.Invoke(new()
{
    AccountName = storage.AccountName,
    ResourceGroupName = resourceGroupName,
    ShareName = storage.ShareName
});

// Cluster is an object holding the MongoDB cluster information.
var getClusterResult = GetCluster.Invoke(new()
{
    Name = cluster.Name,
    ProjectId = cluster.ProjectId
});

// Get the values we need to be able to write the connection string
getFileShareResult.Apply(fileShare =>
{
    getClusterResult.Apply(clusterInfo =>
    {
        // Write the file with the connection string
        return clusterInfo;
    });

    return fileShare;
});

In this sample we’re just interested in two values, and still it’s quite a few lines of code and nested scopes.

To improve on this, we ended up creating a couple of extension methods that help us write regular async/await based code.

public static class OutputExtensionMethods
{
    public static Task<T> GetValue<T>(this Output<T> output) => output.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Output<T> output, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        output.Apply(_ =>
        {
            // Resolve the value and complete the task, bridging Apply() over to async/await.
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

And for Input it would be the same:

public static class InputExtensionMethods
{
    public static Task<T> GetValue<T>(this Input<T> input) => input.GetValue(_ => _);

    public static Task<TResult> GetValue<T, TResult>(this Input<T> input, Func<T, TResult> valueResolver)
    {
        var tcs = new TaskCompletionSource<TResult>();
        input.Apply(_ =>
        {
            var result = valueResolver(_);
            tcs.SetResult(result);
            return result;
        });
        return tcs.Task;
    }
}

With these we can now simplify the whole thing down to two lines of code:

// Get the values we need to be able to write the connection string
var fileShareResult = await getFileShareResult.GetValue();
var clusterInfo = await getClusterResult.GetValue();

// Write the file with the connection string
...
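
As a side note, the Pulumi SDK does offer Output.Tuple for combining outputs, which removes the nesting but still keeps you in Apply-style code rather than plain async/await – part of why we went with the extension methods above:

// Combine both outputs and handle them in a single Apply.
Output.Tuple(getFileShareResult, getClusterResult).Apply(values =>
{
    var (fileShare, clusterInfo) = values;
    // Write the file with the connection string
    return values;
});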

We’re interested to hear if there are better ways already with the Pulumi SDK or if we’re going about this in the wrong way. Please leave a comment with any input, much appreciated.


ASP.NET Core 6 – transparent WebSockets

Let’s face it: I’m a framework builder, in the sense that I build stuff for other developers to use. The goal when doing so is that the developer using what’s built should feel empowered by its capabilities. Developers should have lovable APIs that put them in the pit of success and let them focus on delivering business value. These are the thoughts that go into what we do at work when building reusable components. This post presents some of the reusable components we’ve built.

TL;DR

All the things discussed are documented here. The backend implementation is here, the frontend here. A concrete backend example of this is here and its frontend counterpart here. I also recommend reading my post on our proxy generation tool for more context.

Introduction

WebSocket support for ASP.NET and ASP.NET Core has been around forever. At its core it is very simple, but at the same time crude and, IMO, not as elegant or structured as your average Controller. We started thinking about how we could simplify this. Sure, there is the SignalR approach – which is a viable option (and I wrote a couple of books about it a few years back, here and here). But we wanted something that wouldn’t change the programming model too much from a regular Controller.

One of the reasons we wanted to add some sparkling reactiveness to our software is that we’re building software focused on CQRS and Event Sourcing. With this you get into an eventual-consistency game real quick on the read side. Once an action – or command in our case – is performed, the read side only updates as a consequence of an event being handled. Since we don’t really know when it is done and ready, we want to be able to notify the frontend of changes as they become available.

Queries

One of the things we do is encapsulate the result of a query in a well-known structure. Much like GraphQL, which doesn’t rely on HTTP error codes alone as the means of communicating success, we want to capture in a well-known structure whether or not the query was successful; eventually we’ll also put validation results, exception messages and the like on it. Alongside this, the actual result of the query is kept on it. For now it looks like the following:

public record QueryResult(object Data, bool IsSuccess);

You’ll see this type used throughout this post.
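
On the wire – with default web serialization settings, which camel-case the property names – a successful result then looks something like this, with hypothetical data:

{
  "data": [{ "name": "Checking", "balance": 1250.0 }],
  "isSuccess": true
}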

Observables

We’re very fond of the concept of observables and use Reactive Extensions throughout our solution for different purposes. Our first construct is therefore a special kind of observable we call the ClientObservable. It is the encapsulation we will be using from our Controllers. Its responsibility is to do the heavy lifting of handling the WebSocket “dance” and expose a clean API for us to provide data to as things change. It also needs to deal with the client closing the connection and clean up after itself.

The basic implementation looks like below:

public class ClientObservable<T> : IClientObservable
{
    readonly ReplaySubject<T> _subject = new();

    public ClientObservable(Action? clientDisconnected = default)
    {
        ClientDisconnected = clientDisconnected;
    }

    public Action? ClientDisconnected { get; set; }

    public void OnNext(T next) => _subject.OnNext(next);

    public async Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions)
    {
        // Accept the WebSocket and forward everything pushed to the subject as serialized query results.
        using var webSocket = await context.HttpContext.WebSockets.AcceptWebSocketAsync();
        var subscription = _subject.Subscribe(_ =>
        {
            var queryResult = new QueryResult(_!, true);
            var json = JsonSerializer.Serialize(queryResult, jsonOptions.JsonSerializerOptions);
            var message = Encoding.UTF8.GetBytes(json);

            // Note: the send is deliberately not awaited inside the subscription.
            webSocket.SendAsync(new ArraySegment<byte>(message, 0, message.Length), WebSocketMessageType.Text, true, CancellationToken.None);
        });

        // Keep receiving until the client closes the connection.
        var buffer = new byte[1024 * 4];
        var received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

        while (!received.CloseStatus.HasValue)
        {
            received = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        }

        await webSocket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription, CancellationToken.None);
        subscription.Dispose();

        ClientDisconnected?.Invoke();
    }
}

Since the class is generic, there is a non-generic interface that specifies the functionality used by the next building block. (The GetAsynchronousEnumerator method comes into play in the Streaming JSON section towards the end of this post.)

public interface IClientObservable
{
    Task HandleConnection(ActionExecutingContext context, JsonOptions jsonOptions);

    object GetAsynchronousEnumerator(CancellationToken cancellationToken = default);
}

Action Filters

Our design goal was that Controller actions could just create ClientObservable instances and return these and then add some magic to the mix for it to automatically be hooked up properly.

For this to happen we can leverage filters in ASP.NET Core. They run within the invocation pipeline of ASP.NET and can wrap themselves around calls and perform tasks. We need a filter that recognizes the IClientObservable return type and makes sure to handle the connection correctly.

public class QueryActionFilter : IAsyncActionFilter
{
    readonly JsonOptions _options;

    public QueryActionFilter(IOptions<JsonOptions> options)
    {
        _options = options.Value;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        if (context.HttpContext.Request.Method == HttpMethod.Get.Method
            && context.ActionDescriptor is ControllerActionDescriptor)
        {
            var result = await next();
            if (result.Result is ObjectResult objectResult)
            {
                switch (objectResult.Value)
                {
                    case IClientObservable clientObservable:
                        {
                            if (context.HttpContext.WebSockets.IsWebSocketRequest)
                            {
                                await clientObservable.HandleConnection(context, _options);
                                result.Result = null;
                            }
                        }
                        break;

                    default:
                        {
                            result.Result = new ObjectResult(new QueryResult(objectResult.Value!, true));
                        }
                        break;
                }
            }
        }
        else
        {
            await next();
        }
    }
}

To put the filter in place, you typically add it during the configuration of your controllers, e.g. in your Startup.cs during ConfigureServices – or using the minimal hosting APIs:

services.AddControllers(_ => _.Filters.Add<QueryActionFilter>());
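
With the .NET 6 minimal hosting model, the equivalent would be along these lines – a sketch of a typical Program.cs:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers(_ => _.Filters.Add<QueryActionFilter>());

var app = builder.Build();
app.UseWebSockets();
app.MapControllers();
app.Run();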

Client abstraction

We also built a client abstraction in TypeScript to provide a simple way to leverage this from the frontend. It is built in layers, starting with a representation of the connection.

export type DataReceived<TDataType> = (data: TDataType) => void;

export class ObservableQueryConnection<TDataType> {

    private _socket!: WebSocket;
    private _disconnected = false;

    constructor(private readonly _route: string) {
    }

    connect(dataReceived: DataReceived<TDataType>) {
        const secure = document.location.protocol.indexOf('https') === 0;
        const url = `${secure ? 'wss' : 'ws'}://${document.location.host}${this._route}`;
        let timeToWait = 500;
        const timeExponent = 500;
        const retries = 100;
        let currentAttempt = 0;

        const connectSocket = () => {
            const retry = () => {
                currentAttempt++;
                if (currentAttempt > retries) {
                    console.log(`Attempted ${retries} retries for route '${this._route}'. Abandoning.`);
                    return;
                }
                console.log(`Attempting to reconnect for '${this._route}' (#${currentAttempt})`);

                setTimeout(connectSocket, timeToWait);
                timeToWait += (timeExponent * currentAttempt);
            };

            this._socket = new WebSocket(url);
            this._socket.onopen = (ev) => {
                console.log(`Connection for '${this._route}' established`);
                timeToWait = 500;
                currentAttempt = 0;
            };
            this._socket.onclose = (ev) => {
                if (this._disconnected) return;
                console.log(`Unexpected connection closed for route '${this._route}'`);
                retry();
            };
            this._socket.onerror = (error) => {
                console.log(`Error with connection for '${this._route}' - ${error}`);
                retry();
            };
            this._socket.onmessage = (ev) => {
                dataReceived(JSON.parse(ev.data));
            };
        };

        connectSocket();
    }

    disconnect() {
        console.log(`Disconnecting '${this._route}'`);
        this._disconnected = true;
        this._socket?.close();
    }
}

On top of this we then have an ObservableQueryFor construct, which leverages the connection and provides a way to subscribe for changes.

export abstract class ObservableQueryFor<TDataType, TArguments = {}> implements IObservableQueryFor<TDataType, TArguments> {
    abstract readonly route: string;
    abstract readonly routeTemplate: Handlebars.TemplateDelegate<any>;

    abstract readonly defaultValue: TDataType;
    abstract readonly requiresArguments: boolean;

    /** @inheritdoc */
    subscribe(callback: OnNextResult, args?: TArguments): ObservableQuerySubscription<TDataType> {
        let actualRoute = this.route;
        if (args && Object.keys(args).length > 0) {
            actualRoute = this.routeTemplate(args);
        }

        const connection = new ObservableQueryConnection<TDataType>(actualRoute);
        const subscriber = new ObservableQuerySubscription(connection);
        connection.connect(callback);
        return subscriber;
    }
}
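
These classes are normally emitted by our proxy generator, but a hand-written one would look something along these lines – AllAccounts and DebitAccount are hypothetical types for illustration:

// Hypothetical query proxy for a route defined later in this post.
export class AllAccounts extends ObservableQueryFor<DebitAccount[]> {
    readonly route = '/api/accounts/debit';
    readonly routeTemplate = Handlebars.compile('/api/accounts/debit');
    readonly defaultValue: DebitAccount[] = [];
    readonly requiresArguments = false;
}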

The subscription being returned:

export class ObservableQuerySubscription<TDataType> {
    constructor(private _connection: ObservableQueryConnection<TDataType>) {
    }

    unsubscribe() {
        this._connection.disconnect();
        this._connection = undefined!;
    }
}

We build our frontends using React and added a wrapper for this to make it even easier:

export function useObservableQuery<TDataType, TQuery extends IObservableQueryFor<TDataType>, TArguments = {}>(query: Constructor<TQuery>, args?: TArguments): [QueryResult<TDataType>] {
    const queryInstance = new query() as TQuery;
    const [result, setResult] = useState<QueryResult<TDataType>>(new QueryResult(queryInstance.defaultValue, true));

    useEffect(() => {
        if (queryInstance.requiresArguments && !args) {
            console.log(`Warning: Query '${query.name}' requires arguments. Will not perform the query.`);
            return;
        }

        const subscription = queryInstance.subscribe(_ => {
            setResult(_ as unknown as QueryResult<TDataType>);
        }, args);

        return () => subscription.unsubscribe();
    }, []);

    return [result];
}
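
Using the hook in a component then becomes a one-liner – again assuming the hypothetical AllAccounts query from above, and that the TypeScript QueryResult mirrors the backend’s data/isSuccess shape:

// Renders a list of accounts that updates as new query results arrive.
export const AccountsList = () => {
    const [accounts] = useObservableQuery<DebitAccount[], AllAccounts>(AllAccounts);

    return (
        <ul>
            {accounts.data.map(account => <li key={account.id}>{account.name}</li>)}
        </ul>
    );
};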

The entire frontend abstraction can be found here.

Usage

To get WebSockets working, we need to add the default ASP.NET Core middleware that handles them (read more here). Basically, in your Startup.cs or on your app builder, add the following:

app.UseWebSockets();

With all of this we can now create a controller that watches a MongoDB collection:

public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(IMongoCollection<DebitAccount> collection) => _accountsCollection = collection;

    [HttpGet]
    public ClientObservable<IEnumerable<DebitAccount>> AllAccounts()
    {
        var observable = new ClientObservable<IEnumerable<DebitAccount>>();
        var accounts = _accountsCollection.Find(_ => true).ToList();
        observable.OnNext(accounts);
        var cursor = _accountsCollection.Watch();

        Task.Run(() =>
        {
            while (cursor.MoveNext())
            {
                if (!cursor.Current.Any()) continue;
                observable.OnNext(_accountsCollection.Find(_ => true).ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

Notice the usage of the ClientObservable and how it can be used with anything that produces new values.

MongoDB simplification – extension

The code in the controller above is typically the kind of thing that gets copy/pasted around, as it is a very common pattern. We figured we would be doing pretty much the same for most of our queries and added convenience methods for MongoDB. They can be found here.

We can therefore package what we had in the controller into an extension API and make it more generalized.

public static class MongoDBCollectionExtensions
{
    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        Expression<Func<TDocument, bool>>? filter,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= _ => true;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    public static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
        this IMongoCollection<TDocument> collection,
        FilterDefinition<TDocument>? filter = null,
        FindOptions<TDocument, TDocument>? options = null)
    {
        filter ??= FilterDefinition<TDocument>.Empty;
        return await collection.Observe(() => collection.FindAsync(filter, options));
    }

    static async Task<ClientObservable<IEnumerable<TDocument>>> Observe<TDocument>(
            this IMongoCollection<TDocument> collection,
            Func<Task<IAsyncCursor<TDocument>>> findCall)
    {
        var observable = new ClientObservable<IEnumerable<TDocument>>();
        var response = await findCall();
        observable.OnNext(response.ToList());
        var cursor = collection.Watch();

        _ = Task.Run(async () =>
        {
            while (await cursor.MoveNextAsync())
            {
                if (!cursor.Current.Any()) continue;
                // Named differently from the outer 'response' - reusing the name would not compile (CS0136).
                var updatedResponse = await findCall();
                observable.OnNext(updatedResponse.ToList());
            }
        });

        observable.ClientDisconnected = () => cursor.Dispose();

        return observable;
    }
}

With this glue in place, we now have something that makes it very easy to create something that observes a collection and sends any changes to the frontend:

[Route("/api/accounts/debit")]
public class Accounts : Controller
{
    readonly IMongoCollection<DebitAccount> _accountsCollection;

    public Accounts(
        IMongoCollection<DebitAccount> accountsCollection)
    {
        _accountsCollection = accountsCollection;
    }

    [HttpGet]
    public Task<ClientObservable<IEnumerable<DebitAccount>>> AllAccounts()
    {
        return _accountsCollection.Observe();
    }
}

Streaming JSON

A nice addition to ASP.NET Core 6 is the native support for IAsyncEnumerable<T> and streaming of JSON. One benefit of this is that you can quite easily support both a WebSocket scenario and regular web requests. On our ClientObservable<T> we can implement the IAsyncEnumerable<T> interface and create our own enumerator that supports this by observing the subject we already have.

public class ClientObservable<T> : IClientObservable, IAsyncEnumerable<T>
{
    readonly ReplaySubject<T> _subject = new();

    // The constructor, ClientDisconnected, OnNext and HandleConnection are unchanged from the listing above.

    public IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken = default) => new ObservableAsyncEnumerator<T>(_subject, cancellationToken);

    public object GetAsynchronousEnumerator(CancellationToken cancellationToken = default) => GetAsyncEnumerator(cancellationToken);
}

The ObservableAsyncEnumerator<T> being returned can be implemented as follows:

public class ObservableAsyncEnumerator<T> : IAsyncEnumerator<T>
{
    readonly IDisposable _subscriber;
    readonly CancellationToken _cancellationToken;
    readonly ConcurrentQueue<T> _items = new();
    TaskCompletionSource _taskCompletionSource = new();

    public ObservableAsyncEnumerator(IObservable<T> observable, CancellationToken cancellationToken)
    {
        Current = default!;
        _subscriber = observable.Subscribe(_ =>
        {
            _items.Enqueue(_);
            // TrySetResult instead of a check-then-set: the producer can race with MoveNextAsync.
            _taskCompletionSource.TrySetResult();
        });
        _cancellationToken = cancellationToken;
    }

    public T Current { get; private set; }

    public ValueTask DisposeAsync()
    {
        _subscriber.Dispose();
        return ValueTask.CompletedTask;
    }

    public async ValueTask<bool> MoveNextAsync()
    {
        if (_cancellationToken.IsCancellationRequested) return false;

        // Wait for the producer to signal that at least one item is available.
        await _taskCompletionSource.Task;
        _items.TryDequeue(out var item);
        Current = item!;

        // Reset the signal; if more items are already queued, complete it right away.
        _taskCompletionSource = new();
        if (!_items.IsEmpty)
        {
            _taskCompletionSource.TrySetResult();
        }

        return true;
    }
}

Conclusion

In this post we’ve touched on an optimization and formalization of reactive web programming, and we feel the approach covers the most common use cases. It is not a catch-all solution, but the way we’ve built it gives you some flexibility in how you use it. It is not locked down to just MongoDB: the ClientObservable is completely agnostic and you can use it for anything – all you need is something to observe, and to call the OnNext method on the observable whenever new things appear.

From a user perspective I think we should aim for solutions that do not require the user to hit a refresh button. In order to do that, it needs to be simple for developers to enable it in their solutions. The solution presented here is geared towards that.

If you have any feedback, good or bad, improvements or pitfalls; please leave a comment.


Avoid code generation if compiler is in error state

One of the things we discovered using our proxy generator was its behavior while working in the code – for instance when adding another property to a class/record. While typing, we could see the generator running and spitting out files as we typed. For instance, let’s say we have the following:

public record DebitAccount(AccountId Id, AccountName Name, PersonId Owner, double Balance);

If I were to start typing a fifth property after a build, it would start spitting out files: first a file without any name, then, as I typed the type, a file for each letter I added – depending on how fast I typed. So if this was a string type, I could be seeing s.ts, st.ts, str.ts and so on.

Turns out this is by design. One of the optimizations done for the dotnet build command is that it keeps the compiler process around. It starts a build server that handles incremental builds as things happen and is therefore prepared for when we actually do a dotnet build, making it as fast as it can be.

When doing proxy generation, this is obviously less than optimal. To avoid it, we added a check for whether there are any diagnostic errors from the compiler – if so, we don’t generate anything.

In our source generator we added a line at the top to avoid this:

public void Execute(GeneratorExecutionContext context)
{
    // Bail out when the compilation has errors - no point generating from a broken syntax tree.
    if (context.Compilation.GetDiagnostics().Any(_ => _.Severity == DiagnosticSeverity.Error)) return;

    // ... the actual generation continues here ...
}


Proxy generation of C# ASP.NET controller actions using Roslyn

TL;DR

All the things discussed can be found as code here, with basic documentation for it here. If you’re interested in the NuGet package directly, you find it here. The sample in the repo uses it – read more here on how to run the sample.

Update: have a look here as well for avoiding generation when there are errors.

Productivity

I’m a huge sucker for anything that can optimize productivity, and I absolutely love taking something that I or any of my coworkers tend to repeat and making it go away. We tend to end up with rules we apply to our codebase, making them a convention – these are great opportunities for automation. One of these areas is the glue between the backend and the frontend, when your backend is written in C#, your frontend in JS/TS, and you’re talking to the backend over APIs.

Instead of having a bunch of fetch calls in your frontend code with URLs floating around, I believe in wrapping these up nicely to be imported. This is what can be automated: generating proxy objects that can be used directly in code. In the past I’ve blogged about this with a runtime approach.

Anyone familiar with gRPC or GraphQL is probably already familiar with the concept of defining an API surface and having code generated. Also, in the Swagger space you can generate code directly from the OpenAPI JSON definition.

Meet Roslyn Source Generators

With .NET and the Roslyn compiler we can optimize this even further. With the introduction of source generators in Roslyn, we can be part of the compiler and generate what we need. Although they were originally designed to generate C# code that becomes part of the finished compiled assembly, there is nothing stopping us from outputting something else.

A generator basically has two parts to it: a syntax receiver and the actual generator. The syntax receiver visits the abstract syntax tree given by the compiler and decides what it finds interesting for the generator to generate from.

Our SyntaxReceiver is very simple: we’re just interested in ASP.NET Controllers and consider all of these as candidates.

public class SyntaxReceiver : ISyntaxReceiver
{
    readonly List<ClassDeclarationSyntax> _candidates = new();

    /// <summary>
    /// Gets the candidates for code generation.
    /// </summary>
    public IEnumerable<ClassDeclarationSyntax> Candidates => _candidates;

    /// <inheritdoc/>
    public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
    {
        if (syntaxNode is not ClassDeclarationSyntax classSyntax) return;
        // Only classes with a base type named Controller are candidates (note the parentheses - without them, a class with no base list would slip through).
        if (!(classSyntax.BaseList?.Types.Any(_ => _.Type.GetName() == "Controller") ?? false)) return;
        _candidates.Add(classSyntax);
    }
}

The SourceGenerator is handed the syntax receiver with the candidates in it.

[Generator]
public class SourceGenerator : ISourceGenerator
{
    /// <inheritdoc/>
    public void Initialize(GeneratorInitializationContext context)
    {
        context.RegisterForSyntaxNotifications(() => new SyntaxReceiver());
    }

    /// <inheritdoc/>
    public void Execute(GeneratorExecutionContext context)
    {
        if (context.SyntaxReceiver is not SyntaxReceiver receiver) return;
        // Build from what the syntax receiver deemed interesting.
    }
}

There are a few moving parts to our generator and approach, so I won’t get into details on the inner workings. You can find the full code of the generator we’ve built here.

In a nutshell

Our generator follows what we find to be a useful pattern. We’ve basically grouped our operations into Commands and Queries (I’m a firm believer in CQRS). This gives us two operation methods we’re interested in: [HttpPost] and [HttpGet]. In addition, we’re saying that a Command (HttpPost) can be formalized as a type and is the only parameter on an [HttpPost] action, using [FromBody]. Similarly with Queries: actions that return an enumerable of something and can take parameters in the form of query string parameters ([FromQuery]) or from the route ([FromRoute]).
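
By example – with AddDebitAccount and DebitAccount as hypothetical types – a controller following the convention could look like this:

[Route("/api/accounts/debit")]
public class Accounts : Controller
{
    // Command: [HttpPost] with the command type as the single [FromBody] parameter.
    [HttpPost]
    public Task AddAccount([FromBody] AddDebitAccount command) => Task.CompletedTask;

    // Query: [HttpGet] returning an enumerable, with arguments from the route or query string.
    [HttpGet("{owner}")]
    public IEnumerable<DebitAccount> AccountsForOwner([FromRoute] Guid owner) => Enumerable.Empty<DebitAccount>();
}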

From this we generate type proxies for the input and output object types and use the namespace as the basis for a folder structure: My.Application.Has.Features gets turned into a relative path My/Application/Has/Features and is added to the output path.

Our generated code relies on base types and helpers we’ve put into a frontend package. Since we’re building our frontends using React, we’ve done things specifically for that as well – for instance for queries with a useQuery hook.

The way we do generation is basically through templates for the different types leveraging Handlebars.net.
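
As a sketch of that approach – assuming the Handlebars.Net API and a made-up, overly simplified template:

using HandlebarsDotNet;

var template = Handlebars.Compile(
    "export class {{Name}} extends ObservableQueryFor<{{Type}}> {\n" +
    "    readonly route = '{{Route}}';\n" +
    "}");

// Produces the TypeScript for a single query proxy.
var typeScript = template(new { Name = "AllAccounts", Type = "DebitAccount[]", Route = "/api/accounts/debit" });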

The bonus

One of the bonuses one gets with doing this is that the new hot reload functionality of .NET 6 makes for a very tight feedback loop with these types of source generators as well. While running with dotnet watch run, it will continuously run while I’m editing the C# code that is marked as a candidate by the syntax receiver, with the TypeScript being generated as I type. Keep in mind though: if you have something that generates files with a filename based on something in the original code, you might find some interesting side effects (ask me how I know 😂).

Conclusion

Productivity is a clear benefit for us, as the time jumping from backend to frontend is cut down. The context switch is also optimized, as a developer can go directly from doing something in the backend and immediately use it in the frontend without doing anything but compiling – which you’re doing anyways.

Another benefit you get with doing something like this is that you create yourself an anti-corruption layer (ACL). ACLs are often associated with going between different bounded contexts or different microservices, but the concept is having something in between that does the translation between two sides, allowing for change without corrupting either party. The glue that the proxies represent is such an ACL – we can change the backend completely and swap out our REST APIs in the future for something else, e.g. GraphQL, gRPC or WebSockets, and all we need to change for the frontend to keep working is the glue part: our proxies and the abstraction in the frontend they leverage.


Autofac + ASP.NET Core 6 + Hot Reload/Debug = crash

One of the cool things in .NET 6 is the concept of hot reload when doing something like dotnet watch run. This extends into ASP.NET to things like Razor pages. If you’re like me and want a specific IoC container – like Autofac – you might run into problems with this, and even with running the debugger. The reason they behave the same is that the hot reload feature actually leverages edit & continue, a feature of the debugging facilities of the .NET Core infrastructure.
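
For context, wiring up Autofac as the container typically looks something like this – assuming the Autofac.Extensions.DependencyInjection package:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());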

The problem I ran into with .NET 6 preview 7 was that it didn’t know how to resolve the constructor for an internal class in one of Microsoft’s Razor assemblies when calling MapControllers() on the endpoints:

app.UseEndpoints(endpoints => endpoints.MapControllers());

It would crash with the following:

Autofac.Core.DependencyResolutionException: An exception was thrown while activating Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionEndpointDataSourceFactory -> Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider -> λ:Microsoft.AspNetCore.Mvc.Infrastructure.IActionDescriptorChangeProvider[] -> Microsoft.AspNetCore.Mvc.HotReload.HotReloadService -> Microsoft.AspNetCore.Mvc.Razor.RazorHotReload.
       ---> Autofac.Core.DependencyResolutionException: None of the constructors found with 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type 'Microsoft.AspNetCore.Mvc.Razor.RazorHotReload' can be invoked with the available services and parameters:

My workaround for this is basically to just explicitly add Razor pages, even though I’m not using them:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
}

With that in place, I was able to debug and also use hot reloading for my code.


Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate types that have meaning to your domain as well-known types. Rather than relying on technical or even primitive types, you formalize these as types you use throughout your codebase. This provides you with a few benefits, such as readability, and potentially also gives you compile-time type checking and errors. It also provides you with a way to adhere to the principle of least surprise. It’s also a great opportunity to use the encapsulation to deal with cross-cutting concerns, for instance values that have to adhere to compliance such as GDPR, or security concerns where you want to encrypt while in motion, etc.

Throughout the years, at the different places I’ve been where we’ve used these, we’ve evolved from a very simple implementation to a more evolved one. Both implementations aim at making it easy to deal with equality, and the latter also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily now, with C# 9 we got records which lets us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}

With records we don’t have to deal with equality or comparability; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);

A full implementation can be found here – an implementation using it here.

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within an implementation you could also provide the other direction, going from the underlying type to the specific one. This has some benefits, but also some downsides: with conversions in both directions, all your ConceptAs<Guid> implementations effectively become interchangeable, and you lose some of the errors you wanted the compiler to catch.
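
As a sketch, the safe direction in the base class and the convenient (but riskier) direction on a concrete concept could look like this:

public record ConceptAs<T>
{
    // ... constructor and Value as before ...

    // From the concept to its underlying type - always unambiguous.
    public static implicit operator T(ConceptAs<T> concept) => concept.Value;
}

public record SocialSecurityNumber(string value) : ConceptAs<string>(value)
{
    // From the underlying type to the concept - convenient, but makes it
    // easier to mix up concepts sharing the same underlying type.
    public static implicit operator SocialSecurityNumber(string value) => new(value);
}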

Serialization

When going across the wire with JSON, or when storing it in a database, you probably don’t want the full construct with { value: <actual value> }. In C#, most serializers support the notion of converting to and from the target type. For Newtonsoft.JSON these are called JsonConverters – an example can be found here; for MongoDB, you can find an example of a serializer here.
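
The linked examples cover Newtonsoft.JSON and MongoDB; as an illustration of the same idea, a System.Text.Json converter for the SocialSecurityNumber concept above could look like this – my sketch, not taken from the linked code:

using System.Text.Json;
using System.Text.Json.Serialization;

public class SocialSecurityNumberConverter : JsonConverter<SocialSecurityNumber>
{
    // Serialize just the underlying value, not the { value: ... } construct.
    public override SocialSecurityNumber Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
        new(reader.GetString()!);

    public override void Write(Utf8JsonWriter writer, SocialSecurityNumber value, JsonSerializerOptions options) =>
        writer.WriteStringValue(value.Value);
}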

Summary

I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you would then avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);


Legacy C# gRPC package + M1

I recently upgraded to a new MacBook with the M1 CPU in it. In one of the projects I’m working on at work, we have a third-party dependency that is still using the legacy gRPC package. Since we’ve started using .NET 6, which supports the M1 processor fully, you get a runtime error both when running M1 native and through Rosetta translation. This is because the package does not include the osx-arm64 build of the native .dylib needed for it to work. I decided to package up a NuGet package that includes this binary only, so that one can add the regular package and this new one on top and make it work on M1 CPUs. You can find the package here and the repository here.

Usage

In addition to your Grpc package reference, just add a reference to this package in your .csproj file:

<ItemGroup>
  <PackageReference Include="Grpc" Version="2.39.1" />
  <PackageReference Include="Contrib.Grpc.Core.M1" Version="2.39.1" />
</ItemGroup>

If you’re leveraging another package that implicitly pulls this in, you might need to explicitly include a package-reference to the Grpc package anyways – if your library works with the version this package is built for.

Summary

Although this package now exists, the future of gRPC and C# lies with a new implementation that does not need a native library; read more here. Anyone building anything new should go for the new package and hopefully over time all existing solutions will be migrated as well.


Specifications in xUnit

TL;DR

You can find a full implementation with sample here.

Testing

I wrote my first unit test in 1996. Back then we didn’t have much tooling and basically just had executables that ran automatic test batteries, but it wasn’t until Dan North introduced the concept of Behavior-Driven Development in 2006 that it truly clicked into place for me. Writing tests – or rather, specifications that specify the behavior of a part of the system or a unit – made much more sense to me. With Machine.Specifications (MSpec for short) it became easier and more concise to express your specifications, as you can see from this post comparing an NUnit approach with MSpec.

The biggest problem MSpec had, and still has IMO, is its lack of adoption and community. This results in a lack of contributors giving it the proper TLC it deserves, which ultimately led to the lack of a good, consistent tooling experience. The latter has been a problem ever since it was introduced; throughout the years the integrated experience in code editors and IDEs has been lacking, or buggy at best. Sure, running it from the terminal has always worked – but to me that stops me a bit in my tracks, as I’m a sucker for feedback loops and love being in the flow.

xUnit FTW

This is where xUnit comes in. With a broader adoption and community, the tooling experience across platforms, editors and IDEs is much more consistent.

I set out to get the best of breed and wanted to see if I could get close to the MSpec conciseness and get the tooling love. Before I got sucked into the not-invented-here syndrome, I researched whether there were already solutions out there. I found a few posts on it and found the Observation sample in the xUnit samples repo to be the most interesting one. But I couldn’t get it to work with the tooling experience in my current setup (.NET 6 preview + VSCode on my Mac).

From this I set out to create something of a thin wrapper that you can find as a Gist here. The Gist contains a base class that enables the expressive features of MSpec, similar wrapper for testing exceptions and also extension methods mimicking Should*() extension methods that MSpec provides.
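
For reference, the heart of such a base class can be sketched as below – a minimal version of the idea; the actual Gist handles more cases:

using System.Reflection;

public abstract class Specification : IDisposable
{
    protected Specification()
    {
        // xUnit creates an instance per test, so the lifecycle runs once per specification method.
        InvokeAll("Establish");
        InvokeAll("Because");
    }

    public void Dispose() => InvokeAll("Destroy");

    void InvokeAll(string methodName)
    {
        // Walk the inheritance chain base-first, so shared contexts run before the specification itself.
        var chain = new List<Type>();
        for (var type = GetType(); type is not null && type != typeof(Specification); type = type.BaseType)
        {
            chain.Insert(0, type);
        }

        foreach (var type in chain)
        {
            type.GetMethod(methodName, BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.DeclaredOnly)
                ?.Invoke(this, null);
        }
    }
}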

By example

Lets take the example from the MSpec readme:

class When_authenticating_an_admin_user
{
    static SecurityService subject;
    static UserToken user_token;

    Establish context = () => 
        subject = new SecurityService();

    Because of = () =>
        user_token = subject.Authenticate("username", "password");

    It should_indicate_the_users_role = () =>
        user_token.Role.ShouldEqual(Roles.Admin);

    It should_have_a_unique_session_id = () =>
        user_token.SessionId.ShouldNotBeNull();
}

With my solution we can transform this quite easily, maintaining structure, flow and conciseness, taking full advantage of C# expression-bodied members:

class When_authenticating_an_admin_user : Specification
{
    SecurityService subject;
    UserToken user_token;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             user_token = subject.Authenticate("username", "password");

    [Fact] void should_indicate_the_users_role() =>
        user_token.Role.ShouldEqual(Roles.Admin);

    [Fact] void should_have_a_unique_session_id() =>
        user_token.SessionId.ShouldNotBeNull();
}

Since this is pretty much just standard xUnit, you can leverage all the features and attributes.

Catching exceptions

With the Gist, you’ll find a type called Catch. Its purpose is to provide a way to capture exceptions from method calls to be able to assert that the exception occurred or not. Below is an example of its usage, and also one of the extension methods provided in the Gist – ShouldBeOfExactType<>().

class When_authenticating_a_null_user : Specification
{
    SecurityService subject;
    Exception result;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}
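
For reference, the core of Catch can be as simple as this – a minimal sketch of what’s in the Gist:

public static class Catch
{
    // Runs the action and returns the thrown exception, if any - null when it completes cleanly.
    public static Exception? Exception(Action action)
    {
        try
        {
            action();
        }
        catch (Exception ex)
        {
            return ex;
        }

        return null;
    }
}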

Contexts

With this approach one ends up being very specific about behaviors of a system/unit; this leads to multiple classes specifying different aspects of the same behavior in different contexts, or different behaviors of the system/unit. To avoid repeating the setup and teardown within each of these classes, I like to reuse contexts by leveraging inheritance. In addition, I tend to put the reused contexts in a folder/namespace called given, yielding a more readable result.

Following the previous examples, we now have two specifications, both requiring a context where the system has no user authenticated. By adding a file in the given folder of this unit, and adding a namespace segment of given as well, we can encapsulate the context as follows:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();
}

From this we can simplify our specifications by removing the establish part:

class When_authenticating_a_null_user : given.no_user_authenticated
{
    Exception result;

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}

The Gist supports multiple levels of inheritance recursively and will run all the lifecycle methods such as Establish from the lowest level in the hierarchy chain and up the hierarchy (e.g. no_user_authenticated -> when_authenticating_a_null_user).

Teardown

In addition to Establish, there is its counterpart: Destroy. This is where you’d typically clean up anything needing to be cleaned up – typically global state that was mutated. Take our context for instance, and assume the SecurityService implements IDisposable:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();

    void Destroy() => subject.Dispose();
}

Added benefit

One of the problems with the MSpec approach is that it’s all based on statics, since it is using delegates as “keywords”. Some of the runners have problems with this and async models, causing havoc and non-deterministic test results. Since xUnit is instance based, this problem goes away and every instance of the specification is in isolation.

Summary

This is probably just yet another solution to this, and I’ve probably overlooked implementations out there – if that’s the case, please leave me a comment; I would love to not have to maintain this myself 🙂. It has helped me get to a tighter feedback loop, as I can now run or debug tests in the context of where my cursor is in VSCode with a keyboard shortcut and see the result for that specification only. My biggest hope for the future is that we get a tooling experience in VSCode similar to what Wallaby does for JS/TS testing. Windows devs using full Visual Studio also have the live unit testing feature. With .NET 6 and the hot reload feature, I’m very optimistic about tooling going in this direction so we can shave the feedback loop down even more.


Orleans and C# 10 global usings

If you’re using Microsoft Orleans and have started using .NET 6 and specifically C# 10, you might have come across an error message similar to this from the code generator:

  fail: Orleans.CodeGenerator[0]
        Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
  -- Code Generation FAILED -- 
  
  Exc level 0: System.InvalidOperationException: Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectGrainInterface(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000136+0x86
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectType(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000138+0x23
     at Orleans.CodeGenerator.CodeGenerator.AnalyzeCompilation() in Orleans.CodeGenerator.dll:token 0x6000009+0x9f
     at Orleans.CodeGenerator.MSBuild.CodeGeneratorCommand.Execute(CancellationToken cancellationToken) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000014+0x44f
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.SourceToSource(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000025+0x45b
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.Main(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000023+0x3d

The reason I got this was that I removed an explicit using statement, since I’m now “all in” on the global usings feature. By removing:

using System.Threading.Tasks;

… the code generator doesn’t understand the return type properly and resolves it as an unknown Task type.
Putting the using statement back in explicitly resolves the issue, and the code generator goes on and does its thing.


C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like _ViewImports.cshtml in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should apply to all of them, you’d by default need to copy these around.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let’s say you have a file called GlobalUsings.cs at the root of your solution, looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you’d simply open the .csproj file of the project and add the following:

<ItemGroup>
   <Compile Include="../GlobalUsings.cs"/> <!-- Assuming your file sits one level up -->
</ItemGroup>

This will then include the reusable file in the compilation of every project that references it.
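
As an alternative with the .NET 6 SDK, the same can be expressed as Using items, which generate global using directives at build time – these fit nicely in a shared Directory.Build.props:

<ItemGroup>
  <Using Include="System.Collections" />
  <Using Include="System.Reflection" />
</ItemGroup>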
