
Proxy generation of C# ASP.NET controller actions using Roslyn

TL;DR

All the things discussed can be found as code here, with basic documentation for it here. If you're interested in the NuGet package directly, you can find it here. The sample in the repo uses it – read more here on how to run the sample.

Productivity

I'm a huge sucker for anything that can optimize productivity, and I absolutely love taking something that I or any of my coworkers tend to repeat and making it go away. We tend to end up having rules we apply to our codebase, making them a convention – these are great opportunities for automation. One of these areas is the glue between the backend and the frontend, for instance when your backend is written in C#, your frontend in JS/TS, and you're talking to the backend over APIs.

Instead of having a bunch of fetch calls in your frontend code with URLs floating around, I believe in wrapping these up nicely so they can be imported. This is what can be automated: generating proxy objects that can be used directly in code. In the past I've blogged about this with a runtime approach.

Anyone familiar with gRPC or GraphQL is probably already familiar with the concept of defining an API surface and having code generated. In the Swagger space you can likewise generate code directly from the OpenAPI JSON definition.

Meet Roslyn Source Generators

With .NET and the Roslyn compiler we can optimize this even further. With the introduction of source generators in Roslyn, we can be part of the compilation and generate what we need. Although they were originally designed to generate C# code that becomes part of the finished compiled assembly, there is nothing stopping us from outputting something else.

A generator basically has two parts to it: a syntax receiver and the actual generator. The syntax receiver visits the abstract syntax tree given by the compiler and decides what it finds interesting for the generator to generate from.

Our SyntaxReceiver is very simple; we're just interested in ASP.NET controllers and consider all of these as candidates.

public class SyntaxReceiver : ISyntaxReceiver
{
    readonly List<ClassDeclarationSyntax> _candidates = new();

    /// <summary>
    /// Gets the candidates for code generation.
    /// </summary>
    public IEnumerable<ClassDeclarationSyntax> Candidates => _candidates;

    /// <inheritdoc/>
    public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
    {
        if (syntaxNode is not ClassDeclarationSyntax classSyntax) return;
        // Only consider classes that list 'Controller' among their base types (GetName() is assumed to be a helper extension from the repo).
        if (!(classSyntax.BaseList?.Types.Any(_ => _.Type.GetName() == "Controller") ?? false)) return;
        _candidates.Add(classSyntax);
    }
}

The SourceGenerator is handed the syntax receiver with the candidates in it.

[Generator]
public class SourceGenerator : ISourceGenerator
{
    /// <inheritdoc/>
    public void Initialize(GeneratorInitializationContext context)
    {
        context.RegisterForSyntaxNotifications(() => new SyntaxReceiver());
    }

    /// <inheritdoc/>
    public void Execute(GeneratorExecutionContext context)
    {
        // The receiver holds the candidate controllers collected during the syntax pass.
        if (context.SyntaxReceiver is not SyntaxReceiver receiver) return;

        // Build from what the syntax receiver deemed interesting.
    }
}

There are a few moving parts to our generator and approach, so I won’t get into details on the inner workings. You can find the full code of the generator we’ve built here.
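To give a rough feel for the shape of it anyway, here is a sketch (assumed shape, not the repo's actual code) of what Execute typically ends up doing with the candidates:

foreach (var candidate in receiver.Candidates)
{
    // Use the semantic model to get the full symbol information for the candidate controller.
    var semanticModel = context.Compilation.GetSemanticModel(candidate.SyntaxTree);
    if (semanticModel.GetDeclaredSymbol(candidate) is not INamedTypeSymbol controller) continue;

    // From here, inspect the [HttpGet]/[HttpPost] actions on the controller symbol
    // and hand the information to the templates that output the TypeScript proxies.
}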

In a nutshell

Our generator follows what we find to be a useful pattern. We've basically grouped our operations into Commands and Queries (I'm a firm believer in CQRS). This gives us two operation methods we're interested in: [HttpPost] and [HttpGet]. In addition, we're saying that a Command ([HttpPost]) can be formalized as a type and is the only parameter on an [HttpPost] action, bound using [FromBody]. Similarly for Queries: these are actions that return an enumerable of something and can take parameters in the form of query string parameters ([FromQuery]) or from the route ([FromRoute]).
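To make the convention concrete, here is a hypothetical controller shaped the way the generator expects – the types and routes are made up for illustration and not taken from the repo:

using Microsoft.AspNetCore.Mvc;

public record AddItemToCart(Guid CartId, Guid Sku, int Quantity);
public record CartItem(Guid Sku, int Quantity);

[Route("/api/carts")]
public class Carts : Controller
{
    // Command: an [HttpPost] action with the command type as its single [FromBody] parameter.
    [HttpPost]
    public Task AddItemToCart([FromBody] AddItemToCart command) => Task.CompletedTask;

    // Query: an [HttpGet] action returning an enumerable, with parameters from the route or query string.
    [HttpGet("{cartId}/items")]
    public Task<IEnumerable<CartItem>> ItemsInCart([FromRoute] Guid cartId) =>
        Task.FromResult(Enumerable.Empty<CartItem>());
}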

From this we generate type proxies for the input and output object types and use the namespace as the basis for a folder structure: My.Application.Has.Features gets turned into a relative path My/Application/Has/Features, which is added to the output path.
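The mapping itself is straightforward; a minimal sketch with assumed names:

using System.IO;

var ns = "My.Application.Has.Features";            // namespace of the candidate controller
var outputPath = "./Web/Api";                      // assumed root folder for the generated proxies

var relativePath = ns.Replace('.', Path.DirectorySeparatorChar);
var targetFolder = Path.Combine(outputPath, relativePath); // ./Web/Api/My/Application/Has/Features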

Our generated code relies on base types and helpers we've put into a frontend package. Since we're building our frontends using React, we've done things specifically for that as well – for instance a useQuery hook for queries.

The way we do the generation is basically through templates for the different types, leveraging Handlebars.Net.
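For those unfamiliar with Handlebars.Net: you compile a text template once and then feed it a model per thing to generate – roughly like this, with a made-up template and model:

using HandlebarsDotNet;

var template = Handlebars.Compile("export class {{Name}} { readonly route = '{{Route}}'; }");
var output = template(new { Name = "AddItemToCart", Route = "/api/carts" });
// output: export class AddItemToCart { readonly route = '/api/carts'; }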

The bonus

One of the bonuses of doing this is that the new hot reload functionality of .NET 6 makes for a very tight feedback loop with these types of source generators as well. While running with dotnet watch run, the generator runs continuously as I'm editing the C# code that the syntax receiver marks as candidates. Below you'll see C# on the right hand side while TypeScript is being generated on the left hand side as I type. Keep in mind though, if you generate files with a filename based on something in the original code, you might find some interesting side effects (ask me how I know 😂).

Conclusion

Productivity is a clear benefit for us, as the time spent jumping from backend to frontend is cut down. The context switch is also optimized, as a developer can go directly from doing something in the backend to immediately using it in the frontend without doing anything but compiling – which you're doing anyway.

Another benefit you get with doing something like this is that you create an anti-corruption layer (ACL) for yourself. ACLs are often associated with going between different bounded contexts or different microservices, but the concept is to have something in between that does the translation between two sides, allowing change without corrupting either party. The glue that the proxies represent is such an ACL – we can change the backend completely and swap out our REST APIs in the future for something else, e.g. GraphQL, gRPC or WebSockets, and all we need to change for the frontend to keep working is the glue part: our proxies and the abstraction in the frontend they leverage.


Autofac + ASP.NET Core 6 + Hot Reload/Debug = crash

One of the cool things in .NET 6 is the concept of hot reload when doing something like dotnet watch run. This extends into ASP.NET, to things like Razor pages. If you, like me, want a specific IoC container – like Autofac – you might run into problems with this, and even with just running the debugger. The reason they behave the same is that the hot reload feature actually leverages edit & continue, a feature of the debugging facilities of the .NET Core infrastructure.
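For context, wiring up Autofac in this kind of setup typically looks something like the following – a minimal sketch assuming a Startup class like the one shown further down:

using Autofac.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseServiceProviderFactory(new AutofacServiceProviderFactory()) // Autofac as the container
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            .Build()
            .Run();
}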

The problem I ran into with .NET 6 preview 7 was that it didn't know how to resolve the constructor for an internal class in one of Microsoft's Razor assemblies. When calling MapControllers() on the endpoints:

app.UseEndpoints(endpoints => endpoints.MapControllers());

It would crash with the following:

Autofac.Core.DependencyResolutionException: An exception was thrown while activating Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionEndpointDataSourceFactory -> Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider -> λ:Microsoft.AspNetCore.Mvc.Infrastructure.IActionDescriptorChangeProvider[] -> Microsoft.AspNetCore.Mvc.HotReload.HotReloadService -> Microsoft.AspNetCore.Mvc.Razor.RazorHotReload.
       ---> Autofac.Core.DependencyResolutionException: None of the constructors found with 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type 'Microsoft.AspNetCore.Mvc.Razor.RazorHotReload' can be invoked with the available services and parameters:

My workaround for this is basically to just explicitly add Razor pages, even though I'm not using them:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
}

With that in place, I was able to debug and also use hot reloading for my code.


Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate values that have meaning in your domain as well-known types. Rather than relying on technical types or even primitive types, you formalize these as types you use throughout your codebase. This provides you with a few benefits, such as readability, and can also give you compile-time type checking and errors. It also provides you with a way to adhere to the principle of least surprise. It's also a great opportunity to use the encapsulation to deal with cross-cutting concerns, for instance values that are subject to compliance requirements such as GDPR, or security concerns where you want to encrypt while in motion, etc.

Throughout the years at the different places I've been where we've used these, we've evolved this from a very simple implementation to a more evolved one. Both of these implementations aim at making it easy to deal with equality, and the latter one also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily, with C# 9 we got records, which lets us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}

With records we don't have to deal with equality or comparison ourselves; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);
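As a quick illustration of the value equality we get for free (hypothetical values):

var left = new SocialSecurityNumber("01019912345");
var right = new SocialSecurityNumber("01019912345");
Console.WriteLine(left == right); // True – compared by value, not by reference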

A full implementation can be found here – an implementation using it here.

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within a concrete implementation you could also provide the other way, going from the underlying type to the specific concept. This has some benefits, but also some downsides: if all your ConceptAs<Guid> implementations become implicitly interchangeable with Guid, you lose some of the errors you wanted the compiler to catch.
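A minimal sketch of what this could look like – repeating the record from above with the unwrap operator added, and using a hypothetical EventSourceId concept to show wrapping the other way:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }

    // Unwrap: lets a ConceptAs<T> be passed wherever the underlying type is expected.
    public static implicit operator T(ConceptAs<T> concept) => concept.Value;
}

public record EventSourceId : ConceptAs<Guid>
{
    public EventSourceId(Guid value) : base(value) { }

    // Wrap: going from the underlying type to the specific concept.
    public static implicit operator EventSourceId(Guid value) => new(value);
}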

Serialization

When going across the wire with JSON, for instance, you probably don't want the full construct with a { value: <actual value> } – the same goes for storing it in a database. In C#, most serializers support the notion of converters to and from the target type. For Newtonsoft.Json these are called JsonConverter – an example can be found here. For MongoDB, you can find an example of a serializer here.
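As an example of the Newtonsoft.Json flavor, here is a minimal converter for the hypothetical EventSourceId concept from above – a sketch of the idea, not the converter from the linked example:

using Newtonsoft.Json;

public class EventSourceIdJsonConverter : JsonConverter<EventSourceId>
{
    // Write only the underlying Guid, not the { value: ... } construct.
    public override void WriteJson(JsonWriter writer, EventSourceId value, JsonSerializer serializer) =>
        writer.WriteValue(value.Value);

    // Read the Guid back and wrap it in the concept again.
    public override EventSourceId ReadJson(JsonReader reader, Type objectType, EventSourceId existingValue, bool hasExistingValue, JsonSerializer serializer) =>
        new(Guid.Parse(reader.Value!.ToString()!));
}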

Summary

I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you would then avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);


Legacy C# gRPC package + M1

I recently upgraded to a new MacBook with the M1 CPU in it. In one of the projects I'm working on @ work, we have a third party dependency that still uses the legacy gRPC package. Since we've started using .NET 6, which fully supports the M1 processor, you get a runtime error both when running M1 native and through the Rosetta translation. This is because the package does not include the osx-arm64 version of the native .dylib it needs to work. I decided to package up a NuGet package that includes only this binary, so that you can add the regular package and this new one on top and make it work on M1 CPUs. You can find the package here and the repository here.

Usage

In addition to your Grpc package reference, just add a reference to this package in your .csproj file:

<ItemGroup>
  <PackageReference Include="Grpc" Version="2.39.1" />
  <PackageReference Include="Contrib.Grpc.Core.M1" Version="2.39.1" />
</ItemGroup>

If you're leveraging another package that implicitly pulls this in, you might need to explicitly include a package reference to the Grpc package anyway – provided your library works with the version this package is built for.

Summary

Although this package now exists, the future of gRPC and C# lies with a new implementation that does not need a native library; read more here. Anyone building anything new should go for the new package and hopefully over time all existing solutions will be migrated as well.


Specifications in xUnit

TL;DR

You can find a full implementation with sample here.

Testing

I wrote my first unit test in 1996. Back then we didn't have much tooling and basically just had executables that ran automatic test batteries, but it wasn't until Dan North introduced the concept of Behavior-Driven Development in 2006 that it truly clicked into place for me. Writing tests – or rather specifications that specify the behavior of a part of the system or a unit – made much more sense to me. With Machine.Specifications (MSpec for short) it became easier and more concise to express your specifications, as you can see from this post comparing an NUnit approach with MSpec.

The biggest problem MSpec had, and still has IMO, is its lack of adoption and community. This results in a lack of contributors giving it the proper TLC it deserves, which ultimately leads to the lack of a good, consistent tooling experience. The latter has been a problem ever since it was introduced, and throughout the years the integrated experience in code editors or IDEs has been lacking or buggy at best. Sure, running it from the terminal has always worked – but to me it stops me a bit in my tracks, as I'm a sucker for feedback loops and love being in the flow.

xUnit FTW

This is where xUnit comes in. With a broader adoption and community, the tooling experience across platforms, editors and IDEs is much more consistent.

I set out to get the best of breed and wanted to see if I could get close to the MSpec conciseness while getting the tooling love. Before I got sucked into the not-invented-here syndrome, I researched whether there were already solutions out there. I found a few posts on it, and found the Observation sample in the xUnit samples repo to be the most interesting one. But I couldn't get it to work with the tooling experience in my current setup (.NET 6 preview + VSCode on my Mac).

From this I set out to create something of a thin wrapper, which you can find as a Gist here. The Gist contains a base class that enables the expressive features of MSpec, a similar wrapper for testing exceptions, and extension methods mimicking the Should*() extension methods that MSpec provides.
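To give an idea of how little is needed, here is a minimal sketch of what such a base class could look like – an assumed implementation for illustration, not necessarily identical to the Gist. Since xUnit creates a new instance per [Fact], the constructor can drive Establish/Because and IDisposable can drive Destroy:

using System.Reflection;

public abstract class Specification : IDisposable
{
    const BindingFlags Flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.DeclaredOnly;

    protected Specification()
    {
        InvokeUpHierarchy("Establish");
        InvokeUpHierarchy("Because");
    }

    public void Dispose() => InvokeUpHierarchy("Destroy");

    void InvokeUpHierarchy(string methodName)
    {
        // Collect the inheritance chain so lifecycle methods run from the base-most
        // context first (e.g. a shared "given"), then the concrete specification.
        var chain = new Stack<Type>();
        for (var type = GetType(); type is not null && type != typeof(Specification); type = type.BaseType)
            chain.Push(type);

        foreach (var type in chain)
            type.GetMethod(methodName, Flags)?.Invoke(this, null);
    }
}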

By example

Let's take the example from the MSpec readme:

class When_authenticating_an_admin_user
{
    static SecurityService subject;
    static UserToken user_token;

    Establish context = () => 
        subject = new SecurityService();

    Because of = () =>
        user_token = subject.Authenticate("username", "password");

    It should_indicate_the_users_role = () =>
        user_token.Role.ShouldEqual(Roles.Admin);

    It should_have_a_unique_session_id = () =>
        user_token.SessionId.ShouldNotBeNull();
}

With my solution we can transform this quite easily, maintaining the structure, flow and conciseness, while taking full advantage of C# expression-bodied members:

class When_authenticating_an_admin_user : Specification
{
    SecurityService subject;
    UserToken user_token;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             user_token = subject.Authenticate("username", "password");

    [Fact] void should_indicate_the_users_role() =>
        user_token.Role.ShouldEqual(Roles.Admin);

    [Fact] void should_have_a_unique_session_id() =>
        user_token.SessionId.ShouldNotBeNull();
}

Since this is pretty much just standard xUnit, you can leverage all the features and attributes.

Catching exceptions

With the Gist you'll find a type called Catch. Its purpose is to provide a way to capture exceptions from method calls, so you can assert whether the exception occurred or not. Below is an example of its usage, together with one of the extension methods provided in the Gist – ShouldBeOfExactType<>().

class When_authenticating_a_null_user : Specification
{
    SecurityService subject;
    Exception result;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}
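For reference, a Catch helper like this can be as small as the following – again a sketch of an assumed implementation rather than the exact code in the Gist:

public static class Catch
{
    // Runs the action and returns the exception it threw, or null if it completed without throwing.
    public static Exception? Exception(Action action)
    {
        try
        {
            action();
            return null;
        }
        catch (Exception ex)
        {
            return ex;
        }
    }
}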

Contexts

With this approach one ends up being very specific about the behaviors of a system/unit. This leads to multiple classes specifying different aspects of the same behavior in different contexts, or different behaviors of the same system/unit. To avoid having to do the setup and teardown within each of these classes, I like to reuse contexts by leveraging inheritance. In addition, I tend to put the reused contexts in a folder/namespace called given, yielding a more readable result.

Following the previous examples we now have two specifications, both requiring a context where the system has no user authenticated. By adding a file in the given folder of this unit, and adding a namespace segment of given as well, we can encapsulate the context as follows:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();
}

From this we can simplify our specifications by removing the establish part:

class When_authenticating_a_null_user : given.no_user_authenticated
{
    Exception result;

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}

The Gist supports multiple levels of inheritance recursively and will run all the lifecycle methods, such as Establish, from the lowest level in the hierarchy chain and up (e.g. no_user_authenticated -> When_authenticating_a_null_user).

Teardown

In addition to Establish, there is its counterpart: Destroy. This is where you'd typically clean up anything that needs cleaning up – typically global state that was mutated. Take our context for instance, and assume the SecurityService implements IDisposable:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();

    void Destroy() => subject.Dispose();
}

Added benefit

One of the problems with the MSpec approach is that it's all based on statics, since it uses delegates as "keywords". Some of the runners have problems with this and async models, causing havoc and non-deterministic test results. Since xUnit is instance based, this problem goes away and every instance of the specification runs in isolation.

Summary

This is probably just yet another solution to this, and I've probably overlooked implementations out there – if that's the case, please leave me a comment; I would love to not have to maintain this myself 🙂. It has helped me get to a tighter feedback loop, as I can now run or debug tests in the context of where my cursor is in VSCode with a keyboard shortcut and see the result for that specification only. My biggest hope for the future is that we get a tooling experience in VSCode similar to what Wallaby is doing for JS/TS testing. Windows devs using full Visual Studio also have the live unit testing feature. With .NET 6 and the hot reload feature, I'm very optimistic about tooling going in this direction, so we can shave the feedback loop down even more.


Orleans and C# 10 global usings

If you're using Microsoft Orleans and have started using .NET 6, and specifically C# 10, you might have come across an error message similar to this from the code generator:

  fail: Orleans.CodeGenerator[0]
        Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
  -- Code Generation FAILED -- 
  
  Exc level 0: System.InvalidOperationException: Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectGrainInterface(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000136+0x86
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectType(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000138+0x23
     at Orleans.CodeGenerator.CodeGenerator.AnalyzeCompilation() in Orleans.CodeGenerator.dll:token 0x6000009+0x9f
     at Orleans.CodeGenerator.MSBuild.CodeGeneratorCommand.Execute(CancellationToken cancellationToken) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000014+0x44f
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.SourceToSource(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000025+0x45b
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.Main(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000023+0x3d

The reason I got this was that I removed an explicit using statement, since I’m now “all in” on the global usings feature. By removing:

using System.Threading.Tasks;

… the code generator doesn't understand the return type properly and resolves it as an unknown Task type. Putting the using back in explicitly resolves the issue, and the code generator goes on and does its thing.
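For illustration, a grain interface along the lines of the one in the error output – with the explicit using kept in the file, the generator resolves Task correctly (the grain key type here is an assumption):

using System.Threading.Tasks;

namespace Cratis.Events.Store
{
    public interface IEventLog : Orleans.IGrainWithGuidKey
    {
        Task Commit(EventSourceId eventSourceId, EventType eventType, string content);
    }
}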


C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like the _ViewImports.cshtml file in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should be in all of them, you'd by default need to copy these around.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let's say you have a file called GlobalUsings.cs at the root of your solution, looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you simply open the .csproj file of each project and add the following:

<ItemGroup>
   <Compile Include="../GlobalUsings.cs"/> <!-- Assuming your file sits one level up -->
</ItemGroup>

This will then include this reusable file for the compiler.


Red Cross Codeathon 2017

Some 6 months ago I found myself in a meeting where I had no clue what the topic was going to be, or any prior knowledge of why I was there. Halfway through the meeting I found myself in complete awe of what was presented. The meeting was with the Norwegian Red Cross, and the topic was how they wanted to take advantage of technology to gain insight into potential epidemics. The Norwegian Red Cross team had already done a couple of iterations on software for dealing with this, trying out different technologies. They now had the real-world experience from the versions they'd been running and wanted to take it to the next level; professionalize the software – making something maintainable and sustainable. Red Cross does not have a software development branch within their organization and reached out to Microsoft for assistance to see how we could help. With Norwegian Red Cross being a nonprofit organization, they already get assistance from Microsoft through Microsoft Philanthropies. I was on Channel9 talking about the project – you can watch it here.

Taking the lead

With software there aren't that many opportunities to really do good for mankind by applying the skillsets we already have. With what was presented and my position @ Microsoft, I started thinking about how there could be synergies and how it could all be brought together. My day-to-day work is an advisory type of role where I engage with ISVs around Norway, helping them move to the cloud or get the most out of Azure in general. With this work I get to meet a lot of people, and I started thinking about the opportunities of combining it all. In addition, I also felt that the most natural way for this software to be built would be to do it in the open with volunteers, since Red Cross does not have any in-house developers and the cost of hiring consultants is very high. Besides, having external resources do the work is not the most sustainable model for living software – ideally you'd want to do it in-house. With volunteers, however, one would apply one of the core principles of Red Cross itself, that of volunteerism, as the Red Cross bases its work on more than 17 million volunteers worldwide.

Understanding and getting the word out

We've had the dialog going for the last 6 months on how it could be done – both from a process perspective, but also whether or not to base it on volunteer work and even go open source or not. I reached out to Richard Campbell to hear how they've been running the Humanitarian Toolbox for the American Red Cross in the U.S. He put me in contact with the product owner they've had for the allReady project. From this, I gained even more confidence that the choice was right to do this on a volunteer basis, as they've had 132 contributors pitch in on that particular project (at the time of writing this post). As large organizations go, Red Cross also relies on a certain amount of red tape and internal processes to make decisions.

In the middle of June 2017 the NDC developer conference was held in Oslo. We were lucky to get a slot to talk about the project, what Norwegian Red Cross had done, and our plans for the architecture of the new implementation. Richard Campbell joined Tonje Tingberg from Norwegian Red Cross and myself on stage (you can watch it on Vimeo here). I was really nervous whether anyone would come to the talk, as we didn't pitch it from a technology perspective – but I was super glad and proud of my colleagues in the development community who wanted to learn more. We came close to filling the room, and the response from people was enormous; we got into good conversations right after, and good email dialogs after NDC as well. This reinforced the belief that this could be done with volunteers. A couple of weeks ago I got a call from Tonje Tingberg bringing the happy news – Red Cross wants to move forward with the proposed model.

Codeathon – first call to action

The downside of basing everything on volunteer work is of course that you have less control over when things get done. In order to get condensed and focused work done, one needs a mechanism where you gather people in the same room for a couple of days. Building on the experience from Humanitarian Toolbox, Red Cross will be hosting a codeathon – not a hackathon or a hackfest – but more like a marathon for coding. The first of these will be held the weekend of 29 September – 1 October at the Norwegian Red Cross in Oslo. If you're interested in joining and helping out, please sign up here. We will establish a core team that will put in place the framework for how we're going to build it and make sure we get as much work done as possible during the codeathon. Once you're signed up, we will follow up with you to make sure you get to do what you want to do.

Learning Experience

One of the opportunities with this is to learn. Not only from the project itself, but from working together with others. In a room filled with developers and architects you're bound to pick up a thing or two that can be brought back to your daily work. The solution will be built using modern techniques, state-of-the-art architecture, and utilizing the cloud as much as we can. It is also a great way to learn more about how to work in the open source community if you don't already have experience with doing open source.

Wrapping up

Seeing the impact of the work that Red Cross is doing really puts things in perspective. Bringing knowledge to the table is vital in helping others that don’t have the resources we are accustomed to. With the type of technical know-how we have as software developers, we can really make a difference. In our line of work, we focus on being problem solvers – trying to make smarter, more efficient systems. Imagine transferring this and saving lives by just using the power of our brains; this is what we as a community can bring to the table. I’m so glad I was asked to join that meeting months ago – finally I can help in a way I know how to.

Red Cross Norway has also put out a post on this with all the details. You can find it here.


Bifrost roadmap first half 2017 (new name)

This type of post is the first of its kind, which is funny enough seeing that Bifrost has been in development since late 2008. Recently development has moved forward quite a bit, and I figured it was time to jot down what's cooking and what the plan is for the next six months – or perhaps longer.

First of all, it's hard to commit to any real dates – so the roadmap is more a "this is the order in which we're going to develop", rather than a budget of time.

We’ve also set up a community standup that we do every so often – not on a fixed schedule, but rather when we feel we have something new to talk about. You can find it here.

1.1.3

One of the things we never really had the need for was to scale Bifrost out. This release focuses on bringing that back. At the beginning of the project we had a naïve way of scaling out – basically supporting a 2-node scale-out, with no consideration for partitioning or actually checking whether events had been processed or not. With this release we're revisiting this whole thing and at the same time setting up for success moving forward. One of the legacies we've been dragging behind us is that all events were identified by their CLR types, and maintaining the different event processors was linked to this – making it fragile if one were to move things around. This is being fixed by identifying application structure rather than the CLR structure in which the events exist. This will become convention based and configurable. With this we will enable RabbitMQ as the first supported scale-out mechanism. The first implementation will not include all the partitioning, but it enables us to move forward and get that in place quite easily. It will also set up for a more successful way of storing events in an event store. All of this is in the middle of development right now. In addition there are minor details related to the build pipeline and automating everything. It's a sound investment getting all versioning and build details automated. This is also related to the automatic building and deployment of documentation, which is crucial for the future of the project. We'll also get an Azure Table Storage event store in place for this release, which should be fairly straightforward.

1.1.4

Code quality has been set as the focus for this release – re-enabling things like NDepend and static code analysis.

1.1.5

The theme of this version is to get the web frontend sorted. Bifrost has a "legacy" ES5 implementation of all its JavaScript. In addition it is very coupled to Knockout, making it hard to use things like Angular, Aurelia or React. The purpose of this release is to decouple the things that Bifrost brings to the table: proxy generation and frontend helpers such as regions, operations and more. We'll also start the work of modernizing the code to ES2015 and newer by using BabelJS, and move away from Forseti, our proprietary JavaScript test runner, to more commonly used runners.

Inbetween minor releases

From this point to the next major version, it is a bit fuzzy. In fact, we might prioritize pushing the 2.0.0 version rather than doing anything in between. We've defined versions 1.2.0 and 1.3.0 with issues we want to deal with, but we might decide to move these to 2.0.0 instead. The sooner we get to 2.0.0, the better in many ways.

2.0.0

Version 2.0.0 brings, as indicated, breaking changes. The first major breaking change: a new name. The project will transition over to being called Dolittle, matching the GitHub organization we already have for it. Besides this, the biggest breaking change is that it will be broken up into a bunch of smaller projects – all separated and decoupled. We will try to version them independently, meaning they will take on a life of their own. Of course, this is a very different strategy than before – so it might not be a good idea and we might need to change it. But for now, that's the idea, and we might keep major releases in sync.

The brand Dolittle is something I've had since 1997, and I own domains such as dolittle.com, dolittle.io and more related to it. These will be activated and become the landing page for the project.


Bifrost; Getting back to it…

It's been a while since I wrote anything about Bifrost. In fact, the last post I did was about me not maintaining it anymore. The thing is, it's been an empty year for me personally since February when I announced it. I didn't realize it until I was at a partner who wanted to dive deep on SOLID, DDD, CQRS, EventSourcing and more, and we only had a couple of days to prototype something. We talked it over and decided that using Bifrost would get us there quicker… what a relief… I'm so glad we did that. All of a sudden it all became very clear to me: I need to continue the work – it's just too much fun. I had a hunch, but didn't see it all that clearly. A few months back I started pulling things from Bifrost into a new project called Cratis, making it more focused – never really thinking that it would go back into Bifrost.

So, what am I doing about it? Well, first of all, I took down the post announcing the stop in maintenance. It didn't make sense to have it there when coming to the realization that I need to push on. The second thing I did – in order to get back into the mood and understanding Bifrost (even though I wrote most of it) again – was to start writing the proper documentation that it deserves. This now sits here. The next thing that will happen is that development will be picked up again.

From the top of my head, this is what needs to be done:

  1. Add support for running on Azure in a distributed manner – with a working sample
  2. Clean up. Remove platforms not being used.
  3. Simplify code. Make it more focused.
  4. Modernise it. Make it run on .NET Core
  5. Rewrite JavaScript to be ES2015+
  6. Break it apart into many small GitHub projects that can be maintained individually

In between there might be some features that sneak in. But the majority of new development will have to happen after these things have happened.

Alongside it all: more documentation, more samples, more videos – just simply more. 🙂

Really looking forward to getting back into this and seeing what 2017 has in store for Bifrost work.
