Uncategorized

Autofac + ASP.NET Core 6 + Hot Reload/Debug = crash

One of the cool things in .NET 6 is the concept of hot reload when doing something like dotnet watch run. This extends into ASP.NET Core with things like Razor pages. If you’re like me and want a specific IoC container – like Autofac – you might run into problems with this, and even with just running the debugger. The reason both behave the same is that the hot reload feature actually leverages Edit and Continue, a feature of the debugging facilities in the .NET Core infrastructure.

The problem I ran into with .NET 6 preview 7 was that Autofac didn’t know how to resolve the constructor for an internal class in one of Microsoft’s Razor assemblies when calling MapControllers() on the endpoints:

app.UseEndpoints(endpoints => endpoints.MapControllers());

It would crash with the following:

Autofac.Core.DependencyResolutionException: An exception was thrown while activating Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionEndpointDataSourceFactory -> Microsoft.AspNetCore.Mvc.Infrastructure.DefaultActionDescriptorCollectionProvider -> λ:Microsoft.AspNetCore.Mvc.Infrastructure.IActionDescriptorChangeProvider[] -> Microsoft.AspNetCore.Mvc.HotReload.HotReloadService -> Microsoft.AspNetCore.Mvc.Razor.RazorHotReload.
       ---> Autofac.Core.DependencyResolutionException: None of the constructors found with 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type 'Microsoft.AspNetCore.Mvc.Razor.RazorHotReload' can be invoked with the available services and parameters:

My workaround for this is basically to just explicitly add Razor Pages, even though I’m not using them:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
}

With that in place, I was able to debug and also use hot reloading for my code.
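For context, here is a minimal sketch of how the pieces can fit together in a .NET 6 Program.cs when using Autofac through the Autofac.Extensions.DependencyInjection package. Treat it as an illustration of where the workaround sits rather than a prescription for how to wire things up:

using Autofac;
using Autofac.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Plug Autofac in as the service provider factory
builder.Host.UseServiceProviderFactory(new AutofacServiceProviderFactory());
builder.Host.ConfigureContainer<ContainerBuilder>(containerBuilder =>
{
    // Your Autofac registrations go here
});

builder.Services.AddControllers();

// The workaround: register Razor Pages even though they are not used,
// so the hot reload / Edit and Continue internals can be resolved
builder.Services.AddRazorPages();

var app = builder.Build();

app.UseRouting();
app.UseEndpoints(endpoints => endpoints.MapControllers());

app.Run();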

Standard
Uncategorized

Red Cross Codeathon 2017

Some 6 months ago I found myself in a meeting where I had no clue what the topic was going to be, or any prior knowledge as to why I was there. Halfway through the meeting I found myself in complete awe at what was presented. The meeting was with the Norwegian Red Cross, and the topic was how they wanted to take advantage of technology to gain insight into potential epidemics. The Norwegian Red Cross team had already done a couple of iterations on software for dealing with this, trying out different technologies. They now had real world experience from the versions they had been running and wanted to take it to the next level; professionalize the software – making something maintainable and sustainable. Red Cross does not have a software development branch within their organization and reached out to Microsoft for assistance to see how we could help. With Norwegian Red Cross being a nonprofit organization, they already get assistance from Microsoft through Microsoft Philanthropies. I was on Channel9 talking about the project, you can watch it here.

Taking the lead

With software there aren’t that many opportunities to really do good for mankind by applying the skillsets we already have. With what was presented and my position @ Microsoft, I started thinking about how there could be synergies and how it could all be brought together. My day to day work is an advisory type of role where I engage with ISVs around Norway, helping them move to the cloud or get the most out of Azure in general. With this work I get to meet a lot of people and I started thinking about opportunities for combining it all. In addition, I also felt that the most natural way for any of this software to be built would be to do it in the open with volunteers, since Red Cross does not have any in-house developers and the cost of hiring consultants is very high. Besides, having external resources do the work is not the most sustainable model for living software – ideally you’d want to do it in-house. With volunteers however, one would apply one of the core principles of Red Cross itself, that of volunteerism, as the Red Cross bases its work on more than 17 million volunteers worldwide.

Understanding and getting the word out

We’ve had the dialog going for the last 6 months on how it could be done – both from a process perspective, but also whether or not to base it on volunteer work and whether to go open source. I reached out to Richard Campbell to hear how they’ve been running the Humanitarian Toolbox for the American Red Cross in the U.S. He put me in contact with the product owner they’ve had for the allReady project. From this, I gained even more confidence that doing it on a volunteer basis was the right choice, as they’ve had 132 contributors pitch in on that particular project (at the time of writing this post). As large organizations go, Red Cross also relies on a certain amount of red tape and in general internal processes to make decisions.

In the middle of June 2017 the NDC developer conference was held in Oslo. We were lucky to get a slot to talk about the project, what Norwegian Red Cross had done and our plans for the architecture of the new implementation. Richard Campbell joined Tonje Tingberg from Norwegian Red Cross and myself on stage (you can watch it on Vimeo here). I was really nervous about whether anyone would come to the talk, as we didn’t pitch it from a technology perspective – but I was super glad and proud of my colleagues in the development community who wanted to learn more. We came close to filling the room and the response from people was enormous; we got into good conversations right after, and good mail dialogs after NDC as well. This reinforced the belief that this could be done with volunteers. A couple of weeks ago I got a call from Tonje Tingberg bringing the happy news – Red Cross wants to move forward with the proposed model.

Codeathon – first call to action

The downside to basing everything on volunteer work is of course the fact that you have less control over when things get done. In order to get condensed and focused work done, one needs a mechanism where you gather people in the same room for a couple of days. Building on the experience from Humanitarian Toolbox, Red Cross will be hosting a codeathon – not a hackathon or a hackfest – but more like a marathon for coding. The first of these will be held the weekend of 29 September – 1 October at the Norwegian Red Cross in Oslo. If you’re interested in joining and helping out, please sign up here. We will establish a core team that will be putting in place the framework for how we’re going to be building it and making sure we get as much work done as possible during the codeathon. Once you’re signed up, we will follow up with you to make sure you get to do what you want to do.

Learning Experience

One of the opportunities with this is to learn. Not only from the project itself, but from working together with others. In a room filled with developers and architects you’re bound to pick up a thing or two that can be brought back to your daily work. The solution will be built using modern techniques, state of the art architecture and utilizing the cloud as much as we can. It is also a great way to learn more about how to work in the open source community if you don’t already have experience with open source.

Wrapping up

Seeing the impact of the work that Red Cross is doing really puts things in perspective. Bringing knowledge to the table is vital in helping others that don’t have the resources we are accustomed to. With the type of technical know-how we have as software developers, we can really make a difference. In our line of work, we focus on being problem solvers – trying to make smarter, more efficient systems. Imagine transferring this and saving lives by just using the power of our brains; this is what we as a community can bring to the table. I’m so glad I was asked to join that meeting months ago – finally I can help in a way I know how to.

Red Cross Norway has also put out a post on this with all the details. You can find it here.

Standard
Uncategorized

Bifrost; Getting back to it…

It’s been a while since I wrote anything about Bifrost. In fact, the last post I did was about me not maintaining it anymore. The thing is; it’s been an empty year for me personally since February when I announced it. I didn’t realize it until I was at a partner who wanted to dive deep on SOLID, DDD, CQRS, EventSourcing and more, and we only had a couple of days to prototype something. We talked it over and decided that using Bifrost would get us there quicker… what a relief… I’m so glad we did that. All of a sudden it became very clear to me; I need to continue the work – it’s just too much fun. I had a hunch, but didn’t see it all that clearly. A few months back I started pulling things from Bifrost into a new project called Cratis, making it more focused – never really thinking that it would go back into Bifrost.

So, what am I doing about it? Well, first of all; I took down the post announcing the stop in maintenance. It didn’t make sense to have it there after coming to the realization that I need to push on. The second thing I did – in order to get back into the mood and understand Bifrost again (even though I wrote most of it) – was to start writing the proper documentation that it deserves. This now sits here. The next thing that will happen is that development will be picked up again.

From the top of my head, this is what needs to be done:

  1. Add support for running on Azure in a distributed manner – with a working sample
  2. Clean up. Remove platforms not being used
  3. Simplify the code. Make it more focused
  4. Modernise it. Make it run on .NET Core
  5. Rewrite the JavaScript to be ES2015+
  6. Break it apart into many small GitHub projects that can be maintained individually

In between there might be some features that sneak in. But the majority of new development will have to happen after these things have happened.

Alongside with it all; more documentation, more samples, more videos – just simply more. 🙂

Really looking forward to getting back into this and seeing what 2017 has in store for Bifrost.

Standard
Uncategorized

The Code Lab

I’ve been wanting to try out a new format, or at least a new format for me; a live interactive webcast. I got inspired when I saw some ex-colleagues of mine start something called “Kodepanelet” and figured I had to do something similar. The goal is to have a one hour show on a regular basis where anyone can chime in on social media and ask technical questions, and I’ll try to be as agile as possible in answering – or, preferably, showing. Some things I can answer live – other things might need preparation for a later show, or a follow up in a blog post or similar. This part is still a bit fuzzy. To be honest, since this is all new and I’m trying something out, the format will more than likely change over time.

The concept is called “The Code Lab”.


I will be using the Facebook live video streaming system for this and you’ll find the Facebook page for the concept here.

Details

First airing will be on Friday the 4th of November @ 13:00 CET (1PM), here.

 

What can you expect?

First of all, this is supposed to be for software developers. And I can’t do more than I know or can easily acquire knowledge on. My background stretches from games development, including graphics and low level programming, to line of business applications and architecting highly available, durable and scalable systems. I also have a knack for process and how to build teams. I consider myself pretty open-minded, and as a result I’ve always been all over the place when it comes to platforms. These days I find that I am probably most experienced in macOS and Windows, but an apprentice in Linux – trying to gain more experience there. I try to be polyglot in languages, but have most experience in C/C++, C# and JavaScript. I thrive in the backend as well as the frontend, love my patterns and eat my SOLIDs for breakfast. I focus a lot of my time on the cloud, and specifically Azure. Containers, microservices and in general decoupled software are something I am really passionate about.

How can you help?

Content is always king – I’m hoping people find this interesting and want to jump in with good questions. To get the topics right, there is a survey on Facebook.

Health warning

If you’re out to troll me, I’ll try to ignore you. 🙂

Standard
Uncategorized

Identifying core values

Those who have worked with me have most likely heard me say a few times, and probably also got bored by me saying, that we have to establish a core set of common values before we write any code. What I mean by that is that you can’t have a team work in the same general direction without the team actually believing in the same things. Take a thing like being test driven. If you have a split in the team and no common consensus on whether automated testing is good or bad for your team and your product, you’ll end up having tiring discussions and won’t establish a climate for a good culture. Worst case, you might hurt your velocity directly by not addressing a bunch of the elephants in the room, and over time end up with a declining velocity without noticing that it is declining.

Elephants

Yes, there always seems to be an elephant in the room. And they are so hard to get out of the room. Subjects that are impossible to discuss, because you really can’t reach an agreement. Every place I’ve been to has these, and in my experience there seem to be more of them than average in software development. I think this comes from a bunch of reasons, but in my experience it boils down to not addressing the underlying issues and also, in many cases, not really understanding why we go to work. In our industry some elephants seem to be quite common, things like my favorites: performance and the “keep it simple, stupid” principle. For some, performance is the trump card; whenever a discussion gets out of hand, you run the risk of this card being played in the hope that it will kill the discussion so we can move along.

Business Value

Why we as developers go to work is something that can be hard to remember. Our job is to add business value to the place we work. It is so easy to forget this fundamental thing and end up doing things that are not related to it, often based on an established culture of developers having the ability to just ignore this fact. Having the power to create something in code makes us experts in our field; we do something that not many people really understand. This power comes with great responsibility; we should not abuse it just because we want to do something exciting. It’s easy to bullshit a manager into thinking that using technology X will solve all problems, while you could have done it with a different approach in a shorter amount of time but you never got the chance to try out X. Don’t get me wrong, we should always look at tools that actually improve our productivity or help us deliver our business value more accurately, faster or better. Just don’t make up excuses to try something out to get it on your resumé.

Lack of understanding

Let’s face it, not everyone wants to learn new tricks all the time. And that is actually fine. Being conservative to change can help balance out those who want to change everything all the time. That being said, there are also those who are conservative because it’s convenient, because it suits them not to go in the right direction, and that hinders development and moving forward. It is a great way to protect your own job for someone who has been there since the dawn of time and really just wants to continue doing the tasks that have been mastered. In our industry I think you’d be crazy to even want something like this. Our industry moves quite rapidly compared to most others, and knowing only things that belong to the past is not necessarily a benefit. I think this is also important for understanding some of the elephants in the room: people protecting themselves; they just don’t necessarily understand the new ways of doing things and don’t have the motivation to actually bother learning them. A good manager should be able to pick up on this and make sure the team is motivated, keeping the risk down for the company by actually facilitating learning.

The User

Who are these users we hear about? Adding business value is good for business, but not thinking about the user and keeping them close could be a disaster for business. So even though you think you’re adding something to your system that you believe will add business value, the users might not want it because you made it in a way the users won’t understand. This is something I see all the time; developers making software look like their development tools and thinking the users will just intuitively get it. Heck, I’ve done this myself on numerous occasions. It’s like we get tunnel vision, thinking that everyone thinks like us. They don’t! We are possibly the worst frame of reference for what a user wants. This is one of the things that I think brings more elephants into the room that are not up for discussion: because we’re used to doing something, even though it wasn’t good for the user, we just continue down that path.

Being pragmatic

This I hear a lot, and it gets presented as an accusation, as if one is not pragmatic. Short and simple; it is an abused term. It is something that gets twisted to fit as a trump card and thrown on the table to stop discussions. It doesn’t really reflect a value of any sort, it’s just a way of keeping any core value from actually bubbling to the surface. It’s often related back to the “keep it simple, stupid” principle and is just an instrument used to belittle the other person(s) in the argument. I look at other professions whenever I encounter things like this and ask myself: would they throw this card, ever? Take a plumber, would he pragmatically drop putting in a pipe, or even worse, pragmatically decide not to comply with building regulations? We can do much better than this. The KISS principle is a really good example of something that has a completely different meaning depending on the person you talk to as well. For some it means the “… put everything in a stored procedure in the database …” type of KISS, while for me it means adhering to the SOLID principles, amongst other things. I don’t like these terms – they don’t bring anything positive to the table.

Core Values

Back to the original intent of this post; establishing core values. This is not an easy task, it is something that can take a while, and it is not something you delegate to one person and hope it solves everything. It is a team effort. The team has to do this together, and they are going to have to work together to get the wheels running smoothly. An approach that I’ve had great results with, in slight variations, is to let every team member write down on three post-its the three most important things to them. The rule is not to discuss the items with anyone else; the items can be anything related to what the person considers important to be able to do their job. Every team member has to do it themselves, for themselves. Put these up on a wall and let every team member present the case for their items and what they mean. You then optimize the wall by grouping the things that are the same. We now want to cut the list down; we can’t have all the items as our core, most important values. You then give the team 3 votes each that they can distribute on the things they now find the most interesting, with the knowledge that we’re not keeping them all – choose wisely. You should now have a ranked list; the things with the most votes are more important, and you do a cut off at the number of items you decided to keep. I usually tend to keep it at 10. With smaller teams you might consider increasing the number of post-its and votes, so that you get more than your cutoff, even after grouping.

A variation of this that I’ve used was after a disaster of a release at a company where we had a yearly release cycle. After the disaster and weeks of firefighting, I put the team together and asked them to come up with 3 post-its each for how we could make the next release even worse. This triggered something very interesting; people got really creative and everyone had great ideas for how to really sabotage the next release. We then voted for the things we found most relevant and that we most wanted to do. We then took the list, converted it into things we should not be doing, and had a cut off of 10 items that became the law of the land for the development team.

What you now have produced is something that can kill the things used to kill discussions; you have a list of things you have all agreed upon as important. It is the compromise, the things that you as a team believe in. Whenever nonsense arguments are thrown into the mix, the different trump cards or elephants that sit in the room, you can refer back to the list.

Sometimes it is not clear cut what things mean, and you can end up having discussions about the meaning of it all. Try to capture what people are saying when presenting their post-its, put this in writing and get the team to read through and commit to it. But even after doing that, it might be hard to understand. A tool that can help with getting a common understanding is doing pair programming as part of how you work. Circulate who pairs with whom and start building the dynamics into the team. You end up discussing some of the items on the list, and you will eventually break down barriers and create a team that, at the very minimum, executes more in unison. Typically the elephants can actually be taken out of the room, and you will hopefully and most likely also address the lack of understanding and get to the need for learning much more easily.

An example of the process is below, this is from a real exercise.

The things that made it into the list

The things that didn’t make it into the list

The board of votes

Conclusion

Though probably not perfect, it is a way of getting the conversation going and making everyone aware of the fact that we have to be on the same page. You should never underestimate the importance of having the team think pretty much the same, and you need to address the core belief system in order to get your team thinking in the same terms. Having a team that anticipates each other and has a dynamic that pulls in the same direction gives higher velocity in the long term and also helps people stay motivated. Nothing is worse than not being motivated to go to work because of elephants and established truths. It can feel constraining and, for someone not in the same mindset, completely arbitrary because the world moved on ages ago.

Bottom line; communication is super important and probably the hardest thing to get right.

Standard
Uncategorized

New and interesting challenges

First of July I’m starting @ Microsoft Norway in the Developer Experience (DX) team. My official, on-paper title is Technical Evangelist – which might sound scary enough. But in reality it’s a multifaceted position ranging from, sure enough, evangelism in the sense of talking about Microsoft products and promoting these, to advisory for clients, community work, blogging and more. My particular role will be geared towards the cloud; Azure, but I will be sure to keep in touch with the entire stack – as I really love keeping on top of things. I spend a lot of time in front of the computer, both work and non-work related, so I’ll be sure to keep up.

I’ve worked closely with Microsoft since 2001, and since 2008 I’ve been a Microsoft MVP; at times it’s felt like I’ve been an employee without having the privileges an employee has. So when my new boss said to me a couple of weeks ago, “Welcome home..” – that is the feeling I actually do have.

The timing for me is perfect, both on a personal and a professional level, and with all the cool things going on at Microsoft these days that really excite me. I’m bursting with joy over how Microsoft has turned around and really looking forward to engaging more in the coming years.

How does this impact other things? Well, I’ve had a company on and off since 1997 called Dolittle. At first as a way of picking up freelance work, and the last few years more focused. I am closing down the company. I’m keeping the brand though, with domains and all, but putting it to sleep as a company. As for the open source projects I’m involved in – Balder, Bifrost, Forseti and more – I’ll keep on working on them when I have a chance. Maybe not as focused as before, as I’ve had the pleasure and luxury of being able to maintain the projects at work for the last few years. They’ve been with me for years and are babies to me, so it would be hard to let them go. Besides, they serve a great purpose for me in keeping up with what’s going on in the world of development. I am a developer after all, and if I am to do a good job talking about it, I need to maintain my knowledge.

I’m super excited and honored to become a member of the team. Really looking forward to it.

Standard
Uncategorized

Improving Angular experience with some convention magic

Disclaimer: I’m a line of business app developer – not a Web site developer. My view on things is colored by my experience with building large enterprise applications that have larger teams of developers working together and need to keep velocity and code quality at a very high standard through years of development.

I’ve been on a short assignment with a client who wanted to establish ways of working with their Web development. They’re a .NET shop with little experience in Web – only thick client technologies like WPF. They had very few requirements, but they wanted to go for things that are fairly established and had a guy who had done some AngularJS. For me it was the first time I did Angular in any structured way – previously I’ve just dabbled with it, basically with the intention of supporting it in Bifrost. The first thing that struck me with Angular is the explicitness of just about everything and how everything needs to be configured with code. Obviously, I’m not going to claim to be an expert in Angular, and I’d love to be corrected if I’m wrong. But from my little experience I started itching, and I had to scratch the itch.

Routes and conventions

In Angular, with the Angular route extension (ngRoute), routing is one of the things you configure. It is a fairly simple and consistent API, and one typically ends up with a pattern as follows:

var application = angular.module("MyApp", ["ngRoute"]);

// Register the route explicitly, pointing it to a view and a controller
application.config(["$routeProvider", function($routeProvider) {
    $routeProvider.when("/some/route", { templateUrl: "/some/path/to/view.html", controller: "MyController" });
}]);

application.controller("MyController", function() { /* Put your controller code here ... */ });

On one side you tell it what to do when a route occurs – pointing it to a view and a controller that represents it. Then you need to configure the controller by its name and point it to a function that represents the controller.
My claim is that, looking at your app, you have either formalized a pattern or you have a pattern by accident for how these things are put together. This is a great time to formalize it by creating something that represents it and can automate it as a convention.

For instance, you will probably see that for most parts of your app there is a relationship between the routes and where things are placed on disk. In my experience, routing is more often than not something the end users really don’t care about. I have a feeling that we put too much thought and effort into something like this, while the end users will just copy / paste / click links and don’t care how they look. With this in mind, and if there in reality is a correlation between routes and disk location, we can automate the whole thing.

Directives
Another aspect of Angular that is really useful is directives, but again, as with routes and controllers, one has to set them up very explicitly. This is something that could easily be automated. For instance, you could have a folder in your frontend project called Directives where every folder within it represents a directive by its folder name; within it, a directive could then be represented by a View.html for the view, a Controller.js for the controller and a Link.js for the link part.

Proxy generation FTW
Something we’ve had great success with from our Bifrost development is proxy generation. With backend code written in a different language than the frontend, it’s just great to augment the frontend with generated code to make the transition between the two less painful. But regardless of the divide of having 2 languages in your system, generating code for automation can really be a lifesaver. With a fixed convention, developers on your team get fewer options. You might argue that is a bad thing, but I argue it’s a very good thing. If you’re the only one on your team, or you’re two guys, you can probably cope with full flexibility. But applying a regime makes it easier to do things right – or according to the regime, at least. And that should be a good thing. Another benefit of defining regimes is that in some cases, when automating things and generating code, you get the opportunity to deal with cross cutting concerns. With a regime in place for routing, for instance, pointing it by convention to a controller that matches the view, you could inject a man in the middle controller that could take care of a lot of interesting cross cutting concerns, for instance logging, error handling, security or other more domain specific concerns.
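To make the convention idea a bit more concrete, here is a rough sketch in C# – since that is where our proxy generation lives – of how a Directives folder convention could be turned into generated Angular registration code. The class, folder layout and naming here are purely hypothetical and only meant to illustrate the approach:

using System.IO;
using System.Text;

// Hypothetical generator that walks a Directives folder and emits an Angular
// directive registration per sub folder, based purely on convention
public class DirectiveProxyGenerator
{
    public string GenerateFrom(string directivesRoot, string moduleName)
    {
        var builder = new StringBuilder();

        foreach (var directory in Directory.GetDirectories(directivesRoot))
        {
            var name = Path.GetFileName(directory);
            var directiveName = char.ToLowerInvariant(name[0]) + name.Substring(1);

            // By convention, View.html inside the folder is the template for the directive
            builder.AppendFormat(
                "{0}.directive(\"{1}\", function() {{ return {{ restrict: \"E\", templateUrl: \"/Directives/{2}/View.html\" }}; }});",
                moduleName, directiveName, name);
            builder.AppendLine();
        }

        return builder.ToString();
    }
}

A similar generator could of course emit the route configuration from the folder structure as well.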

Code please
We are going to build support for Angular into Bifrost, and with it we will provide configurable conventions for routing and directives.

As part of doing a Visual Studio 2015 Deep Dive for Microsoft recently, I created a sample for Visual Studio 2015 showing off how to do all this, but now just using JavaScript and the new Grunt and Node.js support coming with Visual Studio 2015. You can find the entire sample, which uses SignalR, Karma and Jasmine, over at GitHub here.

Standard
Bifrost, Uncategorized

Bifrost and Proxy generation

One of the things we consider to be among the most successful additions to Bifrost is the bridge between the client and the server in Web solutions. Earlier this year we realized that we wanted to be much more consistent between the code written in our “backend” and our “frontend”, bridging the gap between the two. And out of this realization came generation of proxy objects for artifacts written in C# that we want to have exposed in our JavaScript code. If you’re a node.js developer you’re probably asking yourself; WHY.. Well, we don’t have the luxury of writing it all in JavaScript right now, but it would be interesting to leverage what we know now and build a similar platform on top of node.js, or for the Ruby world for that matter – but that’s for a different post. One aspect of our motivation for doing this was also that we find types to be very helpful; and yes – JavaScript is a dynamic language, but it’s not typeless, so we wanted the same usefulness that types have been providing for our backend code in the frontend as well. The types represent a certain level of metadata and we leverage the types all through our system.

Anywho, the principle was simple; use .NET reflection on the types we want represented in JavaScript and generate pretty much an exact copy of those types in corresponding namespaces in the client. Namespaces, although different between different aspects of the system, come together with a convention mechanism built into Bifrost – that also being a post of its own that should be written :), but enough with the digressions.

Basically, in the Core library we ended up introducing a CodeGeneration namespace – which holds the JavaScript constructs we needed to be able to generate the proxies.


There are two key elements in this structure; CodeWriter and LanguageElement – the latter looking like this:

public interface ILanguageElement
{
    ILanguageElement Parent { get; set; }
    void AddChild(ILanguageElement element);
    void Write(ICodeWriter writer);
}

Almost everything sitting inside the JavaScript namespace is a language element of some kind – some of them being a bit more than just a simple language element, such as the Observable type we have, which is a specialized element for KnockoutJS. Each element has the responsibility of writing itself out; it knows what it should look like – but elements aren’t responsible for things like ending an expression, such as semicolons or similar. They are focused on their little piece of the puzzle, and the generator will do the rest and make sure, to a certain level, that it is legal JavaScript.
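To illustrate, a simple element could look something like the sketch below. This StringLiteral type is not taken from Bifrost itself, it is just a hypothetical example of how small and focused an element can be:

// Hypothetical example of a language element - it only knows how to write
// itself and leaves expression endings and formatting to the generator
public class StringLiteral : ILanguageElement
{
    readonly string _value;

    public StringLiteral(string value)
    {
        _value = value;
    }

    public ILanguageElement Parent { get; set; }

    public void AddChild(ILanguageElement element)
    {
        // A string literal is a leaf - it has no children
    }

    public void Write(ICodeWriter writer)
    {
        // Writes only its own piece of the puzzle - no semicolons, no new lines
        writer.Write("\"{0}\"", _value);
    }
}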

The next part is, as mentioned, the CodeWriter:

public interface ICodeWriter
{
    void Indent();
    void Unindent();
    void WriteWithIndentation(string format, params object[] args);
    void Write(string format, params object[] args);
    void NewLine();
}

It is a very simple interface, basically just dealing with indentation, writing and adding new lines.
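An implementation of this can be as simple as wrapping a StringBuilder. The version below is just a sketch of the idea, not the actual implementation in Bifrost:

using System.Text;

public class CodeWriter : ICodeWriter
{
    const int SpacesPerIndentation = 4;

    readonly StringBuilder _builder = new StringBuilder();
    int _currentIndentation;

    public void Indent()
    {
        _currentIndentation++;
    }

    public void Unindent()
    {
        if (_currentIndentation > 0) _currentIndentation--;
    }

    public void WriteWithIndentation(string format, params object[] args)
    {
        // Prefix with the current indentation before delegating to Write
        _builder.Append(new string(' ', _currentIndentation * SpacesPerIndentation));
        Write(format, args);
    }

    public void Write(string format, params object[] args)
    {
        _builder.AppendFormat(format, args);
    }

    public void NewLine()
    {
        _builder.AppendLine();
    }

    public override string ToString()
    {
        return _builder.ToString();
    }
}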

In addition to the core framework for building the structure, we’ve added quite a few helper methods in the form of extension methods to make it much easier to generate common scenarios – and at the same time provide a more fluent interface for putting it all together without having to have .Add() methods all over the place.

So if we dissect the code for generating the proxies for what we call queries in Bifrost (queries run against a datasource, typically a database):

public string Generate()
{
    var typesByNamespace = _typeDiscoverer.FindMultiple<IReadModel>().GroupBy(t => t.Namespace);
    var result = new StringBuilder();

    Namespace currentNamespace;
    Namespace globalRead = _codeGenerator.Namespace(Namespaces.READ);

    foreach (var @namespace in typesByNamespace)
    {
        if (_configuration.NamespaceMapper.CanResolveToClient(@namespace.Key))
            currentNamespace = _codeGenerator.Namespace(_configuration.NamespaceMapper.GetClientNamespaceFrom(@namespace.Key));
        else
            currentNamespace = globalRead;

        foreach (var type in @namespace)
        {
            var name = type.Name.ToCamelCase();
            currentNamespace.Content.Assign(name)
                .WithType(t =>
                    t.WithSuper("Bifrost.read.ReadModel")
                        .Function
                            .Body
                                .Variant("self", v => v.WithThis())
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .WithPropertiesFrom(type, typeof(IReadModel)));
            currentNamespace.Content.Assign("readModelOf" + name.ToPascalCase())
                .WithType(t =>
                    t.WithSuper("Bifrost.read.ReadModelOf")
                        .Function
                            .Body
                                .Variant("self", v => v.WithThis())
                                .Property("name", p => p.WithString(name))
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .Property("readModelType", p => p.WithLiteral(currentNamespace.Name + "." + name))
                                .WithReadModelConvenienceFunctions(type));
        }

        if (currentNamespace != globalRead)
            result.Append(_codeGenerator.GenerateFrom(currentNamespace));
    }

    result.Append(_codeGenerator.GenerateFrom(globalRead));
    return result.ToString();
}

That’s all the code needed to get the proxies for all implementations of an interface called IQueryFor<>; it uses a subsystem in Bifrost called TypeDiscoverer that deals with all the types in the running system.

Retrofitting behavior, after the fact..

Another discovery we’ve had is that we demand more and more from our proxies – after they showed up, we grew fond of them right away and just want more info in them. For instance, in Bifrost we have Commands representing the behavior of the system using Bifrost; commands are therefore the main source of interaction with the system for users, and we secure these and apply validation to them. Previously we instantiated a command in the client and asked the server for validation metadata for the command and got this applied. With the latest and greatest, all this information is now available on the proxy – which is a very natural place to have it. Validation and security are Knockout extensions that can extend observable properties, and our commands are full of observable properties. So we introduced a way to extend observable properties on commands, with an interface for anyone wanting to add an extension to these properties:

public interface ICanExtendCommandProperty
{
    void Extend(Type commandType, string propertyName, Observable observable);
}

These are automatically discovered, as with just about anything in Bifrost, and hooked up.
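To give an idea of the shape such an extender takes, here is a hypothetical one. The reflection part is straightforward, but what you actually do with the observable depends on the extension you are writing, so that part is deliberately left as a comment:

using System;

// Hypothetical command property extender - discovered and hooked up
// automatically, just like the built-in validation and security extenders
public class AuditedPropertyExtender : ICanExtendCommandProperty
{
    public void Extend(Type commandType, string propertyName, Observable observable)
    {
        // We get the full CLR type of the command, so we can inspect the
        // actual property and decide what metadata to put on the observable
        var property = commandType.GetProperty(propertyName);
        if (property == null) return;

        // ... extend the observable here with whatever metadata the
        // extension needs - validation rules, security, auditing and so on
    }
}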

The end result for a command with the validation extension is something like this:

Bifrost.namespace("Bifrost.QuickStart.Features.Employees", {
    registerEmployee : Bifrost.commands.Command.extend(function() {
        var self = this; this.name = "registerEmployee";
        this.generatedFrom = "Bifrost.QuickStart.Domain.HumanResources.Employees.RegisterEmployee";
        this.socialSecurityNumber = ko.observable().extend({
            validation : {
                "required": {
                    "message":"'{PropertyName}' must not be empty."
                }
            }
        });
        this.firstName = ko.observable();
        this.lastName = ko.observable();
    })
});

Conclusion
As I started with in this post, this has proven to be one of the most helpful things we’ve put into Bifrost – it didn’t come without controversy though. We were met with some skepticism when we first started talking about it, even with claims such as “… it would not add any value …”. Our conclusion is very different; it really has added true value. It enables us to get from the backend into the frontend much faster, more precisely and with higher consistency than before. It has increased the quality of what we’re doing when delivering business value. This again is just something that helps the developers focus on delivering the most important thing; business value!

Standard
Uncategorized

Touring south of California

In February I will be in southern California doing a couple of talks at different venues; 4 in total. 2 of them have been announced already here and here; when I get the links for the two remaining ones I will update this post with the details, so stay tuned. Thanks to Kim Schmidt from vNext_OC for making this happen and for asking me to drop by.

Basically the talks will be on two different topics.

Below you’ll see the different topics with their synopses. So, if you’re nearby these venues, don’t hesitate to stop by. Also worth mentioning: Charles Petzold is giving a talk at one of the user groups I’ll be doing a talk at, so be sure not to miss that either – more details can be found here.

Let’s focus on business value

Creating software is very hard, and a lot of practices have been developed over the years to accommodate this and make it easier. Some of these are DDD (Domain Driven Design), SOLID and BDD (Behavior Driven Development), and concrete architectural patterns such as CQRS (Command Query Responsibility Segregation) and MVVM (Model View ViewModel) came as reactions to these practices. Einar will take you on a tour through all of the above mentioned subjects and show you concretely how you can achieve true developer productivity by applying them. By utilising an open source framework called Bifrost, Einar will show end to end how these practices and patterns can come to life and really let you as a developer hit the ground running, while at the same time capturing and delivering true business value without sacrificing code quality. All this, and cloud ready too!

It’s primetime, a JavaScript story

It’s pretty fair to say that JavaScript is not a fad; it is by far the most widespread programming language out there and also the most available runtime we have, ranging from toasters to the web, and even to backend development through Node.js. It’s probably also fair to say that we should really embrace it and start treating it like a first class citizen of our day to day work. In this talk, Einar will take you on a tour of how you can work with JavaScript using patterns similar to the ones you’re already used to from the rest of your server code. Writing tests or specifications that prove your code is also important; Einar will show how to get started with this and how you can achieve more testable JavaScript by applying patterns like MVVM (Model View ViewModel) using KnockoutJS.


Standard