When developing a distributed solution using NServiceBus, one of the usual caveats is that you end up with tons of projects in the solution just to mimic the deployment topology of the production environment.

Note: this is not an NServiceBus issue; it is in the nature of the distributed development process.

There is currently an open issue in the GitHub repository, with an interesting discussion attached, whose core is a request to support multiple endpoints in the same host, hosting each endpoint in a separate AppDomain so as to have per-endpoint configuration.

Do we really need it?

I mean: do we really need different configurations for each endpoint? My general answer is that at development time this is not a real issue. Why am I focusing on development time? Because if you dive into the discussion above, what emerges is that the real annoyance is having too many processes to start, and to maintain, during the development process.


The new question now is: can we, on the developer machine, merge everything down into a single host process? If the configurations can be “merged” without generating conflicts, the answer is simply yes.


The first thing is to understand what happens when we bootstrap the bus via the NServiceBus configuration API or via an EndpointConfig class. NServiceBus scans all the assemblies looking for message handlers, sagas and classes that implement NServiceBus configuration interfaces, such as, but not only, IConfigureThisEndpoint.
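As a minimal sketch of what the scanner picks up (assuming the classic NServiceBus.Host model; OrderPlaced and its handler are hypothetical names, not from this solution):

```csharp
// Discovered by NServiceBus assembly scanning: no registration code is
// needed, the host finds these types via reflection in the "bin" folder.
public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
{
}

// A message handler discovered the same way.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        // business logic here
    }
}
```

The key point is that discovery is driven entirely by what ends up in the "bin" folder, which is what makes the trick below possible.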

Given that all this is done via reflection over the “bin” folder, what prevents us from doing the following?

  • Create a dummy class library project that will be used only on the developers' machines;
  • Add a reference, via NuGet, to the NServiceBus.Host package;
  • Define our endpoints, sagas and all the rest of the business logic in another class library project (or in several of them, as per the deployment topology);
  • Add a reference to the business logic in the dummy host project;
  • Define all the configuration in the config file of the dummy project;
  • Run the dummy project on the developer machine.
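To give an idea of the last two steps, the dummy project's app.config can merge the endpoint mappings that in production live in separate host configs into a single UnicastBusConfig section (a sketch: the assembly and endpoint names below are hypothetical):

```xml
<configuration>
  <configSections>
    <section name="UnicastBusConfig"
             type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
  </configSections>
  <UnicastBusConfig>
    <MessageEndpointMappings>
      <!-- mappings that in production belong to different hosts -->
      <add Messages="MyApp.Orders.Messages" Endpoint="MyApp.Orders" />
      <add Messages="MyApp.Billing.Messages" Endpoint="MyApp.Billing" />
    </MessageEndpointMappings>
  </UnicastBusConfig>
</configuration>
```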

In the end we can think of hosts simply as aggregation services whose only role is to provide the plumbing required to live in the operating system that hosts us. All we need now is a bunch of scripts and config transforms that can split the pieces apart to prepare the segmented hosts to be deployed in production.
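One way to sketch such a transform, using the standard XDT (web.config transformation) syntax and the same hypothetical names as above, is a per-production-host transform file that strips out the mappings owned by the other hosts:

```xml
<!-- MyApp.Orders.Production.config: keep only the mappings owned by this host -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <UnicastBusConfig>
    <MessageEndpointMappings>
      <add Messages="MyApp.Billing.Messages" Endpoint="MyApp.Billing"
           xdt:Locator="Match(Messages)" xdt:Transform="Remove" />
    </MessageEndpointMappings>
  </UnicastBusConfig>
</configuration>
```

Applied at build or deploy time, one such transform per production host turns the single merged development config back into the segmented per-host configs.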

We are currently using the above approach to manage a pretty complex solution that in production is deployed as 16 different Windows services across several machines, but on the developers' machines consists of 3 dummy hosts. Why 3 and not 1? Configurations :-) We aggregated all the endpoints that conceptually share the same configuration, ending up with 3 main groups.