When dealing with distributed systems, it’s very tempting, and apparently simple, to solve the “we need to present data to users” problem by building read models: models designed to satisfy presentation requirements, containing data coming from different services.

This series of articles is not focused on CQRS. There are amazing resources available out there about the topic.

Looking for a ViewModel Composition framework? Take a look at the open source ServiceComposer.AspNetCore.

Choices

Traditionally, read models are presented in a very fancy way. I performed a Google Images search and picked one:

[Image: a typical read model diagram, with an RDBMS as the write storage synchronized to an incredibly fast read storage]

Most of the proposed implementations imply some sort of synchronization between the write storage (the RDBMS in the image) and the read storage, the incredibly fast thing.

There is nothing preventing us from syncing, in some way, data from different services to the incredibly fast read storage. For example:

  • Marketing sends name and description.
  • Sales sends price.
  • Warehouse sends availability.
  • Shipping sends shipping options.

Given that all services share the same identifier, a Product read model can be crafted and stored.
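
As a rough sketch, and assuming hypothetical event and storage types (ProductPriceChanged, IReadModelStore, and friends are invented for illustration, not taken from any real library), the synchronization could look like this:

using System.Threading.Tasks;

// Hypothetical event published by Sales whenever a price changes.
class ProductPriceChanged
{
   public string ProductId { get; set; }
   public decimal NewPrice { get; set; }
}

// The composed Product read model, keyed by the identifier all services share.
class ProductReadModel
{
   public string ProductId { get; set; }
   public string Name { get; set; }        // owned by Marketing
   public string Description { get; set; } // owned by Marketing
   public decimal Price { get; set; }      // owned by Sales
   public int Availability { get; set; }   // owned by Warehouse
}

// Assumed abstraction over the incredibly fast read storage.
interface IReadModelStore
{
   Task<ProductReadModel> Get(string productId);
   Task Save(ProductReadModel model);
}

// One synchronization handler per owning service; here the Sales side.
class ProductPriceChangedHandler
{
   readonly IReadModelStore store;

   public ProductPriceChangedHandler(IReadModelStore store) => this.store = store;

   public async Task Handle(ProductPriceChanged message)
   {
      var model = await store.Get(message.ProductId);
      model.Price = message.NewPrice;
      await store.Save(model);
   }
}

Marketing, Warehouse, and Shipping would each need a similar handler for the slice of data they own.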

Keep It Simple Stupid

Is there a simpler solution? So far we’ve used code snippets like:

class SampleHandler : IHandleRequests
{
   public bool Matches(RouteData routeData, string httpVerb, HttpRequest request)
   {
      /* omitted for clarity */
   }
   
   public async Task Handle(dynamic vm, RouteData routeData, HttpRequest request)
   {
      /* omitted for clarity */
   }
}

The infrastructure executing the above code could, at runtime, store the resulting ViewModel (the dynamic vm) in a dedicated, incredibly fast storage, and query that storage for all subsequent requests for the same URL. We’ve basically created a read model at runtime, on the first request. So far, so good.
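
To make that concrete, here is a minimal sketch of such an infrastructure, using an in-memory dictionary keyed by the request path. CachingCompositionPipeline and the compose delegate are invented for illustration; this is not how ServiceComposer.AspNetCore actually works:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical pipeline that memoizes the composed ViewModel per URL.
class CachingCompositionPipeline
{
   // Composed ViewModels, keyed by the request path.
   static readonly ConcurrentDictionary<string, dynamic> cache =
      new ConcurrentDictionary<string, dynamic>();

   readonly Func<HttpRequest, Task<dynamic>> compose; // runs the IHandleRequests handlers

   public CachingCompositionPipeline(Func<HttpRequest, Task<dynamic>> compose)
      => this.compose = compose;

   public async Task<dynamic> GetViewModel(HttpRequest request)
   {
      var key = request.Path.ToString();
      if (cache.TryGetValue(key, out var vm))
      {
         return vm; // every subsequent request for the same URL hits the storage
      }

      vm = await compose(request); // the first request composes the ViewModel
      cache.TryAdd(key, vm);
      return vm;
   }
}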

You know what?

That’s a cache. It’s the sad truth, but we should be honest with ourselves: what we just designed is a cache.

Caching introduces an ownership dilemma

As soon as the solution is identified for what it is, and especially given the distributed system context, it immediately stands out as something we probably don’t want.

Caching comes with its own set of issues; distributed caching is far worse.

One of the biggest architectural issues that comes with caching composed read models is that no single service owns them anymore.

Businesses change their mind

A business requirement in Marketing could change the way data should be presented to users, in essence invalidating all the cached read models. Marketing doesn’t own the resulting model, thus the only option we’re left with is to invalidate the whole set. Product read models are wiped from the “cache” and:

  • in the first scenario, the fancy diagram, we end up with an incredibly fast but empty read storage; the storage needs to be re-synchronized, with the risk of huge delays.
  • in the second scenario, the cached ViewModel, we end up with an empty cache that has to be gradually rebuilt.

In both scenarios we’ll end up with a huge load on the entire system. All services will be hammered, in one way or another, for nothing, since the only thing that changed is Marketing.

This is a very good example of coupling.
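
To see the coupling in code, imagine a hypothetical handler reacting to a Marketing change (MarketingCatalogChanged and IReadModelCache are made-up names used only to illustrate the point):

using System.Threading.Tasks;

class MarketingCatalogChanged { } // hypothetical event from Marketing

// Assumed abstraction over the composed read model cache.
interface IReadModelCache
{
   Task InvalidateAll();
}

// Marketing doesn't own the composed model, so it cannot update just its
// own slice; the only available move is to wipe the whole set.
class MarketingCatalogChangedHandler
{
   readonly IReadModelCache cache;

   public MarketingCatalogChangedHandler(IReadModelCache cache) => this.cache = cache;

   public Task Handle(MarketingCatalogChanged message)
   {
      // Throws away Sales, Warehouse, and Shipping data too,
      // even though none of it changed.
      return cache.InvalidateAll();
   }
}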

Not everything can be cached

Shipping owns shipping options. Shipping options depend on the currently logged-in user: if they live in Italy and are trying to ship from the US, the available options are different than if they lived in the US. It might even happen that something is not available at all when shipped outside of the US.

This simply means that shipping options need to be evaluated on the fly, for each request, taking into account different factors.

Shipping options cannot be cached, unless we want to cache all possible combinations.
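
With ViewModel Composition, Shipping’s handler can simply compute options per request. Here is a hedged sketch, reusing the IHandleRequests shape from the snippet above; IShippingService and the country claim are invented details:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;

// Invented service contract: evaluates options for a product and destination.
interface IShippingService
{
   Task<string[]> GetOptions(string productId, string destinationCountry);
}

// IHandleRequests comes from ServiceComposer.AspNetCore, as in the snippet above.
class ShippingOptionsHandler : IHandleRequests
{
   readonly IShippingService shipping;

   public ShippingOptionsHandler(IShippingService shipping) => this.shipping = shipping;

   public bool Matches(RouteData routeData, string httpVerb, HttpRequest request)
   {
      return httpVerb == "GET" && routeData.Values.ContainsKey("id");
   }

   public async Task Handle(dynamic vm, RouteData routeData, HttpRequest request)
   {
      var productId = (string)routeData.Values["id"];

      // Hypothetical: the destination comes from the logged-in user's profile.
      var destination = request.HttpContext.User.FindFirst("country")?.Value;

      // Evaluated on the fly, per request: nothing here is cacheable.
      vm.ShippingOptions = await shipping.GetOptions(productId, destination);
   }
}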

Conclusion

Read models are an interesting solution, and like all solutions they are not a silver bullet. We described two scenarios in which read models cause pain that we can avoid by using ViewModel Composition. This doesn’t mean that read models should be avoided like the plague: they have their place, and we’ll talk about that in a future article. There is a place for caches as well, but we need to uncover full vertical slices first.

Header image: Photo by Ksenia Makagonova on Unsplash