Serial vs Parallel task execution

This time let’s talk a bit about the difference between serial and parallel task execution.

The idea is simple: if we have two or more operations that depend on one another (e.g. the result of one is the input of the next), then we need to run them serially, one after the other.

Total execution time will be the sum of the times taken by the individual steps. Plain and easy.

What if instead the operations don’t interact? Can each be executed on its own path so we can collect the results later on? Of course! That is called parallel execution.

[Image: parallel car racing track]

It’s like those electric racing tracks: each car gets its own lane, they can’t hit or interfere with each other, and the race is over when every car completes the circuit.

So how can we do that? Luckily for us, in .NET we can use Task.WhenAll() or Task.WaitAll() to run a bunch of tasks in parallel.

Both methods do more or less the same thing; the main difference is that Task.WaitAll waits for all of the provided Task objects to complete, blocking the current thread until everything has finished.

Task.WhenAll instead returns a Task that can itself be awaited. The calling method will resume when execution is complete, but you won’t have a thread blocked in the meantime.
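A minimal sketch of the difference (the two Fetch methods and their delays are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;

class WhenAllVsWaitAll
{
    static async Task<string> FetchGreetingAsync()
    {
        await Task.Delay(100);               // simulate some I/O-bound work
        return "hello";
    }

    static async Task<string> FetchNameAsync()
    {
        await Task.Delay(100);
        return "world";
    }

    static async Task Main()
    {
        // Task.WaitAll: blocks the calling thread until both tasks finish.
        var t1 = FetchGreetingAsync();
        var t2 = FetchNameAsync();
        Task.WaitAll(t1, t2);                // synchronous, the thread sits here
        Console.WriteLine($"{t1.Result} {t2.Result}");

        // Task.WhenAll: returns a Task we can await, freeing the thread.
        var results = await Task.WhenAll(FetchGreetingAsync(), FetchNameAsync());
        Console.WriteLine(string.Join(" ", results));
    }
}
```

Both branches print the same result; what changes is whether the calling thread is held hostage while waiting.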

So in the end, the total time will be more or less (give or take a few milliseconds) the same as the most expensive operation in the set.
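To see this in numbers, here’s a small sketch; the two steps and their durations are made up, but the timing relationship is the point:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class SerialVsParallel
{
    // Two independent operations: neither needs the other's result.
    static Task StepA() => Task.Delay(200);
    static Task StepB() => Task.Delay(300);

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();
        await StepA();                        // serial: B starts only after A ends
        await StepB();
        Console.WriteLine($"Serial:   ~{sw.ElapsedMilliseconds} ms");   // ~500 ms, the sum

        sw.Restart();
        await Task.WhenAll(StepA(), StepB()); // parallel: both run at once
        Console.WriteLine($"Parallel: ~{sw.ElapsedMilliseconds} ms");   // ~300 ms, the slowest step
    }
}
```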

I’ve prepared a small repository on GitHub to demonstrate the concepts, feel free to take a look. It’s a very simple .NET Core console application showing how to execute two operations serially and then in parallel.

Here’s a screenshot I got on my MacBook Pro:

Using Decorators to handle cross-cutting concerns

I was actually planning to post this article here, but I was migrating to another server last week and it took a week for the DNS to point the domain to the new server. Turns out this gave me the chance to try Medium instead, so I published my first article there.

This time I’ll be writing about a very simple but powerful technique to reduce the boilerplate caused by cross-cutting concerns. In this post we’ll explore a simple way to encapsulate them in reusable components using the Decorator pattern.

Let’s first talk a bit about “cross-cutting concerns”. On Wikipedia we can find this definition:

Cross-cutting concerns are parts of a program that rely on or must affect many other parts of the system.

In a nutshell, they represent almost everything that isn’t strictly tied to the domain of the application but can still affect the behaviour of its components.

Examples can be:
– caching
– error handling
– logging
– instrumentation

Instrumentation, for instance, can lead to a lot of boilerplate code, which eventually creates clutter and pollutes your codebase. You’ll basically end up with a lot of code like this:
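Something along these lines (the service and method names here are purely illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Hypothetical service: every method repeats the same instrumentation dance.
public class OrdersService
{
    public async Task PlaceOrderAsync(int orderId)
    {
        var sw = Stopwatch.StartNew();          // boilerplate
        try
        {
            await Task.Delay(50);               // the actual business logic
        }
        finally
        {
            sw.Stop();                          // more boilerplate
            Console.WriteLine($"PlaceOrderAsync took {sw.ElapsedMilliseconds} ms");
        }
    }

    // ...and the same Stopwatch/try/finally clutter in every other method.
}

class Demo
{
    static Task Main() => new OrdersService().PlaceOrderAsync(42);
}
```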

Of course, being IT professionals, we can quickly come up with a decent solution: find the common denominator, extract the functionality, refactor, and so on.

So…how would you do it? One option would be to use the Decorator pattern! It’s a very common pattern and quite easy to understand:

Basically, you have a Foo class implementing a well-known interface, you need it somewhere, and you want to wrap it in some cross-cutting concern. All you have to do is:

  1. create a new container class implementing the same interface
  2. inject the “real” instance
  3. write your new logic where needed
  4. call the method on the inner instance
  5. sit back and enjoy!
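The steps above can be sketched as follows, reusing the hypothetical instrumentation example (interface and names are illustrative, not a prescribed API):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public interface IOrdersService                      // the well-known interface
{
    Task PlaceOrderAsync(int orderId);
}

public class OrdersService : IOrdersService          // the "real" class
{
    public Task PlaceOrderAsync(int orderId) => Task.Delay(50);
}

// Step 1: a new container class implementing the same interface.
public class InstrumentedOrdersService : IOrdersService
{
    private readonly IOrdersService _inner;

    // Step 2: inject the "real" instance.
    public InstrumentedOrdersService(IOrdersService inner) => _inner = inner;

    public async Task PlaceOrderAsync(int orderId)
    {
        // Step 3: the cross-cutting logic lives here, in one place.
        var sw = Stopwatch.StartNew();
        try
        {
            // Step 4: call the method on the inner instance.
            await _inner.PlaceOrderAsync(orderId);
        }
        finally
        {
            sw.Stop();
            Console.WriteLine($"PlaceOrderAsync took {sw.ElapsedMilliseconds} ms");
        }
    }
}

class Demo
{
    static async Task Main()
    {
        // Callers depend only on the interface, so the decorator is transparent.
        IOrdersService service = new InstrumentedOrdersService(new OrdersService());
        await service.PlaceOrderAsync(42);
    }
}
```

The business logic in OrdersService stays clean, and the instrumentation can now wrap any IOrdersService implementation.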

Very handy. Of course, it can get quite awkward if your interface has a lot of methods, but in that case you might want to reconsider your architecture, as it’s probably breaking the Single Responsibility Principle (SRP).

One option would be moving to CQS/CQRS. In the next post of the series we’ll see a practical example and discuss why those patterns can be an even more interesting option when combined with Decorators.

Stay tuned!

© 2019 Davide Guida
