Handling Authentication and Authorization in Microservices – Part 2

In the previous post we saw a way for handling authentication using an API Gateway and an Identity Provider. Just to refresh the concept, here’s the basic diagram:

The Client will call the API Gateway, which will ask the Identity Provider to (ehm) provide the user details. The Client will get redirected to an external area where the user can login. At this point the Provider will generate a token and pass it back to the Client, which will then attach it to every call to the API Gateway.

So now that we have our user, we need a way to check their permissions in every microservice.

An option could be using Claims-based authorization, which basically means storing every permission the user has across the entire system as claims on the token.

When a microservice receives a request, it will decode the token, verify it and then check if the user has the required permission claim for the requested action.

It’s a very easy mechanism to implement and it works pretty well, but it also means we’re sending a “fat” token back and forth, bloated with information that is useless for most of the calls. The permission claims are interesting only to the microservice covering that specific bounded context; all the others will still receive the data, but it won’t add any value.
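To make the idea concrete, here’s a minimal sketch of that claim check. The claim type "permission" and the permission names are just examples I picked for illustration; in a real service the claims would come from the decoded and verified token.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;

// Claims-based authorization sketch: the token carries every permission
// as a claim, and each microservice just checks for the one it needs.
public static class ClaimsAuthorization
{
    public static bool HasPermission(IEnumerable<Claim> claims, string permission)
        => claims.Any(c => c.Type == "permission" && c.Value == permission);
}
```

With a token carrying a claim like `("permission", "orders:read")`, the check succeeds for that permission and fails for anything the token doesn’t contain.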

Another option is to add an Authorization microservice, something like this:

Authorization Service

This new microservice will basically own all the permissions for every user in the system.

When a microservice receives a request, it will decode the token and verify it. So far so good. Then, if the action requires authorization, it will make a call to the Authorization service, asking if the user has a specific permission.

This way we have decoupled the decision from the microservices, moving it to a specific service that does a single job: handling user permissions (and probably stuff like profiles/roles too).
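Here’s a hedged sketch of what such a service’s client contract could look like, with an in-memory implementation standing in for the real microservice. The interface and method names are illustrative, not an actual API.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Contract each microservice would use to ask the Authorization service
// whether a user holds a given permission.
public interface IAuthorizationService
{
    Task<bool> HasPermissionAsync(string userId, string permission);
}

// In-memory stand-in for the Authorization microservice; a real
// implementation would make an HTTP (or gRPC) call instead.
public class InMemoryAuthorizationService : IAuthorizationService
{
    private readonly Dictionary<string, HashSet<string>> _permissions
        = new Dictionary<string, HashSet<string>>();

    public void Grant(string userId, string permission)
    {
        if (!_permissions.TryGetValue(userId, out var set))
            _permissions[userId] = set = new HashSet<string>();
        set.Add(permission);
    }

    public Task<bool> HasPermissionAsync(string userId, string permission)
        => Task.FromResult(_permissions.TryGetValue(userId, out var set)
                           && set.Contains(permission));
}
```

The nice part of hiding the call behind an interface is that the consuming microservices don’t care whether permissions live in memory, a database, or a remote service.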

Handling Authentication and Authorization in Microservices – Part 1

In the last few weeks I’ve started working mainly on a quite important part of the system: adding authentication and authorization to some of the microservices that compose the whole application.

For those who don’t know, I work for a quite well-known company on its internal sales tools. In a nutshell we could say it’s an enormous e-commerce platform, but of course there’s more on the plate than that.

But let’s go back to the topic. As I was saying, I’ve been tasked with adding authentication and authorization to a bunch of microservices. Of course we were already checking the user identity before. And yes, we care a lot about who can do what. But we are constantly pushing to do more and to add new functionalities on top of existing ones, so one fine day we got a whole lot of new requirements to implement from the architects.

And so the fun began.

In this post I won’t go into the details of the actual implementation, of course; however, I’ll share one of the strategies that can be applied to solve this kind of task.

First of all, for those of you who still don’t know, authentication is the process of identifying who the user actually is. Hopefully, if the credentials are correct, this will generate some kind of user object containing a few useful details (e.g. name, email and so on).

Box for Guess Who, courtesy of geekyhobbies.com

Authorization instead means figuring out what the user can do on the system. Can they read data? Can they create content? I guess you get the point.

In the microservice world authorization can be handled more granularly if the bounded contexts are defined properly.

Now that the basics are covered, let’s try to move on to the juicy part! Normally, when talking about microservices, one of the most common architectural design patterns is the API Gateway:

API Gateway

The idea here is to have a layer in the middle between the client and the actual microservices. This Gateway can build and prepare the DTOs based on the client type (eg: a mobile might see less data than a desktop), do logging, caching, and handle authentication as well. There are of course many more things it could do but that’s a topic for another post.

So how can this gateway help us? We need to introduce another block: the Identity Provider.

API Gateway and Identity Provider

Here’s the flow: the Gateway will call the Identity Provider, possibly redirecting the user to a “safe zone” where they can enter their credentials and get redirected back to our application, this time with a nice token containing the user details.

There are several types of tokens we can use; cool kids these days are using JWT. Quoting the docs:

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. 

Pretty neat, isn’t it?
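To see why a JWT is “compact and self-contained”, here’s a small sketch that decodes the payload of a token. Note that this deliberately skips signature verification, which is the part that actually makes the token trustworthy; in production you’d rely on a proper JWT library for that.

```csharp
using System;
using System.Text;

// A JWT is three base64url-encoded parts: header.payload.signature.
// This sketch extracts and decodes the payload (the claims) only.
public static class JwtSketch
{
    public static string DecodePayload(string token)
    {
        var payload = token.Split('.')[1];

        // base64url uses '-' and '_' and drops padding; restore both
        var base64 = payload.Replace('-', '+').Replace('_', '/');
        switch (base64.Length % 4)
        {
            case 2: base64 += "=="; break;
            case 3: base64 += "="; break;
        }

        return Encoding.UTF8.GetString(Convert.FromBase64String(base64));
    }
}
```

Because the payload is just encoded, not encrypted, anyone can read it; never put secrets in a JWT.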

Now we got our user authenticated in the Gateway, but we still need to handle authorization at the microservice level. We’ll see how in the next post of this series!

Serial vs Parallel task execution

This time let’s talk a bit about the difference between serial and parallel task execution.

The idea is simple: if we have two or more operations depending on one another (e.g. the result of one is the input of another), then we need to run them serially, one after the other.

Total execution time will be the sum of the times taken by the individual steps. Plain and easy.

What if instead the operations don’t interact? Can they be executed each on its own path so we can collect the results later on? Of course! That is called parallel execution.

parallel car racing track

It’s like those electric racing tracks: each car gets its own lane, they can’t hit or interfere with each other, and the race is over when every car completes the circuit.

So how can we do that? Luckily for us, in .NET we can use Task.WhenAll() or Task.WaitAll() to run a bunch of tasks in parallel.

Both methods do more or less the same thing; the main difference is that Task.WaitAll waits for all of the provided Task objects to complete execution, blocking the current thread until everything has completed.

Task.WhenAll instead returns a Task that can be awaited on its own. The calling method will continue when the execution is complete but you won’t have a thread hanging around waiting.

So in the end, the total time will be more or less (give or take a few milliseconds) the same as the most expensive operation in the set.
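A minimal sketch of the two approaches, using Task.Delay to simulate two independent operations (the delays and values are arbitrary):

```csharp
using System.Threading.Tasks;

public static class SerialVsParallel
{
    // Simulated async operation: waits, then returns double the input.
    public static async Task<int> OperationAsync(int value, int delayMs)
    {
        await Task.Delay(delayMs);
        return value * 2;
    }

    // Serial: the second operation starts only after the first completes.
    // Total time ≈ 100 + 100 = 200 ms.
    public static async Task<int[]> RunSerialAsync()
    {
        var first = await OperationAsync(1, 100);
        var second = await OperationAsync(2, 100);
        return new[] { first, second };
    }

    // Parallel: both tasks start immediately; WhenAll awaits them together.
    // Total time ≈ 100 ms, the duration of the slowest operation.
    public static Task<int[]> RunParallelAsync()
        => Task.WhenAll(OperationAsync(1, 100), OperationAsync(2, 100));
}
```

Note how Task.WhenAll conveniently collects the results of all the tasks into a single array, in the same order the tasks were passed in.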

I’ve prepared a small repository on Github to demonstrate the concepts, feel free to take a look. It’s a very simple .NET Core console application showing how to execute two operations in serial and then in parallel.

Here’s a screenshot I got on my Macbook Pro:

Know your data structures – List vs Dictionary vs HashSet

Are there any cases when it doesn’t really matter how your data is structured, as long as you’re fulfilling the task at hand? Or is it always important to use the perfect data structure for the job? Let’s find out!

These collections have quite different purposes and use cases. Specifically, Lists should be used when all you have to do is enumerate the items or access them randomly via index.

Lists are very similar to plain arrays. Essentially, a List is an array of items that grows once its current capacity is exceeded. It’s the standard and probably the most used collection. Items can be accessed randomly via the [] operator in constant time. Adding or removing an item at the end costs O(1) as well, except when the capacity is exceeded; doing it at the beginning or in the middle requires all the items to be shifted.
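The operations above can be sketched like this (the values are arbitrary):

```csharp
using System.Collections.Generic;

public static class ListBasics
{
    // Demonstrates the List operations and their costs; returns the
    // final list so the effect of each step is visible.
    public static List<int> Demo()
    {
        var list = new List<int> { 10, 20, 30 };

        var second = list[1]; // random access via the [] operator, O(1)
        list.Add(40);         // append at the end, amortized O(1)
        list.Insert(0, 5);    // insert at the head: shifts every item, O(n)

        return list;          // { 5, 10, 20, 30, 40 }
    }
}
```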

Dictionaries and HashSets instead are specialised collections intended for fast-lookup scenarios. They basically map each item to a key built using a hash function; that key can later be used to quickly retrieve the associated item.

They both share more or less the same asymptotic complexity for all the operations. The real difference is that with a Dictionary we can create key-value pairs (with the keys being unique), while with a HashSet we’re storing an unordered set of unique items.

It’s also extremely important to note that when using HashSets, the item type has to properly implement GetHashCode() and Equals(). With Dictionaries, that is obviously needed for the type used as the key.
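Here’s a small illustration of why this matters, using a hypothetical Point type: without overriding Equals() and GetHashCode(), a HashSet would fall back to reference equality and treat two identical points as different items.

```csharp
using System.Collections.Generic;

// Value-like type suitable for use in a HashSet or as a Dictionary key:
// equality is based on the coordinates, not on the object reference.
public class Point
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    public override bool Equals(object obj)
        => obj is Point other && other.X == X && other.Y == Y;

    // Equal points MUST produce equal hash codes, or lookups will miss.
    public override int GetHashCode()
        => (X * 397) ^ Y;
}
```

With these overrides in place, `new HashSet<Point> { new Point(1, 2) }` will correctly report that it contains a freshly created `new Point(1, 2)`.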

I wrote a very small profiling application to check the lookup times of List, Dictionary and HashSet. It first generates an array of Guids and uses it as the source dataset while running the tests.

The code is written in C# using .NET Core 2.2 and was executed on a MacBook Pro mid 2012. Here’s what I got:

Collection creation

Lists perform definitely better here, likely because Dictionaries and HashSets have to pay the cost of computing the hash for every item added.

Collection creation and lookup

Here things start to get interesting: the first case shows the performance of creation plus a single lookup, with more or less the same stats as simple creation. In the second case the lookup is performed 1000 times instead, leading to a net win for Dictionary and HashSet. This is obviously due to the fact that a lookup on a List takes linear time (O(n)), while it takes constant time (O(1)) for the other two data structures.

Lookup of a single item

In this case Dictionaries and HashSet win in both executions, due to the fact that the collections have been populated previously.

Lookup in a Where()

For the last example the system is looping over an existing dataset and performing a lookup for the current item. As expected, Dictionaries and HashSet perform definitely better than List.

It’s easy to see that in almost all cases it makes no difference which data structure is used, as long as the dataset is relatively small (less than 10,000 items). The only case where the choice really matters is when we need to cross two collections and perform a lookup for each item.

Using Decorators to handle cross-cutting concerns — Part 2 : a practical example

In my previous article I discussed a bit about how to use the Decorator pattern to implement cross-cutting concerns and reduce clutter in your codebase.

Today it’s going to be a bit more practical: we’ll be looking at a small demo I published on Github that makes use of Decorators as well as some other interesting things like .NET Attributes, CQRS and Dependency Injection.

I’m not going to deep dive into the details of CQRS as it would obviously take too much time and it’s outside the scope of this article. I’m using it here because query/command handlers usually expose just one method so there is no need to implement a big interface. Also, I like the pattern a lot 🙂

So let’s go straight to the code! The repository is available here: https://github.com/mizrael/cross-cutting-concern-attributes

It’s a very small .NET Core WebAPI application, nothing particularly fancy. No infrastructure of course; there’s no need for it in this article.

There’s just one API controller, exposing a single GET endpoint to retrieve a list of “values”. I might have called it “stuff” instead of “values”; it’s just an excuse to retrieve some data from the backend.

As you may have noticed, there’s no direct reference to the query handler in the API controller: I prefer to use MediatR to avoid injecting too many things into the constructor. It has become a habit, so I’m doing it even when there’s just one dependency.

For those who don’t know it, MediatR acts as a simple in-process message bus, allowing quick dispatch of commands, queries and events. So, basically, it’s a very handy tool when implementing CQRS.

The ValuesArchiveHandler class handles the actual execution of the query. Actually it’s not doing much, apart from returning a fixed list of strings.

What we’re actually interested in is that small attribute, [Instrumentation]. It is just a marker; the real grunt work happens elsewhere. I could have used an interface as well, of course, but there are several reasons why I didn’t.

First of all, I prefer to avoid empty interfaces: an interface is a contract, and an interface without methods doesn’t define any contract.

Moreover, attributes can always be configured to not propagate to descendant types automatically, something you cannot do with interfaces.

Now, take a look at the InstrumentationQueryHandlerDecorator class. It’s a query handler Decorator, so it gets an instance of a query handler injected in the constructor, and uses it in the Handle() method.

This decorator is not doing anything particularly fancy; it’s just using a Stopwatch to track how much time the inner handler takes to complete.

What we’re interested in is the constructor: there the system checks whether the inner instance has been marked with the [Instrumentation] attribute, flipping a boolean based on the result. That bool is then used in the Handle() method to turn the instrumentation on or off. That’s it!
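Here’s a hedged recreation of that decorator; the interface and attribute names mirror the article, but the actual code in the repository may differ in the details, and SampleHandler is just a stand-in for a real query handler.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;
using System.Threading.Tasks;

public interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> Handle(TQuery query);
}

// Marker attribute: carries no behaviour, just flags handlers to instrument.
[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public sealed class InstrumentationAttribute : Attribute { }

public class InstrumentationQueryHandlerDecorator<TQuery, TResult> : IQueryHandler<TQuery, TResult>
{
    private readonly IQueryHandler<TQuery, TResult> _inner;
    private readonly bool _isInstrumented;

    public InstrumentationQueryHandlerDecorator(IQueryHandler<TQuery, TResult> inner)
    {
        _inner = inner;
        // Reflection check happens once, at construction time.
        _isInstrumented = inner.GetType().GetCustomAttribute<InstrumentationAttribute>() != null;
    }

    public async Task<TResult> Handle(TQuery query)
    {
        if (!_isInstrumented)
            return await _inner.Handle(query);

        var sw = Stopwatch.StartNew();
        var result = await _inner.Handle(query);
        sw.Stop();
        Console.WriteLine($"{_inner.GetType().Name} took {sw.ElapsedMilliseconds} ms");
        return result;
    }
}

// Illustrative handler marked for instrumentation.
[Instrumentation]
public class SampleHandler : IQueryHandler<string, string>
{
    public Task<string> Handle(string query) => Task.FromResult(query.ToUpperInvariant());
}
```

Since the decorator implements the same interface it wraps, the rest of the application keeps depending on IQueryHandler and never notices the instrumentation layer.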

I’m using StructureMap as my IoC container and I’m taking care of the handler registration here. In the same file I also decorate all the query handlers with the InstrumentationQueryHandlerDecorator.

Keep in mind that I could have added some smarts here and checked at registration time whether a particular handler had been decorated with the [Instrumentation] attribute.

That would probably be a better solution as it would avoid runtime type checks, handling everything during the application bootstrap.

I’ll probably add this to the repository; I left it out to keep things simple 🙂

This article is also available on Medium as part of a series:

© 2019 Davide Guida
