
Handling Authentication and Authorization in Microservices – Part 2

In the previous post we saw a way to handle authentication using an API Gateway and an Identity Provider. Just to refresh the concept, here’s the basic diagram:

The Client will call the API Gateway, which will ask the Identity Provider to (ehm) provide the user details. The Client will get redirected to an external area where the user can log in. At this point the Provider will generate a token and pass it back to the Client, which will then attach it to every call to the API Gateway.
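
As a quick illustration, here’s a minimal sketch of what “attaching the token to every call” could look like from the Client’s side, using .NET’s HttpClient and the standard Bearer scheme; the gateway URL and the route are made up for the example.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class GatewayClient
{
    public static async Task<string> GetOrdersAsync(string token)
    {
        // The gateway address and the "orders" route are placeholders.
        using var client = new HttpClient { BaseAddress = new Uri("https://api-gateway.example.com/") };

        // Attach the token issued by the Identity Provider to every outgoing call.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

        var response = await client.GetAsync("orders");
        return await response.Content.ReadAsStringAsync();
    }
}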

So now that we have our user, we need a way to check his permissions in every microservice.

An option could be using Claims-based authorization, which basically means storing every permission that the user has on the entire system as claims on the token.

When a microservice receives a request, it will decode the token, verify it and then check if the user has the required permission claim for the requested action.
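
To make that concrete, here’s a minimal sketch (not the actual implementation) of a microservice validating the token and checking a permission claim with the System.IdentityModel.Tokens.Jwt package; the "permission" claim type and the key handling are assumptions for the example.

using System.Linq;
using System.Text;
using System.IdentityModel.Tokens.Jwt;
using Microsoft.IdentityModel.Tokens;

public static class ClaimsAuthorization
{
    public static bool CanExecute(string token, string requiredPermission, string signingKey)
    {
        var handler = new JwtSecurityTokenHandler();

        // Verify the signature and the standard lifetime claims (throws if the token is invalid).
        var principal = handler.ValidateToken(token, new TokenValidationParameters
        {
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey)),
            ValidateIssuer = false,   // a real system would validate issuer and audience too
            ValidateAudience = false
        }, out _);

        // Check whether the "fat" token carries the permission claim we need.
        return principal.Claims.Any(c => c.Type == "permission" && c.Value == requiredPermission);
    }
}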

It’s a very easy mechanism to implement and works pretty well, but it also means that we’re sending back and forth a “fat” token, bloated with information that is useless for most of the calls. The permission claims are interesting only to the microservice that covers that specific bounded context. All the others will still receive the data, but it won’t add any value.

Another option is to add an Authorization microservice, something like this:

Authorization Service

This new microservice will basically own all the permissions for every user in the system.

When a microservice receives a request, it will decode the token and verify it. So far so good. Then, if the action requires authorization, it will make a call to the Authorization service, asking if the user has a specific permission.

This way we have decoupled the decision from the microservices, moving it to a specific service that does a single job: handling user permissions (and probably stuff like profiles/roles too).
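
Here’s a rough sketch of what that call to the Authorization service could look like from a microservice; the endpoint shape (GET /users/{id}/permissions/{permission}) is purely hypothetical.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class AuthorizationClient
{
    private readonly HttpClient _client;

    // The HttpClient is expected to have its BaseAddress pointing at the Authorization service.
    public AuthorizationClient(HttpClient client) => _client = client;

    public async Task<bool> HasPermissionAsync(string userId, string permission)
    {
        // Hypothetical endpoint: GET /users/{id}/permissions/{permission}
        var response = await _client.GetAsync(
            $"users/{Uri.EscapeDataString(userId)}/permissions/{Uri.EscapeDataString(permission)}");

        // 2xx means the user holds the permission; anything else is treated as a denial.
        return response.IsSuccessStatusCode;
    }
}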

Handling Authentication and Authorization in Microservices – Part 1

In the last few weeks I’ve started working mainly on a quite important part of the system: adding authentication and authorization to some of the microservices that compose the whole application.

For those who don’t know, I work for a quite well-known company on its internal sales tools. In a nutshell we could say it’s an enormous e-commerce platform, but of course there’s more on the plate than that.

But let’s go back to the topic. As I was saying, I’ve been tasked with adding authentication and authorization to a bunch of microservices. Of course we were already checking the user identity before. And yes, we care a lot about who can do what. But we are constantly pushing to do more and more and add functionalities on top of one another, so one nice day we got a whole lot of new requirements to implement from the architects.

And so the fun began.

In this post I won’t go into the details of the actual implementation, of course; however, I’ll share with you one of the strategies that can be applied to solve this kind of task.

First of all, for those of you who still don’t know, authentication is the process of identifying who the user actually is. Hopefully, if the credentials are correct, this will generate some kind of user object containing a few useful details (e.g. name, email and so on).

Box for Guess Who, courtesy of geekyhobbies.com

Authorization instead means figuring out what the user can do on the system. Can he read data? Can he create content? I guess you get the point.

In the microservice world authorization can be handled more granularly if the bounded contexts are defined properly.

Now that the basics are covered, let’s try to move on to the juicy part! Normally, when talking about microservices, one of the most common architectural design patterns is the API Gateway:

API Gateway

The idea here is to have a layer in the middle between the client and the actual microservices. This Gateway can build and prepare the DTOs based on the client type (e.g. a mobile client might see less data than a desktop one), do logging, caching, and handle authentication as well. There are of course many more things it could do, but that’s a topic for another post.

So how can this gateway help us? We need to introduce another block: the Identity Provider.

API Gateway and Identity Provider

Here’s the flow: the Gateway will call the Identity Provider, possibly redirecting the user to a “safe zone” where he/she can enter the credentials and get redirected back to our application, this time with a nice token containing the user details.

There are several types of token we can use; the cool kids these days are using JWT. Quoting the docs:

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. 

Pretty neat, isn’t it?
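
Just to give an idea of what the Identity Provider hands back, here’s a minimal sketch of issuing a signed JWT carrying a few user claims via System.IdentityModel.Tokens.Jwt; the issuer, claim names and key handling are illustrative, not a production setup.

using System;
using System.Security.Claims;
using System.Text;
using System.IdentityModel.Tokens.Jwt;
using Microsoft.IdentityModel.Tokens;

public static class TokenIssuer
{
    public static string Issue(string name, string email, string signingKey)
    {
        // HS256 needs a key of at least 128 bits; asymmetric keys are more common in practice.
        var credentials = new SigningCredentials(
            new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey)),
            SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "https://identity-provider.example.com",   // placeholder issuer
            claims: new[]
            {
                new Claim(JwtRegisteredClaimNames.Sub, email),
                new Claim("name", name),
                new Claim("email", email)
            },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}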

Now we’ve got our user authenticated in the Gateway, but we still need to handle authorization at the microservice level. We’ll see how in the next post of this series!

Testing the boundaries of your Web APIs

How do you make sure that an entire piece of software you wrote works? And how would you do that if your system doesn’t have a UI? Well, simply by testing the boundaries, of course!

From time to time I like to extract pieces of code from what I’m working on and create small repos just to showcase a single functionality or idea. 

This time I’m putting some effort into TDD on APIs, and after a few refactorings I came up with a nice structure that you can use as a starting skeleton for a simple system. You can find all the sources here on GitHub.

The demo is very simple, just a single controller that stores and provides user details. Nothing fancy. The user model class exposes only three properties: id, full name and email.

A few points worth noting though:

  • the class is immutable. I wrote a bit about the concept here.
  • I’m adopting the Special Case (or Null Object) pattern a lot these days. Hence the NullUser static property (sketched below).
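
For reference, here’s a tiny sketch of how that model could look; only the three properties and the NullUser idea come from the post, the actual class in the repo may differ.

public class User
{
    // Special Case / Null Object instance: callers get a harmless user instead of null.
    public static readonly User NullUser = new User(string.Empty, string.Empty, string.Empty);

    public User(string id, string fullName, string email)
    {
        Id = id;
        FullName = fullName;
        Email = email;
    }

    // Get-only properties keep the class immutable.
    public string Id { get; }
    public string FullName { get; }
    public string Email { get; }
}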

Persistence is done in-memory as it’s obviously outside the scope. Moreover, as you can see, the Tests project contains only the end-to-end tests; there are no unit/integration tests covering the persistence layer.

The testing infrastructure is where things get interesting, even though it’s actually fairly straightforward. An xUnit fixture fires up a TestServer and bootstraps the application using (possibly) the same settings as the real system.

A shared WebHostBuilderFactory class is responsible for building the required IWebHostBuilder instance.
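
Here’s roughly what that infrastructure could look like; the factory’s method name and the Startup class are assumptions, and the repo’s actual code may be organized differently.

using System;
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;

// Assumed shape of the shared factory: it builds the same IWebHostBuilder the real host would use.
// Startup is the application's usual ASP.NET Core startup class.
public static class WebHostBuilderFactory
{
    public static IWebHostBuilder Create() =>
        new WebHostBuilder().UseStartup<Startup>();
}

// xUnit fixture: spins up an in-memory TestServer once and shares an HttpClient with the tests.
public class WebApiFixture : IDisposable
{
    public TestServer Server { get; }
    public HttpClient Client { get; }

    public WebApiFixture()
    {
        Server = new TestServer(WebHostBuilderFactory.Create());
        Client = Server.CreateClient();
    }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}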

That’s it!

Ok, just to be honest, I got the idea from Mark Seemann: he has a very interesting course on Pluralsight named “Outside-in TDD“. If you have the chance, I strongly suggest you watch it.

So, now that we have our nice infrastructure ready, all we have to do is write our tests! This being a Web API, these might be considered either “functional” or “end-to-end” tests.

Honestly I think it’s simply a naming thing; it doesn’t change the fact that these should probably be the first tests you write.

Why? Because (and Mark explains it really well in his course) you’re ensuring from the consumer’s perspective that your APIs do what they’re expected to do.

You’ll be “testing the boundaries”.

But most importantly, you’re validating your acceptance criteria and making sure your system works. 

Everything else is just an implementation detail.

So what are we testing here? The routes, of course! Our API is managing users, and since it’s RESTful, we’re asserting that all the HTTP verbs do what we expect them to do.

Most of these tests should derive directly from the acceptance criteria written by your Product Owner. In case you don’t have one and instead rely on some (even vague) specifications, a good starting point is simply testing inputs and outputs.
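
As an example, here are a couple of route tests written against the fixture sketched above; the /api/users routes and the JSON shape are guesses based on the post’s description, not the repo’s actual contract.

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class UsersEndpointTests : IClassFixture<WebApiFixture>
{
    private readonly HttpClient _client;

    public UsersEndpointTests(WebApiFixture fixture) => _client = fixture.Client;

    [Fact]
    public async Task Get_unknown_user_returns_404()
    {
        var response = await _client.GetAsync("/api/users/missing-id");
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }

    [Fact]
    public async Task Post_then_get_roundtrips_the_user()
    {
        var payload = new StringContent(
            "{\"id\":\"1\",\"fullName\":\"John Doe\",\"email\":\"john@example.com\"}",
            Encoding.UTF8, "application/json");

        var post = await _client.PostAsync("/api/users", payload);
        Assert.True(post.IsSuccessStatusCode);

        var get = await _client.GetAsync("/api/users/1");
        Assert.Equal(HttpStatusCode.OK, get.StatusCode);
    }
}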

Happy testing!

List of useful Docker commands

In the last few weeks I’ve been doing several experiments with Docker, just trying to grasp the main idea and maybe even come up with something useful.

As often happens with tools these days, there’s an entire world of command line tools that you should learn.

OR you can just be lazy like me and keep a list of the most common ones 🙂

I’ll be updating this list from time to time, just to keep track of what I’ve used so far and to have a quick reference for my sloppy memory.

docker login [HOST] # in case you need auth to pull or push images
docker build -t [TAG] . # build an image from the Dockerfile in the current directory and tag it
docker images # shows a list of the local images along with their size
docker push [TAG] # push the tagged image to the registry
docker rm $(docker ps -a -q) # this will remove ALL your containers. Be careful!
docker system prune -a # this will remove ALL your stopped containers and unused images. Very useful when you run low on space, but be careful!

The next command will display the Docker daemon logs, extremely useful when you don’t know what happened (basically me most of the time) and you’re looking for an answer, even if obscure.

It depends on the OS you’re running on:

  • Ubuntu (old using upstart ) – /var/log/upstart/docker.log
  • Ubuntu (new using systemd ) – sudo journalctl -fu docker.service
  • Boot2Docker – /var/log/docker.log
  • Debian GNU/Linux – /var/log/daemon.log
  • CentOS – /var/log/daemon.log | grep docker
  • CoreOS – journalctl -u docker.service
  • Fedora – journalctl -u docker.service
  • Red Hat Enterprise Linux Server – /var/log/messages | grep docker
  • OpenSuSE – journalctl -u docker.service
  • OSX – ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log
  • Windows – Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time, as mentioned here.

(thanks, Scott)

How I used Travis CI to deploy Barfer on Azure

Ok, this time I want to talk a little bit about how I used Travis CI to deploy Barfer on Azure. Last time I mentioned how much it helped having a Continuous Delivery system up and running, so I thought it would be worth expanding the concept a little bit.

Some of you may say: “Azure already has a continuous delivery facility, why use another service?”. Well, there’s a bunch of reasons why:

  • when I started writing Barfer I had no idea I was going to deploy on Azure
  • in case I move to another hosting service, I’ll just have to tweak the deployment configuration a little bit
  • CD on Azure is clunky and to be honest I still don’t know if I like Kudu.

Let me spend a couple of words on the last point. Initially I wanted to deploy all the services on a Linux web app. Keep in mind that I wrote Barfer using Visual Studio Code on a MacBook Pro. So I linked the GitHub repo to the Azure Web App and watched my commits being transformed into Releases.

Well, turns out that Kudu on Linux is not exactly friendly. Also, after the first couple of commits it was not able to delete some hidden files in the node_modules folder. Don’t ask me why.

I spent almost a week banging my head on that, then in the end I did two things:

  1. moved to a Windows Web-App
  2. dropped Azure CD and moved to Travis CI

Consider also that I had to deploy 2 APIs, 1 web client and 1 background service (the RabbitMQ subscriber), and to be honest I have absolutely no desire to learn how to write a complex deployment script. I want tools that help me do the job, not block my way.

The Travis CI interface is very straightforward: once logged in, link your account to your GitHub one and pick the repository you want to deploy. Then all you have to do is create a .yml script with all the steps you want to perform and you’re done.

Mine is quite simple: since I am currently deploying all the services together (even though each one has its own independent destination), the configuration I wrote makes use of 5 Build Stages (a trimmed sketch of the .yml is shown after the list below). The first one runs all the unit and integration tests, then for every project there’s a custom script that

  1. downloads the node packages (or fetches them from the cache)
  2. builds the sources
  3. deletes the unnecessary files
  4. pushes all the changes to Azure
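
For the curious, here’s a heavily trimmed sketch of what such a .travis.yml could look like with Build Stages; the stage names, script paths and Node version are placeholders, not Barfer’s real configuration.

language: node_js
node_js:
  - "8"
cache: npm

jobs:
  include:
    - stage: test
      script: npm test                        # unit + integration tests
    - stage: deploy api
      script: ./scripts/deploy.sh api         # build, clean up, push to Azure
    - stage: deploy web client
      script: ./scripts/deploy.sh web-client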

The whole process takes approximately 10 minutes to run because every commit deploys all the projects, regardless of where the actual changes are. I have to dig deeper into some features like tags or special git branches, but I will probably just split the repo, one per project. I just have to find the right way to manage the shared code.

© 2019 Davide Guida
