
Testing the boundaries of your Web APIs

How do you make sure an entire piece of software you wrote works? And how would you do that if your system doesn’t have a UI? Well, simply by testing the boundaries of course!

From time to time I like to extract pieces of code from what I’m working on and create small repos just to showcase a single functionality or idea. 

This time I’m putting some effort into TDD on APIs, and after a few refactorings I came up with a nice structure that you can use as a starting skeleton for a simple system. You can find all the sources here on GitHub.

The demo is very simple, just a single controller that stores and provides user details. Nothing fancy. The user model class exposes only three properties: id, full name and email.

A few points worth noting though:

  • the class is immutable. I wrote a bit about the concept here.
  • I’m adopting the Special Case (or Null Object) pattern a lot these days. Hence the NullUser static property.
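
Just to give you an idea, the model looks more or less like this (a sketch: the exact property types are my assumption):

```csharp
using System;

public class User
{
    // Special Case: a well-known instance used instead of null
    public static readonly User NullUser = new User(Guid.Empty, string.Empty, string.Empty);

    public User(Guid id, string fullName, string email)
    {
        Id = id;
        FullName = fullName;
        Email = email;
    }

    // get-only properties keep the class immutable
    public Guid Id { get; }
    public string FullName { get; }
    public string Email { get; }
}
```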

Persistence is done in-memory as it’s obviously outside the scope of the demo. Moreover, as you can see, the Tests project contains only the end-to-end tests: no unit/integration tests to cover the persistence layer.

The testing infrastructure is where things get interesting, even though it’s actually fairly straightforward. An XUnit Fixture fires up a TestServer and bootstraps the application using (possibly) the same settings as the real system.

A shared WebHostBuilderFactory class is responsible for building the required IWebHostBuilder instance.
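
Put together, the fixture looks more or less like this (a sketch; the factory’s Create() method name is my assumption):

```csharp
using System;
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;

// Boots the whole application in-memory once per test class and
// exposes an HttpClient pointing at the TestServer.
public class WebApiFixture : IDisposable
{
    public WebApiFixture()
    {
        IWebHostBuilder builder = WebHostBuilderFactory.Create();
        Server = new TestServer(builder);
        Client = Server.CreateClient();
    }

    public TestServer Server { get; }
    public HttpClient Client { get; }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}
```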

That’s it!

Ok, just to be honest, I got the idea from Mark Seemann: he has a very interesting course on Pluralsight named “Outside-in TDD“. If you have the chance, I strongly suggest you watch it.

So, now that we have our nice infrastructure ready, all we have to do is write our tests! Since this is a Web API, these might be considered either “functional” or “end-to-end” tests.

Honestly, I think it’s simply a naming thing; either way, these should probably be the first tests you write.

Why? Because (and Mark explains it really well in his course) you’re ensuring from the consumer’s perspective that your APIs do what they’re expected to do.

You’ll be “testing the boundaries”.

But most importantly, you’re validating your acceptance criteria and making sure your system works. 

Everything else is just an implementation detail.

So what are we testing here? The routes of course! Our API manages users and, being RESTful, we’re asserting that all the HTTP verbs do what we expect them to do.

Most of these tests should derive directly from the acceptance criteria written by your Product Owner. In case you don’t have one but instead rely on some (even vague) specifications, a good starting point is simply testing inputs and outputs. 
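
For instance, a couple of route tests might look like this (a sketch: the /api/users routes and the expected statuses are assumptions, adapt them to your own acceptance criteria):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class UsersRoutesTests : IClassFixture<WebApiFixture>
{
    private readonly WebApiFixture _fixture;

    public UsersRoutesTests(WebApiFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task Get_unknown_user_returns_404()
    {
        var response = await _fixture.Client.GetAsync($"/api/users/{Guid.NewGuid()}");

        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }

    [Fact]
    public async Task Post_valid_user_returns_201()
    {
        var payload = new StringContent(
            "{\"fullName\":\"Jane Doe\",\"email\":\"jane@example.com\"}",
            Encoding.UTF8, "application/json");

        var response = await _fixture.Client.PostAsync("/api/users", payload);

        Assert.Equal(HttpStatusCode.Created, response.StatusCode);
    }
}
```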

Happy testing!

Unit, integration, end-to-end tests: do I need all of them?

Yes. I mean, don’t even think about it. You’ll need all of them, probably in different measures, but there is no “we shipped to production without tests”.

Tests are the first rampart separating you from madness and failure.
Why madness? Try doing even a small refactoring after you’ve deployed your app. Without automated tests you’ll have to manually probe the entire system (or systems, if you’re on microservices).

Why failure? Simple: just think about the long run. Maintenance will quickly become hell and adding new features will soon bring you to the infamous “it’s better if we rebuild this from scratch”.

So! Where should we start? From the pyramid!

The test pyramid. Image taken directly from Martin Fowler’s article. Thanks, Martin.

Starting from the bottom, you’ll begin by writing the unit tests. “Unit” here means that you’re testing a single, small, atomic piece of your system: a class, a function, whatever. You won’t connect to any external resource (e.g. database, remote services) and you’ll be mocking all the dependencies.
So, ideally, you’ll be checking that under specific circumstances a method throws an exception, or the constructor populates the class properties, or the result of a computation is a specific value given a controlled input.
Also, unit tests have to be extremely fast, in the order of milliseconds, giving you very quick feedback about your system.
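
A trivial example, assuming a hypothetical User class whose constructor guards its inputs:

```csharp
using System;
using Xunit;

public class UserTests
{
    [Fact]
    public void Ctor_should_populate_properties()
    {
        var id = Guid.NewGuid();

        var user = new User(id, "Jane Doe", "jane@example.com");

        Assert.Equal(id, user.Id);
        Assert.Equal("Jane Doe", user.FullName);
    }

    [Fact]
    public void Ctor_should_throw_on_empty_name()
    {
        // assumes the constructor validates its arguments
        Assert.Throws<ArgumentException>(
            () => new User(Guid.NewGuid(), string.Empty, "jane@example.com"));
    }
}
```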

Next is the “Service” layer or, more commonly, “Integration”. This is where things start to get interesting. Integration tests check that two or more pieces fit correctly and that the cogs are oiled and greased. So: stuff like your persistence layer, access to the database, the ability to create or update data and so on. They might take more time than unit tests and will probably be fewer in number, but their value is extremely high.
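
For example, a persistence test might look like this (a sketch using EF Core’s in-memory provider; AppDbContext and UserEntity are hypothetical names):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class UserPersistenceTests
{
    [Fact]
    public async Task Can_persist_and_reload_a_user()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options;

        var id = Guid.NewGuid();

        // write with one context instance...
        using (var context = new AppDbContext(options))
        {
            context.Users.Add(new UserEntity { Id = id, FullName = "Jane Doe" });
            await context.SaveChangesAsync();
        }

        // ...and read with a fresh one, so we really hit the store
        using (var context = new AppDbContext(options))
        {
            var loaded = await context.Users.SingleAsync(u => u.Id == id);
            Assert.Equal("Jane Doe", loaded.FullName);
        }
    }
}
```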

Then we have the “UI” or “end-to-end” tests. Here we’re making sure that the whole system is working, inspecting it from the outside, with little to no knowledge of the inner mechanisms. You’ll be checking that your API routes return the right HTTP statuses, set the proper headers and consume the right content types.

In the end it’s all a matter of perception. The point of view is moving from the inside of the system, the developer perspective, to the outside: the consumer perspective.

There are of course other types of tests: acceptance, smoke, functional and so on. But if you begin adding coverage following this pyramid you’ll save an awful lot of headaches and keep your system maintainable and expandable.

How to reset the entities’ state on an Entity Framework DbContext

I had two bad days. Two days wasted chasing a stupid bug. I had a test class with 4 test cases on my infrastructure layer. Executed one by one, they all passed. When the whole suite was executed, only the first one passed.

In the end I found out that it was due to the Entity Framework Core DbContext tracking the state of the entities. More or less.
In a nutshell, every time a call to SaveChanges() fails, the subsequent call on the same instance of the DbContext will retry the operations.

So let’s say your code makes an INSERT with bad data and fails. Maybe you catch that and then do another write operation reusing the DbContext instance.

Well that will fail too. Miserably.

Maybe it’s more correct to say that the second call will look for changes on the entities and will try to commit them. Which is basically the standard and expected behaviour.

Since DbContext instances are usually created per request, this might not be a big issue.

However, in case you are writing tests using XUnit Fixtures, the instance is created once per test class and reused for all the tests in that class. So in this case it might affect test results.

A potential solution is to reset the state of the changed entities, something like this:
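
```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class DbContextExtensions
{
    // A sketch using EF Core's ChangeTracker: detach every tracked entity
    // so the context forgets the pending (failed) changes instead of
    // retrying them on the next SaveChanges().
    public static void ResetState(this DbContext context)
    {
        foreach (var entry in context.ChangeTracker.Entries().ToList())
        {
            entry.State = EntityState.Detached;
        }
    }
}
```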

Another option is to avoid reusing the DbContext entirely and generate a new one from scratch every time.
In my code the DbContext was registered on the DI container and injected as a dependency. I changed the consumer classes to use a Factory instead, and that fixed the tests too 🙂
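
The factory itself can be trivial, something like this sketch (the names are hypothetical):

```csharp
using Microsoft.EntityFrameworkCore;

// Consumers ask the factory for a fresh context per operation
// instead of sharing a single injected instance.
public interface IDbContextFactory
{
    AppDbContext Create();
}

public class DbContextFactory : IDbContextFactory
{
    private readonly DbContextOptions<AppDbContext> _options;

    public DbContextFactory(DbContextOptions<AppDbContext> options)
    {
        _options = options;
    }

    public AppDbContext Create() => new AppDbContext(_options);
}
```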

Unit testing MongoDB in C# part 4: the tests, finally

More than a year. Wow, that’s a lot, even for me! In the last episode of this series we discussed how to create the Factories for our Repositories. I guess now it’s time to put all those interfaces to use and finally see how to unit test our MongoDB repositories 🙂

Remember: we are not testing the driver here. The MongoDB team is responsible for that. Not us. 

What we have to do instead is make sure all our classes follow the SOLID principles and are testable. This way we can create a fake implementation of the low-level data access layer and inject it into the classes we have to test. Stop.

Let’s have a look at the code:
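
What follows is a sketch: the handler, command and repository method names (CreateUserHandler, CreateUserCommand, InsertOneAsync) are assumptions based on the description below.

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

public class CreateUserHandlerTests
{
    [Fact]
    public async Task Handle_should_insert_the_new_user()
    {
        // fake repository: we only care that the insert happens
        var mockRepo = new Mock<IUserRepository>();
        mockRepo.Setup(r => r.InsertOneAsync(It.IsAny<User>()))
                .Returns(Task.CompletedTask);

        // fake context returning the fake repository from .Users
        var mockDbContext = new Mock<IDbContext>();
        mockDbContext.SetupGet(c => c.Users).Returns(mockRepo.Object);

        var sut = new CreateUserHandler(mockDbContext.Object);

        await sut.Handle(new CreateUserCommand("Jane Doe", "jane@example.com"));

        mockRepo.Verify(r => r.InsertOneAsync(It.IsAny<User>()), Times.Once());
    }
}
```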

In our little example here I am testing a CQRS Command Handler, the one responsible for creating a user. Our handler has an IDbContext as a dependency which, being an interface, allows us to use the Moq NuGet package to create a fake context implementation.

Also, we have to instruct the mockDbContext instance to return a mock User Repository every time we access the .Users property.

At this point all we have to do is to create the sut, execute the method we want to test and Verify() our expectations. 

Let’s make a more interesting example now:
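
Again a sketch, with FindOneAsync and UpdateOneAsync as assumed repository methods:

```csharp
using System;
using System.Linq.Expressions;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class UpdateUserHandlerTests
{
    [Fact]
    public async Task Handle_should_update_the_existing_user()
    {
        var existingUser = new User(Guid.NewGuid(), "Jane Doe", "jane@example.com");

        var mockRepo = new Mock<IUserRepository>();
        // whatever filter the handler passes in, return our user
        mockRepo.Setup(r => r.FindOneAsync(It.IsAny<Expression<Func<User, bool>>>()))
                .ReturnsAsync(existingUser);
        mockRepo.Setup(r => r.UpdateOneAsync(It.IsAny<User>()))
                .Returns(Task.CompletedTask);

        var mockDbContext = new Mock<IDbContext>();
        mockDbContext.SetupGet(c => c.Users).Returns(mockRepo.Object);

        var sut = new UpdateUserHandler(mockDbContext.Object);

        await sut.Handle(new UpdateUserCommand(existingUser.Id, "Jane Smith"));

        mockRepo.Verify(r => r.UpdateOneAsync(It.IsAny<User>()), Times.Once());
    }
}
```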

Now that we have created the user, we may also want to update some of their details. The idea here is to instruct the mockRepo instance to return a specific user every time the FindOneAsync method is executed.

Again, now we just need to verify the expectations and we’re done!

Note that in this case we are making an assumption about the inner mechanics of the Handle() method of the UpdateUserHandler class. Personally I tend to stick with Black Box Testing, but sometimes (e.g. now) you might be forced to use White Box Testing instead. If you don’t know what I am talking about, there’s a nice article here you may want to read.

CQRS: on Commands and Validation

Let’s have a quick discussion about CQRS. There’s a lot to say to be honest, so let’s try to focus on just one thing today: validating your Commands (who knows, I could start a series after this, we’ll see).

The idea is simple: how can I make sure that the data I am passing to my Command Handler is valid?

Also, what is the definition of “valid”?

There are several aspects to take into consideration, several “levels” of validation. I could just make sure the Command object is not null and/or the data it contains is not empty. Or I could run the validation against some kind of context and check the application’s Business Rules.

As you can imagine, having different levels means that we could have different implementations scattered across various places/layers of our architecture. For example, I could have the API Controller (or whatever outermost layer you have) check for nulls and perform some Business Context validation later, either before or directly in the Command Handler.

In my last project however, I decided to keep things simple and keep my validation in just one place.

Initially the right spot was the Command Handler itself, but of course this would have violated the SRP.

A quick and immediate solution was to have a separate instance of a IValidator<TCommand> injected in the handler. Easy.

Then I realised that my handlers are more “close to the metal” than expected: in most cases they access the DAL directly (passing through some kind of IDbContext) and I didn’t want to rewrite the call to the IValidator in case I had to switch the persistence layer.

Luckily enough, there’s a nice pattern that came to the rescue: the Decorator! As explained very clearly in the SimpleInjector docs, you can create a ValidationCommandHandlerDecorator class, inject an IValidator<TCommand> and let your IoC container do the rest.
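
Here’s a sketch of the idea (ICommandHandler<TCommand> and IValidator<TCommand> are the shapes I’m assuming):

```csharp
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public interface IValidator<TCommand>
{
    // expected to throw when the command is invalid
    void Validate(TCommand command);
}

public class ValidationCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly IValidator<TCommand> _validator;
    private readonly ICommandHandler<TCommand> _decoratee;

    public ValidationCommandHandlerDecorator(
        IValidator<TCommand> validator,
        ICommandHandler<TCommand> decoratee)
    {
        _validator = validator;
        _decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        _validator.Validate(command); // stops here if the command is invalid
        _decoratee.Handle(command);
    }
}

// registration with SimpleInjector is a one-liner:
// container.RegisterDecorator(typeof(ICommandHandler<>), typeof(ValidationCommandHandlerDecorator<>));
```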

Maaaaagic.

Bonus tip: in some cases you may want to skip the validation completely. Maybe you have a very good reason, or maybe you’re just lazy. Whatever.

In this case, all you have to do is write some kind of NullValidator<TCommand> class and instruct your IoC container to use it when a specific validator is missing for that Command.
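
Something like this (again a sketch, reusing the IValidator<TCommand> shape from above; the conditional registration is the documented SimpleInjector fallback pattern):

```csharp
public class NullValidator<TCommand> : IValidator<TCommand>
{
    public void Validate(TCommand command)
    {
        // intentionally empty: no validation happens here
    }
}

// registered as the fallback when no specific validator exists:
// container.RegisterConditional(typeof(IValidator<>), typeof(NullValidator<>), c => !c.Handled);
```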
