
Testing the boundaries of your Web APIs

How do you make sure that an entire piece of software you wrote actually works? And how would you do that if your system doesn’t have a UI? Well, simply by testing the boundaries, of course!

From time to time I like to extract pieces of code from what I’m working on and create small repos just to showcase a single functionality or idea. 

This time I’m putting some effort into TDD on APIs, and after a few refactorings I came up with a nice structure that you can use as a starting skeleton for a simple system. You can find all the sources here on GitHub.

The demo is very simple, just a single controller that stores and provides user details. Nothing fancy. The user model class exposes only three properties: id, full name and email.

A few points worth noting though:

  • the class is immutable. I wrote a bit about the concept here.
  • I’m adopting the Special Case (or Null Object) pattern a lot these days. Hence the NullUser static property, sketched below.
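Just to give an idea, this is roughly the shape of the model. It’s a minimal sketch; the actual class in the repo may differ slightly:

// illustrative sketch of the immutable user model with its Special Case instance
public class User
{
    public User(Guid id, string fullName, string email)
    {
        Id = id;
        FullName = fullName;
        Email = email;
    }

    // get-only properties keep the class immutable
    public Guid Id { get; }
    public string FullName { get; }
    public string Email { get; }

    // Special Case / Null Object: a safe instance to use instead of null
    public static User NullUser { get; } = new User(Guid.Empty, string.Empty, string.Empty);
}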

Persistence is done in-memory, as a real data store is obviously outside the scope of the demo. Moreover, as you can see, the Tests project contains only the end-to-end tests; there are no unit/integration tests covering the persistence layer.

The testing infrastructure is where things get interesting, even though it’s actually fairly straightforward. An XUnit Fixture is firing up a TestServer and bootstrapping the application using (possibly) the same settings as the real system.

A shared WebHostBuilderFactory class is responsible for building the required IWebHostBuilder instance.
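Here’s a minimal sketch of what that fixture might look like. The WebHostBuilderFactory name comes from the repo; everything else (the fixture name, the Create() method) is illustrative:

using System;
using System.Net.Http;
using Microsoft.AspNetCore.TestHost;

public class WebApiFixture : IDisposable
{
    public WebApiFixture()
    {
        // build the host the same way the real system does, then wrap it in a TestServer
        var builder = WebHostBuilderFactory.Create(); // factory method name is an assumption
        Server = new TestServer(builder);
        Client = Server.CreateClient();
    }

    public TestServer Server { get; }
    public HttpClient Client { get; }

    public void Dispose()
    {
        Client.Dispose();
        Server.Dispose();
    }
}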

That’s it!

Ok, just to be honest, I got the idea from Mark Seemann: he has a very interesting course on Pluralsight named “Outside-in TDD“. If you have the chance, I strongly suggest you watch it.

So, now that we have our nice infrastructure ready, all we have to do is write our tests! This being a Web API, these might be considered either “functional” or “end to end”.

Honestly, I think it’s simply a naming thing; either way, these should probably be the first tests you write.

Why? Because (and Mark explains it really well in his course) you’re ensuring from the consumer’s perspective that your APIs do what they’re expected to do.

You’ll be “testing the boundaries”.

But most importantly, you’re validating your acceptance criteria and making sure your system works. 

Everything else is just an implementation detail.

So what are we testing here? The routes, of course! Our API manages users and, being RESTful, we assert that all the HTTP verbs do what we expect them to do.

Most of these tests should derive directly from the acceptance criteria written by your Product Owner. In case you don’t have one and instead rely on some (even vague) specifications, a good starting point is simply testing inputs and outputs.
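For example, a route test might look like this. It’s a sketch: the route and the expected status code are assumptions, not necessarily what the demo repo asserts:

using System.Net;
using System.Threading.Tasks;
using Xunit;

public class UsersEndpointTests : IClassFixture<WebApiFixture>
{
    private readonly WebApiFixture _fixture;

    public UsersEndpointTests(WebApiFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task GET_unknown_user_returns_404()
    {
        // consumer's perspective: we only care about the observable behaviour
        var response = await _fixture.Client.GetAsync("/api/users/some-unknown-id");
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}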

Happy testing!

List of useful Docker commands

In the last few weeks I’ve been doing several experiments with Docker, just trying to grasp the main idea and maybe even come up with something useful.

As often happens with tools these days, there’s an entire world of command-line switches and subcommands to learn.

OR you can just be lazy like me and keep a list of the most common ones 🙂

I’ll be updating this list from time to time, just to keep track of what I’ve used so far and to have a quick reference for my sloppy memory.

docker login [HOST] # in case you need auth to pull or push images
docker build -t [TAG] . # note the trailing dot: the build context (here, the current folder) is required
docker images # shows a list of the local images along with the size
docker push [TAG]
docker rm $(docker ps -a -q) # this will remove ALL your containers. Be careful!
docker system prune -a # this will remove ALL your images and containers. Very useful when you run low on space, but be careful!

The next commands display the Docker daemon logs, which are extremely useful when you don’t know what happened (basically me, most of the time) and you’re looking for an answer, even an obscure one.

It depends on the OS you’re running on:

  • Ubuntu (old, using upstart) – /var/log/upstart/docker.log
  • Ubuntu (new, using systemd) – sudo journalctl -fu docker.service
  • Boot2Docker – /var/log/docker.log
  • Debian GNU/Linux – /var/log/daemon.log
  • CentOS – /var/log/daemon.log | grep docker
  • CoreOS – journalctl -u docker.service
  • Fedora – journalctl -u docker.service
  • Red Hat Enterprise Linux Server – /var/log/messages | grep docker
  • OpenSuSE – journalctl -u docker.service
  • OSX – ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log
  • Windows – Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time, as mentioned here.

(thanks, Scott)

How I used Travis CI to deploy Barfer on Azure

Ok, this time I want to talk a little bit about how I used Travis CI to deploy Barfer on Azure. Last time I mentioned how much it helped having a Continuous Delivery system up and running, so I thought it would be worth expanding on the concept a little.

Some of you may say: “Azure already has a continuous delivery facility, why use another service?”. Well, there’s a bunch of reasons:

  • when I started writing Barfer I had no idea I was going to deploy on Azure
  • in case I move to another hosting service, I’ll just have to tweak the deployment configuration a little
  • CD on Azure is clunky and to be honest I still don’t know if I like Kudu.

Let me say a couple of words about the last point. Initially I wanted to deploy all the services on a Linux web-app. Keep in mind that I wrote Barfer using Visual Studio Code on a MacBook Pro. So I linked the GitHub repo to the Azure Web App and watched my commits being transformed into Releases.

Well, turns out that Kudu on Linux is not exactly friendly. Also, after the first couple of commits it was not able to delete some hidden files in the node_modules folder. Don’t ask me why.

I spent almost a week banging my head on that; in the end I did two things:

  1. moved to a Windows Web-App
  2. dropped Azure CD and moved to Travis CI

Consider also that I had to deploy 2 APIs, 1 web client and 1 background service (the RabbitMQ subscriber) and, to be honest, I have absolutely no desire to learn how to write a complex deployment script. I want tools that help me do the job, not block my way.

The Travis CI interface is very straightforward: once logged in, link your account to your GitHub one and pick the repository you want to deploy. Then all you have to do is create a .yml script with all the steps you want to perform and you’re done.

Mine is quite simple: since I am currently deploying all the services together (even though each one has its own independent destination), the configuration I wrote makes use of 5 Build Stages; there’s a sketch of it after the list below. The first one runs all the unit and integration tests, then for every project there’s a custom script that:

  1. downloads the node packages (or fetches them from the cache)
  2. builds the sources
  3. deletes the unnecessary files
  4. pushes all the changes to Azure
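To give you an idea, here’s a trimmed-down sketch of what such a .travis.yml could look like. The stage names and the script paths are made up for illustration; this is not the actual Barfer configuration:

# illustrative sketch, not the real Barfer .travis.yml
language: node_js
node_js:
  - "8"

cache:
  directories:
    - node_modules            # reuse downloaded packages between builds

jobs:
  include:
    - stage: test
      script: npm test                  # unit + integration tests
    - stage: deploy api
      script: ./scripts/deploy-api.sh   # build, clean up, push to Azure
    - stage: deploy web client
      script: ./scripts/deploy-web.sh   # same steps, different destination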

The whole process takes approximately 10 minutes to run, due to the fact that every commit deploys all the projects, regardless of where the actual changes are. I have to dig deeper into some features like tags or special git branches, but I will probably just split the repo, one per project. I just have to find the right way to manage the shared code.

#hashtags just landed on #Barfer!

Yeah I know, I blogged yesterday. I probably have too much spare time these days (my wife is abroad for a while) and Barfer has become some kind of obsession.

You don’t know what Barfer is? Well go immediately check my last article. Don’t worry, I’ll wait here.

So the new shiny things are #hashtags! Yeah, exactly: now you can barf adding your favourite #hashes and you can even use them to filter #barfs!

The implementation is, for now, very simple: just a string array containing the tags, as you can see from the Barf interface defined here.

The command handler responsible for storing the Barf uses a regex to parse the text and extract all the tags (apart from checking for XSS, but that’s another story).

On the UI, before being rendered, every Barf goes through the same regex, but this time each #tag is replaced with a link to the archive.
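Something along these lines. It’s a quick sketch: the actual regex and the archive URL in the repo may differ:

// sketch of the hashtag handling, not the actual Barfer code
const hashtagRegex = /#(\w+)/g;

// write side: extract the tags before storing the Barf
function extractHashtags(text: string): string[] {
    const matches = text.match(hashtagRegex) || [];
    // strip the leading '#' and de-duplicate
    return [...new Set(matches.map(m => m.slice(1).toLowerCase()))];
}

// read side: replace each #tag with a link to the archive
function renderHashtags(text: string): string {
    return text.replace(hashtagRegex, (_: string, tag: string) =>
        `<a href="/archive?hashtag=${tag}">#${tag}</a>`);
}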

Quick&dirty.

Next in line would be adding some analytics to them, but that would require a much bigger community 😀

I also went through some small refactoring and cleaning of the frontend code, although I will probably move to an SPA sooner or later. Thing is, I’m still not sure whether to use React, Angular or Vue, so in the meantime I’m focusing on the backend.

There are so many features I would like to add that, to be honest, I prefer not to focus on the frontend for now. Maybe I’ll start looking for somebody to help me with that.

One thing I’m quite happy with for now, but plan to rework, is CI/CD. Well, for now I’m working alone on this, so I probably can’t really talk about integration. But whatever.
As I wrote already, I’m using Travis CI and I’m very happy with the platform. Even though I’m still on the free tier, the features are awesome and the flexibility is huge. I’ll probably write a blog post on this in the next few days.

In the meanwhile, #happy #barfing! 

I’m becoming a Barfer!

More than a month! My last post on this blog was more than one month ago. I should write more often. No wait, let me rephrase that: I should write on this blog more often.

Why? How did I spend my last month? Barfing, here’s how!

Ok, let’s add more details. A while ago I decided it was a good idea to start moving a little away from the .NET world and explore what’s around. And NodeJs arrived, of course with TypeScript: I am 100% sure it’s basically impossible to write a semi-reliable system without some kind of type checking (along with 10000 other things).

Then I said: “I don’t want to just read a manual, what can I write with it?”. For some crazy reason I opted for a Twitter clone. Yeah, I was really bored.

Early in the analysis phase, RabbitMQ and MongoDb joined the party. I was not exactly familiar with RabbitMQ, so I thought it was a good opportunity to learn something new.

In order to speed up development and get certain features (e.g. authentication), I used a bunch of third-party services.

The system uses a microservices architecture with a front-end that acts as an API gateway. For each service I’ve taken the CQRS path along with Publish/Subscribe. An Azure WebJob is scheduled to run continuously, listening to the various events/messages on the queues. I’ll blog more about the architecture, but that’s more or less it.
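Just to give a flavour of the subscriber side, here’s a minimal sketch using amqplib. The queue name and the message shape are made up for illustration; they’re not Barfer’s actual ones:

// minimal RabbitMQ subscriber sketch using amqplib, not the actual Barfer code
import * as amqp from 'amqplib';

async function subscribe(): Promise<void> {
    const connection = await amqp.connect(process.env.RABBITMQ_URL || 'amqp://localhost');
    const channel = await connection.createChannel();

    const queue = 'barf-created'; // hypothetical queue name
    await channel.assertQueue(queue, { durable: true });

    await channel.consume(queue, msg => {
        if (!msg) return;
        // deserialize the event and hand it over to the proper handler
        const event = JSON.parse(msg.content.toString());
        console.log('received event', event);
        channel.ack(msg); // ack so the message is removed from the queue
    });
}

subscribe().catch(console.error);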

What am I expecting from this? Well, it’s simple: nothing. I mean, I’m not expecting people to use it (even though it would be very nice), I am just exploring new possibilities. Nothing more.

Oh yeah, before I forget: https://barfer.azurewebsites.net/ . Enjoy!

© 2018 Davide Guida
