
How I used Travis CI to deploy Barfer on Azure

Ok, this time I want to talk a little bit about how I used Travis CI to deploy Barfer on Azure. Last time I mentioned how much it helped having a Continuous Delivery system up and running, so I thought it would be worth expanding on the concept a little.

Some of you may say: “Azure already has a continuous delivery facility, why use another service?”. Well, there’s a bunch of reasons why:

  • when I started writing Barfer I had no idea I was going to deploy it on Azure
  • in case I move to another hosting service, I’ll just have to tweak the deployment configuration a little
  • CD on Azure is clunky and, to be honest, I still don’t know if I like Kudu.

Let me spend a couple of words on the last point. Initially I wanted to deploy all the services on a Linux Web App. Keep in mind that I wrote Barfer using Visual Studio Code on a MacBook Pro. So I linked the GitHub repo to the Azure Web App and watched my commits being transformed into Releases.

Well, it turns out that Kudu on Linux is not exactly friendly. Also, after the first couple of commits it was not able to delete some hidden files in the node_modules folder. Don’t ask me why.

I spent almost a week banging my head against that; in the end I did two things:

  1. moved to a Windows Web App
  2. dropped Azure CD and moved to Travis CI

Consider also that I had to deploy 2 APIs, 1 web client and 1 background service (the RabbitMQ subscriber) and, to be honest, I have absolutely no desire to learn how to write a complex deployment script. I want tools that help me do the job, not ones that block my way.

The Travis CI interface is very straightforward: once logged in, link your account to your GitHub one and pick the repository you want to deploy. Then all you have to do is create a .travis.yml file with all the steps you want to perform and you’re done.

Mine is quite simple: since I am currently deploying all the services together (even though each one has its own independent destination), the configuration I wrote makes use of 5 Build Stages. The first one runs all the unit and integration tests, then for every project there’s a custom script that (see the example after the list):

  1. downloads the node packages (or fetches them from the cache)
  2. builds the sources
  3. deletes the unnecessary files
  4. pushes all the changes to Azure
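A minimal sketch of what such a staged configuration can look like (the stage names, the deploy script and the Node version here are illustrative, not Barfer’s actual setup):

language: node_js
node_js:
  - "8"

# node_modules is cached so packages can be fetched from the cache
cache:
  directories:
    - node_modules

# stages run sequentially; the test stage gates all the deploys
jobs:
  include:
    - stage: test
      script: npm test
    - stage: deploy api 1
      script: ./scripts/deploy.sh barfer-api-1
    - stage: deploy api 2
      script: ./scripts/deploy.sh barfer-api-2
    - stage: deploy web client
      script: ./scripts/deploy.sh barfer-web
    - stage: deploy background service
      script: ./scripts/deploy.sh barfer-worker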

The whole process takes approximately 10 minutes to run, due to the fact that every commit deploys all the projects, regardless of where the actual changes are. I have to dig deeper into some features like tags or special git branches, but I will probably just split the repo, one per project. I just have to find the right way to manage the shared code.

#hashtags just landed on #Barfer!

Yeah I know, I blogged yesterday. I probably have too much spare time these days (my wife is abroad for a while) and Barfer has become some kind of obsession.

You don’t know what Barfer is? Well, go check my last article immediately. Don’t worry, I’ll wait here.

So the new shiny things are #hashtags! Yeah, exactly: now you can barf adding your favourite #hashes and you can even use them to filter #barfs!

The implementation is very simple for now: just a string array containing the tags, hanging off the Barf interface.
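Roughly, the shape is this (simplified sketch; the fields around the tags array are illustrative, only the array matters here):

export interface Barf {
    id: string;
    authorId: string;
    text: string;
    creationDate: number;
    // the hashtags extracted from the barf text
    tags: string[];
}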

The command handler responsible for storing the Barf uses a regex to parse the text and extract all the tags (apart from checking for XSS, but that’s another story).
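Something along these lines (a minimal sketch; the exact pattern may differ):

function extractTags(text: string): string[] {
    // a fresh regex per call keeps the /g lastIndex state local
    const pattern = /#(\w+)/g;
    const tags: string[] = [];
    let match: RegExpExecArray | null;
    while ((match = pattern.exec(text)) !== null) {
        tags.push(match[1].toLowerCase());
    }
    return tags;
}

// extractTags("I love #happy #barfing") -> ["happy", "barfing"]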

On the UI, every Barf then goes through the same regex before being rendered, but this time each #tag is replaced with a link to the archive.
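For instance (the archive route used here is an assumption, not Barfer’s actual URL scheme):

function linkifyTags(text: string): string {
    // swap every #tag for an anchor pointing at the tag archive
    return text.replace(/#(\w+)/g, (_match: string, tag: string) =>
        `<a href="/barfs?tag=${encodeURIComponent(tag)}">#${tag}</a>`);
}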

Quick & dirty.

The next thing in line would be adding some analytics to them, but that would require a definitely bigger community 😀

I also went through some small refactoring and cleaning of the frontend code, although I will probably move to an SPA sooner or later. Thing is, I’m still not sure whether to use React, Angular or Vue, so in the meantime I’m focusing on the backend.

There are so many features I would like to add that, to be honest, I prefer not to focus on the frontend for now. Maybe I’ll start looking for somebody to help me with that.

One thing I’m quite happy with for now, but plan to rework, is CI/CD. Well, for now I’m working alone on this, so I probably can’t really talk about integration. But whatever.
As I wrote already, I’m using Travis CI and I’m very happy with the platform. Even though I’m still on the free tier, the features are awesome and the flexibility is huge. I’ll probably write a blog post on this in the next few days.

In the meanwhile, #happy #barfing! 

I’m becoming a Barfer!

More than a month! My last post on this blog was more than one month ago. I should write more often. No wait, let me rephrase that: I should write on this blog more often.

Why? How did I spend my last month? Barfing, that’s how!

Ok, let’s add more details. A while ago I decided it was a good idea to start moving away from the .NET world a little and explore what’s around. Enter Node.js, of course with TypeScript: I am 100% sure it’s basically impossible to write a semi-reliable system without some kind of type checking (along with 10000 other things).

Then I said: “I don’t want to just read a manual, what can I write with it?”. For some crazy reason I opted for a Twitter clone. Yeah, I was really bored.

Early in the analysis phase, RabbitMQ and MongoDB joined the party. I was not exactly familiar with RabbitMQ, so I thought it was a good opportunity to learn something new.

In order to speed up development and to obtain certain features (e.g. authentication), I have used a bunch of third-party services.

The system uses a microservices architecture with a front-end that acts as an API gateway. For each service I’ve taken the CQRS path along with Publish/Subscribe. An Azure WebJob is scheduled to run continuously and listens for the various events/messages on the queues. I’ll blog more about the architecture, but that’s it, more or less.
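Just to give an idea of the subscriber side, here’s a minimal sketch of a continuously-running queue listener; the queue name, the event shape and the use of amqplib are assumptions for illustration, not the actual Barfer code:

import * as amqp from "amqplib";

async function listen(connectionString: string): Promise<void> {
    const connection = await amqp.connect(connectionString);
    const channel = await connection.createChannel();

    // hypothetical event queue published to by the command handlers
    const queue = "barf-created";
    await channel.assertQueue(queue, { durable: true });

    await channel.consume(queue, msg => {
        if (!msg) return;
        const event = JSON.parse(msg.content.toString());
        // ...update the read model / denormalized views here...
        channel.ack(msg);
    });
}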

What am I expecting from this? Well, it’s simple: nothing. I mean, I’m not expecting people to use it (even though it would be very nice), I am just exploring new possibilities. Nothing more.

Oh yeah, before I forget: https://barfer.azurewebsites.net/ . Enjoy!

How to configure StructureMap to inject a typed logger

Logging is an essential part of every application. It might be as dead simple as Console.WriteLine() or as complex as a third-party library, but every piece of software needs a way to communicate its status.

In my last project I decided to use the wonderful NLog. One of the good things this library offers is the possibility to add some contextual information to the messages, including (by default) the name of the calling class. Something like this:

2017-10-06 17:01:06.0417|INFO|MyAwesomeProgram.MyAwesomeClass|This is my message

As you can see, the message carries the level, the name of the caller and the text.

This can be easily accomplished with something along these lines (a minimal sketch; what matters is the static-ctor initialization):
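using NLog;

public class MyAwesomeClass
{
    private static readonly Logger Logger;

    // static ctor: GetCurrentClassLogger() resolves the declaring
    // type's name, so messages show MyAwesomeProgram.MyAwesomeClass
    static MyAwesomeClass()
    {
        Logger = LogManager.GetCurrentClassLogger();
    }

    public void DoSomethingAwesome()
    {
        Logger.Info("This is my message");
    }
}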

Using a static ctor to init the logger helps to get a better stack trace in case GetCurrentClassLogger() throws an exception for some reason.

However, in case you’re using a DI container (as you should), it may become complicated to inject a valid logger instance into each class.

Getting back to my project: for this I am using StructureMap as the DI container. Usually I tend to stick with Simple Injector, but I just joined the team and it’s very important to follow the existing conventions.

The idea here is to use a wrapper class around the logger instance that takes the name of the calling class as a ctor parameter. I’m using a wrapper just to avoid coupling with a third-party library: usually you wouldn’t switch from one logging library to another unless you have a very good reason to, but this principle applies to basically anything.
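A sketch of the wrapper (the interface surface here is deliberately minimal and illustrative; the real one would expose whatever log levels you need):

using NLog;

public interface ILoggerWrapper
{
    void Info(string message);
    void Error(string message);
}

public class LoggerWrapper : ILoggerWrapper
{
    private readonly ILogger _logger;

    public LoggerWrapper(string callerTypeName)
    {
        // the caller's type name becomes the NLog logger name
        _logger = LogManager.GetLogger(callerTypeName);
    }

    public void Info(string message) => _logger.Info(message);
    public void Error(string message) => _logger.Error(message);
}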

The next step is to configure StructureMap to return a new instance of the LoggerWrapper each time, with the proper calling class name. And that’s actually the easy part!
Here’s roughly what the setup looks like (a sketch following StructureMap 4.x conventions; the exact registration may differ):
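using StructureMap;

// composition root
var container = new Container(cfg =>
{
    cfg.For<ILoggerWrapper>().Use(
        "typed logger wrapper",
        context => new LoggerWrapper(
            context.ParentType != null ? context.ParentType.FullName : "root"));
});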

As you can see, the magic happens during the StructureMap setup: context.ParentType represents the type of the class that will receive the LoggerWrapper instance.

 

Now let’s take a step back. Logging falls into the category of “cross-cutting concerns”, which basically means that logging can be considered part of the “infrastructure” and not of the business logic of the application. Caching is another good example.

That said, unless you really need to write specific log messages in specific points of your application, another option could be using the Decorator pattern. The idea is pretty straightforward (see the sketch after the list):

  1. create a FooLoggingDecorator class
  2. inject the logger
  3. wrap all the class methods
  4. add logging where needed
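Something like this, where IFoo and its single method are invented purely for illustration:

public interface IFoo
{
    void Bar();
}

public class FooLoggingDecorator : IFoo
{
    private readonly IFoo _inner;
    private readonly ILoggerWrapper _logger;

    public FooLoggingDecorator(IFoo inner, ILoggerWrapper logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void Bar()
    {
        // logging stays in the decorator; the inner class keeps
        // only its business logic
        _logger.Info("Bar() called");
        _inner.Bar();
    }
}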

Now it’s up to you to decide which approach to take; they both have pros and cons. For example, injecting the logger can in some cases imply an SRP violation. On the other hand, using a decorator requires wrapping all the inner class methods. Keep that in mind when you add stuff to your interfaces 🙂

[EDIT]
As Jeremy wrote in his comment, the StructureMap documentation suggests using a logger convention instead, which would be (quoting):

“significantly more efficient at runtime because the decision about which Logger to use is only done once upfront” 

Don’t worry, Heimdall will watch over all your microservices.

TL;DR: I wrote a service registry tool, named Heimdall, go and fork it!

Long version: almost every time I am working on a piece of code I get stuck on something, and after a while I get ideas for new projects. This may lead to a huge number of useless git repos, each one with partially functional software, but it also pushes me to work on something new every day.

This time I was working on a super-secret project (that I will of course share very soon) based on a nice microservices architecture, and I soon realized I needed some kind of Service Registry. The project was quite small, so I was not really interested in a complex tool like a router with load-balancing functions or similar, so I decided to code the thing myself.

For those of you who don’t know what a Service Registry is and what it does, allow me to give you some context.
Imagine you’re a client that needs to consume some APIs. You could of course use a configuration file for storing the endpoints, but if you’re cloud-based, URLs can change often.

Also, what if you want some nice features like multiple instances, autoscaling and load balancing?

The answer is simple: use a registry! 

Every service registers itself during initialization, allowing clients to query the registry and discover the endpoint (possibly the best one).
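In code, the flow looks more or less like this; the routes and payload here are hypothetical, not Heimdall’s actual API (the Swagger pages document the real contract):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class RegistryClient
{
    private readonly HttpClient _http = new HttpClient();

    // a service announces one of its endpoints during initialization
    public Task RegisterAsync(string registryUrl, string service, string endpoint)
        => _http.PostAsync(
            $"{registryUrl}/services/{service}/endpoints",
            new StringContent($"{{\"url\":\"{endpoint}\"}}", Encoding.UTF8, "application/json"));

    // a client asks the registry where the service lives
    public Task<string> ResolveAsync(string registryUrl, string service)
        => _http.GetStringAsync($"{registryUrl}/services/{service}");
}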

I found this concept pretty useful, so I decided to create a poor man’s version myself using ASP.NET Core, MongoDB and React, and I named it Heimdall, after the guardian god of Norse mythology.
The list of features is very short for now (you can just register a service, add/remove endpoints and query), but I have a full roadmap ready 🙂

Oh, and I also added help pages using Swagger!
