
A static website is better.

Yes, you read that right: a static website is better.

Now take a deep breath and follow me. To read this post you’ve politely asked your browser to fetch data from a URL. This triggered a chain of servers up to the one hosting this blog, which identified the right PHP script to execute and returned a somewhat well-formatted HTML string as output. At this point your nice browser can start rendering the page, while at the same time performing some other requests to download images, styles, scripts and so on.

Phew.

That’s a lot of steps.

Some of them cannot be avoided (you have to download something sooner or later), but we can certainly try to make life easier for our servers by reducing the workload. Maybe by optimizing the database calls. Or maybe by simply serving static HTML files and nothing else.

At this point some of you might argue that we’re not in the early ’90s. I agree, even though I really love grunge music. 

But… think of those websites that have little to no dynamic content. Maybe not even a contact form. Something very similar to this blog. What’s the ultimate way to speed up a website like this?

I can see some of you mouthing “use a cache”. Yes, I can use a cache. Of course I do. But caching is difficult.

Also, what if I want to completely avoid exposing my admin panel URL? I’m a paranoid maniac, absolutely afraid of hackers, and I don’t want to be pwned. And I’m also too lazy for nasty tricks like IP whitelisting or VPNs.

So what’s the solution? I said it already: use plain old HTML files. Make everything static and simpler.

Ok, it won’t be a silver bullet, but bear with me: I’m talking about a very specific niche of websites, with content that doesn’t change per user and with little to no forms. These should be static already, but you know better than me that installing WordPress is incredibly easy and cheap.

So this is why I’ve started working on Statifier! For now it’s a simple Node.js tool that will crawl your website, download pages and assets, and perform some very simple string replacements. Nothing else.
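To give you an idea, here’s a minimal sketch of the crawl-and-replace idea, not Statifier’s actual code: it fetches a page, rewrites absolute URLs into relative ones with a plain string replacement, and saves the result to disk (the site URL and output folder are made up; it assumes Node 18+ for the global fetch).

```typescript
import { promises as fs } from 'fs';

const SITE_URL = 'https://example.com'; // hypothetical site to statify

async function savePage(path: string): Promise<void> {
  const res = await fetch(`${SITE_URL}${path}`);
  let html = await res.text();

  // very naive string replacement: make absolute links relative,
  // so the static copy can be hosted anywhere
  html = html.split(SITE_URL).join('');

  const target = path === '/' ? 'index.html' : `${path.replace(/^\//, '')}.html`;
  await fs.mkdir('./out', { recursive: true });
  await fs.writeFile(`./out/${target}`, html);
}

savePage('/').catch(console.error);
```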

But it works. With some hiccups, but it works. Still doubting? Take a look here, you won’t be disappointed.

Yes, I could have used an existing tool like Grav or, even better, Jekyll, but why give up the amazing WP admin panel?

 

Feature Gating part 3: how can we check the gates?

In this article we’ll explore a few ways to check whether the feature gates are open or not. This is the third episode of our series about Feature Gating; last time we discussed the optimal persistence method for the flags.

The first approach is a static config object injected as a dependency in the class constructor:
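Something along these lines; a minimal sketch, where the FeatureFlags shape and the useNewCheckout flag are illustrative, not the original sample:

```typescript
interface FeatureFlags {
  readonly useNewCheckout: boolean;
}

class CheckoutService {
  // the flags object is injected through the constructor
  constructor(private readonly flags: FeatureFlags) {}

  checkout(): void {
    if (this.flags.useNewCheckout) {
      // new logic
    } else {
      // old logic
    }
  }
}
```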

It’s simple, easy to implement and does the job. The configuration object can be instantiated in the composition root, reading data from whatever persistence layer you use, and you’re done.

Drawbacks? It’s static. That means you cannot vary your flags based on custom conditions (e.g. the logged-in user, time, geolocation).

So what can we do? Something like this:
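A sketch of the idea: the static flags object becomes a service that can evaluate custom conditions at check time (the FeatureService interface and the feature key are illustrative):

```typescript
interface FeatureService {
  isEnabled(feature: string, context?: { userId?: string }): boolean;
}

class CheckoutService {
  constructor(private readonly features: FeatureService) {}

  checkout(userId: string): void {
    // the gate can now depend on the current user, time, geolocation...
    if (this.features.isEnabled('new-checkout', { userId })) {
      // new logic
    } else {
      // old logic
    }
  }
}
```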

Replacing the configuration object with a dedicated service will do the job. This is probably the most common situation, and personally I’m quite a fan. The only drawback is the infamous tech debt: very soon the code will be filled with if/else statements. Should we leave them? Remove them? If so, when? We’ll discuss a simple strategy for that in another article.

Speaking of strategy, it’s a very interesting pattern that we can exploit:
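Here’s a minimal sketch of how that could look (class names and the feature key are again illustrative, and FeatureService is the one sketched above):

```typescript
interface FeatureService {
  isEnabled(feature: string, context?: { userId?: string }): boolean;
}

interface CheckoutStrategy {
  checkout(userId: string): void;
}

class OldCheckoutStrategy implements CheckoutStrategy {
  checkout(userId: string): void { /* old logic */ }
}

class NewCheckoutStrategy implements CheckoutStrategy {
  checkout(userId: string): void { /* new logic */ }
}

// picks the right strategy at runtime using the feature service
class GatedCheckoutStrategy implements CheckoutStrategy {
  constructor(
    private readonly features: FeatureService,
    private readonly oldStrategy: OldCheckoutStrategy,
    private readonly newStrategy: NewCheckoutStrategy,
  ) {}

  checkout(userId: string): void {
    const impl = this.features.isEnabled('new-checkout', { userId })
      ? this.newStrategy
      : this.oldStrategy;
    impl.checkout(userId);
  }
}
```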

The idea is to encapsulate the new and the old logic in two classes and generate a third one which uses the feature service from before to pick the right instance. Finally, all you have to do is inject that class into the consumer and you’re done.

Next time: this is nice, but is it really useful? What do we actually get from Feature Gating?

How I used Travis CI to deploy Barfer on Azure

Ok, this time I want to talk a little bit about how I used Travis CI to deploy Barfer on Azure. Last time I mentioned how much it helped to have a Continuous Delivery system up and running, so I thought it would be worth expanding on the concept a little.

Some of you may say: “Azure already has a continuous delivery facility, why use another service?”. Well, there’s a bunch of reasons why:

  • when I started writing Barfer I had no idea I was going to deploy on Azure
  • in case I move to another hosting service, I’ll just have to tweak the deployment configuration a little
  • CD on Azure is clunky and to be honest I still don’t know if I like Kudu.

Let me spend a couple of words on the last point. Initially I wanted to deploy all the services on a Linux web app. Keep in mind that I wrote Barfer using Visual Studio Code on a MacBook Pro. So I linked the GitHub repo to the Azure Web App and watched my commits being transformed into Releases.

Well, turns out that Kudu on Linux is not exactly friendly. Also, after the first couple of commits it was not able to delete some hidden files in the node_modules folder. Don’t ask me why.

I spent almost a week banging my head on that; in the end I did two things:

  1. moved to a Windows Web-App
  2. dropped Azure CD and moved to Travis CI

Consider also that I had to deploy 2 APIs, 1 web client and 1 background service (the RabbitMQ subscriber), and to be honest I have absolutely no desire to learn how to write a complex deployment script. I want tools that help me do the job, not ones that block my way.

The Travis CI interface is very straightforward: once logged in, link your account to your GitHub one and pick the repository you want to deploy. Then all you have to do is create a .yml script with all the steps you want to perform and you’re done.

Mine is quite simple: since I am currently deploying all the services together (even though each one has its own independent destination), the configuration I wrote makes use of 5 Build Stages. The first one runs all the unit and integration tests; then for every project there’s a custom script (a simplified sketch follows the list below) that

  1. downloads the node packages (or fetches them from the cache)
  2. builds the sources
  3. deletes the unnecessary files
  4. pushes all the changes to Azure
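Here’s a simplified sketch of what such a .travis.yml could look like; the stage names, project folders and deploy script are illustrative, not the actual Barfer configuration:

```yaml
language: node_js
node_js:
  - "8"
cache: npm

jobs:
  include:
    - stage: test
      script: npm test                   # unit + integration tests
    - stage: deploy api
      # hypothetical script: install, build, prune files, push to Azure
      script: ./scripts/deploy.sh api
    - stage: deploy web client
      script: ./scripts/deploy.sh web
```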

The whole process takes approximately 10 minutes to run, because for every commit all the projects get deployed, regardless of where the actual changes are. I have to dig deeper into some features like tags or special git branches, but I will probably just split the repo, one per project. I just have to find the right way to manage the shared code.

#hashtags just landed on #Barfer!

Yeah I know, I blogged yesterday. I probably have too much spare time these days (my wife is abroad for a while) and Barfer has become some kind of obsession.

You don’t know what Barfer is? Well go immediately check my last article. Don’t worry, I’ll wait here.

So the new shiny things are #hashtags! Yeah, exactly: now you can barf adding your favourite #hashes and you can even use them to filter #barfs!

The implementation is very simple for now: just a string array containing the tags, as you can see from the Barf interface defined here.
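The shape is roughly the following (an illustrative sketch, not the actual interface from the repo):

```typescript
interface Barf {
  id: string;
  authorId: string;
  text: string;
  creationDate: Date;
  hashtags: string[]; // the new bit: just the parsed tags, as plain strings
}
```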

The command handler responsible for storing the Barf uses a regex to parse the text and extract all the tags (apart from checking for XSS, but that’s another story).

On the UI, before being rendered, every Barf goes through the same regex, but this time each #tag is replaced with a link to the archive.
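Both sides could look something like this (a sketch; the actual pattern and archive URL in Barfer may differ):

```typescript
const HASHTAG_REGEX = /#(\w+)/g;

// command handler side: extract the tags before storing the Barf
function extractHashtags(text: string): string[] {
  return [...text.matchAll(HASHTAG_REGEX)].map(m => m[1].toLowerCase());
}

// UI side: same regex, but every #tag becomes a link to the archive
function renderHashtags(text: string): string {
  return text.replace(HASHTAG_REGEX, (match, tag: string) =>
    `<a href="/barfs?hashtag=${tag.toLowerCase()}">${match}</a>`);
}
```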

Quick & dirty.

Next thing in line would be adding some analytics to them but that would require a definitely bigger community 😀

I also went through some small refactoring and cleanup of the frontend code, although I will probably move to an SPA sooner or later. Thing is, I’m still not sure whether to use React, Angular or Vue, so in the meantime I’m focusing on the backend.

There are so many features I would like to add that, to be honest, I prefer not to focus on the frontend for now. Maybe I’ll start looking for somebody to help me with that.

One thing I’m quite happy with for now, but plan to rework, is CI/CD. Well, for now I’m working alone on this, so I probably can’t really talk about integration. But whatever.
As I wrote already, I’m using Travis CI and I’m very happy with the platform. Even though I’m still on the free tier, the features are awesome and the flexibility is huge. I’ll probably write a blog post about it in the next few days.

In the meantime, #happy #barfing!

I’m becoming a Barfer!

More than a month! My last post on this blog was more than one month ago. I should write more often. No wait, let me rephrase that: I should write on this blog more often.

Why? How did I spend my last month? Barfing, that’s how!

Ok, let’s add more details. A while ago I decided it was a good idea to start moving a little bit away from the .NET world and explore what’s around. And Node.js arrived, of course with TypeScript: I am 100% sure it’s basically impossible to write a semi-reliable system without some kind of type checking (along with 10,000 other things).

Then I said: “I don’t want to just read a manual, what can I write with it?”. For some crazy reason I opted for a Twitter clone. Yeah, I was really bored.

Early in the analysis phase RabbitMQ and MongoDB joined the party. I was not exactly familiar with RabbitMQ, so I thought it was a good opportunity to learn something new.

In order to speed up development and to obtain certain features (e.g. authentication) I have used a bunch of third-party services.

The system uses a microservices architecture with a front-end that acts as an API gateway. For each service I’ve taken the CQRS path, along with Publish/Subscribe. An Azure WebJob is scheduled to run continuously and listen to the various events/messages on the queues. I’ll blog more about the architecture, but that’s it, more or less.
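The subscriber side could look roughly like this; a sketch using amqplib, where the exchange and queue names are made up and the read-model update is left as a comment:

```typescript
import * as amqp from 'amqplib';

async function listen(): Promise<void> {
  const conn = await amqp.connect(process.env.RABBIT_URL ?? 'amqp://localhost');
  const channel = await conn.createChannel();

  // hypothetical exchange/queue names, not the actual Barfer ones
  await channel.assertExchange('barfs', 'fanout', { durable: true });
  const { queue } = await channel.assertQueue('barf-created', { durable: true });
  await channel.bindQueue(queue, 'barfs', '');

  await channel.consume(queue, msg => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());
    // ...update the read model here (the CQRS query side)...
    channel.ack(msg);
  });
}

listen().catch(console.error);
```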

What am I expecting from this? Well, it’s simple: nothing. I mean, I’m not expecting people to use it (even though that would be very nice), I’m just exploring new possibilities. Nothing more.

Oh yeah, before I forget: https://barfer.azurewebsites.net/ . Enjoy!
