
How to dockerize your WordPress website

This is my first post on Docker so please be gentle. I am going to start with something easy: how to create a Docker container to host a WordPress website.

One word before we start: don’t do this on a production server! There are more rules and checks you would need to put in place; this is just an introduction, good for a dev/demo environment.

So, I assume you already have docker and docker-compose installed on your system, but in case you don’t, there’s excellent documentation on the Docker website.

The first step is to fire up the terminal, create a folder and save the contents of this gist to a file named docker-compose.yml.
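
For reference, a compose file for this setup could look roughly like the sketch below. Image versions, credentials, service names and the exact layout here are just assumptions; the gist is the authoritative version:

```yaml
# Sketch of a docker-compose.yml for a WordPress + MySQL + phpMyAdmin stack.
# Versions, credentials and service names are assumptions, not the original gist.
version: '2'

services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8181:80"        # phpMyAdmin on http://localhost:8181/
    environment:
      PMA_HOST: mysql

  wordpress:
    image: wordpress
    ports:
      - "8080:80"        # WordPress on http://localhost:8080/
    environment:
      WORDPRESS_DB_HOST: "172.17.0.2"   # placeholder: the mysql container IP (see below)
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
```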

Next step is to fire up the containers with this command:

docker-compose up -d

The -d flag enables “detached mode”: it lets you run the containers in the background and keep using the terminal.

Now, if you take a look at our Compose configuration, on line 22 we have specified a host address for the mysql instance. It might already be correct, but it’s better to check.
From the command line, first run

docker ps

to get the list of running containers. You should see something like this:

[Screenshot: using docker ps to get the list of containers]

Grab the name of the db container and run

docker inspect wptestfull_mysql_1

You will get a big JSON object exposing the status and all the available properties of the container. At the end you should see something like this:

[Screenshot: using docker inspect to get the container IP address]
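
If you don’t feel like scrolling through the whole JSON, docker inspect also accepts a Go-template format string that prints just the IP:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wptestfull_mysql_1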

There you go: our IPAddress. Copy it into the docker-compose file and run docker-compose up -d again.

Now we need to create a db user. Open your browser and go to http://localhost:8181/ and you will see the phpMyAdmin interface. Create the user and the db, and set the allowed hosts to %. This way we allow connections from every address. Again: don’t do this in production!
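
If you prefer the SQL console over the phpMyAdmin UI, the equivalent statements look roughly like this (database, user and password names are just examples):

```sql
-- create the WordPress database and a user allowed to connect from any host (%)
CREATE DATABASE wordpress;
CREATE USER 'wordpress'@'%' IDENTIFIED BY 'wordpress';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%';
FLUSH PRIVILEGES;
```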

Now all you have to do is load http://localhost:8080/ and set up your WordPress instance 🙂

Next time: what if we want more than one website?

Don’t worry, Heimdall will watch over all your microservices.

TL;DR: I wrote a service registry tool named Heimdall, go and fork it!

Long version: almost every time I am working on a piece of code I get stuck on something, and after a while I get ideas for new projects. This may lead to a huge number of useless git repos, each one with partially functional software, but it also pushes me to work on new things every day.

This time I was working on a super-secret project (that I will of course share very soon) based on a nice microservices architecture, and I soon realized I needed some kind of Service Registry. The project was quite small, so I was not really interested in a complex tool like a router with load-balancing functions or similar, and I decided to code the thing myself.

For those of you who don’t know what a Service Registry is and what it does, allow me to give you some context.
Imagine you’re a client that needs to consume some APIs. You could of course use a configuration file for storing the endpoints, but if you’re cloud-based, URLs can change often.

Also, what if you want some nice features like multiple instances, autoscaling and load balancing?

The answer is simple: use a registry! 

Every service will register itself during initialization, allowing clients to query the registry and know the endpoint (possibly the best one).
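
To give you an idea of what that means in practice, here is a minimal sketch of a service registering itself at startup. The registry URL, route and payload are assumptions for illustration only, not Heimdall’s actual API (the Swagger pages document the real one):

```csharp
// Minimal sketch: a service registering itself with the registry at startup.
// The URL, route and payload shape are illustrative assumptions, not Heimdall's real API.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class RegistryClient
{
    public static async Task RegisterAsync()
    {
        using (var client = new HttpClient())
        {
            var payload = "{ \"name\": \"orders-service\", \"endpoint\": \"http://localhost:5001\" }";
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            // hypothetical route exposed by the registry
            await client.PostAsync("http://localhost:5000/api/services", content);
        }
    }
}
```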

I found this concept pretty useful so I decided to create a poor man’s version myself, using ASP.NET Core, MongoDB and React, and I named it Heimdall, after the guardian god of Norse mythology.
The list of features is very short for now: you can just register a service, add/remove endpoints and run queries, but I have a full roadmap ready 🙂

Oh, and I also added help pages using Swagger!

Command Handlers return values in CQRS

I recently came across this very interesting blog post by Jimmy Bogard (the guy behind MediatR, if you don’t know him). Talking about CQRS, he makes a good point about how the user should be informed of the result of a Command execution.

Should the Command Handler return a value?
Should the Command Handler throw an exception?

These are just some of the strategies we could adopt; another option is to just log the operation and forget anything happened. Whatever.

A while ago I blogged about how to validate Commands and ensure the data passed to the Command Handler is valid. This is the strategy I have been adopting in my projects lately. Why? Several reasons: first of all, I want to keep Command execution separated from validation; also, Commands should be some sort of “fire and forget” operation. Let me clarify this a little bit.
In my opinion Command execution should be a boolean operation: the system can either execute it or not. Stop. I should be able to know ahead of time whether a Command can be executed, and that’s the validation phase. If I finally manage to get to the Handler, I know that the data I have is valid and I can run the Command. No need to return a “true”.

So what should I do to make all the pieces fit?

  1. Use the Decorator pattern to ensure each Command Handler executes some sort of validation
  2. Analyze the Command and check its validity against business rules or anything else you want
  3. The Validator gives back a validation result containing a (hopefully empty) list of errors
  4. In case something went wrong, throw a specialized Exception, for example something like the sketch below
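
A minimal sketch of the idea looks like this. The interfaces and the exception are hypothetical stand-ins, not the actual LibCore types:

```csharp
// Sketch of a validating decorator around a Command Handler.
// ICommandHandler, IValidator and ValidationException are hypothetical stand-ins.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public interface ICommandHandler<TCommand>
{
    Task Handle(TCommand command);
}

public interface IValidator<TCommand>
{
    Task<IReadOnlyCollection<string>> Validate(TCommand command);
}

public class ValidationException : Exception
{
    public IReadOnlyCollection<string> Errors { get; }

    public ValidationException(IReadOnlyCollection<string> errors)
        : base("Command validation failed")
    {
        Errors = errors;
    }
}

public class ValidatingHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _inner;
    private readonly IValidator<TCommand> _validator;

    public ValidatingHandlerDecorator(ICommandHandler<TCommand> inner, IValidator<TCommand> validator)
    {
        _inner = inner;
        _validator = validator;
    }

    public async Task Handle(TCommand command)
    {
        var errors = await _validator.Validate(command);
        if (errors.Any())
            throw new ValidationException(errors); // something went wrong: the real handler never runs

        await _inner.Handle(command); // data is valid, just execute the Command
    }
}
```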

Since most of the projects I am working on lately are composed of some sort of Web API based on .NET Core, I decided to create an Exception Filter that will eventually return to the user a JSON object with the details of the validation.
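
A filter along those lines could look like the sketch below, reusing the hypothetical ValidationException from the previous snippet; the real implementation lives in LibCore:

```csharp
// Sketch of an ASP.NET Core exception filter that turns a ValidationException
// into a 400 response with the validation errors serialized as JSON.
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class ValidationExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        if (context.Exception is ValidationException validationException)
        {
            context.Result = new BadRequestObjectResult(new
            {
                errors = validationException.Errors
            });
            context.ExceptionHandled = true;
        }
    }
}
```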

 

Bonus
Some of you may have noticed that some of the links in this post point to this GitHub repository, LibCore. It’s a small set of utilities I am writing, maintaining and using in my projects. I thought it would be useful to share the sources, maybe just to hear comments from the community. Food for thought.

Unit testing MongoDB in C# part 4: the tests, finally

More than a year. Wow, that’s a lot, even for me! In the last episode of this series we discussed how to create the Factories for our Repositories. I guess now it’s time to put all those interfaces to use and finally see how to unit test our MongoDB repositories 🙂

Remember: we are not testing the driver here. The MongoDB team is responsible for that. Not us. 

What we have to do instead is to make sure all our classes follow the SOLID principles and are testable. This way we can create a fake implementation of the low level data access layer and inject it in the classes we have to test. Stop.

Let’s have a look at the code:
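
The original gist isn’t reproduced here, but a test along those lines could look like the sketch below. IDbContext, IUserRepository, User and the handler types are hypothetical stand-ins based on the earlier posts in this series:

```csharp
// Sketch of a unit test for the "create user" Command Handler, using Moq and xUnit.
// Assumed shapes (loosely based on the earlier posts in this series):
//   public interface IDbContext { IUserRepository Users { get; } }
//   public interface IUserRepository { Task InsertAsync(User user); Task<User> FindOneAsync(string email); Task UpdateAsync(User user); }
using System.Threading.Tasks;
using Moq;
using Xunit;

public class CreateUserHandlerTests
{
    [Fact]
    public async Task Handle_should_insert_the_new_user()
    {
        var mockRepo = new Mock<IUserRepository>();

        var mockDbContext = new Mock<IDbContext>();
        // every access to the .Users property returns our fake repository
        mockDbContext.SetupGet(ctx => ctx.Users).Returns(mockRepo.Object);

        var sut = new CreateUserHandler(mockDbContext.Object);

        await sut.Handle(new CreateUser("john.doe@example.com"));

        // verify the handler asked the repository to persist the new user
        mockRepo.Verify(repo => repo.InsertAsync(It.IsAny<User>()), Times.Once());
    }
}
```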

In our little example here I am testing a CQRS Command Handler, the one responsible for creating a user. Our handler has an IDbContext as a dependency which, being an interface, allows us to use the Moq NuGet package to create a fake context implementation.

Also, we have to instruct the mockDbContext instance to return a mock User Repository every time we access the .Users property.

At this point all we have to do is to create the sut, execute the method we want to test and Verify() our expectations. 

Let’s make a more interesting example now:
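
Again, this is only a sketch with the same hypothetical types; the interesting part is the setup of FindOneAsync so the handler has an existing user to work on:

```csharp
// Sketch of a unit test for the "update user" Command Handler.
// The repository is instructed to return a known user whenever FindOneAsync is called.
using System.Threading.Tasks;
using Moq;
using Xunit;

public class UpdateUserHandlerTests
{
    [Fact]
    public async Task Handle_should_update_the_existing_user()
    {
        var existingUser = new User("john.doe@example.com");

        var mockRepo = new Mock<IUserRepository>();
        mockRepo.Setup(repo => repo.FindOneAsync(It.IsAny<string>()))
                .ReturnsAsync(existingUser);

        var mockDbContext = new Mock<IDbContext>();
        mockDbContext.SetupGet(ctx => ctx.Users).Returns(mockRepo.Object);

        var sut = new UpdateUserHandler(mockDbContext.Object);

        await sut.Handle(new UpdateUser("john.doe@example.com", "John", "Doe"));

        // white-box expectation: the handler loads the user and then saves the changes
        mockRepo.Verify(repo => repo.FindOneAsync("john.doe@example.com"), Times.Once());
        mockRepo.Verify(repo => repo.UpdateAsync(existingUser), Times.Once());
    }
}
```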

Now that we have created the user, we may also want to update some of their details. The idea here is to instruct the mockRepo instance to return a specific user every time the FindOneAsync method is executed.

Again, now we just need to verify the expectations and we’re done!

Note that in this case we are making an assumption about the inner mechanism of the Handle() method of the UpdateUserHandler class. Personally I tend to stick with Black Box Testing, but sometimes (e.g. now) you might be forced to use White Box Testing instead. If you don’t know what I am talking about, there’s a nice article here you may want to read.

Yet another “How to use SASS with WordPress” guide

Yes, it’s another one. If you search on Google there are tons of articles about how to use SASS in a WordPress theme, so why write another one?

Well, the answer is simple. Because I can. Because I am bored. Because I’m going to give you the sources with no fuss.

First of all, take a look at this repo.

As you can easily notice, it contains part of the standard WordPress folder structure and a bunch of other files. And trust me, I am not the kind of guy who adds files for nothing.

The main idea here is to have Gulp search and watch for our .scss files in the child theme folder and build the final style.css file every time something changes. Nice, isn’t it?

Before we start we need of course to install some dependencies. Fire up the Terminal and run:

sudo npm install -g gulp

just to make sure we have Gulp installed globally (that’s why you need sudo for that). Then run:

npm install gulp gulp-sass gulp-clean gulp-autoprefixer --save-dev

We’ll discuss those packages later.

I have added a “sass” folder inside “twentysixteen-child” that contains all our SASS files.

The style.scss file is our main entry point and, as you can see from the repo, it contains all the boilerplate code required by WordPress to discover the child theme.

I tend to include a _base.scss file that contains all the basic dependencies like variables and mixins. Then in style.scss I append all the page templates, like _home.scss in our small example.

Now let’s talk about the Gulp configuration file. The first lines contain the dependencies we need in our tasks: gulp, sass, clean and autoprefixer (more on these later).

Then we have the paths we want our SASS compiler to run on. As you can see I am using the child theme path as a base concatenated to the others.

After this we can start with the tasks. The first one is responsible for removing all the files from the previous build (basically just one, style.css).

Then we have the actual SASS compilation. I am passing an empty configuration object to the sass() function, but there are several options available; for example, you may want to compress the result.

The “postprocess” task is responsible for every post-compilation step we may want to perform on the output css file. In our case, I am using a very useful library named Autoprefixer that adds all the vendor-specific prefixes. If you’re interested, there’s a nice article on CSS-tricks.com, you can read it here.

The last bit is the “watch” task. This is interesting: basically it tells Gulp to monitor our /sass/ folder and, every time there’s a change, run the “build” task again. That’s it!
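
Putting it all together, the gulpfile could look roughly like this; the child theme path and the exact task wiring are assumptions based on the description above, not a copy of the repo’s file:

```javascript
// Sketch of a gulpfile for compiling the child theme's SASS into style.css (gulp 3.x style).
// Paths and task names are assumptions, not the exact file from the repo.
var gulp = require('gulp');
var sass = require('gulp-sass');
var clean = require('gulp-clean');
var autoprefixer = require('gulp-autoprefixer');

var themePath = './wp-content/themes/twentysixteen-child/';

// remove the output of the previous build (basically just style.css)
gulp.task('clean', function () {
  return gulp.src(themePath + 'style.css', { read: false })
    .pipe(clean());
});

// compile the SASS entry point into the final style.css
gulp.task('sass', ['clean'], function () {
  return gulp.src(themePath + 'sass/style.scss')
    .pipe(sass({}))                // empty options; e.g. { outputStyle: 'compressed' } to minify
    .pipe(gulp.dest(themePath));
});

// post-process the compiled css, adding the vendor-specific prefixes
gulp.task('postprocess', ['sass'], function () {
  return gulp.src(themePath + 'style.css')
    .pipe(autoprefixer())
    .pipe(gulp.dest(themePath));
});

gulp.task('build', ['postprocess']);

// rebuild every time something changes inside the /sass/ folder
gulp.task('watch', function () {
  gulp.watch(themePath + 'sass/**/*.scss', ['build']);
});

gulp.task('default', ['build', 'watch']);
```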

Now all you have to do, if you’re using Visual Studio Code like me, is to hit cmd+shift+p and type “Configure Task Runner”:

then pick Gulp as your default Task Runner. If you take a look at the tasks.json file in the repo, you will notice that I have added some more custom configuration just to instruct VS Code to use the “default” task as the main entry point.

That’s it!
