The Baffling World Of RabbitMQ and Docker

Recently I decided to switch my entire Ansible-managed infrastructure (for one project) over to Docker – or specifically, docker-compose. Part of this setup needs RabbitMQ.

I had no trouble getting the official RabbitMQ image to pull and build. I could set a default username, password, and vhost. And all of this worked just fine – I could use this setup without issue.

However, as I am migrating an existing project, I already had a bunch of queues, exchanges, bindings, users… etc.

What I didn’t want is to have some manual step where I have to remember to import the definitions.json file whenever building – or rebuilding – the environment.

Ok, so this seems a fairly common use case, I should imagine. But finding a solution wasn’t as easy as I expected. In hindsight, it’s quite logical, but then… hindsight 🙂

Please note that I am not advocating using any of this configuration. I am still very much in the learning phase, so use your own judgement.

Here is the relevant part of my docker-compose.yml file:
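It looked something like this – the service name, credentials, and paths are placeholders, so adjust for your own setup:

```yaml
version: '2'

services:

    rabbitmq:
        build: ./rabbitmq
        environment:
            RABBITMQ_DEFAULT_USER: admin
            RABBITMQ_DEFAULT_PASS: "a-strong-password-here"
            RABBITMQ_DEFAULT_VHOST: "/"
        ports:
            - "5672:5672"
            - "15672:15672"
        volumes:
            - rabbitmq-data:/var/lib/rabbitmq

volumes:
    rabbitmq-data:
```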

Then I went to my old / existing RabbitMQ server and from the “Overview” page, I went to the “Import / export definitions” section (at the bottom of the page), and did a “Download broker definitions”.

This gives a JSON dump which, as it contains a bunch of sensitive information, I have doctored for display here:
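The doctored version looks something like this – the real export contains every user, vhost, permission, queue, exchange, and binding, and all the values below are placeholders:

```json
{
    "users": [
        {
            "name": "admin",
            "password_hash": "doctored==",
            "hashing_algorithm": "rabbit_password_hashing_sha256",
            "tags": "administrator"
        }
    ],
    "vhosts": [
        { "name": "/" }
    ],
    "permissions": [
        {
            "user": "admin",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
        }
    ],
    "queues": [
        {
            "name": "some_queue",
            "vhost": "/",
            "durable": true,
            "auto_delete": false,
            "arguments": {}
        }
    ],
    "exchanges": [],
    "bindings": []
}
```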

You could – at this point – go into your Docker-ised RabbitMQ, and repeat the process for “Import / export definitions”, do the “upload broker definitions” step and it should all work.

The downside is – as mentioned above – if you delete the volume (or go to a different PC) then unfortunately, your queues etc don’t follow you. No good.

Now, my solution to this is not perfect. It is a static setup, which sucks. I would like to make this dynamic, but for now, what I have is good enough. Please do shout up if you know of a way to make this dynamic, without resorting to a bunch of shell scripts.

Ok, so I take the definitions.json file, and the other config file, rabbitmq.config, and I copy them into the RabbitMQ directory that contains my Dockerfile:

For completeness, the enabled_plugins file contents are simply:
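That is, just the management plugin – note the Erlang-style trailing full stop, which is required:

```erlang
[rabbitmq_management].
```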

And the rabbitmq.config file is:
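This is an Erlang terms file. Assuming the definitions file ends up at /etc/rabbitmq/definitions.json inside the container, the key part is telling the management plugin to load it on boot – something like:

```erlang
[
    {rabbit, [
        {loopback_users, []}
    ]},
    {rabbitmq_management, [
        {load_definitions, "/etc/rabbitmq/definitions.json"}
    ]}
].
```

The load_definitions line is the one doing the real work here.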

And the Dockerfile:
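In my case, just the official management-enabled image – pin to whatever tag suits you:

```dockerfile
FROM rabbitmq:3-management
```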

(yes, just that one line)

Now, to get these files to work, it seems you need to override the existing files in the container. To do this, I used additional config in the docker-compose volumes section:
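The paths here assume the three files live next to the Dockerfile in ./rabbitmq – each one gets mounted over the top of the container's default:

```yaml
        volumes:
            - rabbitmq-data:/var/lib/rabbitmq
            - ./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins
            - ./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config
            - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
```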

Note here the new volumes.

Ok, so down, rebuild, and up:
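That is, something like:

```
docker-compose down
docker-compose build
docker-compose up
```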

The output is a bit messy, but the problem is clear: the RabbitMQ container has already exited, when it should still be running.

Viewing the logs for RabbitMQ at this stage is really easy – though a bit weird.
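The weird part is addressing the right container – the generated name is a guess here, as docker-compose derives it from your project directory:

```
# via docker-compose (it resolves the container name for you):
docker-compose logs rabbitmq

# or directly, using the generated container name (project dependent):
docker logs myproject_rabbitmq_1
```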

What I would like to do is to get RabbitMQ to write its log files out to my disk. But adding in a new volume isn’t solving this problem – one step at a time (I don’t have a solution to this issue just yet; I will add another blog post when I figure it out). The issue is that RabbitMQ writes its logs to tty by default.


Ok, bit odd.

Without going the long way round, the solution here is – as I said at the start – logical, but not immediately obvious.

As best I understand this, the issue is the provided environment variables now conflict with the user / pass combo in the definitions file.

Simply commenting out the environment variables fixes this:
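That is, in docker-compose.yml (the credentials shown are the placeholders from earlier):

```yaml
    rabbitmq:
        build: ./rabbitmq
#        environment:
#            RABBITMQ_DEFAULT_USER: admin
#            RABBITMQ_DEFAULT_PASS: "a-strong-password-here"
#            RABBITMQ_DEFAULT_VHOST: "/"
```

The users, passwords, and vhosts now come entirely from definitions.json.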

Another down, build, up…

And this time things look a lot better:

Hopefully that helps someone save a little time in the future.

Now, onto the logs issue… the fun never stops.


Symfony + Docker: My Journey So Far

Please note: This is a copy of my weekly email message. Please feel free to join the mailing list (the form is on the sidebar <<<) if you like this kind of thing. No pressure, it’s cool 🙂

I want to start by saying a huge thank you to everyone who replied to last week’s email. I was very grateful for each and every one of your replies, and want to reassure everyone: the Symfony tutorials will continue 😀

Over the next few weeks I will touch on this feedback a little more, as it has led to some interesting changes in the way I am thinking about the site.

Again, thank you, I really do appreciate and value your feedback.

I did something I was reluctant to do this week: I started playing with Docker.

This isn’t the first time I’ve had a bash with it. I tried it in late 2014 when I first became aware of it. This was around version 1.3, as best I recall. The reliance on the CLI – many of the commands being long and unwieldy – completely put me off.

I put it down almost as quickly as I’d picked it up. New and shiny, maybe, but confusing as heck.

Fast forward a year and I was on site with a FTSE 250 company who were spinning up a big Docker swarm. Based on my in-depth research (read: 2 hours cursing followed by giving up) only a year earlier, I thought they must be mad. But hey, if big businesses were giving this a shot, then there must be something to it, right?

So I picked it back up, and had another go. Everything was roughly as I remembered it. I still didn’t gel with the CLI.

However, there was now a new way of working with Docker: docker-compose.

The benefit of docker-compose is that you can have a single config file which manages the creation of many Docker containers. Awesome! Given that I had no understanding of building one container, being able to spin up many containers couldn’t possibly go wrong.

Anyway, I persisted somewhat, but I kept hitting hurdles. Figuring out how to get Symfony’s cache and log permissions to work was frustrating me. There weren’t that many blog posts or tutorials to be found, and given that Ansible was working just fine for me, I once again consigned Docker to the “not ready for me yet” pile.

If docker had gone away at this point, I can’t say I would have missed it. I really didn’t get the hype.

Now, whilst all this was happening, GitLab released version 8.0.

One of the very coolest features in GitLab 8.0 was their integration with GitLab CI (Continuous Integration). I watched the release video and I thought – oh my lord, I need me some of that!

I absolutely love GitLab. It’s probably my single most used tool on any given day. The idea of being able to commit my code and have it spin up an environment that matches my dev / live setup, and then run my test suite for each commit?! What is this magic?

Alas, in reality this was not quite as easy as they made it look / sound.

A lot of this was due to the way I was working. I heavily used Ansible. And GitLab heavily expected me to be using Docker.

Undeterred, I spent night after night hacking away until I figured out how to make the GitLab CI runner work with VirtualBox. Then, because this entire process was such a massive pain in the backside, I documented it all and video’d it. I had suffered so much, I hoped others would never have to.

But it griped me. This process was painful. If things changed in my environment – which they inevitably always do – I would need to build a new image, copy that over to my runner, test it all out… testing the test runners… how deep does this rabbit hole go?

What frustrated me most was that when this process works, it is incredible.

The alternative was to use Docker. In doing so, GitLab could spin up containers as needed and remove all of this pain. The caveat to this would be changing my entire development workflow. And I was so invested in Ansible this was incredibly unappealing.

Once more, I consigned docker to the “cool, but not for me just yet” pile.

Then GitLab dun’ did it again.

I urge you to watch the release video and not think: GIMME GIMME GIMME!

Now, I watched this video on Wednesday dinner time. Shortly thereafter I had to do a major production deploy involving Symfony, RabbitMQ, LetsEncrypt, Redis, MySQL, Node JS with PM2, a raft of cron jobs… and that’s just what I remember without looking into the Ansible configs more deeply.

The idea is simple – run a single command and let Ansible deploy all this stuff, reliably and in a managed fashion.

In (my) reality, almost any new build involves some nightmare. For example, in dev you don’t need to worry about SSL. In prod, you do. This is not part of your dev Ansible scripts, and so it manages to cause a few hours of headscratching whilst the dev / prod environments are integrated. Often this involves commenting out part of a script, running the first script, uncommenting that part, and running another script. Reliable? Reproducible? Hmmm.

Then what about RabbitMQ in production? It needs to be protected with SSL. How does this fit into the config for LetsEncrypt… more hours lost.

And so on. I know many of these problems are specific to the way I work. But I am 100% convinced I’m not the only one who ends up in situations like this.

Anyway, I rage quit.

Deploying this stack can wait. Fortunately it’s important, but not urgent.

During all this time I couldn’t help but be seduced – once more – by the allure of docker, and GitLab’s one click easy deploys.

So this time I set myself a goal. I would take my entire Ansible orchestrated stack for this complex environment, and I would docker-ise it.

At this point I wish I could say the CLI issues had gone away. They haven’t. But this time I spent most of Thursday morning with my head in the Docker documentation. Then I started implementing a super basic config from scratch.

From there, I composed a few containers, and suddenly, it started to make sense.

I hit on a whole range of issues getting Symfony to work with Docker though. Permissions, but of course. Cache and Logs, the two Symfony stalwarts. But I’m past that now. And amazingly, it’s insanely quick. Like 1ms page renders quick.

I still have some way to go with this. And I’ve not started integrating with GitLab CI at all just yet. But there is major promise there.

Hopefully within the next week or so I will be able to share my Docker scripts. There are likely ways to improve on these yet. Still, it’s highly exciting.

With all this in mind, I would be super interested in hearing from you if you have put Docker + Symfony into production. That seems like the next major hurdle.

If you have any Docker experience – good, bad, or otherwise – please do hit reply and let me know. I’m particularly interested in those unexpected pitfalls I just know are out there, and would definitely appreciate a heads up if you are aware of any.

Video Update

This week there have been three new videos added to the site.

Firstly we finished up the “Change Password” implementation.

On the surface, it may seem a little crazy to have 6 videos covering how to accept a user’s password change form submission. After all, this is only the front end, right? We have another two videos covering how to do this on the back end!

Well, the first time you do anything like this you inevitably have to lay a whole bunch of foundations. All of these foundations make subsequent work much faster.

An example of this would be in how we can now process Symfony form errors on our front end. This will come in handy for any time we need to make a POST request – not just for changing passwords, but for registration, login, and any other Create / Update you do in your app.

Also, we extracted the repeated password fields out into their own separate component. This will allow us to re-use this component wherever we need to accept passwords – such as in the Registration screen, which is up next.

Finally, we looked at a way that Redux Saga provides for us to speed things up by allowing us to run many processes in parallel. When combined with our form error helper function, this becomes super useful should you have a form with many fields, just as one example.

Next up we got an intro into the many events that Symfony’s Workflow Component dispatches for us during any single transition.

In case you haven’t been following along, a transition is when an object that’s flowing through our workflow goes from one place to another. These sorts of things happen all the time in our applications, even if we don’t have a formal process (or shiny Symfony component) to manage them.

In the example we have been working through, a potential customer (a prospect) can sign up with our system. When signing up, this potential customer goes through the transition of ‘sign up’. This takes them from the place of ‘prospect’ to the place of ‘free customer’.

As part of this transition, the workflow component dispatches a bunch of useful events. Each of these events can be hooked into, and if needed, some extra action can be taken. This might be as simple as logging some data out somewhere, but could be much more involved – potentially even triggering a whole different workflow.

This video is free for all, and explains when and where each of these events comes from.

As part of the Workflow Component, a helpful utility class is included called the AuditTrailListener.

It’s got a bit of a computer science-y name, but the concept is simple enough: It listens for events, and then logs information about those events out to your Symfony log file.

In the previous video we saw just how many of these events are dispatched for every single transition.

What the provided AuditTrailListener cannot do is hook into some of the more unique events. However, we can implement our own version of this listener to capture as much information as we need. This also demonstrates exactly how you would listen to, and act upon any workflow event in your system.

How very handy! 🙂

Next week will see the final parts of the React, Redux, and Redux Saga With Symfony 3 course being uploaded. I’ve really enjoyed creating and sharing this course, and I hope you’ve found it useful too.

Until then, have a great weekend. Cross your fingers, we may even get some sunshine. Hey, that’s a big deal here in the UK 🙂

Crisis? What Crisis?

Depending on how closely you follow the Symfony community, you may or may not have seen the Grafikart story last weekend. If you didn’t see it, the gist of it is that lawyers demanded Grafikart remove their Symfony tutorial videos from YouTube.

As best I understand it, the lawyer(s) believed they had found potential issues with trademark violations in the tutorials.

The aftermath of this was particularly interesting to me, what with a large part of CodeReviewVideos content being in a very similar category – i.e. Symfony tutorial videos, a number of which are on YouTube.

Now, thankfully, without much delay, Fabien Potencier got involved and resolved the issue:

Phew. Ok, so crisis averted.

However, whilst all this was happening I do have to say that there was an anxious knot in my stomach. It concerns me that the vast majority of the content on the site is about Symfony, and if this had gone the other way, the site may be offline right now.

That got me to thinking: Are all my eggs in one Symfony-shaped basket?

Right now, I would say 8 out of 10 of my eggs are Symfony-shaped eggs.

Historically the vast majority of the content has been focused on Symfony. More recently the content is Symfony alongside some front-end framework / library. I favour React, but have covered Angular, and Ionic.

Whatever front end you choose, Symfony – in my experience – can make a wonderful back end.

I still believe that the Symfony framework is the very best way I can be productive with PHP in 2017. There are always other frameworks and languages popping up that try to seduce me with their new and shiny, and there likely always will be. But when stuff needs to get done, it’s PHP and Symfony that I go to.

As many know already, I started CodeReviewVideos because I found learning Symfony to be difficult and stressful. This stress is amplified many times over if you have to learn Symfony “on the job”. That stress can be greatly reduced if you have access to a library of videos that explain the various ways and means of working with the framework to help you through.

Everything I learn, and have learned on this journey, I share with you via the site. If it’s working for me, then chances are it will work for you too.

From the feedback I’ve had recently, I’m excited that many of you are open to different types of video / tutorial content. Node JS gets requested a lot. I would also like to cover Elixir. And of course, lots more React.

All of this said, I’m not stopping recording Symfony content. I just spent a week recording loads of new content, and have another week booked in to do even more at the end of this month.

I’m interested in your opinion on this. Please hit reply – or leave a comment below – and let me know your thoughts.

Video Update

This week there have been four new videos added to the site.

In this video we are getting started with Workflow Guards.

The concept behind a Workflow Guard is simple enough: We want to block a transition if some criteria that we set is not met.

This criteria can be as simple, or as complex as you require.

Guards are going to give us a surface level skim of Events inside the Symfony Workflow Component, which will stand us in good stead for our deeper dive in a couple of videos time.

There are three different ‘levels’ of guards. In the previous video we looked at the first two, the more generic guards.

In this video we cover very specific guards, and their potential use cases.

We also look at a way of improving the error message output – this might be a little controversial, but hey, it’s working ok for me so far.

One of the pain points I encountered when working with a Symfony-backed JSON API and React on the front end was dealing with error messages. In particular, those returned from a Symfony form error.

There’s likely a bunch of ways of addressing this problem. My advice is to use TDD to ensure your solution behaves as expected. Handling errors is painful for sure, but it’s pain worth enduring so your end users aren’t left scratching their heads.

In this video we start building an error helper function which takes the returned JSON form errors, and extracts useful information from those errors, allowing display on our Redux Form front end.

In the previous video we made a start on the form error helper.

Even if you’re not interested in JavaScript, I would recommend this video to you as it covers one of my personal bugbears: Nested conditionals.

Whenever you nest anything – in this case an ‘if’ inside an ‘if’ – you increase the complexity of your code. Nothing gripes me more than opening a file to find some “christmas tree” shape where ifs are buried in ifs, buried in ifs… oh and a lovely else at the end with another if inside it. Who are these people who can keep this in their heads?!

Anyway, soap box aside, as I say, if you’re not interested in JavaScript, at least consider watching this for the code structuring aspect.

Ok, until next week, have a jolly fine (and hopefully sunny) weekend, and happy coding!


How I Fixed: Missing Headers on Response in Symfony 3 API

The first time this caught me out, I didn’t feel so bad. The second time – i.e. just now – I knew I had already solved this problem (on a different project), and found my urge to kill rising.

I wanted to POST in some data and, if the resource was successfully created, have the response contain a link – via an HTTP header – to the newly created resource.

Example PHP / Symfony 3 API controller action code snippet:
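The real controller is project-specific, so here is a representative sketch in the FOSRestBundle style – the Widget entity, form type, and route name are all made up for illustration:

```php
/**
 * A hypothetical FOSRestBundle controller action. On success it returns
 * a 201 Created response, with a Location header pointing at the new resource.
 */
public function postWidgetAction(Request $request)
{
    $widget = new Widget();

    $form = $this->createForm(WidgetType::class, $widget);
    $form->submit($request->request->all());

    if ($form->isValid()) {
        $em = $this->getDoctrine()->getManager();
        $em->persist($widget);
        $em->flush();

        // routeRedirectView generates the Location header for us
        return $this->routeRedirectView(
            'get_widget',
            ['id' => $widget->getId()],
            Response::HTTP_CREATED
        );
    }

    return $form;
}
```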

And from the front end, something like this:
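Something like this – the endpoint is made up, and I’ve pulled the string handling out into a little helper (extractIdFromLocation is my own invented name) so the job of the final line is obvious:

```javascript
// Hypothetical helper: chop the known URL prefix off the Location header,
// leaving just the new resource's ID. Brittle, but it works.
function extractIdFromLocation(location, urlPrefix) {
  return location.replace(urlPrefix, "");
}

// Illustrative usage - endpoint and payload are placeholders:
function createWidget(payload) {
  return fetch("https://api.example.com/widgets", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }).then((response) => {
    // read the Location header from the response headers
    const location = response.headers.get("Location");

    return extractIdFromLocation(location, "https://api.example.com/widgets/");
  });
}
```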

Now, the interesting line here – from my point of view, at least – is the final line.

Because this is a newly created resource, I won’t know the ID unless the API tells me. In the Symfony controller action code, the routeRedirectView will take care of this for me, adding on a Location header pointing to the new resource / record.

I want to grab the Location from the Headers returned on the Response and, by removing the part of the string that contains the URL, I can end up with the new resource ID. It’s brittle, but it works.

Only, sometimes it doesn’t work.

Excuse the formatting.

From JavaScript’s point of view, the Headers array is empty.

This leads to an enjoyable error: “Cannot read property ‘replace’ of null”.

Confusingly, however, in the Symfony profiler output for the very same request / response, I can see the header info is there:

Good times.

Ok, so the solution to this is really simple – when you know the answer.

Just expose the Location header 🙂
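The reason: my front end and API live on different origins, and under CORS, JavaScript is only allowed to read a handful of “simple” response headers unless the server explicitly lists the rest in Access-Control-Expose-Headers. If you’re handling CORS in Symfony with NelmioCorsBundle, that means something along these lines (all the other values are illustrative):

```yaml
# app/config/config.yml
nelmio_cors:
    defaults:
        allow_origin: ['http://localhost:3000']
        allow_headers: ['content-type', 'authorization']
        allow_methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS']
        expose_headers: ['Location']
        max_age: 3600
```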

After that, it all works as expected.

Programming Phoenix – Video to Category Relationship

I’m currently working my way through the Programming Phoenix book, which I have to say, I am thoroughly enjoying.

That’s not to say it’s all plain sailing. If only I were that good.

I got to chapter 7, and got myself quite stuck.

I’m aware the book was written for an earlier version of Phoenix, and in an attempt to force myself to “learn the hard way”, I decided that whenever I hit on a deprecated function, I would use the given replacement.

Ok, so in the book on page 99 (Chapter 6: Building Relationships), there is the following code:
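From memory it is roughly this – the exact field lists may differ slightly in your printing:

```elixir
@required_fields ~w(url title description)
@optional_fields ~w(category_id)

def changeset(model, params \\ :empty) do
  model
  |> cast(params, @required_fields, @optional_fields)
end
```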

When using this code I got a deprecation warning:

Seeing as I made the commitment to using the newer functions, this seemed like as good a time as any to do so.

The suggested replacement, cast/3 (docs), is where my confusion started. Here’s the signature:
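Simplified down from the full typespec, it looks like this:

```elixir
cast(data, params, allowed)
```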

And here’s the signature for cast/4:
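Again simplified:

```elixir
cast(data, params, required, optional)
```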

You may then be wondering why, in the original code from the book, we only have three arguments being passed in?

Good question.

Really there are four, though one is being passed in as the first argument by way of Elixir’s pipe operator – which passes the result of the previous expression in as the first argument of the next:
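In other words, these two calls are equivalent:

```elixir
video |> cast(params, @required_fields, @optional_fields)

cast(video, params, @required_fields, @optional_fields)
```

Count the arguments in the second form and the arity of four makes sense.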

As seen in the code from the Programming Phoenix book, cast/4 expects the model, then some params, then a list of required_fields, and another list of optional_fields.

One nice thing happening in the original code is the use of module attributes (the @ symbol) as constants.

Using the ~w sigil is a way to create a list from the given string. Ok, so there are lots of things happening here in a very small amount of code. This is one of the benefits – and, when learning, one of the drawbacks – of Elixir, in my experience.
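For example:

```elixir
~w(url title description)
# => ["url", "title", "description"]
```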

With PHPStorm I’m so used to having the code-completion and method signature look ups (cmd+p on mac) that learning and remembering all this stuff is really the hardest part. Maybe ‘intellisense’ has made me overly lazy.

Anyway, all of this is nice to know, but it’s not directly addressing the ‘problem’ I faced (and I use the inverted commas there as this isn’t really a problem, it’s my lack of understanding).

We’ve gone from having required and optional fields, to simply just allowed fields.

My replacement code started off like this:
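Reconstructed from memory, it was something like this:

```elixir
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:url, :title, :description])
  |> validate_required([:url, :title, :description])
  |> assoc_constraint(:category)
end
```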

Everything looked right to me.

I had the required fields, but now as a list, rather than the ~w sigil approach.

I had specified my three required fields by way of validate_required.

And as best I could tell, I was associating with the :category.

But no matter what I did, my Video was never associated with a Category on form submission.

I could see the Category was being submitted from my form:

But the generated insert statement was missing my category data:

Anyway, it turns out I was over thinking this.

All I needed to do was add the category field – :category_id in my schema – to the list passed in as the allowed argument, and everything worked swimmingly:
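A sketch of the working version – note the allowed list gains the association’s underlying foreign key field, :category_id (the exact atom depends on your schema):

```elixir
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:url, :title, :description, :category_id])
  |> validate_required([:url, :title, :description])
  |> assoc_constraint(:category)
end
```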

With the outcome of the next form submission:

Easy, when you know how.

This took me an embarrassingly long time to figure out. I share with you for two reasons:

  1. Learning the hard way makes you learn a whole bunch more than simply copy / pasting, or typing out what you see on the book pages.
  2. I couldn’t find anything about this after a lot of Googling, so if some other poor unfortunate soul is saved a lot of headscratching from this in the future then it was all worthwhile writing.

That said, the new edition of Programming Phoenix is due for release very soon (so I believe), so likely this will have been addressed already.