The Baffling World Of RabbitMQ and Docker

Recently I decided to switch my entire Ansible-managed infrastructure (for one project) over to Docker – or specifically, docker-compose. Part of this setup needs RabbitMQ.

I had no trouble getting the official RabbitMQ image to pull and build. I could set a default username, password, and vhost. And all of this worked just fine – I could use this setup without issue.

However, as I was migrating an existing project, I already had a bunch of queues, exchanges, bindings, users, and so on.

What I didn’t want was a manual step where I have to remember to import the definitions.json file every time I build – or rebuild – the environment.

Ok, so this seems a fairly common use case, I should imagine. But finding a solution wasn’t as easy as I expected. In hindsight, it’s quite logical, but then… hindsight 🙂

Please note that I am not advocating using any of this configuration. I am still very much in the learning phase, so use your own judgement.

Here is the relevant part of my docker-compose.yml file:
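The snippet didn’t survive the copy here, so the following is a minimal sketch of what that section looked like. The service name, credentials, and volume name are placeholders, not my real values:

```yaml
version: '2'

services:
    rabbitmq:
        build: ./rabbitmq
        ports:
            - "5672:5672"
            - "15672:15672"
        environment:
            - RABBITMQ_DEFAULT_USER=admin
            - RABBITMQ_DEFAULT_PASS=changeme
            - RABBITMQ_DEFAULT_VHOST=/
        volumes:
            # named volume so the broker's data survives container rebuilds
            - rabbitmq-data:/var/lib/rabbitmq

volumes:
    rabbitmq-data: {}
```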

Then I went to my old / existing RabbitMQ server and from the “Overview” page, I went to the “Import / export definitions” section (at the bottom of the page), and did a “Download broker definitions”.

This gives a JSON dump, which as it contains a bunch of sensitive information, I have doctored for display here:
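I can’t reproduce my real export, but a doctored definitions.json follows this general shape – the names, hash, and queue are illustrative only:

```json
{
  "rabbit_version": "3.6.6",
  "users": [
    {
      "name": "admin",
      "password_hash": "redacted-hash-value",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    { "name": "/" }
  ],
  "permissions": [
    { "user": "admin", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" }
  ],
  "queues": [
    { "name": "some-queue", "vhost": "/", "durable": true, "auto_delete": false, "arguments": {} }
  ],
  "exchanges": [],
  "bindings": []
}
```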

You could – at this point – go into your Docker-ised RabbitMQ, repeat the process under “Import / export definitions”, do the “Upload broker definitions” step, and it should all work.

The downside is – as mentioned above – if you delete the volume (or go to a different PC) then unfortunately, your queues etc don’t follow you. No good.

Now, my solution to this is not perfect. It is a static setup, which sucks. I would like to make this dynamic, but for now, what I have is good enough. Please do shout up if you know of a way to make this dynamic, without resorting to a bunch of shell scripts.

Ok, so I take the definitions.json file, and the other config file, rabbitmq.config, and I copy them into the RabbitMQ directory that contains my Dockerfile:
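The resulting directory layout looks something like this (the `rabbitmq/` directory name matches the `build:` path assumed in the compose sketch above):

```
rabbitmq/
├── Dockerfile
├── definitions.json
├── enabled_plugins
└── rabbitmq.config
```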

For completeness, the enabled_plugins file contents are simply:
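This is the standard Erlang-term format RabbitMQ expects for that file – note the trailing full stop is required:

```erlang
[rabbitmq_management].
```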

And the rabbitmq.config file is:
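Again, the original block was lost, but a rabbitmq.config that loads the definitions on boot looks roughly like this (the `load_definitions` path must match wherever the file is mounted inside the container; `/etc/rabbitmq/definitions.json` is my assumption here):

```erlang
[
  {rabbit, [
    {loopback_users, []}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].
```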

And the Dockerfile:
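It is a single line pulling the official image – I’m assuming the management-plugin tag here, since the management UI is used throughout:

```dockerfile
FROM rabbitmq:3-management
```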

(yes, just that one line)

Now, to get these files to work seems like you would need to override the existing files in the container. To do this, I used additional config in the docker-compose volumes section:
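The extra entries bind-mount the three local files over the container’s defaults. The host paths assume the `rabbitmq/` directory layout shown earlier; the destination paths are the standard RabbitMQ config locations:

```yaml
        volumes:
            - rabbitmq-data:/var/lib/rabbitmq
            - ./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro
            - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:ro
            - ./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins:ro
```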

Note here the new volumes.

Ok, so down, rebuild, and up:
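In other words, something along these lines:

```shell
docker-compose down
docker-compose build
docker-compose up -d
```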

The output is a bit messy, but the problem is that the RabbitMQ container has already exited, when it should still be running.

To view the logs for RabbitMQ at this stage is really easy – though a bit weird.
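Either of these will do it – the service name here is whatever you called it in docker-compose.yml:

```shell
docker-compose logs rabbitmq

# or, against the raw container (find the name with `docker ps -a`):
docker logs <container-name-or-id>
```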

What I would like to do is get RabbitMQ to write its log files out to my disk. But adding in a new volume isn’t solving this problem – one step at a time (I don’t have a solution to this issue just yet; I will add another blog post when I figure it out). The issue is that RabbitMQ writes its logs to the tty by default.


Ok, bit odd.

Without going the long way round, the solution here is – as I said at the start – logical, but not immediately obvious.

As best I understand this, the issue is the provided environment variables now conflict with the user / pass combo in the definitions file.

Simply commenting out the environment variables fixes this:
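That is, in docker-compose.yml the environment block ends up disabled, leaving the definitions file as the sole source of users and passwords (values shown are the placeholders from earlier):

```yaml
        # environment:
        #     - RABBITMQ_DEFAULT_USER=admin
        #     - RABBITMQ_DEFAULT_PASS=changeme
        #     - RABBITMQ_DEFAULT_VHOST=/
```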

Another down, build, up…

And this time things look a lot better:

Hopefully that helps someone save a little time in the future.

Now, onto the logs issue… the fun never stops.



Published by

Code Review

CodeReviewVideos is a video training site helping software developers learn Symfony faster and easier.

One thought on “The Baffling World Of RabbitMQ and Docker”

  1. I hadn’t thought of using containers but that’s a great idea. Thanks so much for sharing!
