How I Fixed: unknown flag: --project-name in GitHub Actions

Bit of an obscure one this, and given that I received absolutely no Google results, I'm guessing it may be of little help to … well, anyone. But perhaps this is the price one pays for being an early adopter.

The Problem

I created a GitHub Actions workflow that ran a Makefile command. It looked something like this:

---
name: E2E Tests

on: [push, pull_request]

jobs:
    test:
        runs-on: ubuntu-latest

        steps:
            - uses: actions/checkout@v2

            - name: make install
              run: make install

            - name: make serve
              run: make serve

Excuse the crappy steps, I was / am still kinda experimenting with this workflow.

Anyway, what we really care about (and where the problem actually lies) is what the make serve command represents:

# bring down any previous stack, then bring it up fresh
serve:
	docker compose \
		-p my_project_name_here \
		down --remove-orphans && \
	docker compose up --remove-orphans
.PHONY: serve

Don't be put off if you don't use / understand Makefiles. If you don't, know that they are pretty useful. Other solutions exist, yadda yadda.

Keen-eyed readers will have spotted something potentially curious:

docker compose ...

Not:

docker-compose ...

Yeah, so I got this from a little info at the end of my local command line output:

➜  my-project git:(e2e-tests-setup) docker-compose
Define and run multi-container applications with Docker.

Usage:
  docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
  docker-compose -h|--help

 ...

Docker Compose is now in the Docker CLI, try `docker compose`

(Excuse the formatting, WP does its own thing)

See that line at the very end. Why not try the new stuff, right? Who doesn’t love the shiny.

But herein lies the problem. Note that even the official docs aren't (yet) consistent about which form they use.

The Solution

The solution to this problem is super simple.

Use docker-compose for your GitHub Actions commands.

The problem arises because GitHub Actions thinks I'm trying to run plain docker commands; the runner must be using an older build of the Docker CLI, one that doesn't yet understand the compose subcommand.

Locally:

docker -v
Docker version 20.10.7, build f0df350

GitHub Actions:

Run docker -v
Docker version 20.10.7+azure, build f0df35096d5f5e6b559b42c7fde6c65a2909f7c5
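
If you'd rather keep the newer syntax where it's available, a small shell guard can pick whichever flavour the environment actually provides. This is just a sketch; docker compose version only succeeds when the Compose v2 plugin is installed:

# use Compose v2 (docker compose) when the CLI supports it,
# otherwise fall back to the standalone docker-compose binary
if docker compose version >/dev/null 2>&1; then
    COMPOSE="docker compose"
else
    COMPOSE="docker-compose"
fi
$COMPOSE up --remove-orphans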

Anyway, my fix was to change the Makefile command to use docker-compose instead of docker compose:

serve:
	docker-compose \
		-p my_project_name_here \
		down --remove-orphans && \
	docker-compose up --remove-orphans
.PHONY: serve

How I Solved: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running?

OK, tl;dr, this is not a true fix. However, it works. Or worked for me.

The issue I have been facing, the one that has cost me my entire Saturday morning, is this:

➜  gitlab-ci docker-compose up 
Creating network "gitlab-ci_default" with the default driver
Creating gitlab-ci_runner_1_28ccd2f6e08d          ... done
Creating gitlab-ci_register-runner_1_6ddb7e90a9d3 ... done
Creating gitlab-ci_dind_1_bb210df194a2            ... done
Attaching to gitlab-ci_runner_1_3cb60d519ae8, gitlab-ci_register-runner_1_941db09830b5, gitlab-ci_dind_1_a0ef0b8a29e4
runner_1_3cb60d519ae8 | Runtime platform                                    arch=amd64 os=linux pid=7 revision=61e7606f version=14.1.0~beta.182.g61e7606f
runner_1_3cb60d519ae8 | Starting multi-runner from /etc/gitlab-runner/config.toml...  builds=0
runner_1_3cb60d519ae8 | Running in system-mode.                            
runner_1_3cb60d519ae8 |                                                    
runner_1_3cb60d519ae8 | Configuration loaded                                builds=0
runner_1_3cb60d519ae8 | listen_address not defined, metrics & debug endpoints disabled  builds=0
runner_1_3cb60d519ae8 | [session_server].listen_address not defined, session endpoints disabled  builds=0
register-runner_1_941db09830b5 | Runtime platform                                    arch=amd64 os=linux pid=7 revision=61e7606f version=14.1.0~beta.182.g61e7606f
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
register-runner_1_941db09830b5 | Running in system-mode.                            
register-runner_1_941db09830b5 |                                                    
register-runner_1_941db09830b5 | Registering runner... succeeded                     runner=cjX3zQG_
register-runner_1_941db09830b5 | Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 
gitlab-ci_register-runner_1_941db09830b5 exited with code 0
dind_1_a0ef0b8a29e4 | .............................++++
dind_1_a0ef0b8a29e4 | ...........................................................................................................................................................++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
runner_1_3cb60d519ae8 | Configuration loaded                                builds=0
dind_1_a0ef0b8a29e4 | .......++++
dind_1_a0ef0b8a29e4 | .............++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Signature ok
dind_1_a0ef0b8a29e4 | subject=CN = docker:dind server
dind_1_a0ef0b8a29e4 | Getting CA Private Key
dind_1_a0ef0b8a29e4 | /certs/server/cert.pem: OK
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
dind_1_a0ef0b8a29e4 | ................................................................++++
dind_1_a0ef0b8a29e4 | ...................................................................................................................++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Signature ok
dind_1_a0ef0b8a29e4 | subject=CN = docker:dind client
dind_1_a0ef0b8a29e4 | Getting CA Private Key
dind_1_a0ef0b8a29e4 | /certs/client/cert.pem: OK
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.193084271Z" level=info msg="Starting up"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.194256426Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198038066Z" level=info msg="libcontainerd: started new containerd process" pid=53
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198088131Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198099583Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198127773Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198154133Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.211367108Z" level=info msg="starting containerd" revision=d71fcd7d8303cbf684402823e425e9dd2e99285d version=v1.4.6
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.236199982Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.236321650Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.240984040Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241236268Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.btrfs"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241270029Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.devmapper"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241295950Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241311248Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241375982Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241537098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241730231Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241748382Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241783028Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241794800Z" level=info msg="metadata content store policy set" policy=shared
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262730870Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262764458Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262808206Z" level=info msg="loading plugin "io.containerd.service.v1.introspection-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262843956Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262862664Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262875776Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262889612Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262902664Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262923290Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262946006Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262959263Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263113401Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263225410Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263552551Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263577871Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263616499Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263632647Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263648459Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263661513Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263674151Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263687079Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263699853Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263789801Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263807278Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263947799Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263971592Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263987446Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264001179Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264194211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264252887Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264299356Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264314178Z" level=info msg="containerd successfully booted in 0.053975s"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271530892Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271565662Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271592175Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271613087Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272302289Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272321745Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272351355Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272365620Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342826242Z" level=warning msg="Your kernel does not support swap memory limit"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342846686Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342999134Z" level=info msg="Loading containers: start."
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.417804617Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.465223597Z" level=info msg="Loading containers: done."
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.494192010Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.494297708Z" level=info msg="Daemon has completed initialization"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.559800797Z" level=info msg="API listen on /var/run/docker.sock"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.565866502Z" level=info msg="API listen on [::]:2376"
runner_1_3cb60d519ae8 | Checking for jobs... received                       job=870 repo_url=https://example.com/myrepo/myproject.git runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Job failed (system failure): Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  duration_s=9.004197071 job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Failed to process runner                   builds=0 error=Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s) executor=docker runner=A6qDsS-H

The critical lines being:

WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)

This setup is for GitLab CI, where I run GitLab Runner through docker-compose.

Here’s my docker-compose.yaml config, for what it’s worth:

version: '3'

services:

  dind:
    restart: always
    privileged: true
    volumes:
    - /var/lib/docker
    image: docker:17.09.0-ce-dind 
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false", "--storage-driver=overlay2"]

  runner:
    restart: always
    image: gitlab/gitlab-runner:alpine
    volumes:
    - ./gitlab/runner:/etc/gitlab-runner:Z
    - ./gitlab/runner/builds:/builds
    environment:
    - DOCKER_HOST=tcp://dind:2375
      
  register-runner:
    restart: 'no'
    image: gitlab/gitlab-runner:alpine
    volumes:
    - ./gitlab/runner:/etc/gitlab-runner:Z
    command:
    - register
    - --non-interactive
    - --locked=false
    - --name=mybox
    - --executor=docker
    - --docker-image=docker:19.03.12
    - --docker-privileged
    environment:
    - CI_SERVER_URL=http://example.com/
    - REGISTRATION_TOKEN=my-token-here

(careful, this won’t copy paste due to WP funking up the encoding)

Note, the docker image version used by dind is the most important part here. The docker image version used by register-runner doesn’t seem to matter.
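
Before burning a CI job on it, you can also check whether the runner container can see the daemon at all. A sketch, using the service names from the compose file above (alpine's built-in wget is enough here):

# hit the dind API from inside the runner container; getting JSON back
# means the daemon is up and reachable on the expected port
docker-compose exec runner wget -qO- http://dind:2375/version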

Prior to this I tried the very latest docker image, then docker:19.03.12 as per the official GitLab docs (at the time of writing). Fortunately I still had my ancient configs, which gave me the heads-up to try a much older version of Docker.

So it seems using the older docker version 'fixes' this. I don't know why, and I don't have the time (nor, really, the inclination) to investigate. If you're looking for a quick fix, hopefully this works for you. And if you do have the proper fix, please let me know via a comment.
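
One possible lead, for anyone digging further: the log above shows the daemon announcing API listen on [::]:2376, while the runner is pointed at tcp://dind:2375. Newer docker:dind images (roughly 19.03 onwards) generate TLS certificates and serve on 2376 by default, which would explain the mismatch. If that's the cause, explicitly disabling TLS should keep a newer daemon on plain-text 2375. An untested sketch:

# DOCKER_TLS_CERTDIR="" tells newer dind images to skip cert generation
# and listen on plain tcp://0.0.0.0:2375 instead of TLS on 2376
docker run --rm --privileged -e DOCKER_TLS_CERTDIR="" docker:19.03.12-dind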

The Baffling World Of RabbitMQ and Docker

Recently I decided to switch my entire Ansible-managed infrastructure (for one project) over to Docker – or specifically, docker-compose. Part of this setup needs RabbitMQ.

I had no trouble getting the official RabbitMQ image to pull and build. I could set a default username, password, and vhost. And all of this worked just fine – I could use this setup without issue.

However, as I am migrating an existing project, I already had a bunch of queues, exchanges, bindings, users… etc.

What I didn't want was some manual step where I have to remember to import the definitions.json file whenever building, or rebuilding, the environment.

Ok, so this seems a fairly common use case, I should imagine. But finding a solution wasn’t as easy as I expected. In hindsight, it’s quite logical, but then… hindsight 🙂

Please note that I am not advocating using any of this configuration. I am still very much in the learning phase, so use your own judgement.

Here is the relevant part of my docker-compose.yml file:

version: '2'

services:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Then, on my old / existing RabbitMQ server, from the "Overview" page of the management UI, I went to the "Import / export definitions" section (at the bottom of the page) and did a "Download broker definitions".
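
(The management plugin exposes the same thing over HTTP, if you prefer the command line. Adjust the host and credentials to your own setup:)

# equivalent of the "Download broker definitions" button
curl -u rabbit_mq_dev_user:yourpassword \
    http://localhost:15672/api/definitions > definitions.json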

This gives a JSON dump, which as it contains a bunch of sensitive information, I have doctored for display here:

{
  "rabbit_version": "3.6.8",
  "users": [
    {
      "name": "rabbit_mq_dev_user",
      "password_hash": "somepasswordhash+yQtnMlaK6Iba",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    { "name": "\/" }
  ],
  "permissions": [
    {
      "user": "rabbit_mq_dev_user",
      "vhost": "\/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [],
  "policies": [],
  "queues": [
    {
      "name": "a.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    },
    {
      "name": "b.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "router",
      "vhost": "\/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "router",
      "vhost": "\/",
      "destination": "a.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {}
    },
    {
      "source": "router",
      "vhost": "\/",
      "destination": "b.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {}
    }
  ]
}

You could, at this point, go into your Docker-ised RabbitMQ, repeat the process for "Import / export definitions", this time doing the "Upload broker definitions" step, and it should all work.
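
(Or, same idea via the management API; again, adjust credentials to suit:)

# equivalent of the "Upload broker definitions" button
curl -u rabbit_mq_dev_user:yourpassword \
    -H "Content-Type: application/json" \
    -X POST --data @definitions.json \
    http://localhost:15672/api/definitions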

The downside is, as mentioned above, that if you delete the volume (or go to a different PC) then unfortunately your queues etc. don't follow you. No good.

Now, my solution to this is not perfect. It is a static setup, which sucks. I would like to make this dynamic, but for now, what I have is good enough. Please do shout up if you know of a way to make this dynamic, without resorting to a bunch of shell scripts.

Ok, so I take the definitions.json file, and the other config file, rabbitmq.config, and I copy them into the RabbitMQ directory that contains my Dockerfile:

➜  symfony-docker git:(master) ✗ ls -la rabbitmq 
total 24
drwxrwxr-x  2 chris chris 4096 Mar 24 21:13 .
drwxrwxr-x 10 chris chris 4096 Mar 25 12:11 ..
-rw-rw-r--  1 chris chris 1827 Mar 25 12:38 definitions.json
-rw-rw-r--  1 chris chris  130 Mar 25 12:55 Dockerfile
-rw-rw-r--  1 chris chris   54 Mar 24 20:53 enabled_plugins
-rw-rw-r--  1 chris chris  122 Mar 25 12:55 rabbitmq.config

For completeness, the enabled_plugins file contents are simply:

[rabbitmq_management, rabbitmq_management_visualiser].

And the rabbitmq.config file is:

[
    {
        rabbitmq_management, [
            {load_definitions, "/etc/rabbitmq/definitions.json"}
        ]
    }
].

And the Dockerfile:

FROM rabbitmq:3.6.8-management

(yes, just that one line)

Now, to get these files actually used, it seems you need to override the existing files in the container. To do this, I added extra entries to the volumes section of the docker-compose config:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Note here the new volumes.

Ok, so down, rebuild, and up:

docker-compose down && docker-compose build && docker-compose up -d
➜  symfony-docker git:(master) ✗ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                      PORTS                                      NAMES
47c448ea31a2        composer                "/docker-entrypoin..."   30 seconds ago      Exited (0) 29 seconds ago                                              composer
f094719f6444        symfonydocker_nginx     "nginx -g 'daemon ..."   30 seconds ago      Up 29 seconds               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx
cfaf9b7328dd        symfonydocker_symfony   "docker-php-entryp..."   31 seconds ago      Up 30 seconds               9000/tcp                                   symfony
6b27600dc726        symfonydocker_rabbit1   "docker-entrypoint..."   31 seconds ago      Exited (1) 29 seconds ago                                              rabbit1
8301d6282c7d        symfonydocker_php       "docker-php-entryp..."   31 seconds ago      Up 29 seconds               9000/tcp                                   php
f6105aac9cfb        mysql                   "docker-entrypoint..."   31 seconds ago      Up 30 seconds               0.0.0.0:3306->3306/tcp                     mysql

The output is a bit messy, but the problem is that the RabbitMQ container has already exited, when it should still be running.

Viewing the logs for RabbitMQ at this stage is really easy, though a bit weird.

What I would like is to get RabbitMQ to write its log files out to my disk. But adding in a new volume doesn't solve this problem; the issue is that RabbitMQ writes its logs to tty by default. One step at a time (I don't have a solution to this just yet; I will add another blog post when I figure it out).

Anyway:

➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1                                                         
Attaching to rabbit1
rabbit1     | /usr/local/bin/docker-entrypoint.sh: line 278: /etc/rabbitmq/rabbitmq.config: Permission denied

Ok, bit odd.

Without going the long way round, the solution here is, as I said at the start, logical but not immediately obvious.

As best I understand this, the issue is that the provided RABBITMQ_DEFAULT_* environment variables now conflict with the user / pass combo in the definitions file: when those variables are set, the image's entrypoint script tries to rewrite the bind-mounted rabbitmq.config to apply them, and that write fails.
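
You can see the clash in isolation with just the image and the config file. An untested sketch; mounting the config read-only guarantees the rewrite fails:

# set the default-user variables while mounting rabbitmq.config read-only:
# the entrypoint tries to rewrite the config, and blows up
docker run --rm \
    -e RABBITMQ_DEFAULT_USER=rabbitmq \
    -e RABBITMQ_DEFAULT_PASS=rabbitmq \
    -v "$(pwd)/rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro" \
    rabbitmq:3.6.8-management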

Simply commenting out the environment variables fixes this:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          # RABBITMQ_DEFAULT_USER: "rabbitmq"
          # RABBITMQ_DEFAULT_PASS: "rabbitmq"
          # RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Another down, build, up…

➜  symfony-docker git:(master) ✗ docker-compose down && docker-compose build && docker-compose up -d --remove-orphans
Stopping nginx ... done
Stopping symfony ... done
Stopping php ... done
Stopping mysql ... done
Removing nginx ... done
Removing composer ... done
Removing symfony ... done
Removing rabbit1 ... done
Removing php ... done
# etc

And this time things look a lot better:

➜  symfony-docker git:(master) ✗ docker ps -a                                                                        
CONTAINER ID        IMAGE                   COMMAND                  CREATED              STATUS                          PORTS                                                                                        NAMES
91d59e754d1a        composer                "/docker-entrypoin..."   About a minute ago   Exited (0) About a minute ago                                                                                                composer
b7db79270773        symfonydocker_nginx     "nginx -g 'daemon ..."   About a minute ago   Up About a minute               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp                                                     nginx
bd74d10e444a        symfonydocker_rabbit1   "docker-entrypoint..."   About a minute ago   Up About a minute               4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp   rabbit1
720e91743aa6        symfonydocker_php       "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     php
ec69b7c038e9        symfonydocker_symfony   "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     symfony
717ed6cb180f        mysql                   "docker-entrypoint..."   About a minute ago   Up About a minute               0.0.0.0:3306->3306/tcp                                                                       mysql
➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1
Attaching to rabbit1
rabbit1     | 
rabbit1     | =INFO REPORT==== 25-Mar-2017::13:17:23 ===
rabbit1     | Starting RabbitMQ 3.6.8 on Erlang 19.3
rabbit1     | Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     | Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     | 
rabbit1     |               RabbitMQ 3.6.8. Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     |   ##  ##
rabbit1     |   ##########  Logs: tty
rabbit1     |   ######  ##        tty
rabbit1     |   ##########
rabbit1     |               Starting broker...
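
To double-check the definitions really did load, listing the queues should show the ones from the (doctored) JSON dump above:

# list queue names along with their durability
docker-compose exec rabbit1 rabbitmqctl list_queues name durable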

Hopefully that helps someone save a little time in the future.

Now, onto the logs issue… the fun never stops.