Docker In Production – Course Outline

This week I’ve been planning out the new Docker series. There’s a ton of stuff to cover, and I’m not sure whether to start out with the very basics, or dive right into the difficult stuff. One possibility is to split it into two (or more) courses, covering different levels of detail.

What I am planning on covering is how to deploy the sort of stack that a small site might need. This includes:

  • Symfony 3
  • MySQL
  • nginx
  • Redis
  • RabbitMQ
  • Graylog

Given that I love Symfony, it’s probably unsurprising that we are going to use Symfony 3 as the example back end. But in truth, you could swap this part out for anything you like. Hopefully, having worked through the problems we will encounter in getting a Symfony site online using Docker, other similar setups will need only minimal modification to the process.

Then if you go with this, you’d probably want a separate front end site, so:

  • WordPress
  • MySQL
  • nginx

The MySQL and nginx containers would be completely different from the Symfony stack ones.

That brings up another issue – sharing ports. You’re going to hit issues if you try to reuse port 80 for serving two different nginx containers. At least, I did.

The solution to this is to use a load balancer. So we will cover this too.
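To make the clash concrete, here’s a hypothetical docker-compose fragment (service names are invented for illustration): only one service can publish host port 80, so the second stack either binds a different host port, or both sit behind a load balancer that owns port 80 and routes by hostname.

```yaml
# Sketch only - service names are invented.
# Two containers cannot both publish host port 80:
services:
  symfony_nginx:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  wordpress_nginx:
    image: nginx
    ports:
      - "8081:80"   # a different host port avoids the clash
  # A load balancer would instead be the only service publishing
  # host port 80, proxying to the nginx containers by hostname.
```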

With a load balancer in place, we will add in Let's Encrypt to give us free SSL. This will cover both the Symfony stack and the WordPress stack.

But wait, there’s more.

We probably also want a separate front end stack, for the app that talks to our Symfony 3 API stack. So we will need to set this up too.

There’s a bunch of fun challenges in each one of these steps. And by fun I mean “will cost you hours of your life” kinds of fun. Even simple stuff that you likely take for granted can bite you – e.g. how might you run a cronjob in Docker-land?
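On the cron question, one common pattern – a sketch under assumptions, as your base image and schedule will differ – is a dedicated container that does nothing but run cron in the foreground:

```dockerfile
# Hypothetical sketch: a container whose only job is running cron.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
# "crontab" here is a file you provide in the build context
COPY crontab /etc/cron.d/app-cron
RUN chmod 0644 /etc/cron.d/app-cron && crontab /etc/cron.d/app-cron
# Run cron in the foreground so Docker keeps the container alive
CMD ["cron", "-f"]
```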

Lastly, all of these services would be fairly painful without some kind of deployment pipeline. Thankfully, GitLab has us covered here, so we will also need to add a GitLab instance into our stack. Again, this will be dockerised.

Oh, and managing this would be a total nightmare without some kind of container orchestration software. We will be using Rancher.

So, yeah. Plenty to cover 🙂

But as I say, plenty to cover – and that’s without mentioning the basics.

In other news this week (and some of last), there’s a really interesting read around the forthcoming release of Symfony 4 (start here: http://fabien.potencier.org/symfony4-compose-applications.html)

I really like the direction that Symfony 4 sounds like it’s taking. I cannot wait to try out Symfony Flex. I’m even going to get back to using Makefiles by the sound of things – though I can’t say I was in love with them the last time I had to use them. That was years ago, mind.

Video Update

This week saw four new, free videos added to the site.

These videos start here:

https://codereviewvideos.com/course/symfony-3-for-beginners/video/walking-through-the-initial-app

And are all part of the same course. This series is a slower paced introduction to Symfony 3, aiming to cover in a little more depth the big questions I had when first starting with Symfony in general.

Whilst we have covered how to create a simple Contact Form before on CodeReviewVideos, this time we are going to use it as the basis for a slightly more interesting series. Once we’ve built the contact form and had a play around with some extra things you might wish to do with it, we are going to move on and secure the contact form.

This will involve implementing some basic user management features – but crucially we are going to do so without resorting to FOSUserBundle.

Now, don’t get me wrong here. I think FOSUserBundle is fantastic and am extremely grateful for its existence. But you may have jumped in and added FOSUserBundle without ever taking the time to figure out how to implement login and registration for yourself. And so this series will aim to cover this.

Ok, that just about covers it from me this week.

Thanks to everyone who has been in touch since last week. I truly appreciate your feedback.

Until next week, have a great Easter Weekend, and happy coding.

Chris

Docker In Production: It’s a Go!

I want to start by saying thank you to everyone who got in touch regarding last week’s post on learning how to deploy with Docker. Judging from the feedback I received, I’m not the only one who has found deploying docker to be difficult.

Last week I mentioned how I was 11 days into my docker deployment exercise, and at that point the end did not seem to be in sight. Thankfully this week, although now at 17 days (at the time of writing), I can say “GREAT SUCCESS!”

The truth is, it has been an absolute pain to get this far. The short version of events is that I wanted to deploy a fairly standard Symfony 3 stack:

  • Symfony 3 JSON API
  • MySQL database
  • nginx web server
  • Let's Encrypt for lovely free SSL
  • RabbitMQ for queuing
  • Node w/ PM2 for queue workers

And I hit upon a fair few problems doing this with Ansible.

Now, don’t get me wrong: Ansible is fantastic.

There is – however – a major stumbling block with Ansible. If you make a mistake, reverting that mistake can be quite painful. My solution to this has often been to destroy the server, and simply re-run the Ansible playbook(s) until a working stack is back up and running.

But here’s the thing: This is terrible in production.

“No kidding, Chris” – everyone, everywhere.

I get this. It’s not a solution. The alternative is to jump onto the box, and start hacking at the config until you get a working system again. Only now you have a problem – you need to ensure your Ansible playbook / role setup will reliably reproduce this exact same setup. And that can be really, really tricky.

Pretty much the only way to guarantee you have nailed this is to, ahem, wipe the box and re-run the playbook.

See, in development you have a nice easy life, for the most part. If things go wrong, you can flatten everything, and start again. It costs you time, and time (as they say) is money. But overall, the cost is cheap.

As part of your development process you can add in extra pieces to your stack without much concern. Need RabbitMQ? There’s a role for that. Need Redis? There’s a role for that. Need a firewall? Sure, there’s a role for that – but do you really want to bother with a firewall in development?

And as a firewall in development seems a little overkill, I am often sorely tempted to skip this step. Hey, I have a lot on my plate and if I don’t need it right now, then I put it off.

In a similar vein, SSL in development is a total luxury. Without a public DNS record I can’t do any Let's Encrypt magic. I could use a self-signed certificate (there’s very likely an Ansible role for that, too), but what’s the point? If I use a self-signed cert in dev, but Let's Encrypt in prod, now I have two different environments. Actually, using Let's Encrypt in prod and not using it in dev is still two different environments, but somehow that feels … better?
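For reference, the self-signed route really is a one-liner (file names, validity, and the CN below are placeholders):

```shell
# Generate a throwaway self-signed certificate for dev use only.
# Key/cert file names, validity period, and the CN are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev.key -out dev.crt -days 30 \
  -subj "/CN=localhost"
```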

Ultimately I have always ended up with two varying environments. Dev is a stripped down, and subtly different version of prod.

Anyway, back to my deployment woes. The straw that broke the camel’s back for me was in deploying the Node queue workers. The queue workers are fairly simple Node JS scripts. The gist of it is that they are long running processes that get given messages (think: JSON data) from RabbitMQ. They parse the JSON, and take some appropriate action.

As ever with software, things inevitably go wrong. When things go wrong with Node, it has the tendency to throw its hands up in the air and give up. My scripts behave like a pop star with a sore throat. PM2 is a process manager designed to sit and watch these processes on a 24/7 basis, and pro-actively restart any that die, for any reason. To continue the shaky analogy, PM2 is the venue manager, with two burly thugs waiting to duff up said pop star if the awaiting concert fans aren’t promptly entertained.
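To give a flavour of PM2, a process file might look something like this (the app name and script path are invented), which you would then launch with `pm2 start processes.json`:

```json
{
  "apps": [
    {
      "name": "queue-worker",
      "script": "./worker.js",
      "instances": 2,
      "autorestart": true
    }
  ]
}
```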

Before PM2 could do its thing, I needed a way to reliably get it installed on my servers. You guessed it – there’s an Ansible role for that.

Each role adds (yet another) layer of abstraction around whatever particular task you are trying to achieve. This layer means an extra bunch of potential steps that may or may not be causing any problems you might encounter. In my case, my problem was that whilst my code would deploy, and PM2 would manage it, it was always an out-of-date version.

To try and fix this, I started using Ansistrano (hey, another Ansible role!) to manage the deployment of my code, which triggered PM2 to hopefully reload my latest code changes… except it didn’t.

Somewhere along the way, I’d become many layers deep into a problem that I’d used all these tools to try and avoid.

In hindsight, I reckon if I’d stuck with the problem for the past two and a bit weeks, I would likely have found a solution. But the truth is, my confidence in my deployment process was at a low. With no traffic to the site, it wasn’t hurting any end users (thank the Lord!), but I could quite easily picture a future point where I had either deployed and made a mess, or worse, become so scared of deployment that I had essentially stopped new development to avoid the issue.

I know I’m not alone in this. In fact, I’ve worked with numerous organisations of many sizes that suffer from this very problem.

Staring me in the face, right there on the PM2 web site, was the section on “Docker Integration”.

Again, in hindsight, it probably would have made more sense to start small and Dockerise just the Node Script / PM2 deployment.

But that’s not what I did. No siree. Instead, I decided – to heck with this – let’s go the whole hog and move this entire stack to Docker.

It’s OK, we have hot failover

So yeah, it’s been a long, hard two (and a bit) weeks.

But I’m there. Well, 95% of the way there. See, very late yesterday evening it suddenly dawned on me that I’d neglected to include the WordPress blog which I’m using on the site root. Symfony is brilliant, don’t get me wrong, but use the right tool for the job. WordPress might get a lot of stick for its code, but the product that the end user sees is brilliant.

Putting WordPress into Docker should be easy right? There’s tons of tutorials on this. Heck, even the official Docker Compose tutorial has a guide on how to do just this.

Actually though, it’s not easy. Sure, it’s different to using Ansible. Docker gets you up and running faster, there is no denying that. It also uses a whole bunch less memory and disk space than having to provision a bunch of virtual machines. But really, it’s just a different set of difficult problems.

Taking WordPress as the example: where do you store user uploads? What about new plugins?

These are things you don’t give much of a second thought to in a ‘traditional’ deployment. In Docker, these are head-scratchers. You’re discouraged from using volumes in production, but without them you can’t persist any data…
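Despite that caveat, the usual sketch you will see for this uses named volumes, so uploads and the database survive container rebuilds (service and volume names here are invented, and I’m not claiming this is the final answer):

```yaml
# Sketch: named volumes persist data across container rebuilds.
services:
  wordpress:
    image: wordpress
    volumes:
      - wp_content:/var/www/html/wp-content   # uploads, plugins, themes
  mysql:
    image: mysql:5.7
    volumes:
      - wp_db:/var/lib/mysql                  # database files
volumes:
  wp_content:
  wp_db:
```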

Almost every tutorial I have found completely glosses over these details :/

Anyway, after all this I do have some working solutions to these problems which I am going to share with you. From the plan I’ve made for this series it could become sprawling. I don’t want this to be the case. I’d rather keep it concise, but the truth is there is an absolute ton of stuff that you need to know to actually work with Docker.

This series is going to have a particular focus on deploying Symfony with Docker. We will also cover Rancher, a tool for managing your containers in production. In my opinion, after deploying Symfony with Docker you will have encountered a whole bunch of real world problems that make working with Docker on pretty much any other code base a lot easier.

I’d still love to hear from you if you are working with Docker in any way – whether just tinkering with it; working with it only in development; or have fully embraced Docker and put it into production. I have plenty more war stories to share, and whilst I can’t promise to have answers to your questions, if I can help in any way, I will do my best to try.

Video Update

This week there have been three new videos added to the site:

https://codereviewvideos.com/course/symfony-workflow-component-tutorial/video/new-in-symfony-3-3-workflow-guard-expressions

Towards the end of recording the Symfony Workflow Component Tutorial series I spotted an upcoming feature in Symfony 3.3, which is Workflow Guard Expressions.

As a quick recap, a Guard is a way of blocking a Transition. A Transition is the step that objects going through your workflow must pass through to get from place to place.

Earlier in the series we covered creating separate Symfony Services to ‘contain’ your Guard logic.

With Guard Expressions you may be able to replace these standalone services with simple, human readable one-liners. It won’t always be the solution, but I’ve found them incredibly useful for two reasons:

  • They cut down on code (less code, fewer bugs)
  • They put your guards right inside your workflow definitions (easier to understand at a glance)
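For illustration, a guard expression in a workflow definition might look something like this (the workflow, place, and transition names are invented):

```yaml
# app/config/config.yml (sketch) - the expression on "guard" replaces
# a standalone guard service; all names here are invented.
framework:
    workflows:
        blog_publishing:
            transitions:
                publish:
                    from: draft
                    to: published
                    guard: "is_granted('ROLE_EDITOR')"
```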

They are cool, and useful, and if you’re using the Symfony Workflow Component then I encourage you to check them out.

https://codereviewvideos.com/course/how-to-import-a-csv-in-symfony/video/setup-and-manual-implementation

and

https://codereviewvideos.com/course/how-to-import-a-csv-in-symfony/video/importing-csv-data-the-easy-way

In this two part series we cover a frequently requested concept – working with CSV data inside a Symfony application.

This series was created in response to a question from a site member regarding how to import CSV data, and turn it into related Doctrine entities.

When thought about as one process – importing and converting – it can be quite overwhelming. But if you break this process down into two tasks:

  • reading a CSV file;
  • converting each row into related Doctrine entities

Then it becomes a lot easier.

The approach given in these videos is not intended for production use. You will need to expand on these concepts to fit your own needs. The aim here is to cover the high level, rather than to say “this is the implementation you should be using!” – an approach I disagree with. Every project is unique and has its own requirements. Use this as a possible source of guidance.

All of this week’s videos are free, and have also been put up on YouTube.

Thanks for reading, and have a great weekend.

Until next week, happy coding!

Chris

Would You Like To Learn How To Deploy With Docker?

At the time of writing, I’m now 11 days into my journey into Docker-ising my workflow.

The idea here is to build a continuous integration / continuous delivery pipeline. It sounds horrendously enterprise, but in reality it’s all about removing the friction involved in getting working code into production as easily as possible.

As a quick re-cap, what I’m trying to achieve is roughly:

  • Develop locally using Docker;
  • Commit code, and push up to GitLab;
  • For each commit, GitLab will perform a checkout of my code, build up the environment (using Docker), and run my automated test suite;
  • If the tests pass, then the Docker image should be stored into my private GitLab Container Registry (or to you and me, store the Docker image on my private GitLab server, rather than publicly e.g. on dockerhub);
  • Assuming I am happy with everything, I can then pull this image to my production environment and replace the running container with the new one.
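The steps above might translate into a .gitlab-ci.yml along these lines (a sketch, not my final pipeline – the image names and test command are assumptions, while the `CI_*` variables are ones GitLab provides):

```yaml
# Hypothetical sketch of the pipeline described above.
stages:
  - test
  - build

test:
  stage: test
  image: php:7.1
  script:
    - ./vendor/bin/phpunit        # whatever your test runner is

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind                 # Docker-in-Docker for image builds
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```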

For this last step I’m looking at using Rancher.

Honestly, even just writing that out makes it sound tricky.

In practice, it’s been… a bloomin’ nightmare.

As I say, I’m 11 days into this. 11 days that is, without writing any code on the project that actually matters.

I genuinely believe this entire process will be worth it. Once it all works – and believe me, I will make this work even if it dangerously shortens my life expectancy – then I can roll this out not just to my current project, but to every project. And not just projects that use PHP either.

And that’s a huge win for me. That’s why I’m persisting with it.

I asked last week if anyone had any experience with this. Somewhat unsurprisingly, I got zero replies 🙂

Now, this may be because no one reads what I write. It’s highly probable. However, MailChimp gives me stats and the stats say that at least some of you do read these emails 🙂 Thanks!

Anyway, my takeaway from this is that if no one replied, either it’s because no one has experience doing this (and judging by how much pain I’m going through, I don’t blame you), or maybe it’s just not interesting to you.

I’d be up for doing a series on this for CodeReviewVideos once I have a working setup. It should shave a ton of time off the process if you are interested, but don’t have the time / desire to invest 11+ days of your life into a similar setup.

I would be extremely grateful if you could reply and let me know if this is a video tutorial series that would interest you.

I hope that in next week’s email I can say that I have a working solution.

Video Update

This week there have been three new videos added to the site.

I’m not going to link to each of the individual videos in this instance, but rather only to the first as they follow on logically from each other:

https://codereviewvideos.com/course/react-redux-and-redux-saga-with-symfony-3/video/registration-part-1

In this final part of the “React, Redux, and Redux Saga With Symfony 3” series, we wrap up by adding in Registration.

Previously when dealing with registration I have felt a little overwhelmed. From a very high level, it’s a form like any other.

But in reality it’s one of the most important forms on your site. It’s a core piece of the first impression a (potential) customer will have. You need to get it right.

Fortunately though, we’ve already done the vast majority of the hard work.

We have a set of re-usable components we can call upon, such as the repeated password entry field, and our styled form field.

We have a set process to follow, where we create a container, then a component to hold our form. We create our functions and pass them in, which will dispatch the right messages when interesting things happen. It’s all stuff we’ve covered throughout this series.

Finally we cover how to work with both the happy path, and the unhappy paths. We cover how to display helpful form errors to guide the user through as and when things go wrong.

And at the end of all this we have a working registration system that’s useful regardless of the type of app you’re creating. Without too much extra effort this system can be expanded upon to add in a payment processor – such as Stripe. I know, as this is exactly the same setup that’s being used in the forthcoming update of CodeReviewVideos 🙂

Blog Update

This week I blogged about getting RabbitMQ working with Docker:

The Baffling World Of RabbitMQ and Docker

That’s it from me this week. Until next week, happy coding 🙂

Chris

The Baffling World Of RabbitMQ and Docker

Recently I decided to switch my entire Ansible-managed infrastructure (for one project) over to Docker – or specifically, docker-compose. Part of this setup needs RabbitMQ.

I had no trouble getting the official RabbitMQ image to pull and build. I could set a default username, password, and vhost. And all of this worked just fine – I could use this setup without issue.

However, as I am migrating an existing project, I already had a bunch of queues, exchanges, bindings, users… etc.

What I didn’t want is to have some manual step where I have to remember to import the definitions.json file whenever building – or rebuilding – the environment.

Ok, so this seems a fairly common use case, I should imagine. But finding a solution wasn’t as easy as I expected. In hindsight, it’s quite logical, but then… hindsight 🙂

Please note that I am not advocating using any of this configuration. I am still very much in the learning phase, so use your own judgement.

Here is the relevant part of my docker-compose.yml file:

version: '2'

services:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Then I went to my old / existing RabbitMQ server and from the “Overview” page, I went to the “Import / export definitions” section (at the bottom of the page), and did a “Download broker definitions”.

This gives a JSON dump, which as it contains a bunch of sensitive information, I have doctored for display here:

{
  "rabbit_version": "3.6.8",
  "users": [
    {
      "name": "rabbit_mq_dev_user",
      "password_hash": "somepasswordhash+yQtnMlaK6Iba",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "\/"
    }
  ],
  "permissions": [
    {
      "user": "rabbit_mq_dev_user",
      "vhost": "\/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [
    
  ],
  "policies": [
    
  ],
  "queues": [
    {
      "name": "a.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        
      }
    },
    {
      "name": "b.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        
      }
    }
  ],
  "exchanges": [
    {
      "name": "router",
      "vhost": "\/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {
        
      }
    }
  ],
  "bindings": [
    {
      "source": "router",
      "vhost": "\/",
      "destination": "a.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {
        
      }
    },
    {
      "source": "router",
      "vhost": "\/",
      "destination": "b.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {
        
      }
    }
  ]
}

You could – at this point – go into your Docker-ised RabbitMQ, and repeat the process for “Import / export definitions”, do the “upload broker definitions” step and it should all work.

The downside is – as mentioned above – if you delete the volume (or go to a different PC) then unfortunately, your queues etc don’t follow you. No good.

Now, my solution to this is not perfect. It is a static setup, which sucks. I would like to make this dynamic, but for now, what I have is good enough. Please do shout up if you know of a way to make this dynamic, without resorting to a bunch of shell scripts.

Ok, so I take the definitions.json file, and the other config file, rabbitmq.config, and I copy them into the RabbitMQ directory that contains my Dockerfile:

➜  symfony-docker git:(master) ✗ ls -la rabbitmq 
total 24
drwxrwxr-x  2 chris chris 4096 Mar 24 21:13 .
drwxrwxr-x 10 chris chris 4096 Mar 25 12:11 ..
-rw-rw-r--  1 chris chris 1827 Mar 25 12:38 definitions.json
-rw-rw-r--  1 chris chris  130 Mar 25 12:55 Dockerfile
-rw-rw-r--  1 chris chris   54 Mar 24 20:53 enabled_plugins
-rw-rw-r--  1 chris chris  122 Mar 25 12:55 rabbitmq.config

For completeness, the enabled_plugins file contents are simply:

[rabbitmq_management, rabbitmq_management_visualiser].

And the rabbitmq.config file is:

[
    {
        rabbitmq_management, [
            {load_definitions, "/etc/rabbitmq/definitions.json"}
        ]
    }
].

And the Dockerfile:

FROM rabbitmq:3.6.8-management

(yes, just that one line)

Now, to get these files to work, it seems you need to override the existing files in the container. To do this, I used additional config in the docker-compose volumes section:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Note here the new volumes.

Ok, so down, rebuild, and up:

docker-compose down && docker-compose build && docker-compose up -d
➜  symfony-docker git:(master) ✗ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                      PORTS                                      NAMES
47c448ea31a2        composer                "/docker-entrypoin..."   30 seconds ago      Exited (0) 29 seconds ago                                              composer
f094719f6444        symfonydocker_nginx     "nginx -g 'daemon ..."   30 seconds ago      Up 29 seconds               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx
cfaf9b7328dd        symfonydocker_symfony   "docker-php-entryp..."   31 seconds ago      Up 30 seconds               9000/tcp                                   symfony
6b27600dc726        symfonydocker_rabbit1   "docker-entrypoint..."   31 seconds ago      Exited (1) 29 seconds ago                                              rabbit1
8301d6282c7d        symfonydocker_php       "docker-php-entryp..."   31 seconds ago      Up 29 seconds               9000/tcp                                   php
f6105aac9cfb        mysql                   "docker-entrypoint..."   31 seconds ago      Up 30 seconds               0.0.0.0:3306->3306/tcp                     mysql

The output is a bit messy, but the problem is that the RabbitMQ container has already exited, when it should still be running.

To view the logs for RabbitMQ at this stage is really easy – though a bit weird.

What I would like to do is to get RabbitMQ to write its log files out to my disk. But adding in a new volume isn’t solving this problem – one step at a time (I don’t have a solution to this issue just yet, I will add another blog post when I figure this out). The issue is that RabbitMQ is writing its logs to tty by default.

Anyway:

➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1                                                         
Attaching to rabbit1
rabbit1     | /usr/local/bin/docker-entrypoint.sh: line 278: /etc/rabbitmq/rabbitmq.config: Permission denied

Ok, bit odd.

Without going the long way round, the solution here is – as I said at the start – logical, but not immediately obvious.

As best I understand this, the issue is the provided environment variables now conflict with the user / pass combo in the definitions file.

Simply commenting out the environment variables fixes this:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          # RABBITMQ_DEFAULT_USER: "rabbitmq"
          # RABBITMQ_DEFAULT_PASS: "rabbitmq"
          # RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Another down, build, up…

➜  symfony-docker git:(master) ✗ docker-compose down && docker-compose build && docker-compose up -d --remove-orphans
Stopping nginx ... done
Stopping symfony ... done
Stopping php ... done
Stopping mysql ... done
Removing nginx ... done
Removing composer ... done
Removing symfony ... done
Removing rabbit1 ... done
Removing php ... done
# etc

And this time things look a lot better:

➜  symfony-docker git:(master) ✗ docker ps -a                                                                        
CONTAINER ID        IMAGE                   COMMAND                  CREATED              STATUS                          PORTS                                                                                        NAMES
91d59e754d1a        composer                "/docker-entrypoin..."   About a minute ago   Exited (0) About a minute ago                                                                                                composer
b7db79270773        symfonydocker_nginx     "nginx -g 'daemon ..."   About a minute ago   Up About a minute               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp                                                     nginx
bd74d10e444a        symfonydocker_rabbit1   "docker-entrypoint..."   About a minute ago   Up About a minute               4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp   rabbit1
720e91743aa6        symfonydocker_php       "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     php
ec69b7c038e9        symfonydocker_symfony   "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     symfony
717ed6cb180f        mysql                   "docker-entrypoint..."   About a minute ago   Up About a minute               0.0.0.0:3306->3306/tcp                                                                       mysql
➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1
Attaching to rabbit1
rabbit1     | 
rabbit1     | =INFO REPORT==== 25-Mar-2017::13:17:23 ===
rabbit1     | Starting RabbitMQ 3.6.8 on Erlang 19.3
rabbit1     | Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     | Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     | 
rabbit1     |               RabbitMQ 3.6.8. Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     |   ##  ##
rabbit1     |   ##########  Logs: tty
rabbit1     |   ######  ##        tty
rabbit1     |   ##########
rabbit1     |               Starting broker...

Hopefully that helps someone save a little time in the future.

Now, onto the logs issue… the fun never stops.