Multiple Environments With Docker Compose
At this stage we have all the knowledge required to quickly get up and running with Docker.
There are always more things to learn about Docker, both broad strokes and deep dives. But my personal approach is to get enough knowledge to start confidently playing with a technology, and let the real world dictate what I need to learn next.
A big part of using Docker in the real world - in my personal experience - is using it across different environments.
I always have my Dockerised development environment.
To begin with, all of this config lives inside docker-compose.yml.
But once I get serious about a project, I often need at least two, and usually several, different configurations, depending on the environment and project requirements.
Often the ports I use in development won't be those I use in staging, or production.
Likewise, volumes may have different paths on the underlying disk, images may use different tags, and so on.
Fortunately, Docker Compose caters to this need.
As a heads up, what we will end up with is something like:
docker-compose.yml
docker-compose.dev.yml
docker-compose.staging.yml
docker-compose.ci.yml
And so on.
I typically don't have a configuration for production. This is because my production configurations live elsewhere, and will be covered in a different tutorial. I use Rancher in production, which is currently undergoing a dramatic change between 1.0 and 2.0 releases, so as of the time of recording, I have had to hold back on releasing that course.
What Lives In Each Environment?
Given that I start with a docker-compose.yml file, and I use this in my development environment, why then do I have a docker-compose.dev.yml file in the list above?
As my environments grow, I move the specifics of the dev environment out of the generic docker-compose.yml file, and into a dedicated environment file.
I can then run a command like:
docker-compose \
-f docker-compose.yml \
-f docker-compose.dev.yml \
up -d
Multiple -f files can be used to provide configuration for the current command execution. Later files override and extend earlier ones, so the order matters.
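To sanity-check what a given combination of files produces, the docker-compose config command prints the fully merged and resolved configuration without starting anything:

```shell
docker-compose \
    -f docker-compose.yml \
    -f docker-compose.dev.yml \
    config
```

This is a handy first step when an environment isn't behaving as expected: the output shows exactly which values won the merge.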
By extracting out e.g. the ports information, we could do the following:
# ./docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7.20
    hostname: db
  nginx:
    image: docker.io/codereviewvideos/nginx.symfony.dev
    hostname: nginx
    depends_on:
      - php
This is our base configuration.
All environments will use these bits of config.
Our development environment (docker-compose.dev.yml) would expand on this further:
# ./docker-compose.dev.yml
version: '3'
services:
  db:
    volumes:
      - "./volumes/mysql_dev:/var/lib/mysql"
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=db_dev
      - MYSQL_USER=dbuser
      - MYSQL_PASSWORD=dbpassword
    ports:
      - 33061:3306
  nginx:
    volumes:
      - "./volumes/nginx/logs:/var/log/nginx/"
      - "./:/var/www/dev"
    ports:
      - 81:80
All of this configuration is specific to our development environment.
We may have multiple Docker environments running on our machine concurrently, and we can customise the ports config to ensure we can access each one independently, without every nginx instance vying for port 80.
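Conceptually, the way Compose combines multiple -f files behaves much like a recursive dictionary merge: mappings merge key by key, and the later file wins on conflicts. Here is a simplified sketch in Python (my own approximation, not Compose's actual implementation; the real rules are more nuanced, e.g. some list options are concatenated rather than replaced):

```python
def merge(base, override):
    """Recursively merge two Compose-style mappings.

    Simplified model: nested dicts merge key by key, and any other
    value (scalar or list) in the override replaces the base value.
    """
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    return override


# Mirrors the base file and the dev override from the examples above
base = {"services": {"db": {"image": "mysql:5.7.20", "hostname": "db"}}}
dev = {"services": {"db": {"ports": ["33061:3306"]}}}

# The dev file adds ports without repeating image or hostname
print(merge(base, dev))
```

The key takeaway is that an environment file only needs to declare what differs; everything else is inherited from the base file.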
Likewise, we may have an acceptance test environment config that runs on the CI server (or CI runner instance):
# ./docker-compose.acceptance.yml
version: '3'
services:
  db:
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=db_acceptance
      - MYSQL_USER=dbuser
      - MYSQL_PASSWORD=dbpassword
  nginx:
    ports:
      - 80:80
We wouldn't need to connect to the database via a tool such as SequelPro, or similar, as this environment is largely automated.
Likewise, we don't need to keep data around longer than the execution of our test process, so there are no volumes for persistent storage. The job runs, the containers are deleted, and there's no mess.
All of these options, and more, become available to you when using Docker in this way.
A Makefile To Make Your Life Easier
We have already touched on using a Makefile in your Docker projects to make working with your project that much easier. I typically augment my Makefile with targets for multiple environments:
docker_build:
	@docker build \
		--build-arg WORK_DIR=/var/www/dev/ \
		-t docker.io/codereviewvideos/symfony.dev .

docker_push:
	@docker push docker.io/codereviewvideos/symfony.dev

bp: docker_build docker_push

dev:
	@docker-compose down && \
	docker-compose build --pull --no-cache && \
	docker-compose \
		-f docker-compose.yml \
		-f docker-compose.dev.yml \
		up -d --remove-orphans

acceptance:
	@docker-compose down && \
	docker-compose build --pull --no-cache && \
	docker-compose \
		-f docker-compose.yml \
		-f docker-compose.acceptance.yml \
		up -d --remove-orphans
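The same pattern extends to tearing environments down. A sketch of what that might look like (the target names here are my own; note that down -v additionally removes any volumes the stack created, which suits the throwaway acceptance environment):

```makefile
dev_down:
	@docker-compose \
		-f docker-compose.yml \
		-f docker-compose.dev.yml \
		down --remove-orphans

acceptance_down:
	@docker-compose \
		-f docker-compose.yml \
		-f docker-compose.acceptance.yml \
		down -v --remove-orphans
```

Passing the same -f file list to down as to up ensures Compose resolves the same project and service definitions when cleaning up.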
And now things are as simple as running make dev to get a development environment up and running, or my CI runner can execute make acceptance to spin up that stack, and so on.
End of Regulation Play
And that's basically it for how I use Docker outside of production.
There are more areas to explore - networking, advanced volume usage, reducing image sizes, and so on.
However, this is - in my opinion - everything you need to know to get up and running with Docker.
I use Docker every day. It has been a dramatic shift in the way I develop and deploy software.
I am greatly looking forward to sharing with you my means of deploying Docker to Production, along with GitLab CI integration. This course will follow as soon as Rancher 2.0 becomes stable.