Docker Images vs Docker Containers
There are two very common terms you will hear when working with Docker:
Images, and Containers.
The confusing part is that they are very similar concepts, but they are different.
One way of thinking about Docker Images vs Containers is to think about blueprints, and the outcome of constructing something from that blueprint.
This is a typical metaphor given in programming examples, where a class is considered the blueprint (or Docker Image), and an object instantiated from that class is the thing we actually work with (or Docker Container).
What Is A Docker Image?
Docker images are the output of building a Dockerfile. I think of Dockerfiles as being the code that represents the system we want to build. In some ways they are similar to Ansible playbooks.
We haven't yet seen a Dockerfile, so for reference, here is what one might look like:
FROM nginx:stable

COPY ./conf.d/upstream.conf /etc/nginx/conf.d/

VOLUME /etc/nginx/conf.d/
VOLUME /var/log/nginx/
This is the Dockerfile I used as the basis to create the Docker Image for the nginx service we saw in the docker-compose.yml file used in the first video in this series. We will cover this process in more depth in a few videos' time.
When we run a docker build command against this Dockerfile, a Docker Image will be created. This Docker Image is made up of a series of layers, where each layer represents a single instruction from the Dockerfile.
In this case our Docker image would consist of at least 4 layers.
Each layer is one single change in isolation.
You may wonder what, exactly, is being changed. The answer is simple: the preceding layer.
It's a little like working on a project that uses git. We typically do small chunks of work, staging our changes and then committing those changes to the project's repository. In this way our projects are built up by many isolated changes that each build upon their previous commits, and together form the current state of the project.
Because Docker views each layer as an isolated change, we get the benefit of caching. This is particularly useful when building larger Dockerfiles, as making changes to the Dockerfile and then re-running docker build means that only the changed layers (and anything after that change) need to be rebuilt. This can be a particularly helpful time saver on large images, such as the one we will build for our base PHP7 image.
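To sketch how this caching pays off, consider a hypothetical Dockerfile (the paths and packages here are illustrative assumptions, not the image we build later in the series). By putting the rarely-changing instructions first, an edit to your site content only forces the final layer to rebuild:

```Dockerfile
# Hypothetical example - the paths and packages are assumptions for illustration.
FROM nginx:stable

# Rarely changes: this layer is cached after the first build.
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Changes occasionally: only rebuilt when your nginx config changes.
COPY ./conf.d/ /etc/nginx/conf.d/

# Changes constantly: editing site content only rebuilds from here down.
COPY ./public/ /usr/share/nginx/html/
```

If instead the COPY of your site content came first, every content change would invalidate the cache for everything below it, including the slow apt-get layer.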
The resulting Docker image file in itself does very little. To make it useful we must run the image, which produces a Docker Container.
Going back to our earlier analogy, if our Dockerfile is our service's code, then the Docker Image is the compiled version of our code. It's similar to a Windows .exe file in that it's ready to use, but we haven't started using it just yet.
What Is A Docker Container?
A Docker Container is a Docker Image with one extra layer.
This extra layer is called "the container layer", and it is writable. In other words, when we run a Docker Container, we can write new files to the container, modify and delete existing files, and the container remembers these changes.
However - and this is the big gotcha - these changes only affect that Docker Container.
They do not affect the Docker Image from which the container was created. And they do not affect other Docker Containers created from the same Image.
Again, if we revert to the instanceof analogy then we can do something like:
$one = new Thing();
$two = new Thing();

$one->setName('bob');

echo $one->getName(); // outputs: 'bob'
echo $two->getName(); // does not output 'bob'
If you need to keep these changes around (and you often do) then you will need a Docker Volume. We will get to Docker Volumes shortly.
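We can see this same isolation in Docker itself. Here is a quick sketch, assuming Docker is installed and the nginx:stable image is available locally; the container names "writer" and "reader" are made up for this example:

```shell
# Start two detached containers from the same image.
docker run -d --rm --name writer nginx:stable
docker run -d --rm --name reader nginx:stable

# Write a file into the first container's writable container layer.
docker exec writer sh -c 'echo hello > /tmp/note.txt'

# The file exists in "writer"...
docker exec writer cat /tmp/note.txt

# ...but not in "reader", and not in the image itself.
docker exec reader cat /tmp/note.txt   # errors: No such file or directory

# Clean up - because of --rm, stopping also removes the containers.
docker stop writer reader
```

The write only ever touched the "writer" container's own layer, exactly like $one->setName('bob') only affecting $one.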
The really nice thing about Docker Containers is that they are very lightweight. Unlike a typical Virtual Machine (e.g. one created using VirtualBox) there is no need for a full operating system, only the essential libraries and settings, and nothing more.
What this means is that a Docker Container can start up in just a few seconds, whereas a typical Virtual Machine may take tens of seconds to several minutes to boot.
This speed boost is nice during development; having just one command get your entire dev stack up and running certainly feels productive.
Where this speed boost truly shines in my experience is in the test environment. Particularly within your Continuous Integration pipeline. We will cover this in much greater detail later in this series.
Docker Image Example
We're going to start off by running a docker build command against a brand new Dockerfile of our own creation.
Start by making a new directory and changing directory into that new directory:
mkdir /tmp/docker-nginx-test && cd $_
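As an aside, $_ is a Bash special parameter that expands to the last argument of the previous command, which is why this one-liner drops you straight into the directory you just made. A quick sketch:

```shell
# $_ holds the last argument of the previous command (a Bash feature),
# so this creates the directory and then changes into it in one step.
mkdir -p /tmp/docker-nginx-test && cd $_

# Confirm where we ended up.
pwd   # prints /tmp/docker-nginx-test
```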
Then create your Dockerfile (feel free to use a different text editor):

vim Dockerfile

Inside the Dockerfile, add the following:

FROM nginx:stable
Save and exit, which on vim means: esc, then :wq.
This Dockerfile isn't super useful. When we do a docker build against this file, Docker is going to take the official nginx image and ... build it :) We might as well have just used the nginx image directly in this case. But worry not, we will expand on this momentarily.
docker build /tmp/docker-nginx-test
If everything goes to plan then you should see something like:
$ docker build /tmp/docker-nginx-test
Sending build context to Docker daemon  2.048kB
Step 1/1 : FROM nginx:stable
 ---> 0346349a1a64
Successfully built 0346349a1a64
The docker build command at the bare minimum needs one argument: the path.

In this case our path is the full path to our directory:

docker build /tmp/docker-nginx-test

However, as we learned in the previous video, we can refer to the current directory with a single period. Therefore it is far more common to see this command written as:
docker build .
Unusual yes, but not altogether odd when you know the convention.
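One small, optional addition worth knowing about here: the -t flag to docker build tags the resulting image with a human-friendly name, saving you from copying hashes around. The name "docker-nginx-test" is just made up for this example:

```shell
# Tag the built image as "docker-nginx-test" instead of only getting a hash.
docker build -t docker-nginx-test .

# The image can now be referenced by name in other commands.
docker image history docker-nginx-test
```

Without -t, the built image shows up as &lt;none&gt; in the docker images listing and can only be referenced by its hash.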
One of the most confusing things I found when first working with Docker is that finding the contents of the underlying Docker image is often not easy.
To illustrate this point, let's take a look at our Dockerfile once more:

FROM nginx:stable
Dockerfiles must start with a FROM instruction. The FROM line sets the base image from which all other instructions / lines will be applied.

This base image can be any valid image, including ones we create ourselves. We will explore this concept later when we start to build our own images.

However, what is this image? How was it built? What commands make up the nginx image's layers?
This is good if you are the trusting sort. But how do you validate this?
By looking at the image's history:
$ docker image history 0346349a1a64
IMAGE               CREATED             CREATED BY                                      SIZE      COMMENT
0346349a1a64        5 months ago        /bin/sh -c #(nop)  CMD ["nginx" "-g" "daem...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  EXPOSE 443/tcp 80/tcp        0B
<missing>           5 months ago        /bin/sh -c ln -sf /dev/stdout /var/log/ngi...   22B
<missing>           5 months ago        /bin/sh -c apt-key adv --keyserver hkp://p...   58.2MB
<missing>           5 months ago        /bin/sh -c #(nop)  ENV NGINX_VERSION=1.10....   0B
<missing>           5 months ago        /bin/sh -c #(nop)  MAINTAINER NGINX Docker...   0B
<missing>           5 months ago        /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>           5 months ago        /bin/sh -c #(nop)  ADD file:4eedf861fb567ff...  123MB
Why the command:
docker image history 0346349a1a64
docker image history is the base command. You can run docker image to see a bunch of other choices:
$ docker image

Usage:  docker image COMMAND

Manage images

Options:
      --help   Print usage

Commands:
  build       Build an image from a Dockerfile
  history     Show the history of an image
  import      Import the contents from a tarball to create a filesystem image
  inspect     Display detailed information on one or more images
  load        Load an image from a tar archive or STDIN
  ls          List images
  prune       Remove unused images
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rm          Remove one or more images
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Run 'docker image COMMAND --help' for more information on a command.
But why the long hash: 0346349a1a64? Because that was the output of our earlier run of docker build .:
$ docker build .
Sending build context to Docker daemon  2.048kB
Step 1/1 : FROM nginx:stable
 ---> 0346349a1a64    # this bit here
Successfully built 0346349a1a64
0346349a1a64 is the layer hash.
From running the docker image history 0346349a1a64 command we can see that all the <missing> entries relate to the build that happened for our base image: nginx:stable. We can compare this output to that found in the official nginx:stable Dockerfile. They seem to match :)
It's useful to look at the base image to figure out what it's doing for you. In this case the nginx:stable image has a particularly interesting line in its history:

/bin/sh -c #(nop)  EXPOSE 443/tcp 80/tcp

Here the container's port 80 is being exposed. This is great, as http communicates over port 80 by default. However, this does not immediately allow public access to port 80. To allow that, you must explicitly publish port 80, as we will see shortly. This is confusing and unintuitive.
What if we wanted to expose port 443 as well, ready for https?

We could augment our own Dockerfile:

FROM nginx:stable

EXPOSE 443

And by building our image again we could ensure port 443 / https would be listening:
$ docker build .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM nginx:stable
stable: Pulling from library/nginx
94ed0c431eb5: Pull complete
b616fca08db5: Pull complete
a5481c3400ae: Pull complete
Digest: sha256:ff5f07fdf2dd003adb3589c8f9e2c84d54599bd13a307f385e23d3f02297c887
Status: Downloaded newer image for nginx:stable
 ---> 7f1c878a4833
Step 2/2 : EXPOSE 443
 ---> Running in a90e34fa844d
 ---> dfe0dd23e4af
Removing intermediate container a90e34fa844d
Successfully built dfe0dd23e4af
To confirm, we can view the docker history / docker image history output:
$ docker history dfe0dd23e4af
IMAGE               CREATED             CREATED BY                                      SIZE      COMMENT
dfe0dd23e4af        19 seconds ago      /bin/sh -c #(nop)  EXPOSE 443/tcp               0B
7f1c878a4833        4 weeks ago         /bin/sh -c #(nop)  CMD ["nginx" "-g" "daem...   0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  STOPSIGNAL [SIGTERM]         0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  EXPOSE 80/tcp                0B
<missing>           4 weeks ago         /bin/sh -c ln -sf /dev/stdout /var/log/ngi...   22B
<missing>           4 weeks ago         /bin/sh -c apt-get update && apt-get inst...    52.2MB
<missing>           4 weeks ago         /bin/sh -c #(nop)  ENV NJS_VERSION=1.12.1....   0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  ENV NGINX_VERSION=1.12....   0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  MAINTAINER NGINX Docker...   0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  ADD file:fa8dd9a679f473a...  55.3MB
The truly peculiar thing there is that the base image did appear to EXPOSE 443 already, but it's not there in the official Dockerfile. If you know why this is, please do let me know, as I struggled to find an answer on this.
That said, I personally prefer to explicitly specify the open ports in my own Dockerfile, so for me I would do:

FROM nginx:stable

EXPOSE 80
EXPOSE 443
Either way this leaves us with two images on our system:

- The official nginx:stable base image
- Our customised variant

We can see this in the Docker images list:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
<none>              <none>              dfe0dd23e4af        About a minute ago   107MB
nginx               stable              7f1c878a4833        4 weeks ago          107MB
There's just no way you could get a working version of nginx on a Virtual Machine in 107MB.
As ever, there are multiple commands that do the same thing. Another way to get a list of Docker images would be:

docker image ls

This gives the same output as docker images. Nothing quite like a bit of confusion :/
Running a Docker Container from a Docker Image
Now that we have built ourselves a Docker Image, let's fire up our first Docker Container from that image:
docker run --rm dfe0dd23e4af
Change the value of dfe0dd23e4af to whatever your IMAGE ID value is from the docker images output.
Notice that in doing this your terminal window has been taken over by Docker.
To exit, press ctrl+c.
By using the --rm flag we ensure that the running container is deleted / removed when the container process is stopped. This is useful as Docker has a real tendency to leave its leftovers all over your disk. It's easy to forget this fact and find your system out of disk space.
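If you do forget --rm, Docker ships standard commands for inspecting and reclaiming that disk space (available on modern Docker versions). Run the prune commands with care, as they delete things:

```shell
# Show how much disk space images, containers and volumes are using.
docker system df

# Remove all stopped containers.
docker container prune

# Remove dangling (untagged) images.
docker image prune
```

Each prune command asks for confirmation before deleting anything.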
By itself, docker run --rm dfe0dd23e4af is fairly useless.
To make things more interesting it would be helpful to make port 80 publicly accessible:
docker run --rm -p 8080:80 dfe0dd23e4af
Here I am passing in the -p 8080:80 flag to map port 8080 on my local machine to port 80 in the running Docker Container.
This is particularly useful if you have multiple projects running on your machine at any one time. I typically use a different host port per project: 803, etc. Again, more on this as we progress through the course.
With the container running, we should now be able to hit our nginx webserver. Run the command again and browse to 127.0.0.1:8080, and in your terminal you should see something like the following:
$ docker run --rm -p 8080:80 dfe0dd23e4af
172.17.0.1 - - [26/Aug/2017:11:03:22 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" "-"
Each time you refresh the page in your browser a new log entry line should appear. Fairly neat.
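You don't have to use a browser, either. From a second terminal (assuming curl is installed and the container above is still running), you can hit the published port directly:

```shell
# Fetch the nginx welcome page through the published host port.
curl -i 127.0.0.1:8080
```

This should print the response headers and the default "Welcome to nginx!" page, and the running container will log the request just like a browser hit.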
Ok, time to wrap up our Docker container example:
Remember, to exit, press ctrl+c.
As we ran the command with --rm, our container should have been removed. Confirm this with:
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
You should see no entries.
We still have the image left over. Let's delete it:
$ docker rmi dfe0dd23e4af
Deleted: sha256:dfe0dd23e4afe3d8873a16ccc7f1206e4e02e23452cf861492bb066e92e71aed
You may also want to clear up the nginx base image:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               stable              7f1c878a4833        4 weeks ago         107MB

$ docker rmi nginx:stable
Untagged: nginx:stable
Untagged: nginx@sha256:ff5f07fdf2dd003adb3589c8f9e2c84d54599bd13a307f385e23d3f02297c887
Deleted: sha256:7f1c878a4833621e106cedffbe3b9d88e4a7ee2673577b92f74417c72a818c28
Deleted: sha256:0627a452e15be7d0ceeabe719083487fa5eeca6544776d39f05feffdf44955e0
Deleted: sha256:938f33ad92af39e79310318b22e7b6e0dd30a80c672db37c178732c8503e1937
Deleted: sha256:eb78099fbf7fdc70c65f286f4edc6659fcda510b3d1cfe1caa6452cc671427bf

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
Ok, that wraps up our quick tour of Docker Images and Docker Containers.
These are two of the essential building blocks for working with Docker.
The next big one is Volumes. Let's get to it.