Wow, what a week. The sun is shining, projects are progressing, and overall, things are feeling remarkably summery.
However, when I’m sat at a computer, a problem is never that far away.
I’m planning to start recording the first “Deploying with Docker” tutorials this weekend. Things have generally stabilized now in terms of provisioning, configuration, and image building, and at last, the continuous integration / continuous deployment pipelines seem to work as expected.
I’m really excited to share all this stuff with you. It’s going to be a heck of a course.
However, I can’t promise you it will be plain sailing. There’s a reason that deploying in this manner isn’t the de facto standard. My gut tells me it will be, but not in this decade.
Anyway, the idea here is that even if you don’t want to fully embrace Docker in your stack, I’m hoping you will take away at least a few new tips and tricks. I’ve noticed some big benefits in Dockerising my development stack alone. Having all of this enhance my production stack is just a huge added bonus.
One thing I had not anticipated ahead of migrating from Ansible to Docker was the vast(*) amounts of disk space that Docker images consume.
(*) Vastness is subjective, and this mainly becomes a problem if you work with small disk sizes 🙂
To give you an idea of what I mean, let’s take a small piece of the infrastructure as an example:
- Symfony as an API
- nginx for http(s) access to the API
Ok, so far so good.
But what about logging?
Right, so after a bunch of issues with Graylog in Docker, this week I migrated to the ELK stack.
Actually, Graylog wasn’t really the root cause of my problems. But I wanted to decouple my Symfony site from having to know anything about Graylog or logging in general (beyond Monolog, of course). With this in mind, I created two further containers purely for logging:
- Filebeat to monitor nginx access and error logs
- Filebeat to monitor Symfony’s prod.log file(s)
Filebeat is a helpful utility provided by Elastic to help ship your logs from point A (e.g. nginx or Symfony) to point B (e.g. the ‘L’ part of your ELK stack – aka Logstash).
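To give you a feel for how one of these log-shipping containers hangs together, here’s a rough sketch of running Filebeat against the nginx logs. The image tag, mount paths, and config file name are illustrative, not my exact setup:

```shell
# Run Filebeat in its own container, mounting the nginx log directory
# read-only so it can tail the access / error logs and ship them on
# to Logstash. Filebeat's own behaviour is driven by filebeat.yml.
docker run -d \
  --name filebeat-nginx \
  -v /var/log/nginx:/var/log/nginx:ro \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:5.4.0
```

The Symfony prod.log container is the same idea with a different mount and config – which is exactly why it ends up as a second, separate container.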
That’s four containers – which means four images – for one small part of my stack.
But there’s actually more than four images.
There’s the ELK stack – I had to make some customisations for security, so that’s now stored on my GitLab.
My Symfony image is based on a PHP7-FPM image.
And nginx is based on the ‘official’ nginx image.
I don’t need to keep the official nginx image at my end. But I do have to keep my customised PHP7-FPM image.
Anyway, all of these images get built (and rebuilt, and rebuilt…) whenever I do a ‘git push’ for certain branches in my development environment.
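All that rebuilding leaves old, untagged layers – “dangling” images – lying around, and they quietly eat your disk. A couple of commands I find handy for this (they need Docker 1.13 or newer; run them on the box doing the builds):

```shell
# See where the space is actually going: images, containers, volumes
docker system df

# Remove dangling images - layers no longer tagged by anything
docker image prune -f

# More aggressive: also remove stopped containers and unused networks
docker system prune -f
```

Be a little careful with the prune commands on a build server – anything you still need but which happens to be stopped or untagged will go too.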
Originally I hit a problem whereby I was using a cheapo $10 Digital Ocean droplet with 20gb of disk space, and within a few short days I’d knocked my server offline by filling the disk to 100%. Oops.
Well, I thought, this won’t do.
So I migrated GitLab from the DO droplet onto a set of containers running on my old gaming PC.
This went well. Well, it went slowly and quite painfully, but once it was done I felt pretty good about myself.
And then I ran out of disk space again.
Ok, so schoolboy error. What I’d done was set up a new VM on my old gaming PC with only a 30gb disk. I’d turned off GitLab’s container backups, so figured I’d have tons of space. Not so.
Well, I mucked about a bit with the idea of mounting some extra space to the VM, but this became a different problem in itself. So I ditched the idea, and came up with a cunning plan: thin provisioning.
If you’ve never had the need to learn about such things, let me very briefly explain:
Thin provisioning is the concept of allocating more space to your VM’s hard disk than you really have physically available.
Let’s say I had a 100gb disk. I could thin provision my VM to have a 2tb disk and VirtualBox would happily just believe me that all this space is available.
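In VirtualBox terms this is just a “dynamically allocated” disk. As a sketch, assuming `VBoxManage` is on your path and the filename is whatever you fancy:

```shell
# Create a 2tb dynamically allocated (i.e. thin provisioned) disk.
# --size is in megabytes: 2tb = 2 * 1024 * 1024 = 2097152 MB.
# VirtualBox only claims physical space as the guest actually writes data.
VBoxManage createmedium disk \
  --filename ~/VirtualBox/gitlab.vdi \
  --size 2097152 \
  --variant Standard
```

The “Standard” variant is the dynamically allocated one; “Fixed” would allocate the whole lot up front, which rather defeats the point here.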
We’re robbing Peter to pay Paul.
Now, let me take a quick detour here back to my days as a servant of the public sector. You can talk to many people about the UK’s public sector and most will tell you that collectively, the public sector lacks money.
Not so in the IT department.
The IT department – at least when I worked there – always had piles of cash to throw at fancy solutions.
One such solution was “The NetApp”.
These things are not cheap. And that means most of us will never use them, and certainly not have the luxury of having one at home to play with / learn on.
Essentially what I’m talking about here is a bunch of huge racks of hard disks that are grouped together and offered out to all the servers on the network. This way you can centralise all your storage, supposedly making your life easier in terms of backups and security and all that malarkey.
Of course, it requires a bunch of new skills (mainly Linux skills as it happens – perfect for the Microsoft-world of public sector), and that’s great for the solution providers as this means upselling expensive consultancy services. But that’s not the relevant part of this detour.
Now, if we have all this disk space – think: a small scale, local Amazon S3 – then wouldn’t it be a great idea to offer more space to all our servers than we really have available? Yes! Of course it would, what could possibly go wrong?
If only I had learned my lesson.
Anyway, the upshot of all this is how I spent my bank holiday Monday: https://codereviewvideos.com/blog/how-i-fixed-virtualbox-failed-delete-snapshot/
And whilst I thought I had solved this issue, lo and behold I was greeted by the exact same issue again this morning. I’m not sure what’s chewing up the disk space, as the .vdi file is growing to twice the size of the space used inside the VM… But here’s the conclusion – I’m just going to buy a 2tb disk and be done with it. Life’s too short.
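Part of the pain here is that a dynamically allocated .vdi only ever grows – deleting files inside the guest doesn’t shrink it. You can sometimes claw the space back by zeroing the free space inside the guest and then compacting the image on the host. A sketch (paths are illustrative, and the VM must be shut down before the compact step):

```shell
# Inside the guest: overwrite free space with zeros, then delete the
# filler file. VirtualBox can only compact blocks that are all-zero.
# (dd will "fail" when the disk fills up - that's expected.)
dd if=/dev/zero of=/zerofill bs=1M || true
rm -f /zerofill

# On the host, with the VM powered off: compact the disk image.
VBoxManage modifymedium disk /path/to/disk.vdi --compact
```

It works, but as I found out, it’s a treadmill – which is why a bigger physical disk won in the end.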
Aside from enjoying my misery, what the heck has this wild tangent got to do with Docker?
Well, it’s not just Docker. This story is about my life as developer in general. There are no shortcuts.
The only way I figure these things out is either by reading, by watching videos / conference talks, or by experiencing them for myself first hand.
It’s taken me over six weeks to get this Docker setup working as I want it to. I’m hoping to cut that down dramatically for you. But you must be prepared to learn and work through some difficulties. And when you do, you will come out more knowledgeable – and more employable – than before.
And that goes for Development in general. We haven’t chosen an easy life for ourselves. But it is a brilliant one all the same.
One nice thing is that we can do all this stuff without having to spend tens of thousands on expensive hardware. No NetApps to buy. We can have this working on our home computers and with only a minimal monthly spend to put this online at Digital Ocean, or Linode, or similar. This means we can play – and learn – in our own time, and that means we can control our own learning and ultimately, our careers.
I’ll jump off my soapbox now.
This week saw three new videos added to the site.
This entire series is completely free.
We are nearly finished with this series, and aside from the Deploying with Docker course, I also have some new “build along”-type courses coming too. I find these some of the most fun courses to create, and from what I’ve been hearing recently, they are generally the most well received.
As ever, thank you very much for reading, watching, and getting in touch. I really appreciate your time.
Until next week, have a great (and hopefully, sunny) weekend and happy coding 🙂