I hit an annoying issue this week, and I’m not sure of the root cause.
Last week I bumped GitLab from 10.6 to 10.8, and somehow broke my GitLab CI Runner.
Somewhere, I have a backup of the `config.toml` file I was using. I run my GitLab CI Runner in a Docker container. I only run one, as it’s only for my projects. And one is enough.
Somehow, the Runner borked. Annoyingly, I had neither a record of the running version (never use `:latest` unless you like uncertainty) nor the `config.toml` file, and recreating the Runner without either has been a pain.
So for my own future reference, here is my current GitLab Runner setup.
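To be clear, what follows is a representative sketch rather than my exact file: the URL and token are placeholders, and your values will differ:

```toml
# config.toml - representative Docker-executor Runner config.
# The URL and token below are placeholders, not real values.
concurrent = 1
check_interval = 0

[[runners]]
  name = "my-docker-runner"
  url = "https://my.gitlab/"
  token = "REPLACE_WITH_RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = false
    # Bind the host Docker socket so CI jobs can build images:
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```

Pinning the `gitlab/gitlab-runner` image to a specific tag when you create the Runner container also dodges the `:latest` uncertainty I moaned about above.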
FWIW this isn’t perfect. I’m currently hitting a major issue whereby GitLab CI pipeline stages with multiple jobs in the same stage routinely fail. It’s very frustrating. It’s also not scheduled for a fix until v11, afaik.
I love me some Dockerised GitLab. I have the full CI thing going on, with a private registry for all my Docker images that are created during the CI process.
It all works real nice.
Until that Saturday night, when suddenly, it doesn’t.
Though it sounds like I’m going off on a tangent, it’s important to this story that you know I recently changed my home broadband ISP.
I host one of my GitLab instances at my house. All my GitLab instances are now Dockerised, managed by Rancher.
I knew that as part of switching ISPs, there might (read: 100% would) be “fun” with firewalls, and ports, and all that jazz.
I thought I’d got everything sorted, and largely, I had.
Except I decided that whilst all this commotion was taking place, I would slightly rejig my infrastructure.
I use LetsEncrypt for SSL. I use the LetsEncrypt certs for this particular GitLab’s private registry.
I had the LetsEncrypt container on one node, and I was accessing the certs via a file share. It seemed pointless, and added complexity (the aforementioned extra firewall rules) which I could remove by moving the container onto the same box as the GitLab instance.
I made this move.
Things worked, and I felt good.
Then, a week or so later, I made some code changes and pushed.
The build failed almost immediately. Not what I needed on a Saturday night.
In the build logs I could see this:
```
Error response from daemon: Get https://my.gitlab:5000/v2/: received unexpected HTTP status: 500 Internal Server Error
```
This happened when the CI process was trying to log in to the private registry.
After a bit of head scratching, I tried from my local machine and sure enough I got the same message.
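For reference, reproducing locally was nothing fancier than a `docker login` against the registry host:

```bash
# Same 500 as the CI job saw, so this wasn't a Runner-specific problem.
docker login my.gitlab:5000
```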
As so many of my problems seem to, it boiled down to permissions.
Rather than copy the certs over from the original node, I let LetsEncrypt generate some new ones. Why not, right?
This process worked.
The GitLab and the Registry containers used a bind mounted volume to access the LetsEncrypt certs at a fixed path inside each container.
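In `docker run` terms, the arrangement was along these lines; the host path, container path, and tag here are illustrative, not my exact layout:

```bash
# Bind mount the LetsEncrypt certs from the host into the container.
# Paths and image tag are illustrative placeholders.
docker run -d \
  --name gitlab \
  -v /srv/letsencrypt/certs:/certs:ro \
  gitlab/gitlab-ce:10.8.0-ce.0
```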
When opening a shell in each container, I would be logged in as root. Root being root, I had full permissions. I checked each file with a cheeky `cat` and visually confirmed that all looked good.
GitLab doesn’t run as root, however. The files were owned by root with 600 permissions, so GitLab couldn’t read them, and every registry login request blew up:
```
Completed 500 Internal Server Error in 125ms (ActiveRecord: 7.2ms)
```
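A quick `ls -l`, viewed through the eyes of the user GitLab actually runs as, would have shown the problem immediately. One way out, with placeholder paths and a placeholder group name:

```bash
# Check who can actually read the certs:
ls -l /certs/

# One possible fix: give the group GitLab runs under read access.
# /certs and the git group are placeholders for your own setup.
chown root:git /certs/fullchain.pem /certs/privkey.pem
chmod 640 /certs/fullchain.pem /certs/privkey.pem
```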
I’ve been super busy this week behind the scenes, working on the new site version. Typically, every time I think I’m done (we all know we are never done :)), I hit upon something I’ve either missed or overlooked.
There are a bunch of CSS-related jobs still to do.
React seems to have completely broken my mailing list sign-up forms.
And there are some things I only noticed when testing my user journeys.
One thing I am really happy with is the new Profile panel:
4 tabs, a lot of work 🙂
There are 29 open tickets left before I consider this next version done (again, never really done :)).
I reckon I will be happy enough to open the beta when I’m down to 20 tickets. I genuinely wish I was a 10x developer, able to blitz stuff like this out in a weekend.
I’m really looking forward to switching over though. This new site is so much nicer to work with from a development point of view. And I have a bunch of improvements from a member / visitor point of view, too 🙂
The front end and back end are completely separate. Both are pushed through a GitLab CI pipeline, which automates the test runs and, if all goes to plan, builds a Docker image for each. I can then deploy these using Rancher.
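The pipeline itself is the familiar test-then-build shape. A stripped-down sketch of the idea (stage names, images, and the registry host are illustrative, not my production file):

```yaml
# .gitlab-ci.yml - illustrative sketch, not my production pipeline.
stages:
  - test
  - build

test:
  stage: test
  image: node:10
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" my.gitlab:5000
    - docker build -t my.gitlab:5000/my-app:$CI_COMMIT_SHA .
    - docker push my.gitlab:5000/my-app:$CI_COMMIT_SHA
  only:
    - master
```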
It’s a great setup.
As you may have noticed, Docker is currently the main course topic on the site. This is a Docker tutorial for beginners. If you’ve ever wanted to learn Docker, and prefer to see real-world setups rather than simplified examples, then this is the right starting point for you.
Which brings me neatly on to this week’s…
This week saw three new videos added to the site.
All of this week’s videos fall under the same theme:
Long- and short-term storage of data in Docker.
One of the most confusing aspects of switching my stack from Virtual Machines to Docker was figuring out what was happening to my data.
In Virtual Machine-land (if such a place ever existed) the location of my data was fairly obvious. It would either be on the VM itself (on the virtual hard disk), or on some attached network drive.
Things are roughly the same in Docker, but it’s not a 100% like-for-like experience. Especially not on a Mac.
I’m going out on a limb here, and I’m going to guess you love a good Star Trek episode as much as me.
Nothing troubles my tribbles quite like a blurt of technobabble:
The temporal surge we detected was caused by an explosion of a microscopic singularity passing through this solar system. Somehow, the energy emitted by the singularity shifted the chroniton particles in our hull into a high state of temporal polarisation.
Now the good news, when watching Star Trek, is that you really don’t need to understand any of that to gain enjoyment from the episode. The writers make sure the characters appear to understand it, and the Federation, once again, saves the day.
But what about technobabble we do have to understand?
Docker Bind Mounts.
Well, in this video we run our tricorder over bind mounts and figure out just what the heck these things are.
The good news?
Beyond the name they are surprisingly straightforward.
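As a taster, the whole idea fits in a single flag: take a directory on your host, and make it appear at a path inside the container:

```bash
# Bind mount: host path on the left, container path on the right.
# Edits on either side are instantly visible to the other.
docker run --rm -it -v "$(pwd)":/app -w /app alpine:3.7 sh
```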
Docker bind mounts sound complicated, but as we saw in the previous video, they really aren’t.
Docker volumes, on the other hand, sound quite straightforward.
But they are more complicated.
Volumes are the future of the way we work with data in Docker.
Therefore, understanding Volumes is a very useful skill to have.
Fortunately, even though Volumes are more involved than bind mounts, by watching the previous two videos you have all the prerequisite information required to grasp this concept.
In this video we work through a bunch of examples to ensure the fundamentals are in place. Volumes will feature heavily throughout the rest of our dealings with Docker, so be sure to understand this section before moving on.
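If you fancy poking at the basics from the command line first, the core commands are only a handful:

```bash
# Named volume: Docker manages where the data actually lives.
docker volume create my-data

# The volume outlives any container that mounts it.
docker run --rm -v my-data:/data alpine:3.7 \
  sh -c 'echo hello > /data/greeting.txt'

# A second container sees the same data.
docker run --rm -v my-data:/data alpine:3.7 cat /data/greeting.txt

# Inspect, then clean up when done.
docker volume inspect my-data
docker volume rm my-data
```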
That’s all from me for this week. Until next week, have a great weekend, and happy coding.