How I Fixed: Error: The number of path segments is not divisible by 2 in “”

Perhaps “how I fixed” is a poor title for this one. I don’t think I fixed it, but I found a workaround.

Here’s the gist of the problem:

docker run --rm \
		--env-file /path/to/my/terraform/azure/.env \
		-v /path/to/my/terraform/azure:/workspace \
		-w /workspace \
		my-custom/terraform:local \
		apply --auto-approve
╷
│ Error: The number of path segments is not divisible by 2 in ""
│ 
│   with azurerm_linux_virtual_machine.christest,
│   on create-instance.tf line 1, in resource "azurerm_linux_virtual_machine" "christest":
│    1: resource "azurerm_linux_virtual_machine" "christest" {
│ 
╵
╷
│ Error: The number of path segments is not divisible by 2 in ""
│ 
│   with azurerm_linux_virtual_machine.christest,
│   on create-instance.tf line 1, in resource "azurerm_linux_virtual_machine" "christest":
│    1: resource "azurerm_linux_virtual_machine" "christest" {
│ 
╵
╷
│ Error: The number of path segments is not divisible by 2 in ""
│ 
│   with azurerm_linux_virtual_machine.christest,
│   on create-instance.tf line 1, in resource "azurerm_linux_virtual_machine" "christest":
│    1: resource "azurerm_linux_virtual_machine" "christest" {
│ 
╵

Some extra info that may or may not be helpful in this particular instance: I wanted to run Terraform through Docker. In order to work with the Azure command line (az), I had to bake that into the Dockerfile:

FROM hashicorp/terraform:1.0.10

RUN \
  apk update && \
  apk add bash py-pip && \
  apk add --virtual=build gcc libffi-dev musl-dev openssl-dev python3-dev make && \
  python3 -m pip install --upgrade pip && \
  python3 -m pip install azure-cli && \
  apk del --purge build

To build the image I then do a docker build -t my-custom/terraform:local ., which is where that custom Docker image above comes from. Names changed to protect the innocent.

Anyway, I have a bunch of files in this project mainly to split things out for the sake of my sanity. Where Terraform seemed to die was with this first file:

resource "azurerm_linux_virtual_machine" "christest" {
  name                            = "${var.owner}-vm"
  resource_group_name             = azurerm_resource_group.christest.name
  location                        = azurerm_resource_group.christest.location
  size                            = var.instance_size
  admin_username                  = "adminuser"
  admin_password                  = "abadpassword"
  disable_password_authentication = false
  network_interface_ids = [
    azurerm_network_interface.christest.id,
  ]

  source_image_id = var.source_image_id

  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }
}

By and large, I'd simply copied this from the docs and then, trying to be a smart arse, turned some of the values into variables.

Here’s where things got confusing.

As above, the Terraform output complains that:

Error: The number of path segments is not divisible by 2 in ""

This error repeats three times.

Hmmm. Three times… well, wait. Don’t I have three variables here, right at the top? Probably them, right?

No. No matter what I did – and it got to the point where I hardcoded them – the error remained. If it remained when they were just plain old strings, there was no way it was these lines causing the problem.

So, I dutifully copied and pasted the entire Azure config in from the docs, and lo and behold, it worked first time. D'oh.

What else had I changed?

source_image_id = var.source_image_id

And the associated variable I’d created:

# az vm image list --output table

variable "source_image_id" {
  description = "The ID of the Image which this Virtual Machine should be created from"
  type        = string
  default     = "Canonical:UbuntuServer:18.04-LTS:latest"
}

That’s not how it’s set in the example from the docs. Here’s what they have:

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

Annoyingly, I didn’t even need this set as a variable. I’d just tried to be that aforementioned smart arse, which had bitten me on said arse.

There isn’t an example in the docs of how to use source_image_id; I’d just guessed. Wrongly, it seems.

And the reason I say I haven’t fixed this is that I still don’t know the right format to use here. I just know that by using source_image_reference the error goes away. Good enough for me.
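For what it’s worth, here’s my best guess at the right format. This is an assumption on my part, not something from those docs: source_image_id seems to want a full Azure resource ID, whose slash-separated path comes in key/value pairs, which would explain why "" fails the “divisible by 2” check. Something along these lines, with entirely made-up names:

```hcl
# Hypothetical only: every segment of this ID (subscription, resource
# group, image name) is invented for illustration.
variable "source_image_id" {
  description = "Full resource ID of the image to build the VM from"
  type        = string
  default     = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Compute/images/example-image"
}
```

Note that’s a very different beast to the Canonical:UbuntuServer:18.04-LTS:latest URN that az vm image list gives you, which is presumably where I went wrong.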

How I Fixed: docker: Error response from daemon: Decoding seccomp profile failed: json: cannot unmarshal array into Go value of type seccomp.Seccomp.

Another day, another cryptic error message from Docker.

Right, so here’s what I was trying to do.

Blindly following the Playwright docs for getting a Playwright Docker container up and running, I first created a new file, seccomp_profile.json, inside the current directory. The location of the directory is irrelevant; the file just needs to live in the directory from which you (or I) run the command.

And the command is directly lifted from the 1.14 docs:

docker run -it --rm --ipc=host --user pwuser --security-opt seccomp=seccomp_profile.json mcr.microsoft.com/playwright:focal /bin/bash

When doing this, I got the error above, but for clarity (and for SEO purposes) here it is again:

docker: Error response from daemon: Decoding seccomp profile failed: json: cannot unmarshal array into Go value of type seccomp.Seccomp.

This tells us that whilst the file was read, the contents are somehow wrong.

I’m just going to skip straight to the fix here, as I tried a few things and got lucky. Sadly, I don’t know jack about Golang, but I loosely understand the error above as saying hey, Chris, that JSON file doesn’t decode into a format my program is expecting.

Here’s what MS give:

[
  {
    "comment": "Allow create user namespaces",
    "names": [
      "clone",
      "setns",
      "unshare"
    ],
    "action": "SCMP_ACT_ALLOW",
    "args": [],
    "includes": {},
    "excludes": {}
  }
]

And here’s what it needs to be:

  {
    "comment": "Allow create user namespaces",
    "names": [
      "clone",
      "setns",
      "unshare"
    ],
    "action": "SCMP_ACT_ALLOW",
    "args": [],
    "includes": {},
    "excludes": {}
  }

Or, in short, not an array. Just a single object.
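If you’d rather sanity-check the file than eyeball it, here’s a small Python sketch of that exact transformation: load what the docs give you, and unwrap the single-element array into a bare object. The profile content is the one from above; the script itself is just my illustration.

```python
import json

# The profile as shipped in Microsoft's Playwright docs: a JSON array
# wrapping a single object.
docs_profile = """
[
  {
    "comment": "Allow create user namespaces",
    "names": ["clone", "setns", "unshare"],
    "action": "SCMP_ACT_ALLOW",
    "args": [],
    "includes": {},
    "excludes": {}
  }
]
"""

data = json.loads(docs_profile)

# The fix: unwrap the single-element array into a bare object, which is
# the shape the Docker daemon managed to decode for me.
if isinstance(data, list) and len(data) == 1:
    data = data[0]

print(json.dumps(data, indent=2))
```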

Why the docs have it that way, I don’t know.

Go figure.

Arf arf.

How I Fixed: unknown flag: --project-name in GitHub Actions

Bit of an obscure one this, and given that I received absolutely no Google results, I’m guessing it may be of little help to … well, anyone. But perhaps this is the price one pays for being an early adopter.

The Problem

I created a GitHub Actions workflow that ran a Makefile command. It looked something like this:

---
name: E2E Tests

on: [push, pull_request]

jobs:
    test:
        runs-on: ubuntu-latest

        steps:
            - uses: actions/checkout@v2

            - name: make install
              run: make install

            - name: make serve
              run: make serve

Excuse the crappy steps, I was / am still kinda experimenting with this workflow.

Anyway, what we really care about (and where the problem actually lies) is in what the make serve command represents:

serve:
	docker compose \
		-p my_project_name_here \
		 --remove-orphans && \
		docker compose up --remove-orphans
.PHONY: serve

Don’t be put off if you don’t use or understand Makefiles; if you haven’t tried them, they’re pretty useful. Other solutions exist, yadda yadda.

Keen eyed readers will have spotted something potentially curious:

docker compose ...

Not:

docker-compose ...

Yeah, so I got this from a little info at the end of my local command line output:

➜  my-project git:(e2e-tests-setup) docker-compose
Define and run multi-container applications with Docker.

Usage:
  docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
  docker-compose -h|--help

 ...

Docker Compose is now in the Docker CLI, try `docker compose`

(Excuse the formatting, WP does its own thing)

See that line at the very end. Why not try the new stuff, right? Who doesn’t love the shiny.

But herein lies the problem. Note, even the official docs aren’t (yet) consistent in what format they use.

The Solution

The solution to this problem is super simple.

Use docker-compose for your GitHub actions commands.

The problem arises because the Docker CLI on GitHub Actions doesn’t understand docker compose; presumably it ships without the (then new) Compose plugin, even though the reported version numbers are near enough identical.

Locally:

docker -v
Docker version 20.10.7, build f0df350

GitHub Actions:

Run docker -v
Docker version 20.10.7+azure, build f0df35096d5f5e6b559b42c7fde6c65a2909f7c5

Anyway, my fix was to change the Makefile command to use docker-compose instead of docker compose:

serve:
	docker-compose \
		-p my_project_name_here \
		 --remove-orphans && \
		docker-compose up --remove-orphans
.PHONY: serve
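If you’d rather not commit to one spelling, a small shell sketch (my own workaround idea, not from any docs) can pick whichever Compose invocation the current environment actually supports:

```shell
# Prefer the newer CLI plugin; fall back to the standalone binary.
if docker compose version >/dev/null 2>&1; then
  COMPOSE="docker compose"
elif command -v docker-compose >/dev/null 2>&1; then
  COMPOSE="docker-compose"
else
  COMPOSE=""
fi
echo "Using: ${COMPOSE:-no compose found}"
```

In a Makefile you could stash the result in a variable and call it everywhere, e.g. COMPOSE ?= $(shell docker compose version >/dev/null 2>&1 && echo 'docker compose' || echo 'docker-compose'), then $(COMPOSE) up --remove-orphans.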

How I Fixed: Error response from daemon: Get https://registry.example.com/v2/: unauthorized: HTTP Basic: Access denied

OK – silly problem time.

A while back I force reset the password of one of my automated CI users. For a variety of reasons, I never checked that this had worked properly.

When I went to log in via the command line today, I was getting this:

➜  docker login -u myuser registry.example.com
Password: 
Error response from daemon: Get https://registry.example.com/v2/: unauthorized: HTTP Basic: Access denied

Very confusing.

I hard reset the user’s password via the GitLab Admin Panel, but still the problem persisted.

Simple fix: log in as this user via the web GUI.

Once you do that, you should see the password change prompt. Change your password there and, et voilà, you can now log in from the command line again.

It would be useful if the service offered a better message around this occurrence, but I’m guessing it’s a bit of a weird edge case. Honestly, I’m not sure whether the issue lies with GitLab or the Docker Registry image.

Either way, hopefully that solves your problem.

How I Solved: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running?

OK, tl;dr, this is not a true fix. However, it works. Or worked for me.

The issue I have been facing, the one that has cost me my entire Saturday morning, is this:

➜  gitlab-ci docker-compose up 
Creating network "gitlab-ci_default" with the default driver
Creating gitlab-ci_runner_1_28ccd2f6e08d          ... done
Creating gitlab-ci_register-runner_1_6ddb7e90a9d3 ... done
Creating gitlab-ci_dind_1_bb210df194a2            ... done
Attaching to gitlab-ci_runner_1_3cb60d519ae8, gitlab-ci_register-runner_1_941db09830b5, gitlab-ci_dind_1_a0ef0b8a29e4
runner_1_3cb60d519ae8 | Runtime platform                                    arch=amd64 os=linux pid=7 revision=61e7606f version=14.1.0~beta.182.g61e7606f
runner_1_3cb60d519ae8 | Starting multi-runner from /etc/gitlab-runner/config.toml...  builds=0
runner_1_3cb60d519ae8 | Running in system-mode.                            
runner_1_3cb60d519ae8 |                                                    
runner_1_3cb60d519ae8 | Configuration loaded                                builds=0
runner_1_3cb60d519ae8 | listen_address not defined, metrics & debug endpoints disabled  builds=0
runner_1_3cb60d519ae8 | [session_server].listen_address not defined, session endpoints disabled  builds=0
register-runner_1_941db09830b5 | Runtime platform                                    arch=amd64 os=linux pid=7 revision=61e7606f version=14.1.0~beta.182.g61e7606f
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
register-runner_1_941db09830b5 | Running in system-mode.                            
register-runner_1_941db09830b5 |                                                    
register-runner_1_941db09830b5 | Registering runner... succeeded                     runner=cjX3zQG_
register-runner_1_941db09830b5 | Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 
gitlab-ci_register-runner_1_941db09830b5 exited with code 0
dind_1_a0ef0b8a29e4 | .............................++++
dind_1_a0ef0b8a29e4 | ...........................................................................................................................................................++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
runner_1_3cb60d519ae8 | Configuration loaded                                builds=0
dind_1_a0ef0b8a29e4 | .......++++
dind_1_a0ef0b8a29e4 | .............++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Signature ok
dind_1_a0ef0b8a29e4 | subject=CN = docker:dind server
dind_1_a0ef0b8a29e4 | Getting CA Private Key
dind_1_a0ef0b8a29e4 | /certs/server/cert.pem: OK
dind_1_a0ef0b8a29e4 | Generating RSA private key, 4096 bit long modulus (2 primes)
dind_1_a0ef0b8a29e4 | ................................................................++++
dind_1_a0ef0b8a29e4 | ...................................................................................................................++++
dind_1_a0ef0b8a29e4 | e is 65537 (0x010001)
dind_1_a0ef0b8a29e4 | Signature ok
dind_1_a0ef0b8a29e4 | subject=CN = docker:dind client
dind_1_a0ef0b8a29e4 | Getting CA Private Key
dind_1_a0ef0b8a29e4 | /certs/client/cert.pem: OK
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.193084271Z" level=info msg="Starting up"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.194256426Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198038066Z" level=info msg="libcontainerd: started new containerd process" pid=53
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198088131Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198099583Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198127773Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.198154133Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.211367108Z" level=info msg="starting containerd" revision=d71fcd7d8303cbf684402823e425e9dd2e99285d version=v1.4.6
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.236199982Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.236321650Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.240984040Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241236268Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.btrfs"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241270029Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.devmapper"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241295950Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241311248Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241375982Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241537098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241730231Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241748382Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241783028Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.241794800Z" level=info msg="metadata content store policy set" policy=shared
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262730870Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262764458Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262808206Z" level=info msg="loading plugin "io.containerd.service.v1.introspection-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262843956Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262862664Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262875776Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262889612Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262902664Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262923290Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262946006Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.262959263Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263113401Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263225410Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263552551Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263577871Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263616499Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263632647Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263648459Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263661513Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263674151Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263687079Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263699853Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263789801Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263807278Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263947799Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263971592Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.263987446Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264001179Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264194211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264252887Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264299356Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.264314178Z" level=info msg="containerd successfully booted in 0.053975s"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271530892Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271565662Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271592175Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.271613087Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272302289Z" level=info msg="parsed scheme: "unix"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272321745Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272351355Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.272365620Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342826242Z" level=warning msg="Your kernel does not support swap memory limit"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342846686Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.342999134Z" level=info msg="Loading containers: start."
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.417804617Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.465223597Z" level=info msg="Loading containers: done."
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.494192010Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.494297708Z" level=info msg="Daemon has completed initialization"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.559800797Z" level=info msg="API listen on /var/run/docker.sock"
dind_1_a0ef0b8a29e4 | time="2021-06-19T09:44:43.565866502Z" level=info msg="API listen on [::]:2376"
runner_1_3cb60d519ae8 | Checking for jobs... received                       job=870 repo_url=https://example.com/myrepo/myproject.git runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Failed to remove network for build           error=networksManager is undefined job=870 network= project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | Will be retried in 3s ...                           job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | ERROR: Job failed (system failure): Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)  duration_s=9.004197071 job=870 project=90 runner=A6qDsS-H
runner_1_3cb60d519ae8 | WARNING: Failed to process runner                   builds=0 error=Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s) executor=docker runner=A6qDsS-H

The critical lines being:

WARNING: Preparation failed: Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running? (docker.go:865:0s)

This setup is for GitLab CI, where I run GitLab Runner through docker compose.

Here’s my docker-compose.yaml config, for what it’s worth:

version: '3'

services:

  dind:
    restart: always
    privileged: true
    volumes:
    - /var/lib/docker
    image: docker:17.09.0-ce-dind 
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false", "--storage-driver=overlay2"]

  runner:
    restart: always
    image: gitlab/gitlab-runner:alpine
    volumes:
    - ./gitlab/runner:/etc/gitlab-runner:Z
    - ./gitlab/runner/builds:/builds
    environment:
    - DOCKER_HOST=tcp://dind:2375
      
  register-runner:
    restart: 'no'
    image: gitlab/gitlab-runner:alpine
    volumes:
    - ./gitlab/runner:/etc/gitlab-runner:Z
    command:
    - register
    - --non-interactive
    - --locked=false
    - --name=mybox
    - --executor=docker
    - --docker-image=docker:19.03.12
    - --docker-privileged
    environment:
    - CI_SERVER_URL=http://example.com/
    - REGISTRATION_TOKEN=my-token-here

(careful, this won’t copy paste due to WP funking up the encoding)

Note, the docker image version used by dind is the most important part here. The docker image version used by register-runner doesn’t seem to matter.

Prior to this I tried the very latest docker image, then docker:19.03.12 as per the official GitLab docs (at the time of writing). Fortunately, I still had my ancient configs, which gave me the heads-up to try a much older version of Docker.

So it seems using the older docker version ‘fixes’ this. I don’t know why – and I don’t have time (nor really, the inclination) to investigate. If you’re looking for a quick fix, hopefully this works for you. And if you do have the proper fix, please let me know via a comment.
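One hunch worth noting, if you fancy digging further: in the dind logs above, the daemon ends with “API listen on [::]:2376” (the TLS port) while the runner is told to dial tcp://dind:2375. Newer docker:dind images turn TLS on by default unless you opt out, so a possible proper fix (untested by me here) would be to stay on the newer image but disable TLS explicitly:

```yaml
  dind:
    restart: always
    privileged: true
    image: docker:19.03.12-dind
    environment:
    # An empty DOCKER_TLS_CERTDIR disables the TLS-by-default behaviour
    # of newer dind images, so the daemon listens on plain tcp 2375 again.
    - DOCKER_TLS_CERTDIR=
    volumes:
    - /var/lib/docker
```

If you try this and it works, do leave a comment.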