The Baffling World Of RabbitMQ and Docker

Recently I decided to switch my entire Ansible-managed infrastructure (for one project) over to Docker – or, more specifically, docker-compose. Part of this setup needs RabbitMQ.

I had no trouble getting the official RabbitMQ image to pull and build. I could set a default username, password, and vhost, and all of this worked just fine – I could use this setup without issue.

However, as I was migrating an existing project, I already had a bunch of queues, exchanges, bindings, users… etc.

What I didn’t want was some manual step where I have to remember to import the definitions.json file whenever building – or rebuilding – the environment.
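
For reference, the manual step I’m trying to avoid looks something like this – the management plugin exposes an HTTP API for it (credentials and host here are illustrative):

curl -u rabbitmq:rabbitmq \
     -H "Content-Type: application/json" \
     -X POST -d @definitions.json \
     http://localhost:15672/api/definitions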

Ok, so this seems a fairly common use case, I should imagine. But finding a solution wasn’t as easy as I expected. In hindsight, it’s quite logical, but then… hindsight 🙂

Please note that I am not advocating using any of this configuration. I am still very much in the learning phase, so use your own judgement.

Here is the relevant part of my docker-compose.yml file:

version: '2'

services:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Then I went to my old / existing RabbitMQ server and from the “Overview” page, I went to the “Import / export definitions” section (at the bottom of the page), and did a “Download broker definitions”.
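
As an aside, the same dump is available from the management HTTP API, if you prefer the command line (host and password illustrative):

curl -u rabbit_mq_dev_user:mypassword \
     http://old-rabbit-server:15672/api/definitions > definitions.json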

This gives a JSON dump, which as it contains a bunch of sensitive information, I have doctored for display here:

{
  "rabbit_version": "3.6.8",
  "users": [
    {
      "name": "rabbit_mq_dev_user",
      "password_hash": "somepasswordhash+yQtnMlaK6Iba",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "\/"
    }
  ],
  "permissions": [
    {
      "user": "rabbit_mq_dev_user",
      "vhost": "\/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [
    
  ],
  "policies": [
    
  ],
  "queues": [
    {
      "name": "a.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        
      }
    },
    {
      "name": "b.queue.here",
      "vhost": "\/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        
      }
    }
  ],
  "exchanges": [
    {
      "name": "router",
      "vhost": "\/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {
        
      }
    }
  ],
  "bindings": [
    {
      "source": "router",
      "vhost": "\/",
      "destination": "a.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {
        
      }
    },
    {
      "source": "router",
      "vhost": "\/",
      "destination": "b.queue.here",
      "destination_type": "queue",
      "routing_key": "",
      "arguments": {
        
      }
    }
  ]
}

You could – at this point – go into your Docker-ised RabbitMQ, and repeat the process for “Import / export definitions”, do the “upload broker definitions” step and it should all work.

The downside is – as mentioned above – that if you delete the volume (or move to a different PC) then, unfortunately, your queues etc. don’t follow you. No good.

Now, my solution to this is not perfect. It is a static setup, which sucks. I would like to make this dynamic, but for now, what I have is good enough. Please do shout up if you know of a way to make this dynamic, without resorting to a bunch of shell scripts.

Ok, so I take the definitions.json file and another config file, rabbitmq.config (shown below), and copy them into the rabbitmq directory that contains my Dockerfile:

➜  symfony-docker git:(master) ✗ ls -la rabbitmq 
total 24
drwxrwxr-x  2 chris chris 4096 Mar 24 21:13 .
drwxrwxr-x 10 chris chris 4096 Mar 25 12:11 ..
-rw-rw-r--  1 chris chris 1827 Mar 25 12:38 definitions.json
-rw-rw-r--  1 chris chris  130 Mar 25 12:55 Dockerfile
-rw-rw-r--  1 chris chris   54 Mar 24 20:53 enabled_plugins
-rw-rw-r--  1 chris chris  122 Mar 25 12:55 rabbitmq.config

For completeness, the enabled_plugins file contents are simply:

[rabbitmq_management, rabbitmq_management_visualiser].

And the rabbitmq.config file is:

[
    {
        rabbitmq_management, [
            {load_definitions, "/etc/rabbitmq/definitions.json"}
        ]
    }
].

And the Dockerfile:

FROM rabbitmq:3.6.8-management

(yes, just that one line)

Now, to get these files to work, it seems like you need to override the existing files in the container. To do this, I used additional config in the docker-compose volumes section:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          RABBITMQ_DEFAULT_USER: "rabbitmq"
          RABBITMQ_DEFAULT_PASS: "rabbitmq"
          RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Note the two new volume entries here.
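
Presumably you could instead bake these files in at build time – untested on my part, but something like the below in the Dockerfile should be equivalent, at the cost of a rebuild whenever they change:

FROM rabbitmq:3.6.8-management

COPY rabbitmq.config definitions.json /etc/rabbitmq/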

Ok, so down, rebuild, and up:

docker-compose down && docker-compose build && docker-compose up -d
➜  symfony-docker git:(master) ✗ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                      PORTS                                      NAMES
47c448ea31a2        composer                "/docker-entrypoin..."   30 seconds ago      Exited (0) 29 seconds ago                                              composer
f094719f6444        symfonydocker_nginx     "nginx -g 'daemon ..."   30 seconds ago      Up 29 seconds               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx
cfaf9b7328dd        symfonydocker_symfony   "docker-php-entryp..."   31 seconds ago      Up 30 seconds               9000/tcp                                   symfony
6b27600dc726        symfonydocker_rabbit1   "docker-entrypoint..."   31 seconds ago      Exited (1) 29 seconds ago                                              rabbit1
8301d6282c7d        symfonydocker_php       "docker-php-entryp..."   31 seconds ago      Up 29 seconds               9000/tcp                                   php
f6105aac9cfb        mysql                   "docker-entrypoint..."   31 seconds ago      Up 30 seconds               0.0.0.0:3306->3306/tcp                     mysql

The output is a bit messy, but the problem is that the RabbitMQ container has already exited, when it should still be running.

Viewing the logs for RabbitMQ at this stage is really easy – though the output is a bit weird.

What I would like is to get RabbitMQ to write its log files out to my disk. But adding in a new volume isn’t enough to solve this, because RabbitMQ writes its logs to tty by default. One step at a time – I don’t have a solution to this issue just yet, and I will add another blog post when I figure it out.
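
One avenue I have yet to try – so treat this as an assumption, not a fix – is pointing RABBITMQ_LOGS at a file, paired with a matching volume mount:

        environment:
          RABBITMQ_LOGS: "/var/log/rabbitmq/rabbit.log"
        volumes:
          - "./volumes/rabbitmq/rabbit1/logs:/var/log/rabbitmq:rw"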

Anyway:

➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1                                                         
Attaching to rabbit1
rabbit1     | /usr/local/bin/docker-entrypoint.sh: line 278: /etc/rabbitmq/rabbitmq.config: Permission denied

Ok, bit odd.

Without going the long way round, the solution here is – as I said at the start – logical, but not immediately obvious.

As best I understand it, the issue is that the provided environment variables now conflict with the user / pass combo in the definitions file.

Simply commenting out the environment variables fixes this:

    rabbit1:
        build: ./rabbitmq
        container_name: rabbit1
        hostname: "rabbit1"
        environment:
          RABBITMQ_ERLANG_COOKIE: "HJKLSDFGHJKLMBZSFXFZD"
          # RABBITMQ_DEFAULT_USER: "rabbitmq"
          # RABBITMQ_DEFAULT_PASS: "rabbitmq"
          # RABBITMQ_DEFAULT_VHOST: "/"
        ports:
          - "15672:15672"
          - "5672:5672"
        labels:
          NAME: "rabbit1"
        volumes:
          - "./rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins"
          - "./rabbitmq/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:rw"
          - "./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:rw"
          - "./volumes/rabbitmq/rabbit1/data:/var/lib/rabbitmq:rw"

Another down, build, up…

➜  symfony-docker git:(master) ✗ docker-compose down && docker-compose build && docker-compose up -d --remove-orphans
Stopping nginx ... done
Stopping symfony ... done
Stopping php ... done
Stopping mysql ... done
Removing nginx ... done
Removing composer ... done
Removing symfony ... done
Removing rabbit1 ... done
Removing php ... done
# etc

And this time things look a lot better:

➜  symfony-docker git:(master) ✗ docker ps -a                                                                        
CONTAINER ID        IMAGE                   COMMAND                  CREATED              STATUS                          PORTS                                                                                        NAMES
91d59e754d1a        composer                "/docker-entrypoin..."   About a minute ago   Exited (0) About a minute ago                                                                                                composer
b7db79270773        symfonydocker_nginx     "nginx -g 'daemon ..."   About a minute ago   Up About a minute               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp                                                     nginx
bd74d10e444a        symfonydocker_rabbit1   "docker-entrypoint..."   About a minute ago   Up About a minute               4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp   rabbit1
720e91743aa6        symfonydocker_php       "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     php
ec69b7c038e9        symfonydocker_symfony   "docker-php-entryp..."   About a minute ago   Up About a minute               9000/tcp                                                                                     symfony
717ed6cb180f        mysql                   "docker-entrypoint..."   About a minute ago   Up About a minute               0.0.0.0:3306->3306/tcp                                                                       mysql
➜  symfony-docker git:(master) ✗ docker-compose logs rabbit1
Attaching to rabbit1
rabbit1     | 
rabbit1     | =INFO REPORT==== 25-Mar-2017::13:17:23 ===
rabbit1     | Starting RabbitMQ 3.6.8 on Erlang 19.3
rabbit1     | Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     | Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     | 
rabbit1     |               RabbitMQ 3.6.8. Copyright (C) 2007-2016 Pivotal Software, Inc.
rabbit1     |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
rabbit1     |   ##  ##
rabbit1     |   ##########  Logs: tty
rabbit1     |   ######  ##        tty
rabbit1     |   ##########
rabbit1     |               Starting broker...

Hopefully that helps someone save a little time in the future.

Now, onto the logs issue… the fun never stops.


How I Fixed: Missing Headers on Response in Symfony 3 API

The first time this caught me out, I didn’t feel so bad. The second time – i.e. just now – I knew I had already solved this problem (on a different project), and found my urge to kill rising.

I wanted to POST in some data, and if the resource is successfully created, then the response should contain a link – via an HTTP header – to the newly created resource.

Example PHP / Symfony 3 API controller action code snippet (routeRedirectView comes from FOSRestBundle, which also takes care of rendering the returned form on validation failure):

    public function postAction(Request $request)
    {
        $form = $this->createForm(MyResourceType::class, null, [
            'csrf_protection' => false,
        ]);

        $form->submit($request->request->all());

        if (!$form->isValid()) {
            return $form;
        }

        $myResource = $form->getData();

        $em = $this->getDoctrine()->getManager();
        $em->persist($myResource);
        $em->flush();

        $routeOptions = [
            'id'      => $myResource->getId(),
            '_format' => $request->get('_format'),
        ];

        // responds 201 Created, with a Location header pointing at the new resource
        return $this->routeRedirectView(
            'get_myresource', 
            $routeOptions, 
            Response::HTTP_CREATED
        );
    }

And from the front end, something like this:

export async function createMyResource(important, info, here) {

  // getBaseRequestConfig / asyncFetch are project-specific helpers:
  // shared request config (headers / auth) and a thin wrapper around fetch
  const baseRequestConfig = getBaseRequestConfig();

  const requestConfig = Object.assign({}, baseRequestConfig, {
    method: 'POST',
    body: JSON.stringify({
      important,
      info,
      here
    })
  });

  /* global API_BASE_URL */
  const url = API_BASE_URL + '/my-resource';

  const response = await asyncFetch(url, requestConfig);

  return {
    myResource: {
      id: response.headers.get('Location').replace(`${url}/`, '')
    }
  };
}

Now, the interesting part here – from my point of view, at least – is the final statement, where the new resource’s ID is extracted.

Because this is a newly created resource, I won’t know the ID unless the API tells me. In the Symfony controller action code, the routeRedirectView will take care of this for me, adding on a Location header pointing to the new resource / record.
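
So the response should come back along these lines (the ID is invented for illustration):

HTTP/1.1 201 Created
Location: http://api.my-api.dev/app_dev.php/my-resource/42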

I want to grab the Location from the headers returned on the response, and by stripping off the part of the string that contains the URL, I end up with the new resource ID. It’s brittle, but it works.

Only, sometimes it doesn’t work.

Response
body:(...)
bodyUsed: false
headers: Headers
  __proto__: Headers
ok:true
status:201
statusText:"Created"
type:"cors"
url:"http://api.my-api.dev/app_dev.php/my-resource"
__proto__:Response

Excuse the formatting.

From JavaScript’s point of view, the Headers object is empty.

This leads to an enjoyable error: “Cannot read property ‘replace’ of null”.

Confusingly, however, the Symfony profiler output for the very same request / response showed that the header info was there.

Good times.

Ok, so the solution to this is really simple – when you know the answer.

Just expose the Location header 🙂

# /app/config/config.yml

# Nelmio CORS
nelmio_cors:
    defaults:
        allow_origin:  ["%cors_allow_origin%"]
        allow_methods: ["POST", "PUT", "GET", "DELETE", "OPTIONS"]
        allow_headers: ["content-type", "authorization"]
        expose_headers: ["Location"] # this being the important line
        max_age:       3600
    paths:
        '^/': ~
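
As best I understand it, that one line simply adds the corresponding CORS header to the response, which is what tells the browser to let JavaScript see Location:

Access-Control-Expose-Headers: Location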

After that, it all works as expected.
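
As an aside, a slightly less brittle way to pull the ID out is to just take the last path segment – a sketch, using the same response shape as above:

const location = response.headers.get('Location'); // e.g. ".../my-resource/42"
const id = location ? location.split('/').pop() : null; // -> "42"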

Programming Phoenix – Video to Category Relationship

I’m currently working my way through the Programming Phoenix book, which I have to say, I am thoroughly enjoying.

That’s not to say it’s all plain sailing. If only I were that good.

I got to chapter 7, and got myself quite stuck.

I’m aware the book was written for an earlier version of Phoenix, and in an attempt to force myself to “learn the hard way”, I decided that whenever I hit on a deprecated function, I would use the given replacement.

Ok, so in the book on page 99 (Chapter 6: Building Relationships), there is the following code:

  @required_fields ~w(url title description)
  @optional_fields ~w(category_id)

  def changeset(model, params \\ :empty) do
    model
    |> cast(params, @required_fields, @optional_fields)
  end

When using this code I got a deprecation warning:

warning: `Ecto.Changeset.cast/4` is deprecated, please use `cast/3` + `validate_required/3` instead
    (rumbl) web/models/video.ex:34: Rumbl.Video.changeset/2
    (rumbl) web/controllers/video_controller.ex:33: Rumbl.VideoController.create/3
    (rumbl) web/controllers/video_controller.ex:1: Rumbl.VideoController.action/2
    (rumbl) web/controllers/video_controller.ex:1: Rumbl.VideoController.phoenix_controller_pipeline/2

Seeing as I made the commitment to using the newer functions, this seemed like as good a time as any to do so.

The suggested replacement: cast/3 (docs) is where my confusion started. Here’s the signature:

cast(data, params, allowed)

And here’s the signature for cast/4:

cast(data, params, required, optional)

You may then be wondering why, in the original code from the book, only three arguments are being passed in.

Good question.

Really there are four, though one is being passed in as the first argument by way of Elixir’s pipe operator – which passes the outcome of the previous statement into the first argument of the next:

  model
  |> cast(params, @required_fields, @optional_fields)
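
In other words, the piped version is just sugar for calling cast/4 directly:

cast(model, params, @required_fields, @optional_fields)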

As seen in the code from the Programming Phoenix book, cast/4 expects the data (piped in), some params, then a list of required fields, and another list of optional fields.

One nice thing happening in the original code is the use of module attributes (the @ symbol) as constants.

Using the ~w sigil is a way to create a list from the given string. Ok, so lots of things happening here in a very little amount of code. This is one of the benefits, and drawbacks (when learning) of Elixir, in my experience.
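
For example, the sigil from the book code expands to a plain list of strings:

iex> ~w(url title description)
["url", "title", "description"]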

With PHPStorm I’m so used to having the code-completion and method signature look ups (cmd+p on mac) that learning and remembering all this stuff is really the hardest part. Maybe ‘intellisense’ has made me overly lazy.

Anyway, all of this is nice to know, but it’s not directly addressing the ‘problem’ I faced (and I use the inverted commas there as this isn’t a problem, it’s my lack of understanding).

We’ve gone from having required and optional fields, to simply just allowed fields.

My replacement code started off like this:

  def changeset(struct, params \\ %{}) do
    struct
    |> cast(params, [:url, :title, :description])
    |> assoc_constraint(:category)
    |> validate_required([:url, :title, :description])
  end

Everything looked right to me.

I had the required fields, but now as a list, rather than the ~w sigil approach.

I had specified my three required fields by way of validate_required.

And as best I could tell, I was associating with the :category.

But no matter what I did, my Video was never associated with a Category on form submission.

I could see the Category was being submitted from my form:

%{"category_id" => "3", "description" => "Quality techno", "title" => "Richie Hawtin @ ENTER Ibiza Closing Party 2014, Space Ibiza",
  "url" => "https://www.youtube.com/watch?v=-5t2gH0l99w"}

But the generated insert statement was missing my category data:

INSERT INTO "videos" ("description","title","url","user_id","inserted_at","updated_at") VALUES ($1,$2,$3,$4,$5,$6) ...

Anyway, it turns out I was over thinking this.

All I needed to do was add :category_id to the list passed in as the allowed argument, and everything worked swimmingly:

  def changeset(struct, params \\ %{}) do
    struct
    |> cast(params, [:url, :title, :description, :category_id])
    |> assoc_constraint(:category)
    |> validate_required([:url, :title, :description])
  end

With the outcome of the next form submission:

INSERT INTO "videos" ("category_id","description","title","url","user_id","inserted_at","updated_at") VALUES ($1,$2,$3,$4,$5,$6,$7) ...
[debug] QUERY OK db=1.1ms

Easy, when you know how.

This took me an embarrassingly long time to figure out. I share with you for two reasons:

  1. Learning the hard way makes you learn a whole bunch more than simply copy / pasting, or typing out what you see on the book pages.
  2. I couldn’t find anything about this after a lot of Googling, so if some other poor unfortunate soul is saved a lot of headscratching from this in the future then it was all worthwhile writing.

That said, the new edition of Programming Phoenix is due for release very soon (so I believe), so likely this will have been addressed already.

Deploying With Ansistrano

I have a major problem. I keep waaay too many open tabs on my phone. Every time I see something interesting – via Twitter, Hacker News, or Reddit – I open the link in a tab and promise myself I will check it out more thoroughly soon.

I currently have 47 open tabs 🙁

This is only amplified by the fact I do the same on desktop. Right now, across three desktops I have over a hundred open tabs.

It’s becoming an epidemic.

Anyway, once in a while, one of these tabs becomes useful.

Recently I hit on a problem whereby I needed to deploy a bunch of individual JavaScript files – node.js scripts – to a server, to be used as RabbitMQ workers.

I had a bunch of requirements for these workers:

  • start with specific flags (--harmony-async-await)
  • restart automatically if the server reboots
  • restart if the script crashes
  • can run multiple instances

And so on.

These turned out to be the superficial problems – and I say this because there’s a tool out there that already nails this problem – PM2.
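
To give an idea of how those requirements map onto PM2, here’s a minimal sketch of a process file – the names and paths are illustrative, not my real workers:

// processes.config.js
module.exports = {
  apps: [
    {
      name: 'worker-a',                   // hypothetical worker name
      script: './workers/worker-a.js',
      node_args: '--harmony-async-await', // start with specific flags
      instances: 2,                       // run multiple instances
      autorestart: true                   // restart if the script crashes
    }
  ]
};

// and for surviving server reboots:
//   pm2 startup && pm2 save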

Initially I thought these would be the hard problems.

What I hadn’t banked on was how much of a royal pain in the backside it might be to deploy my node.js scripts to dev / prod / wherever.


My requirements are fairly straightforward – they could be solved by using rsync. However, rsync becomes unmanageable as a project grows.

There’s the issue of remembering the right command, and then duplicating the command – altering slightly – for the prod deploy.

And what if it goes wrong? Well, you have to handle that yourself.
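
To illustrate – with made-up paths – the dev and prod deploys end up as two near-identical incantations you have to keep in your head:

rsync -avz --delete ./workers/ deploy@dev-box:/var/www/workers/
rsync -avz --delete ./workers/ deploy@prod-box:/var/www/workers/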

Call me spoiled, but having become accustomed to Deployer (Matt did a fantastic job on this course btw, you should check it out), I now use that as my baseline for deployments.

I have a similar tool I use on JavaScript projects called Flightplan. It offers a decent level of functionality, but with one major issue (from my p.o.v):

It is a pain to deploy more than one directory.

Flightplan works on the assumption – as best I can tell – that you will be running your project through some webpack-style setup first, producing a dist directory which contains everything you need to boot your single page app, or whatever.

This is cool, but I needed to run many different worker scripts – all ideally from one directory.

As best I understand it, webpack allows this via its multiple entry options, but I’m not using webpack. Actually, I tried to use webpack, but it threw out a bunch of errors right away and I gave up.

I also tried Deployer. But that didn’t work out well either. JS mixed with PHP leads to a mess.

Enter Ansistrano

Ok, so all that was a very long-winded precursor to the eventual solution.

However, I felt I needed to do justice to how much I have struggled to get this thing working. It’s taken 5 hours… ouch.

Needless to say, I tried to give up on getting Ansistrano working numerous times (see how I ended up at Deployer, Flightplan, webpack, etc).

In the end though, I cracked it. So here goes:

Firstly, my playbook:

---
- hosts: all

  vars:
    ansistrano_deploy_to: "/var/www/your/remote/path" # server side path you want to deploy to
    ansistrano_keep_releases: 3

    ansistrano_deploy_via: "git"

    ansistrano_git_repo: "ssh://git@your.gitlab.server/your-gitlab-user/your-project.git"
    ansistrano_git_branch: "master"


  roles:
    - { role: ansistrano.deploy }

Pay special attention to the ansistrano_git_repo entry, where I needed to add the ssh:// prefix to make this work. If you don’t, you will find Ansible doesn’t understand the path you are providing, and blanks it out instead :/

I guess I wasn’t the only person to notice this.

Also, note that the typical git path given by GitLab contains a colon, which needs to be replaced with a forward slash:

git@your.gitlab.com:your-user/your-project.git

// becomes

git@your.gitlab.com/your-user/your-project.git
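
Putting the two together, the final value ends up in this shape:

ssh://git@your.gitlab.com/your-user/your-project.git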


Now note, this is an Ansible issue, not an Ansistrano issue.

This should be enough to get most of the way there.

However, I hit upon another issue.

No matter what I did, all the Ansistrano-managed folders were being created as root.

Since the days of yore, I have been using the same set of flags on my runs of ansible-playbook, and today I was well and truly bitten on the backside:

ansible-playbook playbook/deploy.yml -i hosts -l my-server.dev -k -K  -vvvv

Ultimately this command sees me through. I’ve started using -vvvv on every playbook run as it saves me having to re-run when things inevitably go wrong. Also, for the love of God, use snapshots before running.

But yeah, my issue was that I was running with the additional flag of -s, which forced the playbook to run as root. Silly me.
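
In other words, the invocation that bit me looked like this – note the extra -s (run the whole play via sudo):

ansible-playbook playbook/deploy.yml -i hosts -l my-server.dev -s -k -K -vvvv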

Anyway, early signs are promising. It all works. I just wish it hadn’t taken me so much time to figure out these problems. Hopefully though, by sharing I can save someone some hassle in future.

An issue with React Widgets DateTimePicker

When you have the fix to a problem, but don’t know the root cause of why that problem is happening, you have two choices:

  1. Accept you have a solution, and move on, or;
  2. Accept you have a solution, and figure out the cause of the problem.

The problem is, #2 takes time. And that might not be time you have currently.

Dwelling on some problems is not the best use of my time in my current position.

So here’s my problem, and a solution, but I cannot offer you the root cause of this problem. Nor is it the de facto solution, I believe.

Using React Widgets DateTimePicker

I guess you’re either the kind of developer who creates everything themselves.

Or you first try using other people’s.

I wanted a date picker, ideally with time, and I wanted it hooked up to Redux Form.

The good news: Redux Form already works nicely with React Widgets. See the previous link for a decent tutorial, and this code sample.

As a result, I can quite easily add in a Date Time Picker that’s hooked up to my Redux Store. This is good. It pleases me.

It also allows me to start laser focusing my A/B tests.

[Screenshot: date-of-birth-as-a-timestamp – you’ve never A/B tested unless you know for sure customers don’t prefer timestamps.]

But joviality aside (wait, that was humour?), I did hit a problem along the way:

[Screenshot: react-widgets-calendar-size-issue – that’s not so good, Al.]

There is a quick fix to this.

Add in a local (S)CSS style:

.rw-popup-container.rw-calendar-popup {
  width: 24em !important;
}

[Screenshot: redux-form-datetime-picker-react-widgets – better.]

It’s good, but it’s not quite right. Good enough for me to continue, though it has been firmly lodged into the ticket queue under ‘improvement’.

Here’s what it should look like:

[Screenshot: react-widgets-datetime-picker-proper.]

So there you go. Hope it helps someone. It’s unofficial. It’s not the real fix. Any pointers appreciated.