How I Fixed: argument “$entityManager” of method “__construct()” references class “Doctrine\ORM\EntityManager” but no such service exists

Ok, mega crazy title. And honestly, this is just the tip of the iceberg. Allow me to set the scene:

Lately I have had email conversations, read threads on hackernews, and even had a forum post challenging how and why I do things the way I do.

The summary of the email conversations: why do I persist with Symfony / PHP generally, when other, “better” solutions exist? And the same can be said for the linked forum post.

And then yesterday I saw that linked Hacker News thread:

It was at around 320 comments when I read it. The top reply was the most interesting for me:

There’s a bit more to it than that, and the thread itself is worth a read. There are basically 400+ different suggested ways to “get a web app up quickly in 2018”. I’d disagree with a bunch of them, but then, they aren’t the way I do things.

Wait, what?

Yeah, I’d disagree with Docker + Ansible + Terraform + nginx + (Symfony/Rails/Go/etc) + Postgres, etc, being quick to get up and running.

Sure, once you know the drill / have projects to copy / paste from, it can be quick, relative to the first time you had to learn and implement all this stuff. But it’s not quick quick. It still takes me ages.

And so I challenged myself: Just how quickly could I get a typical project up and running for myself? The perfect question for a Saturday night.

My Setup

The setup I most typically use is:

  • Terraform for spinning up a server
  • Ansible for prepping the box
  • Docker for running stuff
  • GitLab for code hosting + CI
  • nginx for my web server
  • Symfony / PHP 7 for the code
  • Postgres for the DB

This is a lot of stuff, and it’s not super quick to set up.

This is why I started by mentioning the email / forum conversations whereby people ask: is Symfony / PHP the best tool of choice?

Well, maybe not. I don’t know. I just know I’m more productive with Symfony and PHP generally than everything else – though JavaScript is a close second.

Over the past few years I’ve tried other setups. It’s hard to invest time in learning another stack when the end result may be basically identical – what did I gain from the time invested? Could that time have been better invested elsewhere? Hard questions to answer.

But yeah, Node and, more recently, Golang have been stronger contenders than usual for my attention. Anyway, that’s a bit of a digression.

The Problem

As mentioned above, that’s my stack. Learning it all took ages (years?), but as each project is, from an infrastructure point of view, very similar, I can now spin up a new environment very quickly.

My challenge was to find out how quickly. I got most of the core stuff up and running in ~1.5 hours.

I didn’t get the Behat testing environment set up in that time. Because I hit on an issue.

I wanted a simple JSON API as the outcome of this process. By simple I mean basically CRUD.

With the basic stack up and running, I created a basic entity (one property), and updated the DB accordingly. Doctrine was used for DB interactivity. Again, very typical for my projects.

In order to get data out of the database, I needed to create a repository. There’s an awesome post on this by Tomas Votruba called How to use Repository with Doctrine as Service in Symfony.

As a side note here: if you haven’t already, I would highly recommend reading Tomas’ blog, as it’s jam packed with things you’d likely find very useful and interesting. Also, check out his GitHub projects, with Rector in particular being incredible.

I followed the linked article, and hit upon the following:
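That being the error from this post’s title:

```
argument "$entityManager" of method "__construct()" references class "Doctrine\ORM\EntityManager" but no such service exists
```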

What was weird to me at this point is that I’ve followed this article before, but never hit upon any problems.

Anyway, I did as I was told – I switched up the code to reference the EntityManagerInterface instead:
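What follows is recreated from memory rather than the exact original, so treat it as a close approximation (the App\ namespaces are my assumption of a standard Symfony 4 layout):

```php
<?php

namespace App\Repository;

use App\Entity\TemporaryEmail;
use Doctrine\ORM\EntityManagerInterface;
use Doctrine\ORM\EntityRepository;

class TemporaryEmailRepository
{
    /**
     * @var EntityRepository
     */
    private $repository;

    public function __construct(EntityManagerInterface $entityManager)
    {
        // grab the entity-specific repository from the injected Entity Manager
        $this->repository = $entityManager->getRepository(TemporaryEmail::class);
    }

    public function findAll(): array
    {
        return $this->repository->findAll();
    }
}
```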

This is a really simple class.

For complete clarity, here’s basically the rest of the app at this point:
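At this stage that was little more than a single controller action. Again a sketch – the class and route names are my own, for illustration:

```php
<?php

namespace App\Controller;

use App\Repository\TemporaryEmailRepository;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\Routing\Annotation\Route;

class TemporaryEmailController
{
    /**
     * @Route("/", methods={"GET"})
     */
    public function index(TemporaryEmailRepository $repository): JsonResponse
    {
        // return every record as JSON
        return new JsonResponse($repository->findAll());
    }
}
```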

And the entity:
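Something along these lines – the email property is illustrative, but the annotation block at the top is the important part:

```php
<?php

namespace App\Entity;

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity(repositoryClass="App\Repository\TemporaryEmailRepository")
 */
class TemporaryEmail
{
    /**
     * @ORM\Id()
     * @ORM\GeneratedValue()
     * @ORM\Column(type="integer")
     */
    private $id;

    /**
     * @ORM\Column(type="string", length=255)
     */
    private $email;

    public function getId(): ?int
    {
        return $this->id;
    }

    public function getEmail(): ?string
    {
        return $this->email;
    }

    public function setEmail(string $email): self
    {
        $this->email = $email;

        return $this;
    }
}
```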

This is basically a generated entity with a couple of tweaks. It’s not the final form, so don’t take this as good practice, or whatever.

The purpose of what this class is supposed to do is also not relevant here, but will be discussed in a future video.

Anyway, the problem is evident in the code above. If you can spot it, then good stuff 🙂

If not, keep reading.

So with three records in the DB, all the connectivity setup, things looking decent, I sent in a request to my only endpoint – GET /.

And it didn’t work. I hit a 504 Gateway Timeout error from nginx.

Very confusing, overall. I mean, this is basically copy / paste from a different project that works just fine. Only, I’d renamed the project. What the heck?

I hit refresh a few times, you know, to make sure the computer wasn’t lying to me. And then everything started going unresponsive. Very odd. I’ve just bumped the system from 16GB to 32GB, and all I have is a few Docker containers running, a browser with admittedly too many open tabs, and one instance of PHPStorm. Surely this couldn’t be taxing the system. htop told me a different story:

Yeah, I know, that swap size is ridiculous. Forgive me.

The nginx logs weren’t really that helpful. I needed to look at the PHP log output, which in this case is achieved via docker logs:

Line 23 of TemporaryEmailRepository is:
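I no longer have the original file to hand, but going by the repository sketch above, it would be the getRepository call in the constructor:

```php
$this->repository = $entityManager->getRepository(TemporaryEmail::class);
```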

I mucked around a bit, trying out injecting the ObjectManager instead, but hit the same issue.

Then I wondered if it was the act of injecting itself, or actually using the injected code (durr). So I commented out the call:
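Leaving the constructor as, roughly:

```php
public function __construct(EntityManagerInterface $entityManager)
{
    // commented out to test whether injecting alone was the problem
    // $this->repository = $entityManager->getRepository(TemporaryEmail::class);
}
```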

Reloading now, I was no longer seeing the massive RAM spike, and looking at what that call was doing pushed me down the right lines.

I’ll admit, it took me much longer than I’d have liked to realise my mistake:
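The entity’s annotation was pointing straight back at the repository I was injecting the Entity Manager into:

```php
/**
 * @ORM\Entity(repositoryClass="App\Repository\TemporaryEmailRepository")
 */
class TemporaryEmail
```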

Now, I’m not 100% certain on the conclusion here, but this is my best guess.

I believe I had created a circular reference. I’d injected the Entity Manager into the repo. Immediately I’d asked for the entity. The entity has an annotation pointing at the repo, which triggered the endless loop.

Anyway, removing the repositoryClass attribute fixed it up. Kinda obvious in hindsight.

The Conclusion

I’m convinced I could get an environment up faster than this. Without hitting this issue, I believe I would be at the ~2 hour mark to go from idea to having a solid setup that’s good for writing code in a sane, reproducible, reliable / testable way.

I think back to 10+ years ago, where I’d be up and running so much faster. PHP is essentially a scripting language. With shared hosting, you’d have the DB ready, the web server ready, you just needed to write a bit of code, connect to the DB, push the code up somehow (FTP :)) and bonza, you’re up and running.

Looking at it that way now, I’m amazed how far I’ve come. There’s a massive overhead with using frameworks – time spent learning (which never stops, unless your framework of choice goes EOL), patching, managing all this stuff, learning new ways to make things better… is it all worth it? I think so.

I think the biggest takeaway for me lately is that whilst I’ve shipped a lot less code to prod in the last ~5 years than in the 5 years before that, the code I do ship is more stable and maintainable.

Nagging in my mind, however, is the question: what’s the point of this slow, methodical approach if it takes so long that I either don’t bother with entire ideas, or, by the time I’ve shipped them, I’m so burned out by the seeming complexity of the whole thing that I lose interest in taking them further?

Anyway, I appreciate this is half helpful, half rant. I just needed to blog it and get these thoughts out of my head.

My GitLab Runner Config.toml [Example]

I hit an annoying issue this week, the root cause of which I’m still not sure.

Last week I bumped GitLab from 10.6 to 10.8, and somehow broke my GitLab CI Runner.

Somewhere, I have a backup of the config.toml file I was using. I run my GitLab CI Runner in a Docker container. I only run one, as it’s only for my projects. And one is enough.

Somehow, the Runner borked. Annoyingly, I had neither a record of the running version (never use :latest unless you like uncertainty) nor the config.toml file to hand, and recreating the Runner without it has been a pain.

So for my own future reference, here is my current GitLab Runner config.toml file:
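What follows is a representative example rather than my file verbatim – a single Runner using the Docker executor, with the name, url, and token values as placeholders to swap for your own:

```toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER-TOKEN-GOES-HERE"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    # privileged + the docker.sock mount allow building Docker images in CI
    privileged = true
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
  [runners.cache]
```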

FWIW this isn’t perfect. I’m hitting a major issue currently whereby GitLab CI Pipeline stages with multiple jobs in the stage are routinely failing. It’s very frustrating. It’s also not scheduled to be fixed until v11, afaik.

The 2018 Beginners Guide to Back End (JSON API) + Front End Development

It’s been a few weeks in the making, but I am happy now to reveal my latest course here on CodeReviewVideos:

The 2018 Beginners Guide to Back End (JSON API) + Front End Development.

This course will cover building a JSON-based API with the following back-end stacks:

  1. ‘raw’ Symfony 4 (PHP)
  2. Symfony 4 with FOSRESTBundle (PHP)
  3. API Platform (PHP)
  4. Koa JS (JavaScript / node)

Behat will be used to test all of these APIs. One Behat project, four different API implementations – in two different languages (PHP and JS).

We’re going to be covering the happy paths of GET, POST, PUT, (optionally) PATCH, and DELETE.

We’ll also be covering the unhappy paths. Error handling and display is just as important.

Where possible we’re going to try and use just one Behat feature file. It’s not always possible – the various implementations don’t always want to behave identically.

There’s a ton of good stuff covered in these videos. But the back end is only half the battle.

Whether you want to “catch them all”, or you’re working with a dedicated front-end dev, it’s definitely useful to know the basics of both.

With that in mind, you can pick and choose whether to implement the back-end, or front-end, or both.

If you don’t want to implement a back-end yourself, cloning any of the projects and getting an environment up and running is made as easy as possible by way of Docker. But you don’t need to use Docker. You can bring-your-own database, and do it that way, too.

The Front End

Whatever back end you decide to spin up, the front end should play nicely.

We’re going to implement a few different front-ends. The two I’m revealing today are:

  1. ‘raw’ JavaScript
  2. React

I have plans for a few others, but each implementation is a fair amount of work and I don’t want to over promise at this stage. There are definitely at least two more coming, but let me first get these two on the site 🙂

The raw JavaScript approach aims to show how things were in the ‘bad old days’. The days before your package manager would take up ~7GB of your hard disk with its cache directory.

The benefit of working this way is that there’s really no extra ‘stuff’ to get in the way. We can focus on making requests, and working with responses.

But that said, this is 2018 and the many modern JavaScript libraries and frameworks are fairly awesome. You’ll definitely get a renewed sense of appreciation for how much easier your life is once you’re comfortable using a library like React, after having done things the hard way.

Again, as mentioned we will cover more than just raw JS and React. Currently each implementation is between ten and fifteen videos. Each video takes a couple of hours to write up, and another couple of hours to record on average. I’m going as fast as I can, and will upload and publish as quickly as possible.

You can watch them as they drop right here.

Site Update

Behind the scenes over the past 10 weeks I have been working on integrating CodeReviewVideos with Braintree.

This is to enable support for PayPal.

I tried to create a ticket for everything I could think of ahead of starting development.

And I added a new ticket for any issue I hit during development. I’m not convinced I tracked absolutely everything, but even so I completely underestimated just how much work would be involved in this feature.

Being completely honest, I have never been more envious of Laravel’s Spark offering. For $99 they get Stripe and Braintree integration, and a whole bunch more. Staggering.

There’s a bunch of other new and interesting features in this release.

I’ve taken the opportunity to migrate the API from Symfony 3 to Symfony 4. A bunch of new issues arose during this transition – I hadn’t given it much prior thought, but the new front controller (public/index.php) totally broke my Behat (app_acceptance.php) setup.

This work is also enabling the next major feature which I will start work on, once PayPal is live. More on that in my next update.

I appreciate that from the outside looking in, there doesn’t seem to have been a great deal of activity on the site over the last few weeks. I can assure you that behind the scenes, there has never been more activity.

Have A Great Weekend

Ok, that’s about it from me for the moment.

As ever, have a great weekend, and happy coding.

P.S. – I would be extremely grateful if you could help me spread the word by clicking here to tweet about the new course.

How I Fixed: “error authorizing context: authorization token required”

I love me some Dockerised GitLab. I have the full CI thing going on, with a private registry for all my Docker images that are created during the CI process.

It all works real nice.

Until that Saturday night, when suddenly, it doesn’t.

Though it sounds like I’m going off on a tangent, it’s important to this story that you know I recently changed my home broadband ISP.

I host one of my GitLab instances at my house. All my GitLab instances are now Dockerised, managed by Rancher.

I knew that as part of switching ISPs, there might (read: 100% would) be “fun” with firewalls, and ports, and all that jazz.

I thought I’d got everything sorted, and largely, I had.

Except I decided that whilst all this commotion was taking place, I would slightly rejig my infrastructure.

I use LetsEncrypt for SSL. I use the LetsEncrypt certs for this particular GitLab’s private registry.

I had the LetsEncrypt container on one node, and I was accessing the certs via a file share. It seemed pointless, and added complexity (the aforementioned extra firewall rules), which I could remove if I moved the container onto the same box as the GitLab instance.

I made this move.

Things worked, and I felt good.

Then, a week or so later, I made some code changes and pushed.

The build failed almost immediately. Not what I needed on a Saturday night.

In the build logs I could see this:
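That being the same error as in this post’s title:

```
error authorizing context: authorization token required
```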

This happened when the CI process was trying to log in to the private registry.

After a bit of head scratching, I tried from my local machine and sure enough I got the same message.

My Solution

As so many of my problems seem to, it boiled down to permissions.

Rather than copy the certs over from the original node, I let LetsEncrypt generate some new ones. Why not, right?

This process worked.

The GitLab and Registry containers used a bind-mounted volume to access the LetsEncrypt certs inside the container at the path /certs/.

When opening a shell in each container, I would be logged in as root.

Root being root, I had full permissions. I checked each file with a cheeky cat and visually confirmed that all looked good.

GitLab doesn’t run as root, however, and the files were owned by root with 600 permissions:
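From memory, the listing looked something like this (standard LetsEncrypt file names; sizes and dates illustrative):

```
$ ls -l /certs/
-rw-r--r-- 1 root root 3494 May 26 19:29 fullchain.pem
-rw------- 1 root root 1704 May 26 19:29 privkey.pem
```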

The user GitLab is running as doesn’t have permission to read the private key.

Some more error output that may help future Googlers:

Argh.

Thankfully I hadn’t deleted the old cert, so I went back and saw that I had previously set 0640 on the private key in the old setup.

Directory permissions for the certs were set to 0750, with execute being required as well as read.
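Or in command form (paths illustrative):

```
chmod 0750 /certs/
chmod 0640 /certs/privkey.pem
```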

In my case this was sufficient to satisfy GitLab.

When making the change on the new node, I could then immediately log back in.

A Tip To Spot This Sooner

I would strongly recommend that you schedule your project to run a build every ~24 hours, even if nothing has changed.

This will catch weird quirks that aren’t related to your project, but have inadvertently broken your project’s build.

It’s much easier to diagnose problems whilst they are fresh in your mind.

Also, ya’ know, better documentation! This is exactly why I’m now writing this post. So in the future when I inevitably make a similar mistake, I’ll know where to look first 🙂

Symfony with Redis

In a bid to make getting up and running with CodeReviewVideos tutorials easier going forwards, I have created a new repo called the Docker and Symfony3 starting point.

I will be using this as the basis for all projects going forwards as it dramatically simplifies the process of setting up each tutorial series, and should – in theory – make reliably reproducing my setups much easier for you.

I’ve had a lot of feedback lately to say it’s increasingly hard work to follow some of the older tutorials as the dependencies are out of date.

Taking this feedback on board, I will do my best to update these projects in the next few weeks-to-months. I am, however, going to wait for Symfony 4 to land before I spend any time updating stuff. No point duplicating a bunch of effort. Symfony 4 drops in November, in case you are wondering.

Video Update

This week saw two new videos added to the site. Unfortunately I was ill early in the week so didn’t get to record the usual third video.

#1 – [Part 1/2] – Symfony 3 with Redis Cache

Caching is a (seemingly) easy win.

Imagine we have some expensive operation: maybe a heavy computation, or some API request that needs to go across the slow, and unpredictable internet.

Wouldn’t it be great if we did this expensive operation only once, saved (or cached) the result, and then for every subsequent request, sent back the locally saved result?

Yes, that sounds awesome.

Symfony has us covered here. The framework comes with a bunch of ways we can cache and store data.

Redis seems to be the one I find most larger organisations like to use, so that’s one of the reasons behind picking Redis out of the bunch.

At this point you may be thinking:

But Chris, I don’t have a Redis instance just laying around waiting for use, and I’m not rightly sure how I might go about setting one up!

Well, not to worry.

To make life as simple as possible, we are going to use Docker.

Docker… simple… the same sentence?!

Well, the jolly good news is that you don’t need to know anything about Docker to use this setup. At least, I hope you won’t. That’s the theory.

As mentioned above, I am making use of the new Docker / Symfony 3 starting point repo. I’ve tweaked this a touch for our Redis requirements.

By the end of this video you will have got a Symfony website up and running, and cached data into Redis. We will cover the setup required, both from a Docker perspective, and the config needed inside Symfony.
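As a taster, the Symfony side of the config comes down to something like this – Symfony 3.3+ syntax, with redis://redis:6379 assuming a Docker Compose service named redis:

```yaml
# app/config/config.yml
framework:
    cache:
        app: cache.adapter.redis
        default_redis_provider: 'redis://redis:6379'
```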

#2 – [Part 2/2] – Symfony 3 with Redis Cache

In the previous video we put all our caching logic into a controller action.

That’s really not going to cut it beyond your first steps. In reality you’re going to want to move this stuff out into a separate service.

To give you an example of how I might do it:

Controller calls CacheApiWrapper which calls Api if needed.

That might not be making much sense, so let’s break it down further.

Let’s imagine an app without caching.

We need to call a remote API to get the day’s prices of Widgets.

We hit a controller action which delegates the API call off to our Api service, which really manages the API call for us. When the API call completes (or fails), it returns some response to the calling controller action, which in turn returns the response to the end user.

If that API call always returns the same response every time it is called – maybe the prices of Widgets only change every 24 hours – then calling the remote API for every request is both pointless, and wasteful.

We could, if we wanted to, add the Caching functionality directly into the Api service.

I don’t like this approach.

Instead, I prefer to split the responsibility of Caching out into a separate ‘layer’. This layer / caching service has an identical API to the Api service. This caching service takes the Api service as a constructor argument, and wraps the calls, adding caching along the way.

If all this sounds complex, after seeing some code, hopefully it will make a bit more sense:
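A sketch of the controller side – class and method names are mine, for illustration:

```php
<?php

namespace AppBundle\Controller;

use AppBundle\Widget\WidgetPriceProvider;
use Symfony\Component\HttpFoundation\JsonResponse;

class WidgetPriceController
{
    private $widgetPriceProvider;

    public function __construct(WidgetPriceProvider $widgetPriceProvider)
    {
        $this->widgetPriceProvider = $widgetPriceProvider;
    }

    public function priceAction(): JsonResponse
    {
        // the controller knows nothing about caching - it just asks for prices
        return new JsonResponse($this->widgetPriceProvider->fetchWidgetPrices());
    }
}
```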

This should hopefully look familiar.

The unusual part is the WidgetPriceProvider. Well, they do say the two hardest parts of computer science are cache invalidation, and naming things…

Really WidgetPriceProvider is the best name I could think of to explain what’s happening. This service provides Widget Prices.

How it provides them is where the interesting part happens:
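Here is how I might wire that up, as a sketch using the PSR-6 cache pool that Symfony’s cache component provides – the cache key and expiry are arbitrary choices of mine:

```php
<?php

namespace AppBundle\Widget;

use Psr\Cache\CacheItemPoolInterface;

class WidgetPriceProvider implements WidgetPriceInterface
{
    private $widgetPrice;
    private $cache;

    public function __construct(WidgetPrice $widgetPrice, CacheItemPoolInterface $cache)
    {
        $this->widgetPrice = $widgetPrice;
        $this->cache = $cache;
    }

    public function fetchWidgetPrices(): array
    {
        $cacheItem = $this->cache->getItem('widget_prices');

        // cache hit: return the stored prices without touching the API
        if ($cacheItem->isHit()) {
            return $cacheItem->get();
        }

        // cache miss: delegate to the real API service
        $prices = $this->widgetPrice->fetchWidgetPrices();

        // note: whatever comes back gets cached, even an empty array
        $cacheItem->set($prices);
        $cacheItem->expiresAfter(3600);
        $this->cache->save($cacheItem);

        return $prices;
    }
}
```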

As mentioned, WidgetPriceProvider is a wrapper / layer over the Api Service.

It also has the cache injected.

This way the API call is separated from the cache, and can be used directly if needed. Sometimes I don’t want to go via the cache. Maybe in the admin backend, for example.

Note that this service implements WidgetPriceInterface. This isn’t strictly necessary. The interface defines a single public method: fetchWidgetPrices.
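A minimal version of that interface:

```php
<?php

namespace AppBundle\Widget;

interface WidgetPriceInterface
{
    public function fetchWidgetPrices(): array;
}
```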

The reason for this is, as mentioned earlier, I want the caching layer (WidgetPriceProvider) to use the same method name(s) as the underlying API service.

Speaking of which:
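Again a sketch – I’m assuming Guzzle as the HTTP client, and the endpoint URL is made up:

```php
<?php

namespace AppBundle\Widget;

use GuzzleHttp\ClientInterface;

class WidgetPrice implements WidgetPriceInterface
{
    private $client;

    public function __construct(ClientInterface $client)
    {
        $this->client = $client;
    }

    public function fetchWidgetPrices(): array
    {
        try {
            $response = $this->client->request('GET', 'https://api.example.com/widget-prices');

            return json_decode($response->getBody()->getContents(), true);
        } catch (\Exception $e) {
            // swallow the failure and return an empty array - see the
            // gotcha below, as this empty result will still be cached
            return [];
        }
    }
}
```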

So really it’s about delegating responsibility down to a more appropriate layer.

Be Careful – don’t blindly copy this approach.

There is a gotcha in this approach which may not be immediately obvious.

If your API call fails, the bad result (an empty array in this case) will still be cached.

Adapt according to your own needs. It may be better to move the try / catch logic out of the WidgetPrice class and into the WidgetPriceProvider instead.

Or something else. It’s your call.

Anyway, that’s an excursion into a potentially more real world setup.

Hopefully you find this week’s videos useful.

As ever, have a great weekend and happy coding.

Chris