For a while now I’ve wanted to share files between my OS X / MacBook Pro laptop and one (or ideally several) of my Ubuntu Linux PCs.
My laptop has only 256GB of disk space, and that gets consumed at an alarming rate when recording and rendering videos.
I’ve tried, and failed, to set up file sharing on Ubuntu on numerous occasions. It is, of course, down to something (or several compounding somethings) I have been doing wrong, but I have neither the patience nor the desire to figure out exactly what.
Enter Docker. Saviour of hair follicles. Provider of solutions.
File Sharing Ubuntu Style, With Docker
Let’s cut right to the chase. Here is some configuration:
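(A minimal sketch using the popular dperson/samba image; the username, password, and paths are placeholders you will want to change for your own setup.)

```yaml
version: "3"

services:
  samba:
    image: dperson/samba
    restart: unless-stopped
    ports:
      - "139:139"
      - "445:445"
    volumes:
      # host directory (before the colon) : path inside the container (after the colon)
      - /home/chris/Videos:/share/videos
    command: >
      -u "chris;mysupersecretpassword"
      -s "videos;/share/videos;yes;no;no;chris"
```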
Essentially, to make this work you need Docker and Docker Compose installed on the PC (Ubuntu, in my case) that you want to share data from.
There are some changes you will need to make.
Make sure each of the volumes match the path to a real directory on your computer.
The part before the colon is on your local PC.
The part after the colon is where that directory will be available inside the resulting running Docker container.
Customise the username and password appropriately.
You can find out what all the no;no;no bits mean in the documentation.
Then put the configuration from above into a file called docker-compose.yml somewhere on your computer. Once saved, run docker-compose up (or docker-compose up -d to run it in the background) and away you go.
Your Ubuntu File Server For Mac
Once you have your shiny new Ubuntu Samba File Server up and running, how do you connect to it from your Mac?
Pop in the IP address, or (perhaps more nicely) the name of your Ubuntu PC.
The share name corresponds with whatever you called the share in the command section of your docker-compose.yml file.
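So, assuming the hypothetical hostname ubuntu-pc and the videos share from the sketch above, the full path would look something like:

```
smb://ubuntu-pc.local/videos
```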
Once you type in the full path to your share, click Connect.
After a second or two you should see a prompt. Enter the credentials to the share you created in your docker-compose.yml file.
Bonza. You are now connected.
At this point you should have read / write access to the share. This is what I wanted, as it allows me to copy all my video files over to the network drive, rather than store them on my overpriced, under-provisioned hard disk.
One thing I get asked about, more than I ever expected, is how I do some things for CodeReviewVideos.com.
I never expected people to be interested in my setup.
But I get a fair amount of questions, all the same.
One such question has been around my editing process.
I keep things super simple. You may have noticed my lack of Hollywood effects on transitions, and what have you.
I take a lot of pleasure in recording the videos for the site.
It is truly the best bit.
I started this site because I thought how much further I’d have been, professionally, and personally, if I had just known some of this stuff a few years earlier.
That if I followed this stuff, I could be building systems that emphasised a more decoupled nature, and in doing so, I’d build systems that were extremely pleasant to work with. The code would be an enjoyable place to be.
Which ultimately would give you more freedom to spend time working on the more exciting bits.
Learning React if you’re a back end dev may seem like a pointless pain in the arse. But it could be that your life is so overwhelmed with firefighting on the back end that you never get the time you’d like to play with the front end.
It may be that once you’ve played around with some simple, yet powerful front end concepts that you’d actually start to love it. There is a lot of fun to be had there.
Behind The Scenes
That’s the stuff I like recording.
But a huge constraint on recording more videos like that has been that editing takes an equal amount of time, or more.
This week I decided to take on a dedicated editor.
It’s a big deal for me. I’ve accepted that I can’t do it all.
And because I get asked about my behind-the-scenes process so much, I decided to share some of it. More on that below.
This week I recorded two full days of Livestream stuff.
The livestream right now has literally been me recording as normal, but watching myself back on a private Discord to see just how many secrets I’d accidentally expose. No plot twist here: it would have been a few.
The exercise was very worthwhile. I found out about a little box I could use to control various aspects of the stream like an arcade keypad. Seemed totally overkill.
More realistic is to go with some sort of two screen setup, and record the secondary screen, only dragging into shot what I know is safe to share.
Or an alternative is to go full open infrastructure, and treat it as a total demo environment. I’m not sure, yet. These are some of the enjoyable questions I find myself pondering on the livestream.
I took part in a really interesting discussion late last week, and into the first part of this week regarding the usefulness of unit testing in the format most of us(?) practice.
To give some context: a problem was discovered whilst preparing a Live deployment. The problem itself was really small: an array being re-instantiated in a conditional, about 5 lines after it had originally been instantiated. That’s a very nerdy way of saying this was happening:
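(What follows is a minimal PHP sketch of the shape of the bug; the function and names are made up, not the actual project code.)

```php
<?php

// A made-up example of the bug: an array built up in a loop is
// accidentally re-instantiated a few lines later, inside a conditional.
function collectLines(array $items, bool $addSummary = false): array
{
    $lines = [];

    foreach ($items as $item) {
        $lines[] = 'item: ' . $item;
    }

    if ($addSummary) {
        $lines = []; // <-- the offending re-instantiation: everything above is lost
        $lines[] = 'summary: ' . count($items) . ' items';
    }

    return $lines;
}

var_dump(collectLines(['a', 'b'], true)); // only the summary line survives
```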
This is the real world. Stuff happens. We deal with it, we (hopefully) find a way to mitigate it, and we move on.
My process of mitigation was to create a set of unit tests for the file in question. I used my typical approach (sketched roughly after this list):
Cover the default happy path – only mandatory arguments, unit testing a few variations if needed
Cover the alternative happy path – any optional arguments
Check for the obvious bad stuff, and assert the mitigation is as expected
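As a rough sketch of what that looks like in practice, written against the made-up collectLines() function from the snippet above:

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical tests following the approach above, targeting the
// made-up collectLines() sketch from earlier.
class CollectLinesTest extends TestCase
{
    // Default happy path: mandatory arguments only.
    public function testCollectsALineForEachItem(): void
    {
        self::assertSame(['item: a', 'item: b'], collectLines(['a', 'b']));
    }

    // Alternative happy path: the optional summary flag. This is the test
    // that fails against the buggy version above and catches the re-instantiation.
    public function testKeepsItemLinesWhenSummaryIsRequested(): void
    {
        self::assertSame(
            ['item: a', 'item: b', 'summary: 2 items'],
            collectLines(['a', 'b'], true)
        );
    }

    // The obvious bad stuff: empty input should not blow up.
    public function testHandlesEmptyInput(): void
    {
        self::assertSame([], collectLines([]));
    }
}
```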
I put the code in for review, and got some interesting feedback:
I don’t think Unit testing this class is the way to go… in an ideal world
I am paraphrasing somewhat, but stick with me.
Looking at this made me question everything I do around testing.
I deeply value and trust the opinion of this reviewer, and they are telling me that unit testing a class is not the way to go?
Am I doing unit testing all wrong?
There was a fair bit more to this piece of feedback on this particular PR. The reviewer had been kind enough to offer more detail on their thoughts for this issue.
This person’s preferred approach would be to test the interactions with this class, rather than the class itself.
To test the wider system behaviour, rather than the individual steps.
And this got me to thinking. I’d heard this advice before. I’ve read this advice before. But I started to question if it had sunk in.
Am I wrong to think unit tests add value here?
If unit tests haven’t already been created for this class, is it even worth adding them now?
At some point, can explicitly untested code ever be considered trusted?
I mulled over a bunch of questions like these all weekend.
My Perspective on Unit Testing
When I’m new to a project, my approach to unit testing is to work my way up.
I start with some problem to solve, and I follow that one tiny path from beginning to end, and see what I interact with along the way.
For any class I find, I look for a unit test.
If I find one, I read it.
If one doesn’t exist, I try to create one.
This isn’t always possible, particularly on legacy code.
In that case, one solution might be to hide implementations behind an interface. This way you can A/B any new code you do write, giving you options.
Once I have done this, I create a unit test for the new / revised / alternative implementation.
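Roughly speaking, that interface step might look like this (all the names here are hypothetical, not from any real project):

```php
<?php

// The legacy behaviour I want to keep, hidden behind a new interface.
interface InvoiceNumberGenerator
{
    public function nextNumber(): string;
}

// Thin wrapper around the existing legacy code, unchanged in behaviour.
final class LegacyInvoiceNumberGenerator implements InvoiceNumberGenerator
{
    public function nextNumber(): string
    {
        // ...whatever the old implementation did...
        return uniqid('INV-');
    }
}

// The new implementation, which is easy to unit test, and which can be
// A/B'd against the legacy one because both satisfy the same interface.
final class SequentialInvoiceNumberGenerator implements InvoiceNumberGenerator
{
    private $counter = 0;

    public function nextNumber(): string
    {
        return sprintf('INV-%06d', ++$this->counter);
    }
}
```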
I keep doing this until I reach the end of the request / response life-cycle.
This causes me to write mostly one type of test.
I write a lot of unit tests. When I don’t see them, I write them.
I believe this adds value. At the very least, it adds complementary value.
Another reviewer in the same thread – another very smart person whose opinion I value – linked to some related reading. An Uncle Bob article, in particular.
I read that article twice, in full.
And I didn’t understand it.
Specifically, I didn’t understand this bit:
As the tests get more specific, the production code gets more generic.
This article takes the point of view that if your test files typically look something like this:
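(Purely for illustration; the class and method names below are made up.)

```php
<?php

use PHPUnit\Framework\TestCase;

// One test class per production class, and roughly one test method per
// public method: the 1:1 shape being questioned here.
class RegistrationServiceTest extends TestCase
{
    public function testRegister(): void { /* ... */ }

    public function testConfirmEmail(): void { /* ... */ }

    public function testResendConfirmationEmail(): void { /* ... */ }
}
```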
…then your tests are highly coupled to your production code. Which makes refactoring – true refactoring – inherently more difficult.
I am super guilty of calling any changes to my code refactoring. It sounds very official. Sorry, I can’t come to the pub, I’m refactoring.
Refactoring is defined as a sequence of small changes that keep the tests passing at all times
If the unit tests are tightly coupled to your implementation, it’s highly likely that small changes to your code break, comparatively, a lot of tests.
Keeping the test suite up to date becomes a chore, and is soon sacrificed when project managers push for constant changes. The rot sets in.
What I Learned
Look for behaviour. Then test that behaviour.
I agreed with this approach already.
My perspective of what constitutes behaviour is where I have been asking myself the most questions.
I felt I needed to understand the behaviour of that one class. As an outsider looking in, I found value in that approach.
I’ve learned to question which layer is the correct one in which to add a test, or set of tests, for the most benefit.
It may be that your problem is solved by an integration test suite. On larger projects, this test suite may not even be in the same language you’re working in. This presents different challenges.
Tools like Behat and PHPSpec have led me down paths that encourage working the way Uncle Bob describes, without my even realising it.
I’ve also learned that I still have a lot to learn about unit testing. That’s a great thing. I have ordered Martin Fowler’s Refactoring book to better inform myself of what Refactoring is truly supposed to be about.
There are some interesting links I’d like to share with you this week around this subject:
We’re apparently in for a lovely weekend of sunshine here in the UK. Perfect weather for sitting indoors coding 🙂
It’s been a busy week for me. I’ve had a long stint of travel, followed by busy days on a contract. In the evenings I’ve been fighting GitLab CI, which seems to have gone haywire. Builds are now taking hours to finish. I’ve unfortunately still not resolved this. There’s always some fun challenge to tackle.
On the plus side, thanks to the travel I did manage to see the Flying Scotsman here in Preston earlier this week.
Ok, enough ramble, on with the show…
Over the last 4+ years of creating content for CodeReviewVideos.com I’ve tried a whole bunch of different approaches. I’m always trying to optimise my process and make things a touch more efficient, with the ultimate aim of delivering the most interesting and useful software development tutorials for you.
A number of people have asked why I “drip” out content. Typically, at least until the start of this year, I would release three videos a week: Monday, Wednesday, and Friday.
This approach worked well for me.
But, I got those emails frequently enough, asking why I didn’t either:
Release more videos, or;
Upload loads at once, rather than a few at a time
I had a definitive answer to the first question:
I can typically only get three videos done per week. Each video takes a lot of effort, from planning, to creating the initial code, to writing up that process, then recording, editing, uploading, and finally publishing. On average, for every minute you see on screen, it’s taken me about 30 minutes to create.
The second question, however, I wasn’t sure on. The only way to find out would be to test the system.
As such, at the start of this year I decided I would switch to releasing blocks of content – an entire course at once, when it was done.
There’s a bunch of reasons why I don’t think this approach has worked. Probably the most disappointing for me, personally, is having had a number of subscribers cancel with feedback that I appeared to have abandoned the site. Heartbreaking.
Rather than dwell on this as a negative, I choose to take it as an experiment that didn’t work out. I’ve learned some lessons, and I know – more definitively, rather than just “gut feel” – that this is not the right process, for me.
As such, I have, in recent weeks, switched back to regular video updates. From now on, the previous approach of three videos per week will return.
Something I’m considering at the moment is livestreaming things that I am working on. If you’ve ever watched Twitch, you’ll know the sort of thing I’m on about.
The main reason for this is to capture the thought process that I go through when writing code. I think this would be incredibly interesting to share. I do try to capture this when creating the more traditional content.
But even so, sometimes that exploratory stage holds big reasons as to why the code ended up as it did. This is much harder to cover in the traditional video approach.
For each of these livestreaming sessions I would record the screen as normal, along with the audio. I’m not sure how to capture chat as of yet, as the screen real estate is already extremely limited. I record at 1280×720, which is great for clarity of font etc, but severely limiting for real world dev. These things will need to be addressed.
This idea stemmed from this tweet:
I made a backlink checker in Symfony, Node, Elixir, and Go. After trying each project, I’ve come back to the original #Symfony project and am re-writing, full #tdd. The difference between prototype and this design is staggering. It’s one way to spend a Saturday evening. #PHP
There would also be no formal, written notes created for these videos. And each video would likely be ~1 hour long. I know this isn’t for everyone, but I’d be really grateful to hear your feedback on this idea.
One thing that I found initially confusing when working with the API Platform was the creation of custom routes. In particular, in this video we address the issue of using a URI that differs from the auto-generated route name / path that’s based off our underlying entity.
This is really useful, and I use this concept in every API Platform project I’ve created.
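As a rough illustration – the entity name and paths here are made up, using the annotation style from API Platform 2.x – overriding the auto-generated path looks something like this:

```php
<?php

namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;

/**
 * By default API Platform would expose this entity under /blog_posts.
 * The "path" attributes below expose it under /articles instead.
 *
 * @ApiResource(
 *     collectionOperations={
 *         "get"={"path"="/articles"},
 *         "post"={"path"="/articles"}
 *     },
 *     itemOperations={
 *         "get"={"path"="/articles/{id}"}
 *     }
 * )
 */
class BlogPost
{
    // ...id, title, and so on...
}
```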
In this video we finish up the POST implementation for our API Platform setup.
The number of videos required to get a POST endpoint working is a little misleading. We could have done this much quicker, but the Behat tests “dogfood” our API, and as such are making use of the POST endpoint also.
This is all about killing multiple birds with a single stone.
A major selling point, for me, of the API Platform is the rapid application development potential.
As mentioned above, the POST videos make this look a lot less rapid than it really can be. We had to take care of a lot of setup / boilerplate for our testing in the previous few videos. Now we can spread our wings a little, and leverage a lot of the benefits that the API Platform provides in getting a brilliant API up and running, fast.
In the next few videos we will continue on with GETting multiple resources in one API call, PUT for updating existing resources, and DELETE for, well, I’m sure you can figure that one out yourself.
Have a Great Weekend!
Ok, that about wraps it up from me this week.
If you haven’t yet done so, please do come and say hi on the forum. It’s early days on there, but the discussions I’ve been involved with so far have been good fun. Here’s to more of them 🙂
Until next time, have a great weekend, and happy coding.
I hope this post finds you well. It’s so lovely and sunny here in the UK, it’s (almost) a shame to be inside coding. I guess that’s what laptops are for, right? So you can still code whilst sat in the garden.
The last five weeks have been incredibly busy for me. Aside from starting a new role, I’ve managed to finally finish off a couple of big tasks that have been on my TODO list for waaaay too long. These are:
adding PayPal to the site
migrating away from Disqus commenting
I want to quickly cover both.
If you’ve been getting my mailings / reading the blog for any length of time, you’ll likely be sick of hearing about PayPal. When switching from the old Symfony-only implementation of CodeReviewVideos (version 1), I knew I’d want to offer more than just Stripe as a payment option. Therefore, I dutifully planned ahead and made the process of accepting payments as modular as possible, and made Stripe payment processing just one possible implementation.
This all worked absolutely fine. I was plenty comfortable with Stripe already, and had my original implementation to use as a reference.
What I did wrong, in hindsight, was base my implementation too heavily on Stripe.
To be clear, Stripe get a lot of things right. If you have to accept payments, working with Stripe’s API is a joy. It’s hard not to be influenced by how their system works.
As such, some of the ways I implemented things like the card details endpoint, the invoicing, and even little things like what data was being captured if using a debug level of logging were too heavily tied to Stripe.
These things combined to make adding the Braintree integration (aka PayPal’s API) take a lot longer than planned (~6 months, against my estimate of about 2-4 weeks). There were some other complications, too. Getting my account approved was perhaps something I should have done upfront; instead, I left it until I was about 8 weeks into development. In hindsight, if they had declined my application, I’d have wasted a lot of time. Not to mention, when I finally thought I might get rejected (it took a while, I got fearful), I stopped development entirely – for about 2 weeks.
The biggest mistake I made, though, was in the DB schema. Even though I knew upfront that I’d ultimately want to allow people to subscribe with either PayPal or Stripe, I made the relationship between a User account and their Payment Information a one-to-one.
This was deployed to prod.
All worked fine when all I had was Stripe.
The problem dawned on me that if a User was paying with Stripe, then canceled their subscription, then rejoined with PayPal, then canceled again, and rejoined with Stripe, there was no way to get their previous payment info back. It sounds like an edge case, but if I’ve learned anything from CodeReviewVideos, it’s that all manner of unexpected circumstances can, and do arise. And more frequently than I’d ever have thought.
There was another issue. If a User was paying with Stripe, and then switched to PayPal, with a one-to-one setup they would lose their Stripe invoices. Again, major headaches.
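In Doctrine terms, the mistake looked roughly like this – a simplified sketch for contrast, not my real entities:

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * Simplified sketch of the mistake, not the real schema.
 *
 * @ORM\Entity
 */
class User
{
    /**
     * What I shipped: one User has exactly one PaymentInformation, so
     * switching gateways (Stripe <-> PayPal) throws the old record away.
     *
     * @ORM\OneToOne(targetEntity="PaymentInformation")
     */
    private $paymentInformation;

    /**
     * What it needed to be: one User, many PaymentInformation records,
     * one per gateway the user has ever subscribed through.
     *
     * @ORM\OneToMany(targetEntity="PaymentInformation", mappedBy="user")
     */
    private $paymentInformations;
}
```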
So even when I’d finished the development, I still had a major migration ahead of me. And that consumed about 4 weeks in terms of planning, writing migration scripts, testing, setting up a whole new environment to test the thing end to end… phew, the work just kept on, and on.
Anyway, to cut a long story short, it’s done. The migration went well, and PayPal is now in prod. I think I celebrated by immediately cracking on with the Disqus migration. Ha.
Migrating To Discourse
One aspect of the CodeReviewVideos web site that I’ve never been happy with has been the use of Disqus.
There was a nasty user experience whereby you’d have to sign up once to use the site, and again – and entirely separately – to leave a comment. It sucked. But as far as pragmatic solutions go, it was good enough to get going.
I also read that Disqus would be enabling adverts on my comments section – though to the very best of my knowledge, that never happened. There was talk of a monthly fee. I don’t know. I don’t begrudge them charging for their service, but that wasn’t for me.
Adding Discourse wasn’t super easy, but at the same time, it wasn’t quite as hardcore as the PayPal change.
The complications came by way of Single Sign On, hosting Discourse (via Docker), and replacing the existing comments.
Single Sign On wasn’t as bad as I’d expected. I thought that would be the hardest part. I found a Laravel package which I butchered (sorry, crafted with surgical precision) into something that worked nicely with Symfony.
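For the curious, the guts of Discourse’s SSO are pleasingly simple: verify an HMAC signature on the incoming payload, then redirect back with a signed payload of your own describing the logged-in user. A rough sketch (variable names and user details are illustrative, and $ssoSecret must match the sso secret configured in Discourse):

```php
<?php

// A rough sketch of the Discourse SSO handshake from the Symfony side.
function buildDiscourseSsoRedirectUrl(string $sso, string $sig, string $ssoSecret, array $user): string
{
    // 1. Verify the signature Discourse sent along with the payload.
    if (!hash_equals(hash_hmac('sha256', $sso, $ssoSecret), $sig)) {
        throw new \RuntimeException('Invalid SSO signature');
    }

    // 2. Pull the nonce and return URL out of the incoming payload.
    parse_str(base64_decode($sso), $incoming);

    // 3. Build and sign an outgoing payload describing our logged-in user.
    $payload = base64_encode(http_build_query([
        'nonce'       => $incoming['nonce'],
        'external_id' => $user['id'],
        'email'       => $user['email'],
        'username'    => $user['username'],
    ]));

    return $incoming['return_sso_url']
        . '?sso=' . urlencode($payload)
        . '&sig=' . hash_hmac('sha256', $payload, $ssoSecret);
}
```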
Hosting Discourse wasn’t too bad. I use Rancher for my Docker container management, but Discourse’s Docker implementation just wasn’t playing ball. In the end I got a new VPS from Hetzner and hosted it there instead. There were some tweaks needed, but overall it wasn’t so bad.
Replacing the existing comments was the real tricky part. Disqus provide a one-way export – something I think is a bit weird. By which I mean you can export your data from Disqus, but they won’t let you revert to a previous ‘backup’. Anyway, I didn’t need that, I just needed the export, so that was fine and dandy.
Once I had the export I needed to get that data into Docker, and then tweak the provided Disqus import script to run against my export. That all generally worked, but it seems to have missed off some of the comments. I’m not sure why, but I felt the end result was “good enough” anyway.
The import worked by looking for any existing user, and then mapping a Disqus email address to the user’s Discourse email address. If the Disqus commenter never had a site membership, their comment will now be assigned to some anonymous username like Disqus_312jklsdf2kl or whatever. Not perfect, but again, good enough.
Now what happens is when I create a new video, the comments section automagically creates a new forum post under the username chris. As such, if you look at the forum today (and you should, because it’s ace), you’ll see I’ve been posting new topics like a madman. This will slow down over the next few days.
As I write this I still have email functionality globally disabled on the forum. This will change, possibly over the weekend, once I’m suitably confident everything has settled down. You may recall receiving an email from the staging forum a few weeks back – yep, I made a boob there. Sorry about that. Once bitten, twice shy.
I mentioned at the start that aside from the PayPal and Discourse changes, I have also recently started a new role.
Sometimes I get emails asking why I don’t have any new videos in a while, or why I haven’t updated X, Y, or Z to the latest and greatest. Believe me, I’d love to spend all day making new videos. Unfortunately CodeReviewVideos is not my full time job.
As some of you may know – I’m fairly open about it – I am a contractor by day.
One of the really nice things about being a contractor is getting to experience lots of different projects, in various scales of complexity, and reliability 🙂
I get a lot of interesting ideas for videos from my day-to-day work experiences. And this means that the content on CodeReviewVideos is very much about actionable, real world stuff you can use right now, today in your projects. It’s also battle tested / used in real world production websites.
There’s a bunch of videos I’ve written up but haven’t had time to record as of yet. The reason I mention the whole day job thing is that it means I have only a fixed amount of time per week to devote to the site, and I have to prioritise tasks accordingly. Above all else, I prefer making videos. This is why I started, and continue to run, CodeReviewVideos. I love sharing, and from the feedback I get from so many of you (thank you!) you find it useful, too.
But of course, over the last few weeks I’ve had these other big site changes (dare I say, improvements) to make. And that has meant I haven’t been recording. Thankfully all that can change, and I can get back to recording new stuff.