I have no idea how to explain this one, but I’ll do my best.
Infrequently, when I open the lid on my 2020 MacBook Pro, the laptop wakes up but the screen is “washed out” – it has this weird, very white overlay going on.
Nothing I have done seems to resolve this.
It started almost immediately after purchasing, and subsequent OS updates haven’t solved it. I have no external monitors plugged in, or in fact, anything external at all. The one thing that may be a commonality is having the power cable plugged in.
Annoyingly, when I go to shut down / restart (which fixes it, btw), the screen comes back to normal just as I’m shutting down. How frustrating.
So a cunning fix I’ve found – that solves a crappy problem with a three grand laptop – is to open an app like iTerm, then go to shutdown.
iTerm will then prompt saying “hey, do you really want to shut down?” And this allows me to stop the shutdown process and my screen goes back to normal.
Should I need a bodge workaround for a top of the line Apple product? Apparently: yes.
I should add, btw, that this is not the only problem I have with this laptop. I also suffer from:
- intermittent key repeat / sticking;
- the headphone jack not working on wake from sleep;
- my fat fingers accidentally touching the Touch Bar when typing.
I’ve recently been forced to migrate from Rancher v1.x to using docker-compose to manage my production Docker containers.
In many ways it’s actually a blessing in disguise. Rancher was a nice GUI, but under the hood it was a total black box. Also, fairly recently they migrated to v2, and no true transition path was provided – other than “LOL, reinstall?”
Anyway, one thing that Rancher was doing for me – at least, I think – was making sure the log files didn’t eat up all my hard disk space.
This one just completely caught me by surprise, as I am yet to get my monitoring setup back up and running on this particular box:
➜ ssh firstname.lastname@example.org # obv not real
Welcome to chris server
System information as of Wed May 13 12:28:24 CEST 2020
System load: 3.37
Usage of /: 100.0% of 1.77TB
Memory usage: 13%
Swap usage: 0%
Users logged in: 0
=> / is using 100.0% of 1.77TB
There’s a pretty handy command to drill down into exactly what is eating up all your disk space – this isn’t specific to Docker either:
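One such command – and I’m assuming plain GNU coreutils here, so treat the exact flags as my choice rather than gospel – is du, which sums disk usage per directory:

```shell
# Show the size of each top-level directory under /, largest first.
# sudo avoids "permission denied" noise on root-owned directories.
sudo du -h --max-depth=1 / 2>/dev/null | sort -hr | head -n 15
```

From there, re-run it against whichever directory dominates to drill further down – in my case that trail eventually led to /var/lib/docker.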
Basically, this wasn’t caused by Docker directly. This was caused by my shonky migration.
The underlying issue here is that some of the Docker containers I run are Workers. They are little Node apps that connect to RabbitMQ, pull a job down, and do something with it.
When the brown stuff hits the twirly thing, they log out a bit of info to help me figure out what went wrong. Fairly standard stuff, I admit.
However, in this new setup, there was no limit to what was getting logged. I guess previously Rancher had enforced some max filesize limits or was helpfully rotating logs periodically.
In this case, the first port of call was to truncate a log. This might not actually be safe, but seeing as it’s my server and it’s not mission critical, I just truncated one of the huge logs:
/var/lib/docker/containers/6bfcad1f93a7fffa8f0e2b852a401199faf628f5ed7054ad01606f38c24fc568 # ls -la
drwx------ 4 root root 4096 May 9 10:26 .
drwx------ 36 root root 12288 May 9 16:50 ..
-rw-r----- 1 root root 301628608512 May 13 12:44 6bfcad1f93a7fffa8f0e2b852a401199faf628f5ed7054ad01606f38c24fc568-json.log
drwx------ 2 root root 4096 May 2 10:14 checkpoints
-rw------- 1 root root 4247 May 9 10:26 config.v2.json
-rw-r--r-- 1 root root 1586 May 9 10:26 hostconfig.json
-rw-r--r-- 1 root root 34 May 9 10:25 hostname
-rw-r--r-- 1 root root 197 May 9 10:25 hosts
drwx------ 3 root root 4096 May 2 10:14 mounts
-rw-r--r-- 1 root root 38 May 9 10:25 resolv.conf
-rw-r--r-- 1 root root 71 May 9 10:25 resolv.conf.hash
truncate --size 0 6bfcad1f93a7fffa8f0e2b852a401199faf628f5ed7054ad01606f38c24fc568-json.log
That freed up about 270GB. Top lols.
Anyway, I had four of these workers running, so that’s where all my disk space had gone.
Not Out Of The Woods Just Yet
There are two further issues to address at this point:
Firstly, I needed to update the Docker image to set the proper path to the RabbitMQ instance. This would stop the log file spam. Incidentally, in the space between truncating the log and running a further ls -la, it was already back up to 70MB. That’s some aggressive connecting.
This would have been nicer as an environment variable – you shouldn’t need to do a rebuild to fix a parameter. But that’s not really the point here. Please excuse my crappy setup.
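For what it’s worth, docker-compose does make the environment-variable route easy. A sketch – the service name, image, and RABBITMQ_URL variable below are illustrative placeholders, not my actual setup:

```yaml
# docker-compose.yaml (fragment)
services:
  worker:
    image: my-worker:latest            # placeholder image name
    environment:
      RABBITMQ_URL: "amqp://rabbitmq:5672"   # fix the endpoint without a rebuild
```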
Secondly, and more importantly, I needed a way to enforce Docker never to misbehave in this way again.
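As it happens, Docker’s json-file logging driver supports exactly this, and docker-compose exposes it per service. A sketch – the service name and the limits themselves are arbitrary:

```yaml
# docker-compose.yaml (fragment) – cap each container's logs
services:
  worker:
    image: my-worker:latest   # placeholder image name
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # rotate once a log file hits 10 MB
        max-file: "3"     # keep at most 3 files, so ~30 MB worst case
```

The same options, set under “log-opts” in /etc/docker/daemon.json, make this the default for every container on the host – arguably the safer belt-and-braces move.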
In short, using a Makefile per project lets me hide away some long-winded commands, which makes kick-starting each environment much easier than it otherwise would be.
The problem is that I have one Makefile per project directory, and some projects have several services. An example might be a project with:

/docker/myproject.com/api
/docker/myproject.com/admin
/docker/myproject.com/www

And so on.
Whilst it’s nice to have one command per service, it does still mean I have to log on to the server, cd into each dir, then run the make start command. And in some cases this needs to be done in a particular order, so that dependent services are up before workers try to connect, and so on.
A better way is to have one Makefile in the project root dir, which then calls the make start command in each sub dir. Something like this:

start:
	make start --directory /docker/myproject.com/api && \
	make start --directory /docker/myproject.com/admin && \
	make start --directory /docker/myproject.com/www
This way I can now just run one command on the project root dir, and it will take care of calling all the sub tasks that kick start the project.
Say I go into the mysite.com/www dir and run docker-compose up: all is good.

Then I go to the anothersite.com/www dir and run docker-compose up, and docker-compose first shuts down the mysite.com/www containers. That’s because, by default, docker-compose uses the basename of the directory where your docker-compose.yaml file lives as the project name – and both directories are called www.
We can pass in a project name when running docker-compose, like so:
docker-compose -p my_project_name up -d
Of course, make sure your project name differs for each of your projects. And once done, your individual docker-compose projects should run in the way you would intuitively expect.
But, this creates another problem. The subtle problem I mentioned above.
Once you start docker-compose projects in this way, all subsequent docker-compose commands need the -p my_project_name flag, or they will do the (apparently) unintuitive thing.
docker-compose -p my_project_name up -d
Starting my_project_name_nginx … done
docker-compose top
# ??? - nothing shown
docker-compose exec mysql /bin/sh
ERROR: No container found for mysql_1
# ??? wtf
This confused me for a good half an hour or so, even leading me to upgrade docker-compose, try restarting docker, try rebooting the production server… the works.
Of course, none of that worked.
What did work was to include the project name with the command!
docker-compose -p my_project_name top
UID PID PPID C STIME TTY TIME CMD
999 9998 9973 0 10:30 ? 00:00:01 mysqld
docker-compose -p my_project_name exec mysql /bin/bash
Not that this isn’t a bit of a ball ache – it is – but at least now it makes sense.
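One more aside that would have saved me the flag-typing entirely: docker-compose also picks up a COMPOSE_PROJECT_NAME variable from a .env file sitting next to your docker-compose.yaml. The path and name below are illustrative:

```
# /docker/myproject.com/www/.env  (path is illustrative)
COMPOSE_PROJECT_NAME=myproject_www
```

With that in place, a plain docker-compose top or docker-compose exec does the right thing, no -p required.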
On Saturday dinner time I decided to bump my Ubuntu 19.04 release to Ubuntu 20.04. What could go wrong?!
Well, here I am on a completely fresh installation. So it turns out: quite a lot.
I knew I was ballsed when the upgrade process failed in the terminal. I did an apt-get update and it seemed to think I was already on 20.04. As soon as I rebooted, of course, the OS never came back up. Sad times.
Anyway, tons of other difficulties aside, the issues I hit upon once finally re-installed were not that new to me when it comes to Linux: monitor trouble. In particular, I had two such problems.
Firstly, the Mini DisplayPort to DisplayPort cable just inexplicably died on me. This completely threw me, as all of a sudden my main monitor – a Dell P2715Q – seemed to be working, but the screen was black. I knew something was amiss because when I turned the screen on, Ubuntu would make it my main display – but of course, it was all dark, so I couldn’t see the login prompt or anything like that.
Long story short – after a full re-install – I realised the cable was at fault. Sad times, and more hours lost.
But that’s fixed now. All it took was a new DisplayPort to DisplayPort cable, which, very fortunately, I had in the spare parts box.
Portrait Mode Problems
A new one on me for Ubuntu 20.04.
I have another 27″ Dell monitor, a U2713HMt, which isn’t 4K and has therefore been relegated to being my second monitor.
I’ve never had issues with this monitor. It is always detected as my primary during install, so I have to install with my head tilted 90 degrees… or just turn the monitor back round. But aside from that, it’s been really solid.
Incidentally, having a 27″ screen at 2560×1440 makes a really nice super big terminal window if you are a massive nerd like me, and spend a lot of time in such places.
Anyway, Ubuntu 20.04 did not like putting this monitor into Portrait Right.
The issues I hit were that it would allow me to specify the setting, but when applied, it would either revert, or kill the monitor entirely.
Fixing Ubuntu 20.04 Portrait Mode Problems
Unfortunately I have not found a fix for the “Screen Display” menu. It won’t take the setting directly.
However, there is a workaround that seems to be working about 95% of the time for me.
sudo apt install arandr
Firstly, I installed arandr. This is a GUI for the more cryptic XRandR. If you’re a whizz with XRandR you can likely do the next bit from there directly somehow. And likely you don’t need blog posts like this to get your PC working properly. Fair play to you.
For the rest of us…
Once installed, run arandr – but run it as you, not with sudo.
Make your monitor setup look how you want it. It’s intuitive enough. The cable names and types are labelled sufficiently that you should be able to figure out what is what.
The “Outputs” section is used to select the individual monitors and then modify them as needed. I just needed to make DVI-I-1 into Orientation “Left”.
Once done, choose “Save As…” and give your file a name. I called mine triple.sh
Oh, the wit.
The reason we didn’t run this as sudo is that the file will, by default, be saved to your home directory, e.g. /home/chris/.screenlayout/triple.sh
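For the curious, the saved file is just a one-line xrandr invocation. A sketch of roughly what one looks like – the output names, modes, and positions here are placeholders; yours will be whatever xrandr --query reports for your setup:

```shell
#!/bin/sh
# Rough shape of an arandr-generated layout script.
# DP-1 / DVI-I-1, the modes, and the positions are all placeholders.
xrandr --output DP-1 --mode 3840x2160 --pos 0x0 --rotate normal \
       --output DVI-I-1 --mode 2560x1440 --pos 3840x0 --rotate left
```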
You should be able to Apply this script now (from arandr) and your monitors should be correctly displaying.
However, this won’t last between reboots.
In order to persist between reboots, I used a “startup application” entry.
To get to this, hit the super key and type “Startup”, then add an entry whose command is the full path to the saved script – in my case, /home/chris/.screenlayout/triple.sh.
So far, it’s worked every time except once, when it didn’t. For which I have no reason.
But hey, that’s Ubuntu baby. If you want an easy life, blow 3 grand on a Mac.
Edit: less than 24 hours later, I have found that when resuming from sleep, I need to re-run the script. Not ideal. To make this as easy as possible, I moved the shell script to my desktop and updated the startup entry above to point to the new location. Now I just double-click the file on my desktop whenever I resume and the monitors are out of whack.