Screenshot of a terminal showing the contents of a udev rule file.

```
$ cat /etc/udev/rules.d/99-usb-radios.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="radio-ubitx"
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="radio-bff9"
```

When connecting Ham radios to a computer, one quickly gets overwhelmed with the number of ttyUSB* devices created. The devices get assigned a mystically variable number depending on boot-time detection, order of connection, position of the stars, and the last fully digested meal of the pet whose most recent birthday it is.

I finally got fed up with this issue the other day, and wrote udev rules to automatically create a symlink for each known device. A file containing the following sort of incantations can go in, say, /etc/udev/rules.d/99-usb-radios.rules:

SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="radio-ubitx"
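To find the idVendor and idProduct values for a new device, you can interrogate udevadm on whatever node the device currently landed on, then reload the rules (the device node below is just an example; substitute your own):

```
$ udevadm info -a -n /dev/ttyUSB0 | grep -E 'idVendor|idProduct' | head -n 2
$ sudo udevadm control --reload-rules && sudo udevadm trigger
```

The first matching pair is reported by the nearest USB parent device, which is the one the rule should target.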
Continue reading
Bar chart summaries of work time spent in 2023

As I progress in my career, I find my time to be more and more parcelled out among external requests and internal whims. Over the last few years, I have cobbled together a system that allows me to plan time for tasks based on their priority and importance, and keep me honest about applying my time where it matters.

At its core, it follows Cal Newport’s Rule #4 of Deep Work.

At the beginning of each workday, […] Divide the hours of your workday into blocks and assign activities to the blocks. […] When you’re done scheduling your day, every minute should be part of a block. You have, in effect, given every minute of your workday a job. Now as you go through your day, use this schedule to guide you.

Cal Newport, “Deep Work”, Rule #4


It is simply based around a spreadsheet, where:

  • I classify sprintly task priorities and urgency using an Eisenhower matrix,
  • I plan a few days ahead by placing tasks in 1-hour blocks,
  • I commit to the day’s plan in the morning, and record actual work; and
  • comparing commitment to actual work allows me to collect some metrics.
Continue reading
Screenshot of Vim editing SaltStack files for a Nextcloud state: map.jinja, a state file using macros to setup user and groups, and a Jinja macro processing the map data to do so.

I use SaltStack to manage my systems’ configurations. This allows me to have a relatively structured way to maintain them, and set up new ones. There are, however, many ways to set up SaltStack states. I have honed my favourite approach by trial-and-error, which I want to touch on here. It’s nothing too esoteric, but worth a summary for clarity’s sake.


  • States in SLS files should have as few parameter strings as possible.
  • Parameters should instead come from map.jinja files.
  • Maps should generally allow an override by similarly-named pillar keys.
  • Map dicts should be as formulaic as possible, similar to OOP objects implementing interfaces.

This last point is the key, as it makes it easy to

  • leverage existing maps without having to work out the specific details of each one,
  • use macros for common tasks (directory creation, user setup, package and service management, …), and
  • share maps across states (e.g., making web applications’ URIs accessible via a web server).
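To illustrate the formulaic-map idea, here is a minimal sketch of what such a map.jinja and its use in a state could look like (the state name, keys, and defaults are all invented for the example):

```
{# map.jinja: defaults, overridable by a similarly-named pillar key #}
{% set nextcloud = salt['pillar.get']('nextcloud', {
    'user': 'nextcloud',
    'group': 'nextcloud',
    'home': '/var/lib/nextcloud',
    'service': 'nextcloud',
}, merge=True) %}
```

```
{# init.sls: states carry no literal parameters, only map lookups #}
{% from "nextcloud/map.jinja" import nextcloud with context %}

nextcloud-user:
  user.present:
    - name: {{ nextcloud.user }}
    - home: {{ nextcloud.home }}
    - gid: {{ nextcloud.group }}
```

Because every map exposes the same keys (user, group, home, service, …), a shared macro can consume any of them without knowing the state's specifics.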
Continue reading
An interactive git rebase

I recently happened upon an article by Julia Evans on what can go wrong when rebasing in Git. This made me realise that I should probably talk about my favourite, yet obscure, Git feature.

When using git commit, you can pass --fixup <commitid> or --squash <commitid> to create a commit that will be automatically fixup’d or squashed on the next rebase with --autosquash. This is handy, but you need to know the commitid beforehand.

There is a revision syntax that resolves a regular expression to the commitid of a matching commit: :/<RegExp>. It finds the ID of the most recent commit (not necessarily on your current branch) whose message matches /<RegExp>/, and resolves to that.

It’s a killer feature with --fixup and --squash: in a pinch, you can create fixes to past commits that

  1. you only vaguely remember the message of, and
  2. Git can automatically move (autosquash) in the next interactive rebase.
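Put together, here is a throwaway-repo sketch (file names and commit messages invented) showing a fixup targeted purely by commit message:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.invalid
git config user.name Demo

echo 'v1' > parser.txt; git add parser.txt; git commit -qm 'add parser'
echo 'ok' > tests.txt;  git add tests.txt;  git commit -qm 'add tests'

# A late fix for the parser commit, addressed by message rather than by id:
echo 'v2' >> parser.txt; git add parser.txt
git commit -q --fixup ':/add parser'

# Accept the generated todo list as-is; --autosquash has already moved the fixup.
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash --root
git log --oneline
```

After the rebase, only the two original commits remain; the fix is folded into "add parser".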
Continue reading

At the time of this writing, this blog runs on a Bitnami WordPress image, but I have changed the configuration to run multiple sites (WP_ALLOW_MULTISITE and MULTISITE in wp-config.php). I realised I had issues running scheduled events with DISABLE_WP_CRON when the ActivityPub plugin failed to send new posts to subscribers; the site health dashboard confirmed that scheduled events were late.

As it turns out, manually running the script with sudo -u daemon /opt/bitnami/php/bin/php /opt/bitnami/wordpress/wp-cron.php (with WP_DEBUG enabled) produces a warning about an undeclared HTTP_HOST, and the script terminates quickly. As soon as I set that variable in the environment and reran the script, the warning was gone, and the script took longer to run. All my recent posts also made it to the fediverse!
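Pending a proper fix, a workaround is to supply the variable wherever the script is invoked, for example in a system cron entry (the hostname here is a placeholder for the site's actual host):

```
*/5 * * * * daemon HTTP_HOST=blog.example.com /opt/bitnami/php/bin/php /opt/bitnami/wordpress/wp-cron.php >/dev/null 2>&1
```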

Continue reading

I’ve never had a personal Twitter account, mainly for fear of the time sink and doom-scrolling. I wanted to avoid both. I recently obtained an invite to BlueSky, which I took, out of curiosity. The next obvious thing to do was to open an account on the Fediverse, and use that (I had one on a self-hosted Nextcloud Social instance, but the server is now firewalled, so not quite social enough).

I quite like the liveliness and congeniality of the discussion there, and I was glad to find a few familiar faces, some of whom I had been following for decades. It’s nice to receive everything in one place, though it does revive my fears of the time sink.

This blog is now also a node in the fediverse; you can follow what I post here at . When thinking more about how to make it useful, I also realised that I have a number of ongoing projects that I work on on-and-off. So far, I have been keeping progress notes, then writing a longer blog post at the end. With this fediverse integration, I want to try something new: posting quick updates about progress, blockers, and discoveries.

I’ll use the freshly minted µblog category, along with a tag per project, to classify those posts. They will not be displayed on the main page, but will be pushed via the fediverse to willing followers. I’ll still write full articles in the end.

GnuPG sometimes gets confused about which SmartCard a subkey is on, and refuses to use it from the currently-available card.

tl;dr: Here’s a quick script to fix the issue.

$ export SUBKEYID=...  # ID of the subkey GnuPG is confused about
$ KEYGRIP=$(gpg --with-keygrip -k "${SUBKEYID}" | sed -n "/${SUBKEYID}/,/$/{s/ *Keygrip = //p}")  # keygrip printed just after the subkey line
$ rm -i ~/.gnupg/private-keys-v1.d/"${KEYGRIP}".key  # remove the stale on-disk stub
$ gpg --card-status  # recreate the stub from the daily-use key
Continue reading

When working on many feature branches, they tend to accumulate in the local Git clone. Even if they get deleted in upstream shared repos, they need to be cleared locally, too, otherwise they will stick around forever.

Here’s a quick one-liner to clean up every branch that is fully merged into main. It does make sure not to delete main, master, or develop, though.

git branch -d $(git branch --merged main | grep -vE '(^\*|master|main|develop)')
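A quick check in a throwaway repo (branch names invented) shows the behaviour: the merged feature branch is removed, while the protected branches survive.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main                  # make sure the primary branch is called main
git config user.email demo@example.invalid
git config user.name Demo
git commit -q --allow-empty -m 'initial commit'
git branch feature/done                # points at main, so it counts as fully merged
git branch develop

git branch -d $(git branch --merged main | grep -vE '(^\*|master|main|develop)')
git branch                             # feature/done is gone; develop and main remain
```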
Continue reading
Screenshot of a terminal showing oft-used commands

As idle musing, and a way to show off my mastery of shell pipelines, I was wondering what my most-used shell commands are. It’s an easy few commands to pipe.

history | sed 's/^ *//;s/ \+/ /g' | cut -d' ' -f 2 | sort | uniq -c | sort -n | tail -n 20
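To see what each stage contributes, here is the same pipeline fed a small canned sample in place of live history output (sample commands invented): sed strips leading spaces and collapses runs of spaces, cut keeps the command word, and sort/uniq/sort/tail produce a sorted top-20 count.

```shell
printf '    1  ls -l\n    2  git status\n    3  git log --oneline\n    4  ls\n' \
  | sed 's/^ *//;s/ \+/ /g' | cut -d' ' -f 2 | sort | uniq -c | sort -n | tail -n 20
```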

The outcome is rather expected. I feel validated (by my shell) in my own self-perception!

Continue reading

When migrating a database from MySQL to PostgreSQL, I bumped into a slight issue with timestamp formatting. PostgreSQL supports many date/time formats, but has no native support for outputting the ISO 8601 UTC date/time format (e.g., 2023-08-05T13:54:22Z), favouring consistency with RFC 3339 instead.

ISO 8601 specifies the use of uppercase letter T to separate the date and time. PostgreSQL accepts that format on input, but on output it uses a space rather than T, as shown above. This is for readability and for consistency with RFC 3339 as well as some other database systems.

Fortunately, StackOverflow had a solution, including some notes about how to handle timestamps with timezones.

SELECT to_char(now() AT TIME ZONE 'Etc/Zulu', 'yyyy-mm-dd"T"hh24:mi:ss"Z"');
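The same trick applies to a timestamptz column once it is converted to UTC; the table and column names below are invented for the example:

```sql
-- created_at stands in for any timestamptz column
SELECT to_char(created_at AT TIME ZONE 'UTC', 'yyyy-mm-dd"T"hh24:mi:ss"Z"') AS created_utc
FROM events;
```

Converting with AT TIME ZONE first is what makes the hard-coded "Z" suffix truthful.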