Screenshot of a terminal showing the contents of a udev rule file.

```
$ cat /etc/udev/rules.d/99-usb-radios.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="radio-ubitx"
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="radio-bff9"
```

When connecting Ham radios to a computer, one quickly gets overwhelmed with the number of ttyUSB* devices created. The devices get assigned a mystically variable number depending on boot-time detection, order of connection, position of the stars, and the last fully digested meal of the pet whose most recent birthday it is.

I finally got fed up with this issue the other day, and wrote a udev rule to create symlinks automatically for each known device. A file containing the following sort of incantations can go in, say, /etc/udev/rules.d/99-usb-radios.rules:

SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="radio-ubitx"
Continue reading
Screenshot of Vim editing SaltStack files for a Nextcloud state: map.jinja, a state file using macros to setup user and groups, and a Jinja macro processing the map data to do so.

I use SaltStack to manage my systems’ configurations. This allows me to have a relatively structured way to maintain them, and set up new ones. There are, however, many ways to set up SaltStack states. I have honed my favourite approach by trial-and-error, which I want to touch on here. It’s nothing too esoteric, but worth a summary for clarity’s sake.

tl;dr:

  • States in SLS files should have as few parameter strings as possible.
  • Parameters should instead come from map.jinja files.
  • Maps should generally allow an override by similarly-named pillar keys.
  • Map dicts should be as formulaic as possible, similar to OOP objects implementing interfaces.

This last point is the key, as it makes it easy to

  • leverage existing maps without having to work out the specific details of each one,
  • use macros for common tasks (directory creation, user setup, package and service management, …), and
  • share maps across states (e.g., making web applications’ URIs accessible via a web server). A minimal sketch of the pattern follows below.
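To make this concrete, here is a minimal sketch of the pattern. The nextcloud names and defaults are purely illustrative, not lifted from my actual states. First the map, with defaults that similarly-named pillar data can override:

```
{# nextcloud/map.jinja: defaults, overridable via the 'nextcloud' pillar key #}
{% set nextcloud = salt['pillar.get']('nextcloud', {
    'user': 'nextcloud',
    'group': 'nextcloud',
    'home': '/srv/nextcloud',
}, merge=True) %}
```

Then a state consuming the map, with no parameter strings of its own:

```
# nextcloud/init.sls: all parameters come from the map
{% from "nextcloud/map.jinja" import nextcloud with context %}

nextcloud_group:
  group.present:
    - name: {{ nextcloud.group }}

nextcloud_user:
  user.present:
    - name: {{ nextcloud.user }}
    - home: {{ nextcloud.home }}
    - groups:
      - {{ nextcloud.group }}
    - require:
      - group: nextcloud_group
```

Because every map exposes the same formulaic keys, a generic macro can consume any of them, which is exactly the interface-like property the last bullet point is after.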
Continue reading
A few HomeAssistant cards showing SNMP monitoring of speed and quota of an upstream ISP.

I finally fell for the smart-home mania when I needed to read a few Zigbee climate sensors, and started using Home Assistant. There was no return from it, and I gradually grew the number of sensors and automations. This is all the easier thanks to a very active community site, offering many a recipe and troubleshooting advice. This is where I found a bandwidth monitor based on SNMP metrics that has been functional for a while.

My ISP, Internode (no longer the awesome service it used to be 10 years ago), has become increasingly flaky, silently dropping support for their Customer Tools API. This API was useful to track quota usage in a number of tools, including my own Munin plugin. Because of this, I unwittingly, and without warning, went beyond my monthly quota this month. I had to double my monthly bill to buy additional data blocks to tide me over.

It became obvious that I needed a new way to track my usage. What could be better than HomeAssistant, which was already ingesting SNMP data from the router? I posted my updated solution in the original thread, but thought that it might be worth duplicating here.
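For a taste of what this looks like, here is a minimal sketch of such an SNMP sensor. The router address and interface index are assumptions; the base OID is IF-MIB’s ifHCInOctets byte counter:

```
# configuration.yaml (sketch): poll the router's WAN byte counter over SNMP
sensor:
  - platform: snmp
    name: wan_bytes_in
    host: 192.168.88.1                   # hypothetical router address
    community: public
    baseoid: 1.3.6.1.2.1.31.1.1.1.6.2    # ifHCInOctets, interface index 2
    unit_of_measurement: B
```

From there, a derivative helper turns the raw counter into a speed, and a utility_meter tallies monthly usage against the quota.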

Continue reading
A diagram of a CloudFormation template creating a TLS-secured CloudFront distribution serving content from an S3 bucket.

As I mentioned in a previous post, I am migrating a number of static websites from Apache on bare metal to an object store and a CDN in the cloud. Namely, this is AWS S3 and CloudFront. To avoid too much manual grooming of pet yaks, I also went directly for Infrastructure-as-Code with CloudFormation, with the objective of creating a relatively simple reusable web+CDN template.

This is not a new topic, and a number of resources already exist around the web. I, for example, started with this one, which does a fairly decent job. There are, however, a number of fine details which I found were tricky to get right, could lead to incompatibilities, and for which accurate documentation was hard to find (even ChatGPT failed to provide a correct answer, though this is not entirely surprising).

ChatGPT confidently states things that aren’t true.

The goal of this post is to call those out, and provide the CloudFormation template mentioned above for those looking for a base. A trimmed-down skeleton is sketched after the list. The template will:

  1. create an S3 bucket for use as a website endpoint
  2. create a CloudFront distribution using that bucket as an Origin
  3. create a few DNS entries
  4. create a TLS certificate for the service
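To give a flavour of the end result, here is a trimmed-down sketch, not the full template: the resource names are mine, the DNS records and bucket policy are omitted, and one of the fine details is already visible, namely that the ACM certificate must live in us-east-1 for CloudFront to accept it.

```
# Sketch: S3 website endpoint behind a TLS-enabled CloudFront distribution
Parameters:
  DomainName:
    Type: String

Resources:
  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      WebsiteConfiguration:
        IndexDocument: index.html

  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Ref DomainName
      ValidationMethod: DNS

  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - !Ref DomainName
        Origins:
          - Id: s3-website
            # Website endpoints only speak HTTP, hence a custom origin
            DomainName: !Select [1, !Split ["//", !GetAtt SiteBucket.WebsiteURL]]
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
        DefaultCacheBehavior:
          TargetOriginId: s3-website
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # managed CachingOptimized
        ViewerCertificate:
          AcmCertificateArn: !Ref SiteCertificate
          SslSupportMethod: sni-only
          MinimumProtocolVersion: TLSv1.2_2021
```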

tl;dr:

Continue reading

I have been talking about restoring backups a lot recently. This is because the disk on my trusty bare-metal server died. This gave me the opportunity to reassess my hosting choices, and do the groundwork to move from where it was to where I want it to be.

One of those changes is moving static website hosting away from an Apache HTTPd, running on an OS I administer (read: “frequently broke”), to a more focused and hands-off system in the cloud, AWS S3 with a CloudFront CDN (more on this in a later post).

Unfortunately, decades of running Apache have left me with a number of static sites using some on-the-fly templating by relying on Server-side Includes (SSI). Headers, footers, geeky IPv6 and last-modified tags, … none of those work with a truly static host. I needed a solution to render those snippets into full pages.

At first, I thought I’d just write a simple parser in Python. I quickly gave up on the idea, however, when I realised that some of my includes took parameters. Pretty nifty stuff, but also not trivial to write a parser for.

Then I realised I already had the perfect parser: Apache itself. All I needed was to let it render all the pages one last time, and publish those instead! This was quickly put together with a relatively simple Docker container and the trusty wget. The busy person can find a Gist of the Dockerfile here.
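The gist of the approach, as a sketch rather than the actual Gist (the image tag, paths, and site/ directory are assumptions):

```
# Dockerfile (sketch): httpd with mod_include and SSI processing enabled
FROM httpd:2.4
RUN sed -i 's/^#\(LoadModule include_module\)/\1/' conf/httpd.conf \
 && printf '<Directory "/usr/local/apache2/htdocs">\nOptions +Includes\nAddOutputFilter INCLUDES .html\n</Directory>\n' >> conf/httpd.conf
COPY site/ /usr/local/apache2/htdocs/
```

```
# Build, serve, mirror the rendered pages, and tear down
docker build -t ssi-render .
docker run --rm -d -p 8080:80 --name ssi-render ssi-render
wget --mirror --no-host-directories --directory-prefix=rendered http://localhost:8080/
docker stop ssi-render
```

The rendered/ directory then contains plain HTML, every SSI directive already expanded, ready to upload to the bucket.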

Continue reading

Backups. What better time to test ’em than when you need ’em. Don’t lie. I know you’ve been there too. In an unfortunate turn of events, I had to restore a number of bare git repos from recent off-site copies (made with the handy rdiff-backup), but they needed a bit more work to be functional.

Once restored, I couldn’t pull or push from my existing working copies. I was greeted with cryptic error messages instead: fatal: git upload-pack: not our ref 0000000000000000000000000000000000000000 and ! [remote rejected] master -> master (missing necessary objects), respectively.

No amount of searching led to an adequate solution. So I simply leveraged git’s distributedness, and used one of the clones to recreate my bare repo. I was nonetheless a bit worried about having lost a few commits on the tip.

Playing in the bare repo later on led me to a more satisfying solution. Apparently, the refs/heads/master file was corrupted (empty), and editing it to contain the full SHA-1 of the tip was enough to fix the issue. I found the SHA-1 of the desired commit in the packed-refs file at the root of the bare repo. Once done, everything worked as before, and pre-existing working copies were able to pull and push without issue.
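In shell terms, the fix boiled down to something like this (the repo path is hypothetical):

```
# In the bare repo: recover the tip's SHA-1 recorded in packed-refs,
# and write it into the corrupted (empty) ref file
cd /srv/git/myrepo.git
awk '$2 == "refs/heads/master" {print $1}' packed-refs > refs/heads/master
# Check that the ref resolves again
git show-ref refs/heads/master
```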

I learned two things:

  • A bit more about git
  • That I didn’t actually have any more commits there

Backups! Yay!

Continue reading

Due to an unplanned outage of my main ISP, I had to get a mobile data SIM in a hurry, to use as an LTE backup uplink for my Mikrotik hAP ac3 (the whole setup of which I’ll describe one day). Given how much pricier mobile data is, I wanted to make every transferred byte count: no unnecessary update fetching or immediate download of high-res sepia-toned photos of bulldogs in tutus.

An Android hotspot can advertise its network as metered to its (Android) clients, so how do I do the same with RouterOS 7?

(router-agnostic) tl;dr:

  1. Make DHCP Option 43 (Vendor-Specific Option) contain the string ANDROID_METERED;
  2. The option should be sent even if not requested by the client (not standards-compliant, but it doesn’t hurt); a RouterOS sketch follows below.
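On RouterOS 7, this can look something like the following. The single-DHCP-network assumption is mine, and the force flag (which pushes the option even when clients don’t request it) only appeared in recent 7.x releases:

```
# Define Option 43 carrying the magic string, and attach it to the DHCP network
/ip dhcp-server option add name=android-metered code=43 value="'ANDROID_METERED'" force=yes
/ip dhcp-server network set [find] dhcp-option=android-metered
```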
Continue reading

I recently had to restore databases from a rough mysqldump backup in a piecemeal fashion. One necessity is to SET the session environment correctly, lest weird encoding issues happen when restoring the data, leading to failures.

A sed one-liner can help with this.

```
# The database to extract from the full dump
DBNAME=mydb
# Keep the dump header (session SETs), then every section where ${DBNAME} is the current database
sed -n "/^-- Server version/,/^-- Current Database/p;/^-- Current Database.*${DBNAME}\`/,/^-- Current Database/{p}" mysqldump.sql > ${DBNAME}.sql
```

This extracts the SQL from the initial header up to the first database, which contains all the session SETs. It then captures statements whenever the target database is the current one. Note that this doesn’t restore the GRANTs.

Before blindly piping the output SQL into mysql, one would be well advised to review the contents of the file, to ensure only the desired modifications are included.
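Once reviewed, the restore itself is the usual pipe. The credentials below are an assumption; adjust to your setup:

```
# Review first, then feed the extracted statements back to the server
less "${DBNAME}.sql"
mysql -u root -p < "${DBNAME}.sql"
```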