When you work on many feature branches, they tend to accumulate in the local Git clone. Even if they get deleted in the upstream shared repos, they need to be cleared locally too, otherwise they will stick around forever.
Here’s a quick one-liner to clean up every branch that is fully merged into main. It does make sure not to delete main or develop, though.
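For reference, such a one-liner can look like the following sketch (not necessarily the exact command from the post):

    # List local branches already merged into main, filter out main, develop
    # and the current branch (the line starting with "*"), then delete what
    # remains. `git branch -d` is the safe delete and will refuse anything
    # not fully merged anyway. (`xargs -r` is the GNU flag that skips running
    # the command on empty input.)
    git branch --merged main | grep -vE '^\*|^[[:space:]]*(main|develop)$' | xargs -r git branch -d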
ISO 8601 specifies the use of uppercase letter T to separate the date and time. PostgreSQL accepts that format on input, but on output it uses a space rather than T, as shown above. This is for readability and for consistency with RFC 3339 (https://tools.ietf.org/html/rfc3339) as well as some other database systems.
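A quick illustration (the timestamp value is made up, and the offset shown depends on the session’s TimeZone setting):

    # psql accepts the T separator on input, but prints a space on output.
    psql -c "SELECT timestamptz '2024-05-01T12:34:56Z';"
    #       timestamptz
    # ------------------------
    #  2024-05-01 12:34:56+00
    # (1 row)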
My ISP, Internode (no longer the awesome service it used to be 10 years ago), has become increasingly flaky, silently dropping support for their Customer Tools API. This API was useful to track quota usage in a number of tools, including my own Munin plugin. Because of this, I unwittingly, and without warning, went beyond my monthly quota this month. I had to double my monthly bill to buy additional data blocks to tide me over.
It became obvious that I needed a new way to track my usage. What could be better than HomeAssistant, which was already ingesting SNMP data from the router? I posted my updated solution in the original thread, but thought that it might be worth duplicating here.
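For context, the underlying data is nothing exotic: the router exposes standard per-interface byte counters over SNMP, which is essentially what HomeAssistant polls. Something along these lines (host, community string and interface index are placeholders for this sketch):

    # Placeholder host, community and interface index; these 64-bit in/out
    # octet counters are what ends up graphed and turned into quota usage.
    snmpget -v2c -c public 192.168.1.1 \
        IF-MIB::ifHCInOctets.2 IF-MIB::ifHCOutOctets.2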
As I mentioned in a previous post, I am migrating a number of static websites from Apache on bare metal to an object store and a CDN in the cloud. Namely, this is AWS S3 and CloudFront. To avoid too much manual grooming of pet yaks, I also went directly for Infrastructure-as-Code with CloudFormation, with the objective of creating a relatively simple reusable web+CDN template.
This is not a new topic, and a number of resources already exist around the web. I, for example, started with this one, which does a fairly decent job. There are, however, a number of fine details which I have found were tricky to get right, could lead to incompatibilities, and for which accurate documentation was hard to find (even ChatGPT failed to provide a correct answer, though this is not entirely surprising).
The goal of this post is to call those out, and provide the CloudFormation template mentioned above for those looking for a base; a quick deployment sketch follows the list below. The template will:
create an S3 bucket for use as a website endpoint
create a CloudFront distribution using that bucket as an Origin
create a few DNS entries
create a TLS certificate for the service
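Deploying such a stack could look like this; the file, stack and parameter names below are placeholders rather than those of the actual template:

    # Placeholder names throughout; adjust to taste.
    # Note: a certificate used by CloudFront has to live in us-east-1.
    aws cloudformation deploy \
        --template-file static-site.yaml \
        --stack-name example-static-site \
        --parameter-overrides DomainName=www.example.com \
        --region us-east-1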
The S3 website endpoint behaves like a website, returning directory index documents or HTML error documents.
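As an illustration (bucket name and region are made up), compare the website endpoint with the plain REST endpoint for a directory-style request:

    # Website endpoint (HTTP only): a trailing-slash request is answered with
    # the directory's index document.
    curl -I http://example-bucket.s3-website-us-east-1.amazonaws.com/docs/

    # REST endpoint: the same path is just a non-existent object key, so this
    # returns 403 or 404 depending on the bucket policy.
    curl -I https://example-bucket.s3.us-east-1.amazonaws.com/docs/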
(The title was pilfered, er, inspired from a comment by a work colleague, who agreed to be henceforth referred to as [Your Name], as ChatGPT offered this placeholder for their signature.)
Artificial Intelligence or, more accurately, Machine Learning is an amazing tool for sifting through large amounts of data and discovering insightful patterns. A task where a human operator would generally get bored and become sloppy — or simply die of old age in the process — can be very effectively performed by a machine, and a result returned, sometimes in a matter of seconds.
Rather than exhibit true intelligence, however, those systems only learn as much as is present in the data they are given. This is also what they regurgitate. It is no wonder that outputs from those algorithms replicate the biases present in their input data.
Much research work has gone into identifying and reducing biases in training data, or actively de-biasing responses, but the final decision of what to do with the result of an ML process is entirely in the hands of a human being operating it.
tl;dr: Rather than focusing solely on painstakingly fixing each ML system separately, we should also leverage generative AI chatbots to help train humans to recognise bias, and to think critically, when dealing with any ML system, de-biased or not.
I seem to be talking about restoring backups a lot recently. This is because the disk in my trusty bare-metal server died. This gave me the opportunity to reassess my hosting choices, and do the groundwork to move from where it was to where I want it to be.
One of those changes is moving static website hosting away from an Apache HTTPd running on an OS I administer (read: “frequently broke”), to a more focused and hands-off system in the cloud: AWS S3 with a CloudFront CDN (more on this in a later post).
Unfortunately, decades of running Apache have left me with a number of static sites that use some on-the-fly templating relying on Server-side Includes (SSI). Headers, footers, geeky IPv6 and last-modified tags, … none of those work with a truly static host. I needed a solution to render those snippets into full pages.
At first, I thought I’d just write a simple parser in Python. I quickly gave up on the idea, however, when I realised I used included templates with parameters. Pretty nifty stuff, but also not trivial to write a parser for.
Then I realised I already had the perfect parser: Apache itself. All I needed was to let it render all the pages one last time, and publish those instead! This came together quickly with a relatively simple Docker container and the trusty wget. The busy person can find a Gist of the Dockerfile here.
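Not the actual Gist, but a rough sketch of the approach, assuming the container’s Apache config loads mod_include and allows SSI (Options +Includes):

    # Serve the SSI-enabled sources with Apache (stock httpd image shown here;
    # the real container needs mod_include/SSI turned on in its config).
    docker run -d --name ssi-render -p 8080:80 \
        -v "$PWD/site:/usr/local/apache2/htdocs:ro" httpd:2.4

    # Let Apache evaluate the SSI directives one last time, and mirror the
    # fully rendered pages into ./public, ready for upload to the static host.
    wget --mirror --page-requisites --no-host-directories \
        --directory-prefix=public http://localhost:8080/

    docker rm -f ssi-render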
Backups. What better time to test ’em than when you need ’em. Don’t lie. I know you’ve been there too. In an unfortunate turn of events, I had to restore a number of bare git repos from recent off-site copies (made with the handy rdiff-backup), but they needed a bit more work to be functional.
Once restored, I couldn’t pull or push from my existing working copies. I was greeted with cryptic error messages instead: "fatal: git upload-pack: not our ref 0000000000000000000000000000000000000000" and "! [remote rejected] master -> master (missing necessary objects)", respectively.
No amount of searching led to an adequate solution. So I simply leveraged git’s distributedness, and used one of the clones to recreate my bare repo. I was nonetheless a bit worried about having lost a few commits on the tip.
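The stop-gap looked roughly like this (paths are hypothetical):

    # Set the damaged restore aside for later inspection, and rebuild the
    # bare repo from a known-good working copy.
    mv /srv/git/example.git /srv/git/example.git.broken
    git clone --bare ~/src/example /srv/git/example.git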
Playing in the bare repo later on led me to a more satisfying solution. Apparently, the refs/heads/master file was corrupted (empty), and editing it to contain the full sha-1 of the tip was enough to fix the issue. I found the sha-1 of the desired commit in the packed-refs file at the root of the bare repo. Once done, everything worked as before, and pre-existing working copies were able to pull and push without issue.
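In shell terms, the fix amounts to something like this (repo path hypothetical; git update-ref would be the tidier way to write the ref):

    # Inside the damaged bare repo: packed-refs still records the sha-1 of
    # the tip of master.
    cd /srv/git/example.git.broken
    SHA=$(awk '$2 == "refs/heads/master" { print $1 }' packed-refs)

    # The loose ref file was corrupted (empty); put the sha-1 back into it.
    echo "$SHA" > refs/heads/master

    # Sanity check: the ref resolves again, and existing clones can pull/push.
    git rev-parse master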
I learned two things:
A bit more about git
That I didn’t actually have any more commits there
Due to an unplanned outage of my main ISP, I had to get a mobile data SIM in a hurry, to use as an LTE backup uplink for my Mikrotik hAP ac3 (the whole setup of which I’ll describe one day). Given how much pricier mobile data is, I wanted to make every transferred byte count: no unnecessary update fetching or immediate downloads of high-res sepia-toned photos of bulldogs in tutus.