Every now and then, spurious peaks show up on munin graphs. The peaks are orders of magnitude higher than the expected range of the data. This particularly happens with DERIVE plugins, which are notably used for network interfaces.

One way to fix this, as suggested by Steve Schnepp (and in the FAQ), is to set the maximum directly in the RRD database, and then let it reprocess the data to honour this maximum.
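
In practice that is a single rrdtool tune call per affected data source. A minimal sketch, where the file path and the 100 Mb/s cap (12500000 bytes per second) are illustrative rather than taken from the post; munin usually names the data source inside its RRD files "42":

server$ rrdtool tune /var/lib/munin/example.net/host-if_eth0-down-d.rrd --maximum 42:12500000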

Continue reading

I wrote this article for the Learnosity blog, where it originally appeared. I repost it here, with permission, for archival. With thanks to Micheál Heffernan for countless editing passes.

The dramatic increase in Learnosity users during the back-to-school period each year challenges our engineering teams to find new approaches to ensuring rock-solid reliability at all times.

Stability is a core part of Learnosity’s offering. Prior to back-to-school (known as “BTS” internally), we load-test our system to handle a 5x to 10x increase over current usage. That might sound excessive, but it accounts for the surge of first-time users that new customers bring to the fold as well as the additional users that existing customers bring.

Since the BTS traffic spike occurs from mid-August to mid-October, we start preparing in March. We test our infrastructure and apps to find and remove any bottlenecks.

Last year, a larger client ramped up their testing, which created a 3x increase in usage of our Events API. In the process, several of our monitoring thresholds were breached and message delivery latency rose to an unacceptable level.

As a result, we poured resources into testing and ensuring our system was stable even under exceptional stress. To detail the process, I’ve broken the post into two parts:

  1. Creating the load with Locust (this piece)
  2. Running the load test (in part two, coming soon).

TL;DR

Here’s a snapshot of what I cover in this post:

  • Our target metrics.
  • How we wrote a Locust script to generate load for a Publish/Subscribe system.
  • Our observations that:
    • The load test must reflect real user behaviours and interactions.
    • Load testing alone doesn’t validate system behaviour against target metrics. It’s better to measure this separately while the system is under load.
Continue reading

I finally mastered the shell’s history with command replacement (be it bash or zsh, but really, this is readline). It took me 19 years and my entire family fortune to gather enough wits to read that part of the manual with enough attention and will to learn to use it.

Essentially, you can recall previous commands from the history with !number. You can then change part of the recalled command programmatically before running it by adding :s/PATTERN/REPLACEMENT/ or :gs/PATTERN/REPLACEMENT/ (the first form replaces the first occurrence, the second replaces them all).
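
A quick illustration with !! (the previous command); !number, with an index taken from the output of history, works the same way:

$ echo hello wrold
hello wrold
$ !!:s/wrold/world/
echo hello world
hello world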

Continue reading

I’ve been using Kodi for more than a decade now (it was XBMC back then, and yes, the “XB” did stand for Xbox; these days it runs as LibreELEC on a WeTek Core). I’ve also had the library in MySQL for more than half of that time. Across migrations, it had developed some quirky content, such as duplicate albums, and some rarities, such as this version of 21, by Adèle, where the description reminds us that her previous album, Ixnay on the Hombre, was only moderately successful at launch; go figure…

As suggested pretty much everywhere as the solution for duplicate content in Kodi, I first tried cleaning the library, repeatedly, to no avail. The duplicate albums were still there. One noticeable characteristic, though, was that there was always one copy of the album (and, in Adèle’s case, the one following Ixnay) that did not have any associated tracks. This felt like a good angle from which to clear those up. Enter some SQL.
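
A query along these lines lists the albums with no associated tracks (a sketch, assuming Kodi’s music schema with its album and song tables; the actual database name depends on the library version):

client$ mysql MyMusic -e "
    SELECT a.idAlbum, a.strAlbum
    FROM album a
    LEFT JOIN song s ON s.idAlbum = a.idAlbum
    WHERE s.idAlbum IS NULL"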

Continue reading

I’ve long been meaning to store all my passwords in a single, safe location, as a way to remain sane as well as safe. But which one? Every operating system (or desktop environment) now has its own store, but choosing one casts a lot of things into stone, and most have a lot of third-party dependencies.

KeePass seems to be a good cross-platform solution, with clients for Linux, Windows, OS X and even Android, and nice features such as filling on demand. But I don’t like the whole clicky interface, if only because I also need it without a graphical display. It also doesn’t offer a native way to synchronise the stores across boxes.

For a while, I had been storing all my important configuration files in a git repository, with some make magic to install and update the files on the system. This magic would also store all passwords in GPG-encrypted files, and put them back in place when installing the files.

The problem, of course, is that the passwords are still in plaintext on the live systems. And it came back to bite me when I sent an innocuous script (the ics2dav.sh script from this post) to a friend… with the password sitting right there. Fortunately, I noticed before he did, and changed my password. In addition, this doesn’t cater for passwords stored in other applications, such as Firefox.

So things had to change. And then I discovered pass(1), a simple command-line tool based on GPG-encrypted flat files, with an option to sync natively with Git. There is finally an option for me to store passwords in a way that fits my workflow.
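
The basics boil down to a handful of commands (a sketch; the key id and entry name are placeholders):

$ pass init "my-gpg-key-id"      # create the store, encrypted to that key
$ pass git init                  # version (and later sync) the store with git
$ pass insert web/example.com    # prompt for a password and store it encrypted
$ pass -c web/example.com        # copy it to the clipboard, cleared after 45 seconds
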
Continue reading

I recently realised that the QNAP TS-212 NAS (running the latest QTS 4.2.0) can be used as a print server. No need to keep another machine on to print from anywhere!

Remote printing is easy

Both Unices, through CUPS, and Windows, through Samba, can use the printer straight away. In the case of the Samsung SCX-3205, the driver under ArchLinux is samsung-unified-driver (from the AUR) which, fortunately, doesn’t install any useless binaries beyond those needed by the PPD used by CUPS.

client$ pacman -Qs samsung
local/samsung-unified-driver 1.00.36-2
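
On a CUPS client, pointing at the NAS is then a single lpadmin call. A sketch, where the queue name, device URI and PPD path are placeholders for whatever the NAS and samsung-unified-driver actually expose:

client$ lpadmin -p nas-scx3205 -E \
            -v ipp://nas.local:631/printers/SCX-3205 \
            -P /usr/share/cups/model/samsung/SCX-3205.ppd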

Remote scanning is harder

The problem is that this is a combo printer/scanner. Moving the printer to the NAS requires a solution similar to CUPS for scanning over the network. Fortunately, SANE can do this, and there is some documentation about setting it up on a QNAP NAS. In this case, however, it did not work smoothly, so I had to fix a few things.
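
The standard setup is saned running on the NAS and SANE’s net backend on each client. The client side is just two steps (the hostname is a placeholder):

client$ cat /etc/sane.d/net.conf    # list of hosts running saned
nas.local
client$ scanimage -L                # the remote scanner should now show up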

Continue reading

A screenshot showing URL completion in dmenu

I have parted ways with FVWM. Not that I was dissatisfied with more than 12 years of using it and organically growing its configuration. I was not.

But I was recently shown i3 which, despite not being Awesome, is indeed awesome. Particularly in the usability of its defaults, which I found did not require many a tweak. I was, however, a bit confused at first, then impressed, when I realised that the auto-generated configuration took my Dvorak keymap into account and updated the keybindings so that the keys would sit in the same positions as on a QWERTY keyboard. That’s thoughtfulness.

The next great thing about i3 (save for $mod+Return to start a term anywhere, anytime) is dmenu. At a press of the relevant binding (equivalent to $mod+d on a 200-year-old keymap), one gets a one-line prompt where any command can be entered for execution, with incremental completion.

Dmenu is also nice due to its modularity. It takes on stdin a list of strings to complete against, and outputs the typed or selected string on stdout, for consumption by whatever script called it.

I figured that it should be possible to handle URLs in a dmenu script. It is actually pretty trivial, and the friend who convinced me to take the jump also provided such a script, which would simply open the typed URL. But I wasn’t entirely satisfied, as recent years of browser usage had taught me to expect URL completion. So I looked into ways of doing it.
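
The dmenu contract makes such a script tiny. A sketch with a naive completion source (the ~/.urls list file and xdg-open are placeholders, not what the post ends up using):

#!/bin/sh
# Feed a list of known URLs to dmenu, then open whatever it returns
# (the selected entry, or the text typed if nothing matches).
url=$(sort -u ~/.urls | dmenu -p 'URL:') || exit 0
exec xdg-open "$url"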


Continue reading