#colophon

Making my website faster

Since I started running this site in 2011, I’ve adhered to some principles to make it fast, cheap, and privacy-respecting.

  • As few third-party cross-domain requests as possible – ideally, none.
  • No trackers (which tends to follow naturally from the above).
  • Use JavaScript but don’t require it – the site should work just fine with it disabled, with only some features missing.
  • No server-side code execution – everything is static files.

Out of respect for my readers who don’t have a fancy gigabit fiber internet connection, I test the website primarily on slower, high-latency connections – either a real one, or a simulated 2G connection using Firefox’s dev tools.

I was upgrading my httpd2 server software recently and was frustrated by how long the site took to load, despite my performance optimizations in the Rust server code. To fix this, I had to work at a much higher level of the stack – where it isn’t about how many instructions are executed or how much memory is allocated, but about how much data is transferred, when, and in what order.

On a simulated 2G connection, I was able to get load times down from 11.20 seconds to 3.44 seconds, and to cut the total amount of data transferred from about 630 kB to about 200 kB. This makes the site faster for everyone, whether you’re rocking gigabit fiber or struggling to get packets through.

In this post I’ll walk through how I analyzed the problem, and what changes I made to improve the site.

Planning to redirect traffic to HTTPS

tl;dr: Check that your RSS reader is using an HTTPS URL, because the HTTP one will start redirecting soon, and you probably want to find out if it breaks.

Edit from four days later: I’ve flipped the switch on this and, from the logs, it doesn’t seem to be messing anybody up.

It’s been just about four years since I finally got HTTPS and HTTP/2 working for this site. During that time, I’ve seen most incoming traffic from humans transition over to encrypted connections. (HTTP/2 connections are also significantly faster than earlier versions of the protocol, which helps both my server and your user experience.)

You might be wondering what I mean by “traffic from humans.” Well, it turns out the vast majority of my remaining unencrypted HTTP traffic (ye olde port 80) is from a combination of:

  • RSS readers (80%)
  • Shady crawler bots that don’t check robots.txt (15%)
  • Google, for some reason – I’ve poked them about it (~4%)
  • Requests that may be from actual humans (1%ish)

Since I deployed httpd2 back in 2020, I’ve been waiting for an opportunity to turn off publicfile, the HTTP server I’ve used since time immemorial. publicfile has served well, but the code is as archaic as its protocol support, its license makes it difficult to maintain, and (frankly) I’m less excited about appearing to support DJB and his software ecosystem these days.

So, I figure I will do the following (sketched in code below):

  1. Respond to all HTTP requests with a 301 redirect to HTTPS (…something publicfile can’t actually do out of the box), and
  2. Turn on the Strict Transport Security header.
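
For the curious, here’s roughly what that looks like in code. To be clear, this is a minimal standalone sketch and not my actual httpd2 implementation – it assumes hyper 0.14 and tokio, and uses example.com as a placeholder for the real domain:

    // Minimal sketch, not my actual httpd2 code: a tiny port-80 service
    // that answers every plaintext request with a 301 to the HTTPS origin.
    use std::convert::Infallible;

    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Request, Response, Server, StatusCode};

    async fn redirect(req: Request<Body>) -> Result<Response<Body>, Infallible> {
        // Preserve the path and query so deep links and feed URLs survive.
        let path = req
            .uri()
            .path_and_query()
            .map(|pq| pq.as_str())
            .unwrap_or("/");
        Ok(Response::builder()
            .status(StatusCode::MOVED_PERMANENTLY)
            .header("location", format!("https://example.com{path}"))
            .body(Body::empty())
            .unwrap())
    }

    #[tokio::main]
    async fn main() {
        // Step 2 lives on the HTTPS side instead: every response there
        // also carries `strict-transport-security: max-age=31536000`,
        // which tells browsers to stop trying port 80 entirely.
        let make_svc =
            make_service_fn(|_| async { Ok::<_, Infallible>(service_fn(redirect)) });
        Server::bind(&([0, 0, 0, 0], 80).into())
            .serve(make_svc)
            .await
            .unwrap();
    }

Using a permanent 301 (rather than a temporary 302) means well-behaved clients, including feed readers, can cache the new location and skip the plaintext hop next time.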

For best results, check your RSS reader today and verify that it’s using an HTTPS URL. It should follow the redirect when I enable it, but you never know.

RSS Feed Back On

At some point in the past… I dunno, two years or so, it appears that my RSS feeds broke.

I use Zola to generate this site, and they don’t have much in the way of a cross-version compatibility guarantee – minor version updates routinely break my templates. (I’m currently stuck on an older version because of this bug.) They appear to have changed the names of the RSS-related settings, causing my detection for generate_rss to always return false (because they also seem to default any typo’d configuration key to false). Whee.
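
If you’ve hit the same thing, the fix appears to be a couple of lines in config.toml. This is a hedged example – the key names below are from my reading of newer Zola releases, so check the docs for whichever version you’re actually running:

    # Old key: now silently ignored, since unknown keys default to false.
    #generate_rss = true

    # Newer spelling, with an explicit filename so the old feed URL
    # keeps working.
    generate_feed = true
    feed_filename = "rss.xml"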

Anyway, should be back on now – thanks to all the folks who have asked about this.

Accessibility Updates

Since it looks like some folks have been actually reading my blog, I’ve made a pass over the site, looking for accessibility problems. I have increased visual contrast and made links within articles slightly more obvious. The comments in code samples are still below the WCAG-recommended contrast ratio, but they’re generated by a third-party syntax highlighting library, so fixing them is more involved.
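
For the curious, the WCAG math itself is easy to check by hand. Here’s a small standalone sketch of the WCAG 2.x contrast-ratio formula (my own toy code, not part of the site’s build, and the sample colors below are made up, not my actual palette); AA asks for at least 4.5:1 for normal-size text:

    // Sketch of the WCAG 2.x contrast-ratio calculation: sRGB channels
    // are linearized, combined into a relative luminance, and the two
    // luminances compared as (lighter + 0.05) / (darker + 0.05).

    /// Linearize one sRGB channel given in 0..=255.
    fn linearize(channel: u8) -> f64 {
        let c = channel as f64 / 255.0;
        if c <= 0.03928 {
            c / 12.92
        } else {
            ((c + 0.055) / 1.055).powf(2.4)
        }
    }

    /// Relative luminance of an (r, g, b) color.
    fn luminance((r, g, b): (u8, u8, u8)) -> f64 {
        0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
    }

    /// WCAG contrast ratio between two colors, always >= 1.0.
    fn contrast(a: (u8, u8, u8), b: (u8, u8, u8)) -> f64 {
        let (la, lb) = (luminance(a), luminance(b));
        let (hi, lo) = if la > lb { (la, lb) } else { (lb, la) };
        (hi + 0.05) / (lo + 0.05)
    }

    fn main() {
        // Black on white: the maximum possible contrast, 21:1.
        println!("{:.1}", contrast((0, 0, 0), (255, 255, 255)));
        // A typical muted code-comment grey on a dark background lands
        // well below the 4.5:1 AA threshold for normal text.
        println!("{:.2}", contrast((0x6a, 0x73, 0x7d), (0x28, 0x2c, 0x34)));
    }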

Please let me know if you have any difficulty using the site!

Switching this site from Jekyll to Hakyll

Update from four years later: I’ve switched away from Hakyll. These notes are here for their historical value only.

I used to manage this site with Jekyll. I’ve now switched to Hakyll. Here’s my reasoning and some notes on how it went.

My Recommended Publicfile Patches

While djb is perhaps best known for writing qmail, he also wrote a web server, publicfile. Like his other software, publicfile is simple and robust. I use it, among other software, to serve this site.

Characteristically for djb, publicfile is pretty minimal out of the box. Here are a few patches I applied to the source to make my server faster, more flexible, and easier to use.