#server

Making my website faster

Since I started running this site in 2011, I’ve adhered to some principles to make it fast, cheap, and privacy-respecting.

  • As few third-party cross-domain requests as possible – ideally, none.
  • No trackers (which tends to follow naturally from the above).
  • Use JavaScript but don’t require it – the site should work just fine with it disabled, with only some features missing.
  • No server-side code execution – everything is static files.

Out of respect for my readers who don’t have a fancy gigabit fiber internet connection, I test the website primarily on a slow, high-latency connection – either a real one, or a 2G connection simulated with Firefox’s dev tools.

I was doing an upgrade of my httpd2 software recently and was frustrated at how long the site took to deliver, despite my performance optimizations in the Rust server code. To fix this, I had to work at a much higher level of the stack – where it isn’t about how many instructions are executed or how much memory is allocated, but instead how much data is transferred, when, and in what order.
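To make that concrete, the kind of measurement involved here is less about profiling the server and more about watching bytes arrive on the wire. As a rough illustration (not how this site was actually analyzed, and with a placeholder host), here’s a small Rust program using only the standard library that fetches a page over plain HTTP and reports time to first byte, total transfer time, and bytes received:

    use std::io::{Read, Write};
    use std::net::TcpStream;
    use std::time::Instant;

    fn main() -> std::io::Result<()> {
        // Placeholder host: point this at the server under test.
        // (Plain HTTP only; HTTPS would need a TLS layer on top.)
        let host = "example.com";

        let start = Instant::now();
        let mut stream = TcpStream::connect((host, 80))?;
        write!(
            stream,
            "GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        )?;

        let mut first_byte = None;
        let mut total_bytes = 0usize;
        let mut buf = [0u8; 4096];
        loop {
            let n = stream.read(&mut buf)?;
            if n == 0 {
                break;
            }
            if first_byte.is_none() {
                // Time to first byte: how long before anything arrives.
                first_byte = Some(start.elapsed());
            }
            // Counts headers and body together; fine for a rough picture.
            total_bytes += n;
        }

        println!(
            "first byte after {:?}, done after {:?}, {} bytes received",
            first_byte.unwrap_or_default(),
            start.elapsed(),
            total_bytes
        );
        Ok(())
    }

On a slow, high-latency link, most of the elapsed time shows up between connecting and the last byte arriving, which is why the size and ordering of responses matter more than how quickly the server can generate them.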

On a simulated 2G connection, I was able to get load times down from 11.20 seconds to 3.44 seconds, and cut the total amount of data transferred from about 630 kB to about 200 kB. This makes the site faster for everyone, whether you’re rocking gigabit fiber or struggling to get packets through.

In this post I’ll walk through how I analyzed the problem, and what changes I made to improve the site.

My Recommended Publicfile Patches

While djb is perhaps best known for writing qmail, he also wrote a web server, publicfile. Like his other software, publicfile is simple and robust. I use it, among other software, to serve this site.

Characteristically for djb, publicfile is pretty minimal out of the box. Here are a few patches I applied to the source to make my server faster, more flexible, and easier to use.

Attacks on my Server: The Data

I try to maintain a reasonably secure webserver.

A webserver is a computer, connected to the public internet, that does things (serves pages, etc.) whenever anyone asks it to. This makes it an easy thing to attack: the first step toward attacking a computer is usually getting it to do your bidding, and a webserver does your bidding every time you click a link.

My system logs show that I get attacked several times a day, like (I imagine) most computers on the Internet. Fortunately, most attacks bounce off — not because I have some magic security-fu, but rather because the software I’m using — specifically publicfile — doesn’t work the way the attackers expect it to.

While I am not so naive or foolish as to say that my server is “secure” — I’m sure it has some exploitable hole, and it runs in a distant facility that probably forgets to lock the doors sometimes — these attacks are of mostly academic interest.

Here’s some data I’ve collected from the past month or so of attacks. I figure this might help someone else detect or prevent an attack in the future.