Since I started running this site in 2011, I’ve adhered to some principles to
make it fast, cheap, and privacy-respecting.
As few third-party cross-domain requests as possible – ideally, none.
No trackers (which tends to follow naturally from the above).
No required JavaScript – the site should work with it disabled, only with some features missing.
No server-side code execution – everything is static files.
Out of respect for my readers who don’t have a fancy gigabit fiber internet
connection, I test the website primarily on slower, high-latency connections –
either a real one, or a simulated 2G connection using Firefox’s dev tools.
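A quick way to approximate a slow link from the command line (my own sketch, not the author's Firefox-based setup) is to cap curl's transfer rate; 2G throughput tops out around a few tens of kB/s:

```shell
# Fetch a page at roughly 2G throughput (~35 kB/s) and report timing.
# The URL is a placeholder; --limit-rate caps transfer speed, and -w
# prints curl's built-in timing/size variables after the transfer.
curl --limit-rate 35k -so /dev/null \
  -w 'total: %{time_total}s  size: %{size_download} bytes\n' \
  https://example.com/
```

Note that this only caps bandwidth, not round-trip latency, so it understates the cost of many small requests on a high-RTT link — which is exactly what in-browser throttling captures better.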
I was upgrading my httpd2 server software recently and was frustrated by how
long the site took to load, despite my performance optimizations in the
Rust server code. To fix this, I had to work at a much higher level of the stack
– where it isn’t about how many instructions are executed or how much memory is
allocated, but instead how much data is transferred, when, and in what order.
On a simulated 2G connection, I was able to get load times down from 11.20
seconds to 3.44 seconds, and the total amount of data transferred dropped from
about 630 kB to about 200 kB. This makes the site faster for everyone, whether
you’re rocking gigabit fiber or struggling to get packets through.
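For scale, those figures work out to roughly a 69% cut in load time and a 68% cut in bytes on the wire:

```python
# Percentage reductions implied by the before/after numbers in the post.
before_s, after_s = 11.20, 3.44    # load time, seconds
before_kb, after_kb = 630, 200     # data transferred, kB

time_cut = 1 - after_s / before_s   # fraction of load time removed
data_cut = 1 - after_kb / before_kb # fraction of transfer removed

print(f"load time reduced by {time_cut:.0%}")        # 69%
print(f"data transferred reduced by {data_cut:.0%}") # 68%
```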
In this post I’ll walk through how I analyzed the problem, and what changes I
made to improve the site.