This is a position paper that I originally circulated inside the firmware
community at X. I’ve gotten requests for a public link, so I’ve cleaned it up
and posted it here. This is, obviously, my personal opinion. Please read the
whole thing before sending me angry emails.
tl;dr: C/C++ have enough design flaws, and the alternative tools are in good
enough shape, that I do not recommend using C/C++ for new development except in
extenuating circumstances. In situations where you actually need the power of
C/C++, use Rust instead. In other situations, you shouldn’t have been using
C/C++ anyway — use nearly anything else.
This post is the fourth in a series looking at the
design and implementation of my Glitch demo and the
m4vgalib code that powers it.
In part three we took a deep dive into the STM32F407’s internal architecture,
and looked at how to sustain the high-bandwidth flow that we set up in part
two.
Great, so we have pixels streaming from RAM at a predictable rate — but we
don’t have enough RAM to hold an entire frame’s worth of 8-bit pixels! (At
800×600 and one byte per pixel, a full frame is 480,000 bytes, well beyond the
STM32F407’s 192 KiB of SRAM.) What to do?
Why, we generate the pixels as they’re needed, of course! But that’s easier
said than done: generate them how, and from what?
In this article, I’ll take a look at m4vgalib’s answer to these questions:
the rasterizer.
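To make the idea concrete, here is a simplified sketch of what a scanline
rasterizer interface can look like. This is not m4vgalib's exact API, just the
shape of the idea: the scanout machinery owns a small per-line buffer and asks
the rasterizer to fill it right before each line goes out. The Pixel type, the
rasterize() signature, and the GradientRasterizer example below are
illustrative assumptions.

```cpp
// Sketch only: a per-scanline rasterizer interface in the spirit of m4vgalib,
// with names invented for illustration.
#include <cstddef>
#include <cstdint>

using Pixel = std::uint8_t;   // One byte per pixel, as in the series.

class Rasterizer {
public:
  // Fill 'target' with 'width' pixels for scanline 'line_number'.
  // Called from the scanout path, so it has a hard real-time budget:
  // it must finish before the DMA engine needs the line.
  virtual void rasterize(unsigned line_number, Pixel *target,
                         std::size_t width) = 0;
  virtual ~Rasterizer() = default;
};

// Example: a trivial rasterizer that needs no framebuffer at all.
// It derives each pixel from its coordinates on the fly.
class GradientRasterizer : public Rasterizer {
public:
  void rasterize(unsigned line_number, Pixel *target,
                 std::size_t width) override {
    for (std::size_t x = 0; x < width; ++x) {
      target[x] = static_cast<Pixel>((x + line_number) & 0xFF);
    }
  }
};
```

The key property is that only one or two scanlines' worth of pixels ever exist
in RAM at once; everything else is computed on demand.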
This post is the third in a series looking at the
design and implementation of my Glitch demo and the
m4vgalib code that powers it.
In part two, I showed a fast way to push pixels out of an STM32F407 by getting
the DMA controller to run at top speed. I described the mode as follows:
It just runs full-tilt, restricted only by the speed of the “memory” [or
memory-mapped peripheral] at either side…
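For readers who want to see roughly what that mode looks like in code, here is
a sketch using the CMSIS register names from stm32f4xx.h: DMA2 in
memory-to-memory mode, with a GPIO data register standing in as the
destination "memory." Memory-to-memory transfers are not paced by a DMA
request, so the stream runs as fast as the buses allow. The stream number and
GPIO port here are arbitrary choices for illustration, not necessarily what
m4vgalib uses.

```cpp
// Rough sketch of a "full-tilt" transfer: DMA2 memory-to-memory, source in
// RAM, destination parked on a GPIO output register. CMSIS (stm32f4xx.h)
// names; stream and port choices are assumptions for illustration.
#include "stm32f4xx.h"

static void start_scanline_dma(std::uint8_t const *pixels, std::uint16_t count) {
  RCC->AHB1ENR |= RCC_AHB1ENR_DMA2EN;    // Only DMA2 can do memory-to-memory.

  DMA2_Stream5->CR &= ~DMA_SxCR_EN;      // Make sure the stream is idle
  while (DMA2_Stream5->CR & DMA_SxCR_EN) // before reconfiguring it.
    ;
  DMA2->HIFCR = DMA_HIFCR_CTCIF5 | DMA_HIFCR_CHTIF5 | DMA_HIFCR_CTEIF5
              | DMA_HIFCR_CDMEIF5 | DMA_HIFCR_CFEIF5;  // Clear stale flags.

  DMA2_Stream5->PAR  = reinterpret_cast<std::uint32_t>(pixels);       // Source: pixels in RAM.
  DMA2_Stream5->M0AR = reinterpret_cast<std::uint32_t>(&GPIOE->ODR);  // Destination: GPIO (fixed).
  DMA2_Stream5->NDTR = count;                                         // One transfer per pixel.

  DMA2_Stream5->FCR = DMA_SxFCR_DMDIS | DMA_SxFCR_FTH;  // Mem-to-mem needs the
                                                        // FIFO; full threshold.
  DMA2_Stream5->CR =
      DMA_SxCR_DIR_1   // Memory-to-memory: no request pacing, runs flat-out.
    | DMA_SxCR_PINC    // Source side walks through the pixel buffer;
                       // destination side stays parked on ODR (no MINC).
    | DMA_SxCR_PL      // Highest priority, so other streams can't stall it.
    | DMA_SxCR_EN;     // Go.
}
```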
But there’s a weakness in this approach, which can introduce jitter and hurt
your video quality. I hinted at it in a footnote:
…and traffic on the AHB matrix, which is very important — I’ll come back
to this.
Quite a bit of m4vgalib’s design is dedicated to coordinating matrix traffic,
while imposing few restrictions on the application. In this article, with a
minimum of movie puns, I’ll explain what that means and how I achieved it.
This post is the second in a series looking at the
design and implementation of my Glitch demo and the
m4vgalib code that powers it.
Updated 2015-06-10: clarifications from reader feedback.
For the first technical part in the series, I’d like to start from the very
end: getting the finished pixels out of the microprocessor and off to a display.
Why start from the end? Because it’s where I started in my initial experiments,
and because my decisions here had significant effects on the shape of the rest
of the system.