We Ran Our Own Infrastructure
Another revisionist myth. SourceForge (the first forge!) wasn’t “our own”, nor was Bitbucket (mentioned by the author), or Gitorious (which was famously archived, btw), or sourceware, or …
I guess we can call FTP servers like the GNU ones “ours”. But that’s about it as far as where the congregations of relevant projects actually lived.
Did some projects self-host with trac, cgit, mailman, etc.? Yes, of course. Just like how some projects self-host today with Gitea/GitLab/etc.


There is no substitute for a static analyzer built into the compiler and informed by the type system. Near-zero bug counts require provable static analysis that guarantees the prevention of an entire bug class, i.e. (safe) Rust for the bug classes it guarantees to prevent. Hopefully, future languages with even better type systems will cover even more bug classes, or incrementally improve on what Rust currently has to offer.
C code simply doesn’t carry enough information for an external tool to push bug counts down to near zero. This is also exactly the point of struggle that led to the complete failure to deliver guaranteed safety to C++.
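To illustrate the point, here is a minimal sketch (the function names are mine, purely for illustration) of the kind of information Rust signatures carry that C source simply does not: ownership and lifetimes are part of the types themselves, so the compiler can *prove* the absence of use-after-free rather than heuristically guess at it the way an external C analyzer must.

```rust
// The lifetime parameter 'a is a machine-checked contract: the returned
// reference lives no longer than both inputs. A C prototype like
// `const char *longest(const char *a, const char *b)` states none of this.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("static analysis");
    let result;
    {
        let s2 = String::from("types");
        // Copying out of the borrow is fine; holding a borrow of `s2`
        // past this inner scope would be rejected at compile time.
        result = longest(&s1, &s2).to_string();
    }
    assert_eq!(result, "static analysis");
    println!("{}", result);
}
```

If `result` were left as a reference borrowed from `s2` and used after the inner scope ends, the borrow checker rejects the program before it ever runs; that whole bug class is gone, with no external tooling involved.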
There have been murmurings, mainly from non-technical people, that “AI” will render advancements in safer type systems nearly useless, because the magic (mushroom) AI will simply find all the issues in code written in older languages. What they don’t realize is that the effect will be the reverse. Many established projects that enjoy a high reputation and a veneer of maturity, indestructibility, and meticulousness will simply, and perhaps unfairly, lose that perception under the continuous barrage of potentially high-impact bugs and vulnerabilities surfaced by these tools, with not enough human bandwidth to keep up with them, and with new code susceptible to the same problems over and over. This will effectively lead to an even harder push for adopting technologies that prevent a good chunk of these bugs from ever happening in the first place, not the other way around.