Performance of the AI-generated version was 20,000 times slower on one specific benchmark, but “only” about 2,000 times slower when averaging over multiple different benchmarks (which is, imo, a better measurement of the code’s quality).
So I suppose The Register pulled from multiple sources (as you should) and just linked to the most top-level one.
The article The Register cites in turn links to this more detailed analysis: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code