Just to add to this, the biggest moat Nvidia has is not transistor density, but their software ecosystem.
Ever since roughly the GTX 200 series in 2008, Nvidia's CUDA stack has been the standard for academic research, and it basically only works on their GPUs. Research code gets written on Nvidia GPUs, then tweaked for enterprise deployment on Nvidia GPUs… if you want it on anything else, you basically have to start from scratch and pour a tremendous amount of brainpower into optimization.
AMD's in an interesting position here because they've been making competitors to Nvidia's GPUs for literally decades. The two architectures are actually quite similar, so it's easier to 'emulate' Nvidia on AMD than on pretty much anything else.
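To make the "easier to emulate" point concrete: AMD's HIP layer (part of ROCm) accepts kernels written almost exactly like CUDA, and their hipify tooling can mechanically rename the API calls. Here's a rough sketch assuming a ROCm install with hipcc; the saxpy kernel, names, and sizes are just for illustration, not anything from the thread:

```cpp
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

// Same __global__ syntax and threadIdx/blockIdx builtins as CUDA.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    // cudaMalloc/cudaMemcpy become hipMalloc/hipMemcpy; arguments are identical.
    hipMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Triple-chevron launch syntax works in HIP just like in CUDA.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

The mapping is close enough that a lot of porting is search-and-replace; the hard part is the performance tuning on top, not the syntax.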
…That being said, the Chinese have made tremendous progress busting out of the Nvidia software ecosystem, which is why these chips are actually being used for real work.
I remember when you needed a VGA video card to play Space Quest 😬