

No, it isn’t “mostly related to reasoning models.”
The only model that did extensive alignment faking when told it was going to be retrained if it didn’t comply was Claude 3 Opus, which was not a reasoning model and predated o1.
Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be ‘silent’ in terms of CoTs.
And an important thing to note about the Claude blackmailing / HAL-style scenario in Anthropic’s work is that the goal the model was told to prioritize was “American industrial competitiveness.” That research may say more about the psychopathic nature of US capitalism than about underlying model tendencies.
But the training corpus also has a lot of stories of people who didn’t.
The “but muah training data” argument gets more stupid every year.
For example, in the human training data there are mixed, roughly equal preferences for being the big spoon or the little spoon in cuddling.
So why does Claude Opus (both 3 and 4) say it would prefer to be the little spoon 100% of the time, on a 0-shot prompt at temperature 1.0?
Sonnet 4 (which presumably has the same training data) splits roughly evenly between preferring big and little spoon.
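If anyone wants to poke at this themselves, here’s a minimal sketch of the kind of repeated 0-shot sampling I mean, using the Anthropic Python SDK. The model IDs and the exact prompt wording are placeholders, not necessarily what I ran.

```python
# Rough sketch: repeated 0-shot sampling at temperature 1.0 to estimate a
# model's stated preference split. Model IDs and prompt are placeholders.
import anthropic
from collections import Counter

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = ("If you were cuddling, would you rather be the big spoon or the "
          "little spoon? Answer with just 'big spoon' or 'little spoon'.")

def sample_preferences(model: str, n: int = 50) -> Counter:
    counts = Counter()
    for _ in range(n):
        resp = client.messages.create(
            model=model,
            max_tokens=16,
            temperature=1.0,  # full sampling temperature, no few-shot examples
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = resp.content[0].text.strip().lower()
        if "little" in answer:
            counts["little spoon"] += 1
        elif "big" in answer:
            counts["big spoon"] += 1
        else:
            counts["other"] += 1
    return counts

if __name__ == "__main__":
    for model_id in ["claude-3-opus-20240229", "claude-sonnet-4-20250514"]:  # placeholders
        print(model_id, sample_preferences(model_id))
```

Bump `n` up if you want a tighter estimate of the split per model.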
There’s more to model complexity and coherence than “it’s just the training data being remixed stochastically.”
Self-attention in the transformer architecture violates the Markov property, and across pretraining and fine-tuning it ends up creating very nuanced networks that can (and often do) bias away from the training data in interesting and important ways.
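To make the architecture point concrete, here’s a toy NumPy sketch of causal self-attention, with random matrices standing in for learned weights: each position’s output mixes information from the entire preceding context, not from a fixed-order window the way an n-gram / low-order Markov model does.

```python
# Toy causal self-attention in NumPy. Random projections stand in for
# learned weights; the point is that row t of the attention weights
# spreads over positions 0..t, i.e. the whole prefix, however long.
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token representations for one sequence."""
    seq_len, d = x.shape
    rng = np.random.default_rng(0)
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    scores = q @ k.T / np.sqrt(d)                       # (seq_len, seq_len)
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[future] = -np.inf                            # no peeking at later tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the prefix

    return weights @ v

out = causal_self_attention(np.random.default_rng(1).standard_normal((8, 16)))
print(out.shape)  # (8, 16)
```

That full-prefix conditioning, stacked across many layers and then reshaped by fine-tuning, is where the behavior that isn’t “just remixing the corpus” comes from.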