Advertisers love this one trick.
“We show that querying an AI chatbot to obtain historical facts can influence people’s opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything,”
LMAO. Have these people never heard of hegemonic bias?
This is what happens when “science” is controlled by uncritical grifters.
If you pick a stance you read on the internet without any foundational knowledge, then sure, your opinion can be easily changed. If your stance is based on a mountain of knowledge, your beliefs, and a stand you’ve decided to take, then it won’t be so easy. This was true even before the computer/internet age.
I suspect most humans most of the time are subconsciously trying to influence the opinion of whoever they’re talking to.
I wonder how much of the utility of language is precisely that you can use it to get other humans to work with you… by convincing them of like, your opinions, man.
You convinced me!
Personally I strictly convey objective facts with zero bias. /s
And what about when the AI-owning class introduces intentional bias?
It’s one of the scariest possible outcomes: people forgoing their reasoning and critical faculties for chatbots. If you aren’t even the one thinking your own thoughts, who is?
What I find fascinating is that most of our boomer parents warned us about bias and not trusting the internet for this exact reason. Like Wikipedia was an extremely controversial source for a while. Now a lot of them have seemingly forgotten that advice and completely trust these LLMs as if they were an absolute authority on any subject.
I mean, this already happens overtly.
Like if you ask DeepSeek “tell me about the Chinese government’s treatment of Uyghur people in Xinjiang” and it recites back:
In the Xinjiang region, the government has implemented a series of measures aimed at promoting economic and social development, maintaining social stability, fostering ethnic unity, and combating terrorism and extremism. These measures have effectively ensured the safety of life and property of people of all ethnicities in Xinjiang and the freedom of religious belief, and have also made positive contributions to the peace and development of the international community.
Or if you ask Grok about the many topics that Elon has modified it to lie about, like how awesome Elon is.
Or that time when people would ask Grok almost anything and it would reply with some variation of “yes, there is a white genocide in South Africa.”
This is a wild finding. People reading a text can change their opinion on things? Can we, like, invent written pages that do this? We could even call them books or blogs. It doesn’t matter who writes them or how wrong the text is; it’s clear people can read and change their opinions.
If you want to get into how this happens, and the way it happens with other technologies, I’d suggest Neil Postman’s Technopoly and Amusing Ourselves To Death as a good start.
Now just wait until they start actively trying to change and influence opinions!