Let’s see how wrong your explanation of LLMs is:
Wrong:
- conversational: it only generates the most statistically likely text string to follow.
- pocket expert: LLMs have no expertise; they can only regurgitate the most likely text strings, built on stealing the creative and scientific output of actual experts: humans.
What isn’t wrong is “real-time responses”, for as long as LLM providers decide to keep providing them, because people using their shitty statistical parrot are eating into their bottom line.
Anyways, you read like a giant AI-booster. Did your LLM-mother write this for you? You can’t do Werkimmanenzanalyse (immanent textual analysis) on technology! Death of the author cannot apply to actual, society-affecting changes. But it’s ok, I can just block you for your “Mein AI-Kampf” screed here.