

8 · 6 days ago
Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one's at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model, the better in that sense. The computer becomes an oracle, and no one's to blame for its divinations.
I can absolutely see that. And I don't think it's AI-specific; it comes down to delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with its low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.