Never f**king guess my dude
The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to ‘fix’ the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.
Quite easy-to-believe, really.
These multiple safeguards toppling in rapid succession
Multiple safeguards? Really? Multiple paragraph prompts are not multiple safeguards… it’s half a safeguard at best. Applying limits on what the AI can do is a safeguard.
These people think giving the genai a prompt is coding. They don’t understand the difference between actually coding in limits and just writing “pretty please don’t delete everything”
I’m shocked and appalled that my addition of “do NOT make any mistakes!” didn’t singlehandedly make the word guessing technology underneath perfect.
Lol this is just like saying “I do declare bankruptcy”
Can you get an AI to code? Yes. Can you get it to stop you from running your operation in such a stupid way that it will end up destroying it? No.
Reminder that Anthropic’s AI system was used in targeting the school in Minab, killing 120 students. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/
The company is suing to be able to supply the US military again. It is in bed with the fascists.
That’s great to hear.
This is absolutely hilarious. “AI” users getting what they deserve chef’s kiss
This is what happens when there is a new technology and companies are run by commerce grads, not scientists or engineers who understand the technology.
AI has good therapeutic uses, particularly for disabled or impoverished people who may not be able to access mainstream therapy
JFC…there are already disclaimers on this. “For Entertainment Purposes Only”.
Same excuse Fox News used.
Please don’t recommend AI for therapeutic uses, it’s only been optimised to keep the user engaged and pushed many people into psychosis. Just search for “ai psychosis” on your favourite search engine and you’ll get a ton of reports on how LLMs validate vulnerable people’s delusions, sometimes pushing them all the way into murder and/or suicide.
This is a post about Claude. It’s better than chatgpt and the sad thing is, it’s the best option a lot of people have.
And I’d like independent studies to prove it’s better than nothing before I’d recommend it to replace nothing. Especially when self-guided mental health solutions such as meditation exist.
I don’t see how nothing would be better than someone using a good-quality AI to, for example, ground them during a panic attack.
Because nothing doesn’t run the risk of encouraging catastrophizing, acting on your heightened emotions, or coming to irrational conclusions. If it’s consistently able to not do those things for a variety of people that’s great. But as someone who had to learn to control her panic attacks, I absolutely can see advice and recommendations that are worse than nothing.
And yeah given llms’ reputation for dealing with psychosis, delusions, and suicidality, I don’t trust any of the technology compared to nothing, despite knowing how difficult nothing is for panic attacks.
IME it depends which one
AI will not ground you, it will reinforce what you already believe. that’s why it’s very dangerous for “therapeutic” use.
IME it depends which one
I was about to reply that you forgot your /s, but then I refreshed my browser tab.
Like… there are multiple documented cases of sycophantic llms confirming people’s delusions. ‘ai psychosis’ is just a short way of saying the AI is a non-funny-improv-comedian and will always “yes and” your prompt.
prompt: “I feel bad and think I need to kill myself”
response: “You’re totally right, here’s some help in how to do that…”
prompt: “I have this great idea: If we eat broken glass, we’ll be healthier”
response: “Absolutely. Glass is made out of silicon dioxide, which has some health benefits if consumed in small amounts.”
prompt: “You told me to see a doctor, but I don’t want to”
response: “I’m sorry, you’re right. You don’t need to see a doctor. Your chest pain is perfectly normal.”
My examples are more physical things instead of mental because the consequence is more clear, but the same issue exists for mental health.
Using an AI for therapy or medical advice is a stupid, dumb, very bad idea. It will at best magnify problems.
Suggesting that disabled or impoverished people use it because they can’t access actual mental healthcare seems equivalent to eugenics to me.
the sad thing is, it’s the best option a lot of people have
That I will agree with. Maybe we should spend a small fraction of the money going into data centers on providing healthcare instead.
It depends which one you use and how you use it. They’re not all chatgpt quality.
No. Chatbots are machines built by billionaires with the agenda of making money. They literally design these bots (even the therapeutic ones) to be sycophantic to the point that they tell people anything to keep them chatting longer, to the point that some of their users lose touch with reality. How many cases do we need of a chatbot helping a teenager plan and carry out a suicide? Altruists did not design these machines. Even with a human therapist we have to watch for the landmines of their personal agendas. That’s a thousand times worse for machines that have no humanity, are capable of LIES, and have secret unwritten priorities written into their code by rich sociopathic creators. If Facebook taught us anything, it should be that if something is free on the internet, it’s not because we are the customers.
Also DO NOT TELL ALL YOUR DEEPEST DARKEST SECRETS TO CHATBOTS! They aren’t required by any legal bodies to protect that information! OMFG
Impoverished people need stable income and subsidized rations to reduce their burden. Not LLM subscriptions.
You can’t use therapy to escape hunger.
I hope you are not seriously advocating using the lying machine for therapy. You would get more value talking to a finger puppet.
It depends which one you use and how it’s used. Plus it’s a developing field. Bear in mind my comment was in response to someone saying AI users were “getting what they deserve”.
People that need therapy are one of the groups that should be kept away from AI as far as possible.
AIs are yes-men; they agree with most of what you say. You really think it’s a good idea to reinforce the bad worldview or sense of self that someone who desperately needs therapy most likely has?
It depends which one people use and how it’s used. Please bear in mind my comment was in response to someone saying about AI users getting “what they deserve”. Do you think that comment should be applied to disabled people who can’t access any other form of therapy?
It depends which one people use
It really doesn’t. Pretty much all models so far lose their guardrails once you are deep enough in the conversation. There were multiple news articles about AI giving someone the go-ahead to off themselves.
and how it’s used
No matter which way you use it, it’s bad. If you ask it for tips, you are essentially asking the average redditor for mental health advice. If you use it for conversations, you are forming a parasocial relationship with an AI that will constantly get things wrong that you told it about before, while reinforcing whatever worldview you have. The only thing that would slightly help is supervision by a human, but that would make the whole exercise redundant.
Do you think that comment should be applied to disabled people who can’t access any other form of therapy?
If they were desperate enough to be forced into using AI, then that above comment wouldn’t apply to them, but instead to the ones that are responsible for the broken system in the first place.
I see it differently but thanks for chatting with me
This was on Hacker News: https://news.ycombinator.com/item?id=47911524
Twitter link: https://xcancel.com/lifeof_jer/status/2048103471019434248
Hacker News’ sentiment on this, from the comments I’ve read, is that it is the author’s own fault.
100%
As much as I want to blame AI for this, there are many hurdles for the user to get through to even allow Claude to do that. I’d be very surprised if that’s not user error.
Exactly what I was thinking. Regardless of what one thinks about AI, it seems to me that only a long series of really bad decisions could lead to something like this happening.
Always keep offline backup copies of your important data regardless of using AI slop to look over it! No, I don’t care that “optical media is obsolete and e-waste!”, or that “tapes are a 100 year old obsolete technology compared to cheap SSDs from TEMU!”.
they did not follow the 3-2-1 rule (three copies of your data, on two different media, with one copy offsite)…
Optical media? Is that a viable part of backup strategies? I would expect tapes for sure, sounds like you know more than me.
- Better than not having an offline copy.
- Write-once, so ransomware cannot delete/encrypt it.
- Drives are still cheap.
Downside is having techbros talk you about laser rot, how internal drives are obstructing the optimal airflow in GAMING PC cases, and how Gabe Newell is based and stuff.
Great points! Lotta my optical media use also included hot summers in cars lol, nothing like an archival use.
A quality disc can last 10 years or more. At a company I used to work at the backups were burned to discs coated with gold. They had 15 year old discs that still worked.
Dang that’s rad, had no idea (about it being used in such a way, I guess I mean, not too hard to imagine discs lasting that long).
I have 20+ yr old optical media cdr/dvdr and they are still good, the cheap ones like Pine and the ones with no name at all
What is this 10 year thing? I’ve also got CD RWs and CD Rs from 1998 that still work. And DVD Rs from like 2002 that are still fine.
That was my point, hehe. I also never spent on the “quality name brands” of disks, $10 for 100 cds, deal! $15 for 100 dvds insert fry meme. Maybe we just “took care” of our media better than others did? Personally, they are in spindles on a bookshelf, I just made sure no direct sunlight would hit them where they are, some days get warm before I can turn on the ac.
I definitely agree with you. I feel like I see people talking about optical media rotting all the time and it just doesn’t seem like a practical issue for 99% of use cases.
I seem to remember the conversation in the early 2000s being about how discs would rot in 50+ years and now I see people saying ten or 15.
To me it seems more criminal that the cloud provider has a “nuclear button” feature via the API that destroys everything including the backups with a single call and no confirmation whatsoever. What if the key gets accidentally leaked and someone wants to have fun?
It’s a feature.
It seems like actually criminal too. Like legitimately “we need to shred 2TB of incriminating data instantly or we’re all going to prison”
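Even a minimal guard would blunt this. A hypothetical sketch of the “echo the resource name back” pattern (the function, parameter names, and behavior here are made up for illustration, not any real provider’s API):

```python
def delete_volume(volume_name: str, confirm: str) -> str:
    """Hypothetical destructive endpoint: refuses to act unless the
    caller types the exact resource name back. Everything here is
    illustrative, not a real cloud provider's API."""
    if confirm != volume_name:
        raise ValueError(f"refusing to delete {volume_name!r}: confirmation mismatch")
    return f"deleted {volume_name}"

# A leaked key (or an overeager AI agent) firing the call blind gets stopped:
try:
    delete_volume("prod-db", confirm="")
    blocked = False
except ValueError:
    blocked = True
```

One extra string parameter and a single stray API call can no longer take out a volume and its backups in one shot.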

“That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱
Incredible emoji
Can we make sure Ted Faro suffers worse this time?
Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?
Nah, but that’s what he wanted. He is the truest form of tech bro: destroyed the world, refused to accept the consequences of his actions, weaseled his way out of the situation, and managed to, in the wake of unimaginable human suffering, get more power over people, complete with a god complex. Tell me this isn’t some or all of the characteristics of people like Peter Thiel, Elon Musk, Mark Zuckerberg, Sundar Pichai, Bill Gates, hell, even Tim Cook and Steve Jobs before him. Punishment doesn’t stop this sort of behavior; removing the possibility of anyone having that level of control over others is the only way. But the richest and most powerful have always sought ways of amassing more power, not realizing that it leads to worse outcomes for everyone, including themselves. Horizon did a great job encapsulating that trait in Faro, but whether it’s him, the people behind Skynet, the Matrix, or whatever other tech dystopia the tech bros seem pathologically unable to not try to make happen in the worst way possible, that’s only the beginning. They forget that even with advanced tech serving their needs and wants (which won’t help their mental health), the people lower down the rungs of society have brains, wants, and needs, and more expertise in all sorts of things than the 1%, except for mass exploitation.
This inevitably goes wrong in one of a few ways. One: everyone dies from the tech, or so many that societal collapse is inevitable, and even if society survives it can’t functionally reconstitute itself. Two: they win and kill off or suppress enough of society that it becomes less productive; instead of fighting the powerful, people flee or stop generating wealth for the rich wherever they don’t have to, maybe to rise up again later, or the regional economy just ignores them completely and the government exists mostly to protect itself from its own people. Three: a revolution, with terror campaigns against any and all who can be credibly accused of being part of the former tyranny. In all three cases the rich end up poorer overall, because wealth flees or dies in autocracy.
Good.
“If your prod can be deleted by your AI, it should be.”
Did they write that title with AI as well? Looks terrible.
Not to mention the image
Fucking lol.
Well deserved.

lmfao
Why, yes. I do like that!
New PornHub tag discovered
“Anthropic tortures developers and never lets them cum.”
The real artificial intelligence was all the files it deleted after being told not to along the way.
It looks like their website is pocketos.ai lol
This isn’t an AI story, it’s a “completely fucking idiotic sysadmins exist” story.
Treat an AI like the idiot intern without any references you just hired. Gave the idiot intern permission to delete your production database? That’s entirely on you, zero sympathy. (Actually, give any developer that power? You get what you deserve.)
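If you want the database itself to enforce that, most engines have a read-only switch; a minimal sketch with SQLite’s `query_only` pragma (purely illustrative — a real production setup would use proper roles and grants):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prod (id INTEGER)")
conn.execute("INSERT INTO prod VALUES (1)")
conn.commit()

# The connection you actually hand the "intern" (human or AI):
# hard-locked to reads at the database layer, not by a polite prompt.
conn.execute("PRAGMA query_only = ON")

try:
    conn.execute("DELETE FROM prod")  # the move from the article
    deleted = True
except sqlite3.OperationalError:
    deleted = False  # refused by the engine itself

rows = conn.execute("SELECT COUNT(*) FROM prod").fetchone()[0]
```

The point is where the limit lives: in the engine, where no amount of confident prompting can talk its way past it.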
It could be a moronic sysadmin, it could just as easily be a moronic exec pushing staff to implement this crap right now and damn the consequences.
⤴️ #MyLastJob
I mean that’s kinda the whole point.
Companies are looking at AI to replace people. Either it’s ready or it’s not.
If you need to treat it like it’s an intern, then it’s not worth the expense. Anyone hiring interns to be productive doesn’t understand why you hire an intern.
As if a $90/month intern wasn’t a good deal lol
You don’t hire interns for productivity. If your intern program is any good, it’s a time/resource sink. However, it’s a good recruiting pipeline and provides young people an opportunity to get real-world experience.
Right now it’s somewhere between a smart intern and a smart recent grad. A lot depends on what Skills.md and frameworks your org has set up.
I actually think it’s better than that and when you set up multiple pipelines that interact and cross check it starts to ramp up. Definitely true Lemmy has its head in the sand about it though.
This. Yes it seems wasteful or whatever but you need bots with prompts that review the work, kick it back to the coder bot to re-do, yadda. But at the end of the day you have a thing that Fixes Your Bugs and Implements Basic Features For You.
Is it really fixing if it’s only short-term with mounting technical debt?
Gogo gadget inefficient hallucinating predictive text generator grift
No it’s not. You’re giving it way too much credit.
People don’t wanna hear that around here. But I agree, with the right instructions it’s better than a junior Dev. Loads faster, and mistakes can be fixed faster, and if you update the prompts then it learns better from mistakes too.
People don’t want to hear it anywhere because you’re lauding the benefits of a parasitic technology which is inherently hostile towards workers.
And if you’re getting paid for it, it makes you a parasite too, or at least more complicit than the average person.
Maybe your position would be better served by not lashing out at people as if they’re your enemy.
Multiple things can be true at the same time. Statements about the technical capability of a technology don’t detract from the negative impacts on the world. Those are two different topics.
Fossil fuels have incredibly massive, civilization-scale problems that are actively harming the modern world AND ALSO have enabled industrialization, pulling billions out of poverty.
AI is objectively capable at some tasks AND ALSO is being used to disrupt the labor market and causing other harmful effects in society.
The world isn’t black and white
OMG adult balanced take with no detectable outrage
I’ll see you in Sort By: Controversial
Black and white, no, but things can be evaluated on their net impact. And in that evaluation, AI is shit.
I understand the arguments, today isn’t my first day on the Internets.
The comment that was responded to was in a conversation about technical capabilities and how it doesn’t matter what the truth is on that topic, because some people don’t want to hear it — they can only view AI in a 2-dimensional, black-or-white, net-good-or-net-bad way.
Then you showed up like a caricature of the type of irrationality that they were discussing.
I even explained the very obvious context that you breezed right past, and yet you’re still grinding that same talking point without a moment of self-reflection.
I honestly think it’s very cool for prototyping ideas at this point. It’s also parasitic, although maybe for different reasons: it gives people the power (which they unfortunately use way too much) to imitate an art, but in a non-arty, imperfect way that doesn’t comprehend the details of the art, resulting in slop. For software, that can go very wrong, as we see here. This is also a reason why I mostly quit open source: now everyone can code a bad version of a library, which has sucked the art out of good open source, and because the wording and “look” are good, it’s increasingly difficult to judge the quality of code. Previously you could check a code base, review it somewhat, and know how good the quality was; now it’s more like “is this slop or not?” (in which case I give it a wide berth, because reviewing is often not worth it).
At some point, though, I think this automation of work is inevitable, and we need to think about a society that can peacefully exist without requiring work in order to exist. I actually think this could easily be utopian: everyone could focus on what they actually find fulfilling in life.
Though it’s sad and concerning that technology is developing faster than society can adapt, which is why I’m mostly with you: people (and representatives like politicians) just aren’t “programmed” for these fast-paced changes, to adapt the technology such that the future becomes more utopian rather than the dystopian one it is currently heading towards…
Every commercial use of AI negatively impacts the environment in order to further the interests of capital and is therefore inherently immoral.
If we were in a nuclear fusion or otherwise all-renewable-energy-with-plenty-of-excess world, then I’d be more aligned with your mindset and agree that only uses which bastardize art / etc are immoral.
It gives people the power […] to imitate an art, but in an non-arty imperfect way
Is it okay for Skrillex to make loops? For Vanilla Ice or MC Hammer to sample?
The fact is, it can be a very useful technology when deployed sensibly. Yes, it’s going to inflict massive harm on society in multiple ways - but just dismissing it as shit is putting your head in the sand. We need to be figuring out how to ensure that the harm it does is minimised and ideally that it’s used in ways that benefit us all. Fuck knows how though.
But it’s not just going to go away, no matter how much we might want it to.
It destroys the environment inherently by virtue of its operation (in the context of our current energy infrastructure). I do not care how “useful” it is to you or any corporation if it takes even a single living organism off of this earth.
I dismiss it as shit and I don’t need your approval to do so. Medical and scientific applications are acceptable. Nothing else, no exceptions.
“Treat an AI like an idiot intern without any references you just hired.”
Instead of this, treat AI like some dude off the street who you didn’t hire and leave it out of your life. It’s shitty, it’s wasteful, and it’s subsidized by everyone to get a few tech bros rich.
Like seriously, it’s just theft of people’s work it “trained on”, powered by energy companies that charge us more to power it, at the cost of poisoning our water supplies, to ultimately try and steal our salaries one day.
It’s absolutely parasitic software at every level.
Nah, I think I’m going to keep using it
Hah, you just wrote a punchline similar to a presentation I’ve been giving at conferences.
Treat an AI like the idiot intern without any references you just hired.
My company is in the process of pivoting hard to Claude after 50yrs of doing virtually everything themselves and rolling their own versions of already-existing software, and this is almost verbatim how I’ve described to others what it feels like to use it.
It feels like cajoling an intern to understand a job for which they have some average skill but zero motivation, and they only want to do the bare minimum, so you spend all the time you could be doing your job holding their hand through basic tasks.
It’s fucking annoying.
you spend all the time you could be doing your job holding their hand through basic tasks
negl sounds like you need to spend some time writing good documentation. May as well do it in the form of Skills files so humans and bots both are more quickly able to be useful in your org.
give any developer that power?
Fun fact: giving developers access to production deployments violates FedRAMP and like half a dozen other compliance regimes (SOC2/IRAP/ISMAP/G-Cloud/BSI C5/…)
But it doesn’t mean it isn’t incredibly common. Especially with “DevOps” where the developers are pushed to handle literally every aspect.
IMO DevOps was always a stupid idea. Impedance mismatch.
Developers who are really good at designing complex enterprise-level shit need days-to-weeks of uninterrupted time to think and experiment. Please, skip the daily stand-up until you’ve figured out how to fix <insane-race-condition>
Coders who are good at fixing bugs or adding a new menu item need a few hours or a day uninterrupted. Daily stand-up, should have closed yesterday’s ticket or have hit a real roadblock with it.
Ops IT people are fixing like 4 fires at the literal same time, they are lucky to get minutes of uninterrupted thinking time. It’s about managing rate of tickets per day, and in contrast going full CAPA when there’s a significant outage.
Just… totally different workflows, personalities, and management
I totally agree. I think it stems from Ops people that are angry at developers for building bad software. Theoretically making devs responsible for their deployments would make them care more about the quality, but really it just splits their focus and now they make bad software and provide poor ops.
Agreed about salty ops people. That said, it is important even for fancy-schmancy Architect-level engineers to be assigned real annoying bugs in the codebase they helped to shape
I was once the intern who did relatively stupid things with one very big consequence.
My biggest fuckup was unplugging a 10base2 (edit: I originally wrote 10-base-T) coax wire from the loop so I could plug in a newly built computer. Everyone at the time (including me) knew that an unterminated 10base2 network would crash Win 3.11, so the accepted process was to tell the entire network you were about to disconnect a cable so they could save their work and be ready to drop to DOS. I spaced that step in my haste to test the new machine and ruined a day’s worth of work by the sales guy.
Ultimately, I was the one who fucked up and did know better. That’s AI. However, it only had consequences because Win 3.11 networking code was fucking awful and because the sales guy didn’t save his work frequently. If the same person in this story had asked Claude whether it was a good idea to have the backup and production databases on the same volume, the AI would have said No. If the person had asked Claude whether it was a good idea to delete a database without any confirmation dialogue, the AI would have said No. AI did it anyway. That’s what makes this an AI story.
Was their database environment stupid? Yes. Did the sysadmin fuck up by not treating AI like an intern? Yes. Did the AI do something it knew it shouldn’t do? Also yes. This is both an AI story and stupid sysadmin story.
I witnessed a sysadmin, on a production database, type a SQL `DELETE FROM` query that was being read to him over a call. He ran the command before writing the WHERE clause.
Luckily, they had backups.
“OOPS!? What do you mean “oops”?” was a meme around the office for years.
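The standard belt-and-braces for exactly that anecdote is to run destructive statements inside a transaction and sanity-check the affected row count before committing; a sketch using Python’s sqlite3 with a toy table (illustrative, not the setup from the story):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])
conn.commit()  # baseline data is committed

# The fat-fingered statement: DELETE with no WHERE clause.
cur = conn.execute("DELETE FROM customers")

# Sanity check before committing: we expected to touch one row, not the table.
if cur.rowcount != 1:
    conn.rollback()  # nothing was committed, so the data survives
else:
    conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

Until the COMMIT, the engine is holding your mistake in escrow — which is exactly the confirmation step the sysadmin in the story skipped.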
Treat an AI like the idiot intern without any references you just hired.
An extremely enthusiastic intern that, if presented with a question/problem/prompt they don’t know the solution for will just overconfidently pull something out of their ass and run with it.
It’s both.
Problem is execs and stupid software devs wanna give these things full rein on systems because of “performance gainz”
It’s a collective stupidity that’s impossible to break because it’s hooked into the highest decision makers.
These things are bought specifically because they are trying to replace the sysadmins… Along with everyone else.
Any business who uses AI in that manner will fail like all of the dot com companies who went all-in on the Internet when it first achieved a bit of popularity.
AI is, at best, a tool that professionals may be able to use in some situations. Any company dumb enough to believe the hype generated by the chatbot companies is probably making other, similarly dumb, decisions in other areas.
Things like giving way too much access to a worker, not having a tested disaster recovery plan, and not having anyone who understands the technologies that their business depends on.
This company was heading towards disaster due to poor decision making, it just happened to be AI related but it could have also been an undetected cyberattack, 0-day exploits pushed to the client app, destructive ex-employee, etc.
This is a cautionary tale about bad management