I don’t see how nothing would be better than someone using a good-quality AI to, for example, ground them during a panic attack.
Because nothing doesn’t run the risk of encouraging catastrophizing, acting on your heightened emotions, or coming to irrational conclusions. If it’s consistently able to avoid those things for a variety of people, that’s great. But as someone who had to learn to control her panic attacks, I can absolutely see advice and recommendations that are worse than nothing.
And yeah, given LLMs’ reputation for dealing with psychosis, delusions, and suicidality, I don’t trust any of the technology compared to nothing, despite knowing how difficult nothing is for panic attacks.
IME it depends which one
That’s fair, but given the way the technology actually works, I stand by my position that there is a very real potential for harm, and that safer alternatives exist that are similarly accessible. If studies show it’s safe and helpful, that’s cool, but at this moment I’d strongly discourage any loved one who’s interested in using an LLM for this purpose and would instead point them towards other resources.
Yes, I see your point, and I agree that actual therapy is dramatically safer and better.
AI will not ground you; it will reinforce what you already believe. That’s why it’s very dangerous for “therapeutic” use.
IME it depends which one