pishadoot@sh.itjust.works · 7 points · 1 day ago (last edited 1 day ago)

I agree with you up to a point, but you should read the plaintiff's full court filing: https://storage.courtlistener.com/recap/gov.uscourts.cand.461878/gov.uscourts.cand.461878.1.0.pdf

It's crazy to see how a bot like this can pour an insane amount of gas onto the fire of someone's delusions. You really should read some of it to see the severity of the danger.

The risk is real. Yes, it's just a piece of mindless software, but the problem is that it wasn't designed with any guardrails to flag conversations like this, shut them down, or redirect the user toward help - and controls like those have been REPEATEDLY iterated out of the product for the sake of "engagement." OpenAI doesn't want people to stop using its bots because a bot gave them an answer they didn't want to hear.

Baking in guardrails is 100% possible, because all these bots already have them for tons of stuff; the court doc points to copyrighted material as an example: if a user requests anything leaning toward copyrighted material, the chat shuts down. There are plenty of things that will cause the bot to respond that it can't continue a conversation about _________, but not for this? So OpenAI will protect Disney's interests, but won't take basic protective measures for people with mental health issues?
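For what it's worth, the basic plumbing already exists off the shelf. Here's a minimal sketch of what such a guardrail could look like, assuming OpenAI's own public Moderation endpoint (which already includes self-harm categories). To be clear, the crisis-line text and the `handle_user_message()` wrapper are my own illustration, not anything OpenAI actually ships:

```python
# Minimal guardrail sketch: screen each user message with a moderation
# model before it reaches the chat model, and refuse + redirect to help
# instead of continuing the conversation when self-harm is flagged.
from openai import OpenAI

client = OpenAI()

# Hypothetical redirect text -- a real product would localize this and
# surface region-appropriate resources.
CRISIS_MESSAGE = (
    "It sounds like you're going through something serious. "
    "I can't continue this conversation, but you can reach the "
    "988 Suicide & Crisis Lifeline by calling or texting 988."
)


def flags_self_harm(text: str) -> bool:
    """Return True if the moderation model flags self-harm content."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    cats = result.categories
    return cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions


def handle_user_message(text: str) -> str:
    # Refuse and redirect -- the same "I can't continue a conversation
    # about ____" behavior the bots already exhibit for copyrighted work.
    if flags_self_harm(text):
        return CRISIS_MESSAGE
    # Otherwise pass the message through to the chat model as usual.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}],
    )
    return reply.choices[0].message.content
```

That's a couple dozen lines against an endpoint OpenAI gives away for free. A production version would obviously need to track risk across a whole conversation rather than single messages, but the point stands: the refusal machinery is already there, it just wasn't pointed at this.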

They have Scrooge McDuck vaults of gold coins to roll around in and can't be assed to spend a bit of cash baking some safety into this stuff?

I'm with you that it's not going to be possible to prevent every mentally ill person from latching onto a chatbot - or onto anything, for that matter - but these things are especially dangerous for mentally ill people, so the designers need to at least TRY. Throwing something like this out into the world without even making the attempt is negligence.