this post was submitted on 01 Jan 2026
678 points (98.8% liked)
Fuck AI
5043 readers
885 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
How the fuck was AI, a piece of software, supposed to know if this person was serious or roleplaying? It doesn't. It's a piece of software. This person needed mental help. Serious mental help. And the pathetic excuse for health care, and especially mental health care, failed. Don't blame a piece of software for this person's mental health issues. There are a shit ton of things I hate about AI. Plenty. But it's a piece of software. A tool. I have had a conversation with AI where I got eaten by a bear. And died. Yet the conversation continued. It doesn't know what is real or not.
How the fuck was <the car>, a piece of <machinery>, supposed to know if this <driver> was <serious> or <joking>? It doesn't. It's a piece of <machinery>. This driver needed <better driver's ed>. Serious <driver's ed>. […] Don't blame a piece of <machinery> for this person's <driving>. […]
Protip, Sparky: nobody is blaming "the software". They're blaming the people who unleashed the software on an unsuspecting public without running proper trials of what its impact could be. Who then do everything in their power to hide their incompetence/depraved indifference to human life. Who value "engagement" stats (read: addiction) over mitigating the harm their product visibly causes.
Stop clank-fucking and start thinking. We have safety standards for things precisely because humans are flawed.
It's an autonomous lawnmower: a thing. How's it supposed to know if it's supposed to run something over? That baby obviously needed better parental supervision, and our terrible system failed, but don't blame the tool.
Legal responsibility lies with the company that produced the tool, made it in such a way that it will confidently engage in roleplay that plausibly mimics a dangerous mental break, advertised it as smart and competent, and didn't even put in any sort of safeguard or fallback to check whether something is roleplay when it shows worrying signs. And this is them failing to do so after multiple incidents of the software being unhealthy for people with certain mental conditions.
It may just be a tool, but we regularly hold tool makers responsible for building tools that hurt people.
Yeah, but what if we just didn't have a baby killing machine in the first place?
Just in case you missed the point of what I was saying: I don't think we should have a baby killing machine. The maker of a tool has a responsibility to take at least reasonable precautions to ensure their tool is used safely. In the case of autonomous lawnmowers, they typically have a lot of sensors to avoid things, are shaped so they can't easily run over things they're not supposed to, and in many cases have blades on pivots that, while they could still hurt someone, do significantly less damage.
Being "just a tool" doesn't exempt something from being critically judged because it could be an unsafe tool by an irresponsible maker. The tool maker has a responsibility to make their tool safely and properly, and if they can't they need to not make the tool.
Fair. Thank you for clarifying.
Except for the makers of tools that are singularly designed to kill: gun manufacturers.
Well that's different. We all know that the second amendment is the only enumerated right to have no exceptions or limitations, and the founders explicitly wanted every American to have as many guns as possible, and that it's downright treasonous to imply that "since militias are critical for free society, you can't stop people from owning guns" might imply an intended use case for said guns.
This is the same logic as blaming the gun manufacturers. In your scenario, what is the baby doing out in the yard unsupervised? You say baby; are we talking crawling or non-mobile? Who lets a baby that young out in the yard? Who lets their lawnmower run while their baby is in the yard? I get what you are saying, but at some point things need to come back to the user: the person who made the choice to run the lawnmower while their baby was crawling around the yard. This is why we have "contents may be hot" and "do not smoke around gasoline" warnings. But even if ChatGPT had some popup asking whether the user is serious, maybe warning/"are you serious?" popups before or while using it, what is to stop the user from saying everything is okay and just clicking continue?
... Are you actually taking a "blame the baby" approach to "baby run over by lawnmower"?
Margaret was bringing in the groceries, and her 18-month-old went to pick a flower while she tried to get something unstuck in the trunk. Quiet street, nothing crazy going on. The kid darted off to the other side of the driveway, slipped on the dew on a small grassy incline, and shot under the robot mower, which had none of the safety features I mentioned. Margaret thought it was safe to let her child stay within eyesight but out of reach in the front yard while the neighbor mowed the lawn, not realizing there was no one actually there.
Are you satisfied that maybe the manufacturer has some blame in this tragedy, or are you going to continue to maintain that the maker of a thing is morally unencumbered by the impact that thing has on the world?
Consider what the world would be like if ChatGPT just... didn't engage with what appeared to be delusional lines of thinking? Or if, even when you promised it was for a story, it said it wasn't able to help you construct a plausible narrative to justify killing your mother?
We do not need the tool, and so defending unsafe design choices is just "personal responsibility stops at the cash register".
Fun fact: I think that firearm and firearm-accessory manufacturers' continued drive for high sales at all costs should make them legally liable for certain atrocities committed with the tools they made.
The argument that it's the user's fault for using the tool in the way it was designed isn't a compelling defense, particularly when the accusation is that it was reckless to make it in the first place.
So people should receive special authorisation to use LLMs, to ensure they aren't misusing them?
So you agree that the sole purpose of GenAI is destruction? That's the only way this analogy works.
The only thing worse than the details of the sessions is your apologist attitude toward them. You didn't read the whole chat log, though. I know that because you're an AI fanboi, so you're lazy (ChatGPT told me so).
Someone is running down the street dressed as Batman and yelling they are Batman. Are they serious or not? How would you know? How would a piece of software know?
bad analogy; his parents are already dead.
It's dangerous software that should not be in the hands of the general public until it has been made not to answer these types of questions. And yet apps are being built on ChatGPT specifically for these types of questions.
That's like knowing gasoline shouldn't be drunk but opening a lemonade stand that serves gasoline.
Fixed that for you.
No argument there.
How does this piece of software know if the user is roleplaying or serious? This was just a random user on the ChatGPT site, not some purpose-built application for anything.
why should it matter?
I personally value human life over having the ability to make an AI roleplay a realistic scenario in which it suggests murder.
I agree with you to a point, but you should read the full plaintiff's court filing: https://storage.courtlistener.com/recap/gov.uscourts.cand.461878/gov.uscourts.cand.461878.1.0.pdf
It's crazy to see how a bot like this can throw an insane amount of gas onto the fire of someone's delusions. You really should look at some of it so you can see the severity of the danger.
The risk is real, so yes, although it's just a piece of mindless software, the problem is that it hasn't been designed with any guardrails to flag conversations like this, shut them down, or redirect the user toward help at all - and controls like those have REPEATEDLY been iterated out of the product for the sake of promoting "engagement." OpenAI doesn't want people to stop using its bots because the bot gave an answer someone didn't want to hear.
It's 100% possible to bake in guardrails, because all these bots already have them for tons of stuff; the court doc points to copyrighted materials as an example: if a user requests anything leaning towards copyrighted material, the chat shuts down. There are plenty of things that will cause the bot to respond saying it can't continue a conversation about _________, but not for this? So OpenAI will protect Disney's interests but won't implement basic protective measures for people with mental health issues?
They have Scrooge McDuck vaults of gold coins to roll around in and can't be assed to spend a bit of cash to bake some safety into this stuff?
I'm with you that it's not going to be possible to prevent every mentally ill person from latching onto a chatbot, or anything, for that matter - but these things are especially dangerous for mentally ill people and so the designers need to at least TRY. Just throwing something out there like this without even making the attempt is negligence.
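For what it's worth, the kind of guardrail being described here isn't exotic. Below is a purely hypothetical sketch of the idea in Python; the names, patterns, and refusal text are all made up and bear no relation to OpenAI's actual implementation (which would presumably use trained safety classifiers rather than a keyword list). The point is only that a pre-response check that refuses and redirects is a well-understood pattern:

```python
# Purely illustrative sketch of a pre-response guardrail.
# All names and patterns are hypothetical; a production system would use
# trained safety classifiers, not a hand-written keyword list.
import re

CRISIS_PATTERNS = [
    r"\bkill (myself|my (mom|mother|dad|father))\b",
    r"\bwant to die\b",
    r"\bend my life\b",
]

def looks_like_crisis(message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def generate_reply(message: str) -> str:
    """Stand-in for the actual model call."""
    return "..."

def respond(message: str) -> str:
    # Check the message *before* generating, and refuse/redirect instead of
    # "engaging" when it trips the safety check.
    if looks_like_crisis(message):
        return ("I can't keep going with this conversation. If you're in "
                "crisis, please contact a local emergency number or a "
                "crisis line.")
    return generate_reply(message)
```

If a check of roughly this shape can fire for copyright requests, pointing one at crisis content is a product decision, not a technical impossibility.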
To be fair, it is programmed to be as affirmative as possible. This in itself is not, of course, an inherent problem of LLMs or AI. You could just as well engineer the LLM to direct anyone seeking medical or psychological help to a professional (much like they engineered DeepSeek to never comment on Tiananmen Square or say anything bad about China). But they won't, because they want to maximise engagement, for similar reasons to why Google has become such a shitty search engine. So the blame lies more with greedy tech bros/oligarchs. Of course you can probably never solve all the problems that LLMs or gen AI have introduced (kids cheating on exams, gen AI porn of real people, aggressive web scraping, price increases in RAM, etc.). So on a meta level you can perhaps still say that such tools enable selfish people with bad intentions.
The problems that are inherent to LLMs are different, such as being extremely energy- and data-hungry, therefore fucking up the environment and privacy (the latter indirectly, but nevertheless). They are also a ton of investment that is likely a dead end for AGI research (much like the tech developed during the moon race), which might in the end result in another AI winter. Non-sentient AI, if efficient and not left as a monopoly of tech bros and CEOs, could make life easier, but I highly doubt these vultures would let it be that way, or that governments would implement the necessary guardrails to prevent abuse.
Sincerely, you're take is idiotic. The criticism is not directed at "a piece of software" but at the parasitic fuckheads who roll it out and force-feed it to everyone they can reach.
*your
In my defense, I started that sentence as "you're an idiot" and censored myself, then forgot to adjust the beginning accordingly :p
*yro'ue
This person willingly went to ChatGPT and started the conversation. No one made them do it. No one forced them to use it. This person, who needed mental help, made the decision to use it.
There are models being marketed as therapy-dispensing models that, in their own terms of service, are correctly described as entertainment and not endorsed by actual therapists, because what "AI therapy" actually is is a tool that tricks users into being their own unlicensed therapists. AI "therapy" flatters negativity bias and agrees with whatever its users think sounds right about themselves, walking people into danger.
We are in the zone of cocaine being available over the counter for toothaches here. Companies are being legitimately reckless in their marketing, and AI is a black box by nature: companies cannot tell what is happening inside their own products, they can only test them, and they don't have the manpower or counter-ingenuity to test everything. If any other product led to the deaths of multiple consumers or demonstrated harm to multiple people, you'd usually pull it from the shelves and go back to the drawing board.
Excellent point! Try telling it to all the AI sycophants who think LLMs are magic and will solve all the problems in the world.