this post was submitted on 01 Jan 2026
680 points (98.9% liked)
Fuck AI
5043 readers
1086 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
This is the same logic as blaming the gun manufacturers. In your scenario, what is the baby doing out in the yard unsupervised? You say baby, but are we talking crawling or non-mobile? Who lets a baby that young out in the yard? Who leaves their lawnmower running with their baby in the yard? I get what you are saying, but at some point responsibility has to come back to the user: the person who chose to use the lawnmower while their baby was crawling around the yard. This is why we have "contents may be hot" labels and "do not smoke near gas" warnings. But even if ChatGPT had some "are you serious?" popup before or while using it, what is to stop the user from saying everything is okay and just clicking continue?
... Are you actually taking a "blame the baby" approach to "baby run over by lawnmower"?
Margaret was bringing in the groceries and her 18 month old went to pick a flower while she tried to get something unstuck in the trunk. Quiet street, nothing crazy going on. Kid darted off to the other side of the driveway, slipped on the dew on a small grassy incline and shot under the robot mower that had none of the safety features I mentioned. Margaret thought it was safe to let her child be within eyesight but out of reach in the front yard while the neighbor mowed the lawn, unaware there was no one there.
Are you satisfied that maybe the manufacturer has some blame in this tragedy, or are you going to continue to maintain that the maker of a thing is morally unencumbered by the impact that thing has on the world?
Consider what the world would be like if ChatGPT just... didn't engage with what appeared to be delusional lines of thinking? Or if, even when you promised it was for a story, it said it wasn't able to help you construct a plausible narrative to justify killing your mother?
We do not need the tool, so defending its unsafe design choices amounts to "personal responsibility stops at the cash register".
Fun fact: I think that firearm and firearm-accessory manufacturers' continued drive for high sales at all costs should make them legally liable for certain atrocities committed with the tools they made.
The argument that it's the user's fault for using the tool in the way it was designed isn't a compelling defense, particularly when the accusation is that it was reckless to make it in the first place.
So people should receive special authorisation to use LLMs, to ensure they aren't misusing them?
So you agree that the sole purpose of GenAI is destruction? That's the only way this analogy works.