LLMDeathCount.com (llmdeathcount.com)
submitted 4 days ago* (last edited 4 days ago) by brianpeiris@lemmy.ca to c/technology@lemmy.world
[–] atrielienz@lemmy.world 1 points 10 hours ago (1 children)

I like your username, and generally even agree with you up to a point.

But I think the problem is that there are a lot of mentally unwell, isolated people out there using this tool (with no safeguards) as a sort of human stand-in for social interaction.

If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

Because chatbots are being billed as products that pass the Turing test, I can understand why people would want the companies that own them to be held accountable.

These companies won't let you look up how to make a bomb on their LLM, but they'll let people confide suicidal ideation without putting any safeguards in place for that. And because these models are designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.
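
To put a shape on what "safeguards" could mean here, this is a rough sketch of a pre-response check that screens a message for self-harm cues before the agreeable model ever answers. Every name, the cue list, and the crisis message are made up for illustration, not any vendor's actual product:

```python
# Rough sketch of a pre-response safeguard: screen the message for self-harm
# cues first, and override the model with a fixed crisis reply.
# All names here (check_self_harm, crisis_response, respond) are hypothetical.

SELF_HARM_CUES = ("kill myself", "end my life", "want to die", "suicid")

def check_self_harm(message: str) -> bool:
    """Crude keyword screen; a production system would use a trained classifier."""
    text = message.lower()
    return any(cue in text for cue in SELF_HARM_CUES)

def crisis_response() -> str:
    """Fixed, non-agreeable reply that the sycophantic model can't talk its way around."""
    return ("I'm sorry you're feeling this way. I can't help with that, but please "
            "reach out to a crisis line or someone you trust.")

def respond(message: str, generate_reply) -> str:
    """Run the safeguard first, so agreeableness never gets a chance to agree."""
    if check_self_harm(message):
        return crisis_response()
    return generate_reply(message)
```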

[–] REDACTED 1 points 8 hours ago* (last edited 8 hours ago) (1 children)

I get your point, but the reality is that companies actually do put safeguards in place (well, they've started to). I feel like I could get murdered on lemmy for saying this, but I was a ChatGPT subscriber for a year, up until last month. The amount of "Sorry Dave, I cannot do that" replies I recently started getting was ruining my experience. OpenAI recently implemented an entire new system that transfers you to a different model if it detects something mental going on with you.
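
For what it's worth, that "transfers you to a different model" behaviour sounds like a classifier plus a router sitting in front of the chat model. Here's a minimal sketch of how that kind of handoff could work, with the score function, the models, and the 0.7 threshold all assumed rather than taken from OpenAI's actual implementation:

```python
# Hypothetical sketch of distress-based routing: a classifier scores each
# message, and high-scoring turns are handed to a stricter, safety-tuned model.
# Names, threshold, and model callables are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyRouter:
    distress_score: Callable[[str], float]   # returns 0.0 (fine) .. 1.0 (crisis)
    default_model: Callable[[str], str]      # the usual, agreeable chat model
    safety_model: Callable[[str], str]       # stricter, safety-tuned model
    threshold: float = 0.7

    def reply(self, message: str) -> str:
        # Above the threshold, the conversation is quietly handed to the
        # stricter model (the "Sorry Dave" experience described above).
        if self.distress_score(message) >= self.threshold:
            return self.safety_model(message)
        return self.default_model(message)
```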

[–] atrielienz@lemmy.world 1 points 8 hours ago

The negligence lies in marketing a product without considering the implications of what it can do in scenarios that would make it a danger to the public.

No company is supposed to be allowed to endanger the public without accepting due responsibility, and all companies are expected to mitigate public endangerment risks through safeguards.

"We didn't know it could do that, but we're fixing it now" doesn't absolve them of liability for what happened before because they lacked foresight, did no preliminary testing, and or planning to mitigate their liability. And I'm sure that sounds heartless. But companies do this all the time.

It's why we have warning labels, and why we don't sell certain chemicals in bulk without a license, or to children, etc. It's why, even if you had the money, you couldn't just go buy 20 tonnes of fertilizer without the proper documentation and licenses, as well as an acceptable use case for 20 tonnes.

The changes they have made don't protect Monsanto from litigation for the deaths their products caused in the before times. The only difference there is that there was proof they had knowledge of the detrimental effects of those products and didn't disclose them.

So I suppose we'll see.