this post was submitted on 15 Nov 2025
301 points (93.9% liked)

LLMDeathCount.com (llmdeathcount.com)
submitted 4 days ago* (last edited 4 days ago) by brianpeiris@lemmy.ca to c/technology@lemmy.world
[–] snoons@lemmy.ca 146 points 4 days ago
[–] DasFaultier@sh.itjust.works 38 points 3 days ago

Shit, I just read the link name and was hoping for a list of AI companies that have died.

This shit's dark...

[–] AntY@lemmy.world 21 points 3 days ago

Where I live, there’s been a rise in people eating poisonous mushrooms. I suspect that it might have to do with AI use. No proof though.

[–] lemmie689@lemmy.sdf.org 44 points 4 days ago

Went up by one already. I only saw this a little earlier today; it was at 13, now 14.

[–] Tehhund@lemmy.world 34 points 4 days ago (1 children)

This website is going to be very busy when the LLM-designed nuke plants come online. https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/

[–] echodot@feddit.uk 12 points 3 days ago (2 children)

Can't read the article because it's paywalled, but I can't imagine they're actually building power stations with AI; that's just a snappy headline. Maybe the AI is laying out the floor plans or something, but nuclear power stations are intensely regulated. If you want to build a new reactor design, or even change an existing design very slightly, it has to go through no end of safety checks. There's no way an AI, or even a human, would be allowed to design a reactor and then have it be built with no checks.

[–] Tehhund@lemmy.world 6 points 3 days ago

Actually, they're using it to generate documents required by regulations. Which is its own problem: since LLMs hallucinate, the documentation may not reflect what's actually going on in the plant, potentially bypassing the regulations.

[–] xeroxguts@lemmy.dbzer0.com 3 points 3 days ago

404 Media accounts are free

[–] SnotFlickerman@lemmy.blahaj.zone 29 points 4 days ago* (last edited 4 days ago) (1 children)

LLMs Have ~~Lead~~ Led to 14 Deaths

FTFY

[–] brianpeiris@lemmy.ca 13 points 4 days ago (2 children)

You're welcome. Easy mistake to make, I make it constantly, in fact haha!

[–] glowie 3 points 4 days ago (1 children)

Should have gotten an LLM to spellcheck /s

[–] brianpeiris@lemmy.ca 16 points 4 days ago (2 children)

A friendly human spell-checked me and probably used less than a peanut's worth of energy.

[–] mark@programming.dev 7 points 4 days ago

Anybody else hungry?

[–] LainTrain@lemmy.dbzer0.com 5 points 4 days ago

Rare insult

[–] MrLLM@ani.social 15 points 3 days ago

I swear I’m innocent!

[–] Prove_your_argument@piefed.social 18 points 4 days ago (2 children)

How many people decided to end their lives using methods they googled?

I’m sure Google has led to more loss of life than any AI company… so far, anyway. And that's beyond search results: consider the societal impact of so many things they do, overtly and covertly, for themselves and other organizations.

Not trying to justify anything; billionaire-owned everything is terrible, with few exceptions. In the early days of web search, many controversies like this were raised, but the reality is that a screwdriver is a great tool even if someone can lose a life to one. The same goes for these tools.

[–] Manjushri@piefed.social 34 points 4 days ago

How many people has Google convinced to kill themselves? That is the relevant question. Looking up the means to do the deed on Google is very different from being talked into doing it by an LLM that you believe you can trust.

[–] starman2112@lemmy.world 28 points 3 days ago (5 children)

Google doesn't tell you that killing yourself is a good idea and that you shouldn't talk to anyone else about your suicidal ideation

[–] Auth@lemmy.world 4 points 3 days ago

> Google doesn’t tell you that killing yourself is a good

It does now! Thanks, Gemini.

[–] Credibly_Human@lemmy.world 3 points 3 days ago

Nor does any LLM I've ever seen that's immediately accessible.

It also doesn't matter. AI isn't killing anyone with those any more than Call of Duty lobbies are killing people.

[–] WorldsDumbestMan@lemmy.today 2 points 3 days ago

Claude freaks out any time I even hint I'm not happy about my life. They lobotomized it so hard.

[–] echodot@feddit.uk 2 points 3 days ago

It'll certainly take you to websites where people will do that, though, so I'm not sure there's really any distinction.

[–] chunes@lemmy.world 0 points 3 days ago

Plenty of its search results do

[–] jaykrown@lemmy.world 4 points 3 days ago (1 children)
[–] REDACTED 2 points 1 day ago* (last edited 1 day ago) (1 children)

Seriously. There have always been people with mental problems or tendencies toward self-harm. You can easily find ways to off yourself on Google. You can get bullied on any platform. LLMs are just a tool. How detached from reality you get from reading religious texts or a ChatGPT convo depends largely on your own brain.

It's like how entire genres of video games are now getting censored because of a few online incels.

[–] atrielienz@lemmy.world 1 points 9 hours ago (1 children)

I like your username, and generally even agree with you up to a point.

But I think the problem is that there are a lot of mentally unwell people out there who are isolated and using this tool (with no safeguards) as a sort of human stand-in to socialize with.

If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

Because chatbots are being... billed as a product that passes the Turing test, I can understand why people would want the companies that own them to be held accountable.

These companies won't let you look up how to make a bomb on their LLM, but they'll let people confide suicidal ideation without putting in any safeguards for that. And because these chatbots are designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.

[–] REDACTED 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

I get your point, but the reality is that companies do actually put safeguards in place (well, they've started to). I feel like I could get murdered on Lemmy for saying this, but I was a ChatGPT subscriber for a year, up until last month. The number of "Sorry Dave, I cannot do that" replies I'd recently started getting was ruining my experience. OpenAI recently implemented an entire new system that transfers you to a different model if it detects something mental going on with you.

[–] atrielienz@lemmy.world 1 points 7 hours ago

The negligence lies in marketing a product without considering the implications of what it can do in scenarios that would make it a danger to the public.

No company is supposed to be allowed to endanger the public without accepting due responsibility, and all companies are expected to mitigate public-endangerment risks through safeguards.

"We didn't know it could do that, but we're fixing it now" doesn't absolve them of liability for what happened before, because they lacked foresight and did no preliminary testing or planning to mitigate their liability. And I'm sure that sounds heartless. But companies do this all the time.

It's why we have warning labels and don't sell specific chemicals in bulk without a license, or to children, etc. It's why, even if you had the money, you can't just go buy 20 tonnes of fertilizer without the proper documentation and licenses, as well as an acceptable use case for 20 tonnes.

The changes they have made don't protect Monsanto from litigation over the deaths their products caused in the before times. The only difference there is that there was proof they had knowledge of the detrimental effects of those products and didn't disclose them.

So I suppose we'll see.

[–] Simulation6@sopuli.xyz 8 points 3 days ago

I thought this was going to be a counter of AI companies that have gone bankrupt.
I mean, even the original Battlestar Galactica (with Lorne Greene) had a death count.

[–] jayambi@lemmy.world 6 points 4 days ago (3 children)

I'm asking myself: how could we track how many people wouldn't have died by suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? A kill/death ratio, so to say?

[–] JoshuaFalken@lemmy.world 11 points 4 days ago

A kill/death ratio - or rather, a kill/save ratio - would be rather difficult to obtain, and more difficult still to appreciate and judge as good or bad based solely on the number.

Fritz Haber is one example of this that comes to mind. He was awarded a Nobel Prize a century ago for the fertilizer chemistry used today in a quarter of food production. A decade or so later he weaponized chlorine gas, and his work was later used in the creation of Zyklon B.

By ratio, Haber is surely a hero, but when considering the sheer numbers of the dead left in his wake, it is a more complex question.

This is one of those things that makes me almost hope for an afterlife where all information is available from which truth may be derived. Who shot JFK? How did the pyramids get built? If life's biggest answer is forty-two, what is the question?
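
(A toy calculation makes the ratio-versus-absolute-numbers point above concrete. Everything below is invented for illustration; none of the figures come from the thread.)

```python
# Toy illustration of why a kill/save ratio alone can mislead.
# All numbers are made up; nothing here is real data.
interventions = {
    "small-scale fix": {"saved": 1_000, "killed": 10},
    "Haber-scale tech": {"saved": 100_000_000, "killed": 1_000_000},
}

for name, counts in interventions.items():
    ratio = counts["saved"] / counts["killed"]
    print(f"{name}: {ratio:.0f} saved per death, {counts['killed']:,} dead")

# Both lines report 100 saved per death, but the second leaves a
# million dead, which is why the ratio alone can't settle whether
# the tradeoff was "good".
```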

[–] morto@piefed.social 5 points 4 days ago

For me, the suicide-related data is so hard to measure and so open to debate that I'd treat it separately, or not include it at all, when using a death count as an argument against LLMs, since it's an opening for derailing the debate.

[–] echodot@feddit.uk 3 points 3 days ago

I can't really see how we could measure that. How do you distinguish between people who are alive simply because they would have been anyway, and people who are alive because an AI convinced them not to kill themselves?

I suppose the experiment would be to take a bunch of depressed people, split them into two groups, have one group talk to the AI and the other not, and then see whether the suicide rates were statistically different. However, I feel it would be difficult to get funding for this.
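
(For what it's worth, the comparison sketched above is a standard two-proportion test. Here is a minimal sketch with entirely made-up group sizes and event counts, just to show why detecting a difference in rare events takes enormous samples.)

```python
# Minimal sketch of the comparison described above: a two-proportion
# z-test on rates in an "LLM" group vs. a control group.
# All figures are hypothetical, chosen only for illustration.
from math import sqrt
from statistics import NormalDist

events_llm, n_llm = 12, 10_000    # made-up deaths / group size
events_ctrl, n_ctrl = 9, 10_000   # made-up control group

p1, p2 = events_llm / n_llm, events_ctrl / n_ctrl
p_pool = (events_llm + events_ctrl) / (n_llm + n_ctrl)  # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_llm + 1 / n_ctrl))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided test

print(f"z = {z:.2f}, p = {p_value:.3f}")
# With base rates this low, the difference is nowhere near significant
# at these sample sizes, which hints at how large (and ethically
# fraught) a real trial would have to be.
```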

[–] Fedditor385@lemmy.world -3 points 3 days ago (2 children)

I guess my opinion will be hugely unpopular, but it is what it is: I'd argue it's natural selection and not an issue with LLMs in general.

Healthy and (emotionally) intelligent humans don't get killed by LLMs. They know it's a tool, they know it's just software. It's not a person, and it doesn't guarantee correctness.

Getting killed because an LLM told you so means the person was already in mental distress and ready to harm themselves. The LLM is basically just the straw that broke the camel's back. Same thing with physical danger: if you believe drinking bleach helps with back pain, there is nothing that can save you from your own stupidity.

An LLM is like a knife: it can be a tool to prepare food or it can be a weapon. It's up to the one using it.

[–] dzsimbo@lemmy.dbzer0.com 4 points 3 days ago (1 children)

Why do you think we have seatbelt laws?

[–] Fedditor385@lemmy.world 1 points 2 days ago

Same reason there is a sticker on car batteries that says "Not for drinking".

[–] Ural@lemmy.world 2 points 3 days ago (1 children)

Healthy and emotionally intelligent humans will be killed constantly over the next few years and decades as a result of data centers poisoning the air in their communities (see South Memphis, TN), not to mention the general environmental impact on the climate of their obscene power requirements. It's not an issue exclusive to LLMs; lots of unregulated industries cause reckless amounts of pollution and put needless strain on our electrical grids. But LLMs definitely fit into that category.

[–] Fedditor385@lemmy.world 3 points 3 days ago

Agreed, but then you would need to count a lot of things, and many of them would be general mass commodities like cars, electricity, heating… Besides LLMs being the new thing killing us, we've had stuff killing us for ages.

[–] chunes@lemmy.world -3 points 3 days ago

LLM bad, upvotes to the left please
