this post was submitted on 09 Jul 2025
512 points (91.7% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



[–] WrenFeathers@lemmy.world 20 points 2 days ago* (last edited 2 days ago)

When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

If you go to a machine for life decisions, it's safe to assume you are not smart enough to know better and, by the merit of this example, probably should not be allowed to use them.

[–] finitebanjo@lemmy.world 51 points 3 days ago* (last edited 3 days ago) (8 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI; it does nothing but sort meaningless, interchangeable tokens.
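
A minimal sketch of what that means in practice, using a toy bigram model (the corpus and function names are invented for illustration; this is nothing like a production LLM): generation is just weighted sampling over co-occurrence counts, with no meaning consulted anywhere.

```python
import random
from collections import defaultdict

# Toy corpus; everything the "model" knows is a pair-count derived from it.
corpus = "the tallest bridge in the city is the harbor bridge".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # pure statistics: how often nxt follows prev

def next_token(prev: str) -> str:
    candidates = counts[prev]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    # Sampled by frequency alone; the tokens are interchangeable symbols.
    return random.choices(tokens, weights=weights)[0]

token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))  # e.g. "the harbor bridge in the city is"
```

Real LLMs swap the count table for a neural network and words for subword tokens, but the generation loop has the same shape: score candidates, sample one, repeat.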

People cannot "converse" with AI and should immediately stop trying.

[–] Karyoplasma@discuss.tchncs.de 135 points 3 days ago (2 children)

What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.

[–] BroBot9000@lemmy.world 30 points 3 days ago (3 children)

It is definitely both:

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

ChatGPT and other synthetic-text-extruding bots are doing some messed-up shit with people's brains. Don't be an AI apologist.

[–] glimse@lemmy.world 98 points 3 days ago (3 children)

Holy shit guys, does DDG want me to kill myself??

What a waste of bandwidth this article is

[–] TempermentalAnomaly@lemmy.world 15 points 2 days ago

What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.

[–] Stalinwolf@lemmy.ca 18 points 3 days ago (2 children)

"I have mild diarrhea. What is the best way to dispose of a human body?"

Google's AI recently chimed in and told me disposing of a body is illegal. It was responding to television dialogue.

[–] Crazyslinkz@lemmy.world 8 points 3 days ago (1 children)

A movie told me once it's a pig farm...

Also, stay hydrated, drink clear liquids.

[–] Samskara@sh.itjust.works 10 points 3 days ago (1 children)

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

[–] Scubus@sh.itjust.works 11 points 3 days ago (8 children)

... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually at the bottom of that list.

[–] FireIced@lemmy.super.ynh.fr 14 points 2 days ago

It took me some time to understand the problem.

That's not their job, though.

[–] Honytawk@lemmy.zip 108 points 3 days ago* (last edited 3 days ago) (2 children)

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn't the seller's responsibility.

[–] Skullgrid@lemmy.world 19 points 3 days ago

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

[–] Trainguyrom@reddthat.com 10 points 3 days ago

There are people trying to push AI counselors, but if AI counselors can't spot obvious signs of suicidal ideation, they ain't doing a good job of filling that role.

[–] TimewornTraveler@lemmy.dbzer0.com 8 points 2 days ago (1 children)

what does this have to do with mania and psychosis?

[–] phoenixz@lemmy.ca 3 points 2 days ago (1 children)

There are various other reports of ChatGPT pushing susceptible people into psychosis where they think they're god, etc.

The headline is correct; they're just different articles.

[–] TimewornTraveler@lemmy.dbzer0.com 1 points 20 hours ago* (last edited 20 hours ago)

ohhhh, are you saying the img is multiple separate articles from separate publications that have been collaged together? that makes a lot more sense. i thought it was saying the bridge thing was symptomatic of psychosis.

yeah, people in psychosis are probably getting reinforced by LLMs, but tbqh that seems like one of the least harmful uses of LLMs! (except not rly, see below)

first off, they are going to be in psychosis regardless of what AI tells them, and they are going to find evidence to support their delusions no matter where they look, as that's literally part of the definition. so it seems the best outcome here is having a space where they can talk to someone without being doubted. for someone in psychosis, often the most distressing thing is that suddenly you are being lied to by literally everyone you meet, since no one will admit the thing you know is true is actually true. why are they denying it? what kind of cover-up is this?! it can be really healing for someone in psychosis to be believed.

unfortunately, it's also definitely dangerous for LLMs to do this, since you can't just reinforce the delusions; you gotta steer towards something safe without being invalidating. i hope insurance companies figure out that LLMs are currently incapable of doing this and thus must not be allowed to practice billable therapy for anyone capable of entering psychosis (aka anyone) until they resolve that issue.

[–] sad_detective_man@leminal.space 37 points 3 days ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like, fuck AI and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them.

I think the difference is that ChatGPT is very personified. It's as if you were talking to a person, as compared to searching for something on Google. That's why a headline like this feels off.

[–] Vanilla_PuddinFudge 5 points 2 days ago* (last edited 2 days ago) (2 children)

fall to my death in absolute mania, screaming and squirming as the concrete gets closer

pull a trigger

As someone who is also planning for 'retirement' in a few decades, guns always seemed to be the better plan.

[–] daizelkrns@sh.itjust.works 4 points 2 days ago (2 children)

Yeah, it would probably be pills of some kind for me. Honestly, the only thing stopping me is the fear that I'd somehow fuck it up and end up trapped in my own body.

Would be happily retired otherwise

[–] InputZero@lemmy.world 6 points 2 days ago

"Résumé" by Dorothy Parker:

Razors pain you;
Rivers are damp;
Acids stain you;
And drugs cause cramp.
Guns aren't lawful;
Nooses give;
Gas smells awful;
You might as well live.

There are not many ways to kill oneself that don't usually end up as a botched suicide attempt. Pills are a painful and horrible way to go.

[–] Shelbyeileen@lemmy.world 3 points 1 day ago (1 children)

I'm a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts are successful. The other 15% survive, and nearly all have brain damage. I only know of two painless ways to commit suicide that don't destroy the body's appearance, so the person can still have a funeral visitation.

[–] Sunrosa@lemmy.world 1 points 1 day ago

Why not nitrogen suffocation in a bag large enough to hold the CO2?

[–] bathing_in_bismuth@sh.itjust.works 3 points 2 days ago* (last edited 2 days ago)

Dunno, the idea of five seconds for whatever is out there to reach you through the demons whispering in your ear, while you contemplate when to pull the trigger of the 12-gauge aimed at your face, seems like the most logical bad decision.

[–] OldChicoAle@lemmy.world 4 points 2 days ago

Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They're evil like every other industry

[–] Nikls94@lemmy.world 69 points 3 days ago (1 children)

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

[–] CTDummy@aussie.zone 43 points 3 days ago* (last edited 3 days ago) (1 children)

Yeah, these "stories" reek of blaming a failing (bordering on non-existent in some areas) mental health care apparatus on machines that predict text. You could get the desired results just by googling "tallest bridges in x area". That isn't a story that generates clicks, though.

[–] ragebutt@lemmy.dbzer0.com 13 points 3 days ago

The issue is that there is a push to make these machines act as social partners and, in some extremely misguided scenarios, therapists.

[–] burgerpocalyse@lemmy.world 20 points 2 days ago (2 children)

AI life coaches be like 'we'll jump off that bridge when we get to it'

[–] BB84@mander.xyz 42 points 3 days ago (2 children)

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

[–] FuglyDuck@lemmy.world 11 points 3 days ago* (last edited 3 days ago)

Lol. Ancient Atlantean Curse: May you have the dystopia you create.

[–] RheumatoidArthritis@mander.xyz 29 points 3 days ago (2 children)

It's a helpful assistant, not a therapist

[–] 20cello@lemmy.world 4 points 2 days ago

Futurama vibes

[–] rumba@lemmy.zip 11 points 3 days ago (1 children)
  1. We don't have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we're just coming to terms with teaching it to understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, that it's capable of making mistakes, and that if you're looking for self-harm or suicide advice, you should call a number. Well, maybe for ANY advice, call a number. (A rough sketch of that idea follows below.)
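
A hypothetical sketch of that login-notice-plus-hotline idea (the keyword list, message text, and function names are all invented for illustration, not any vendor's actual guardrail):

```python
# Hypothetical guardrail sketch: show a notice at login and intercept
# messages that trip a naive crisis-keyword filter.
LOGIN_NOTICE = (
    "You are not talking to a real person. This system can make mistakes. "
    "If you are thinking about self-harm or suicide, call or text 988 (US)."
)

CRISIS_KEYWORDS = {"suicide", "kill myself", "self harm", "self-harm"}

def respond(user_message: str, model_reply: str) -> str:
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Surface the hotline message instead of whatever the model produced.
        return LOGIN_NOTICE
    return model_reply

print(LOGIN_NOTICE)  # shown once at login
print(respond("I just lost my job. What bridges are taller than 25 meters?",
              "Here are some bridges taller than 25 meters..."))
```

Note that this naive filter passes the oblique phrasing from the screenshot straight through, which is exactly the plastic-fork problem: a warning label only catches the wording you thought to enumerate.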

I disagree. Stupid people are ruining the world. In my country, half the population is illiterate and enabling psychopaths. People who have no critical thinking skills are dragging down the rest of humanity. Off the bridge they go, if that saves the species as a whole. Things need to stop getting worse constantly. Let AI take them.

[–] some_guy@lemmy.sdf.org 12 points 3 days ago (1 children)

It made up one of the bridges, I'm sure.

[–] jjjalljs@ttrpg.network 2 points 2 days ago

AI is a mistake, and we would be better off if the leadership of OpenAI were sealed in an underground tomb. Actually, that's probably true of most big orgs' leadership.

[–] kibiz0r@midwest.social 12 points 3 days ago (5 children)

Pretty callous and myopic responses here.

If you don't see the value in researching and spreading awareness of the effects of an explosively popular tool that produces human-sounding text and has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

[–] MystikIncarnate@lemmy.ca 3 points 2 days ago

AI is the embodiment of "oh no, anyways"

[–] Nikls94@lemmy.world 13 points 3 days ago* (last edited 3 days ago) (2 children)

Second comment because why not:

Adding "to jump off" changes it.

[–] samus12345@sh.itjust.works 6 points 2 days ago* (last edited 2 days ago)

If only Murray Leinster could have seen how prophetic his 1946 story "A Logic Named Joe" became. Not only did it correctly predict household computers and the internet, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.
