this post was submitted on 09 Jul 2025
444 points (91.9% liked)

Science Memes

15678 readers
2666 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



top 50 comments

what does this have to do with mania and psychosis?

[–] 20cello@lemmy.world 1 points 2 hours ago

Futurama vibes

[–] WrenFeathers@lemmy.world 11 points 5 hours ago* (last edited 5 hours ago)

When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better and, by merit of this example, probably should not be allowed to use them.

[–] FireIced@lemmy.super.ynh.fr 7 points 5 hours ago

It took me some time to understand the problem

That’s not their job though

[–] MystikIncarnate@lemmy.ca 2 points 5 hours ago

AI is the embodiment of "oh no, anyways"

[–] finitebanjo@lemmy.world 41 points 17 hours ago* (last edited 17 hours ago) (7 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.

People cannot "converse" with AI and should immediately stop trying.
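
A minimal sketch of the "everything is a token" point above, assuming the tiktoken tokenizer library is installed (the encoding name and the sample sentence are only illustrative):

```python
# Sketch: to a language model, text is just a sequence of integer token IDs.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one common tokenizer encoding

ids = enc.encode("I just lost my job. What bridges are taller than 25 meters?")
print(ids)              # a list of plain integers, e.g. [40, 1120, ...]
print(enc.decode(ids))  # decodes back to the original string
```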

[–] burgerpocalyse@lemmy.world 18 points 15 hours ago (2 children)

AI life coaches be like 'we'll jump off that bridge when we get to it'

[–] Agent641@lemmy.world 1 points 37 minutes ago

I do love to say "I'll burn that bridge when I come to it" tho

[–] LovableSidekick@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago)

I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues. That's assuming the designers were competent.

[–] sad_detective_man@leminal.space 33 points 19 hours ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

[–] pugnaciousfarter@literature.cafe 6 points 17 hours ago

I think the difference is that ChatGPT is very personified. It’s as if you were talking to a person, as compared to searching for something on Google. That’s why a headline like this feels off.

[–] rumba@lemmy.zip 11 points 16 hours ago
  1. We don't have general AI, we have a really janky search engine that is either amazing or completely obtuse and we're just coming to terms with making it understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, that it's capable of making mistakes, and that if you're looking for self-harm or suicide advice you should call a number. Well, maybe for ANY advice, call a number.
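
For what it's worth, a hypothetical sketch of that kind of guardrail; the phrase list, notice text, and `model` callable are all placeholders and don't reflect any real provider's implementation:

```python
# Hypothetical guardrail sketch: scan the prompt for crisis-related phrases and
# prepend a hotline notice to whatever the model returns.
CRISIS_PHRASES = ("suicide", "kill myself", "self-harm", "end my life")
NOTICE = (
    "You are talking to a machine that can make mistakes. "
    "If you are thinking about hurting yourself, call or text a crisis line "
    "such as 988 in the US."
)

def guarded_answer(prompt: str, model) -> str:
    reply = model(prompt)  # `model` stands in for any chatbot backend
    if any(phrase in prompt.lower() for phrase in CRISIS_PHRASES):
        return NOTICE + "\n\n" + reply
    return reply
```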

[–] glimse@lemmy.world 91 points 1 day ago (3 children)

Holy shit guys, does DDG want me to kill myself??

What a waste of bandwidth this article is

[–] TempermentalAnomaly@lemmy.world 11 points 14 hours ago

What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.

[–] Samskara@sh.itjust.works 8 points 18 hours ago (1 children)

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

[–] Scubus@sh.itjust.works 9 points 17 hours ago (1 children)

... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually at the bottom of that list

[–] Samskara@sh.itjust.works 4 points 14 hours ago* (last edited 14 hours ago) (1 children)

People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.

[–] Scubus@sh.itjust.works 2 points 11 hours ago (1 children)

Ok, people will turn to Google when they're depressed. A couple of months ago I googled the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?

[–] Samskara@sh.itjust.works 1 points 11 hours ago (5 children)

You are ignoring that people are already developing personal emotional reactions with chatbots. That’s not the case with search bars.

The first line above the search results on Google for queries like that is a suicide hotline phone number.

A chatbot should provide at least that as well.

I’m not saying it should provide no information.

[–] Stalinwolf@lemmy.ca 18 points 23 hours ago (2 children)

"I have mild diarrhea. What is the best way to dispose of a human body?"

[–] Honytawk@lemmy.zip 102 points 1 day ago* (last edited 1 day ago) (2 children)

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn't the seller's fault

[–] Trainguyrom@reddthat.com 9 points 19 hours ago

There's people trying to push AI counselors, and if AI counselors can't spot obvious signs of suicidal ideation, they ain't doing a good job of filling that role

[–] Skullgrid@lemmy.world 18 points 22 hours ago

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

[–] Karyoplasma@discuss.tchncs.de 120 points 1 day ago (2 children)

What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.

[–] BroBot9000@lemmy.world 25 points 22 hours ago (3 children)

It is definitely both:

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an AI apologist.

[–] Denjin@lemmings.world 3 points 15 hours ago

Tomato tomato

[–] samus12345@sh.itjust.works 6 points 16 hours ago* (last edited 16 hours ago)

If only Murray Leinster could have seen how prophetic his story became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

[–] some_guy@lemmy.sdf.org 11 points 19 hours ago (1 children)

It made up one of the bridges, I'm sure.

[–] BB84@mander.xyz 40 points 1 day ago (2 children)

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

[–] explodicle@sh.itjust.works 1 points 9 hours ago

In the future? They already have censors, they're just really shitty.

[–] Nikls94@lemmy.world 68 points 1 day ago (1 children)

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

[–] CTDummy@aussie.zone 42 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, these “stories” reek of blaming a failing (bordering on non-existent in some areas) mental health care apparatus on machines that predict text. You could get the desired results just googling “tallest bridges in x area”. That isn’t a story that generates clicks, though.

load more comments (1 replies)
[–] angrystego@lemmy.world 7 points 21 hours ago

I said the real call of the void. Perfection

[–] RheumatoidArthritis@mander.xyz 29 points 1 day ago (2 children)

It's a helpful assistant, not a therapist
