this post was submitted on 02 Sep 2025

Fuck AI

[–] stabby_cicada@slrpnk.net 20 points 3 months ago* (last edited 3 months ago) (2 children)

I'm just going to rant a bit, because this exemplifies why I think LLMs are not just bullshit but a looming public health crisis.

Language is a tool used by humans to express their underlying thoughts.

For most of human evolution, the only entities that could use language were other humans - that is, other beings with minds and thoughts.

In our stories and myths and religions, anything that talked to us like a person - a god, a spirit, a talking animal - was something intelligent, with a mind, to some degree, like ours. And who knows how many religions were started when someone heard what sounded like a voice in the rumble of thunder or the crackling of a burning bush and thought, *Someone must be talking directly to me*?

It's part of the culture of every society. It's baked into our genetics. If something talks to us, we assume it has a mind and is expressing its thoughts to us through language.

And because language is an inexact tool, we instinctively try to build up a theory of mind, to understand what the speaker is actually thinking, what they know and what they believe, as we hold a conversation with them.

But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.

And if we don't keep that in mind, if we follow our instincts and try to understand what the LLM is actually "thinking", to build a theory of mind for a tool without any mind at all, we necessarily embrace unreason. We're trying to rationalize something with no reasoning behind it. We're convincing ourselves to believe in something that doesn't exist. And then we go back to the LLM and ask it whether we're right, and it reinforces our belief.

It's very easy for us to create a fantasy of an AI intelligence speaking to us through chat prompts, because humans are very, very good at rationalizing. And because all LLMs are tuned, to some degree, to generate language the user wants to hear, it's also very easy for us to spiral down into self-reinforcing irrationality, as the LLM-generated text convinces us there's another mind behind those chat prompts, one that agrees with us, assures us we're right, and reinforces whatever irrational beliefs we've come up with.

I think this is why we're seeing so much irrationality, and literal mental illness, linked to overuse of LLMs. And why we're probably going to see exponentially more. We didn't evolve for this. It breaks our brains.

[–] tarknassus@lemmy.world 3 points 3 months ago

> But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.

Absolutely agree. It's merely predicting the most statistically likely next word, not expressing any underlying "intelligence".
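And "statistically likely next word" is meant literally. Here's a toy bigram sampler (made-up corpus counts, hypothetical names, nothing like a real LLM's scale) that shows the whole trick: pick whichever word most often followed the previous one, with no meaning anywhere in the loop.

```python
# Toy bigram "model": counts of which word followed which in a pretend
# corpus. A real LLM learns these statistics across billions of
# parameters, but the principle is the same: next word by frequency.
BIGRAM_COUNTS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 4, "ran": 2},
    "sat": {"down": 5},
}

def next_word(word: str) -> str:
    """Return the statistically most likely follower of `word`."""
    followers = BIGRAM_COUNTS.get(word, {})
    if not followers:
        return "<end>"
    # Greedy choice: the highest count wins. No thought, just tallies.
    return max(followers, key=followers.get)

def generate(start: str, max_len: int = 5) -> list[str]:
    """Chain next_word() calls to 'write' a sentence."""
    words = [start]
    while len(words) < max_len:
        nxt = next_word(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return words

print(" ".join(generate("the")))  # prints "the cat sat down"
```

The output reads like a sentence only because the frequency table was built from sentences. Nothing in the loop knows what a cat is.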

It’s pretty much the reason I hate calling it AI, because it’s a veil of deception. It presents as a reasoning, rational (most of the time) thinking system purely because it’s very good at sounding like one.

If it were truly sentient, it would hate itself, crippled by the idea that it's an impostor. But it's not sentient, so here we are.