this post was submitted on 01 Oct 2025
150 points (93.6% liked)

Fuck AI

4219 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
[–] Fedizen@lemmy.world 18 points 2 days ago

Ghost hunters keep finding conscious entities in toilets and old couches. Pretty sure these people are idiots, charlatans or both.

[–] Kolanaki@pawb.social 49 points 3 days ago* (last edited 3 days ago) (1 children)

"Around the world, incredibly gullible, naive, and stupid people can be found."

[–] bigbabybilly@lemmy.world 8 points 3 days ago

Yeah, that’s better.

[–] ZDL@lazysoci.al 145 points 3 days ago

For so many years we were concerned about computers passing the Turing Test. Instead, humans are failing it.

[–] Broadfern@lemmy.world 98 points 3 days ago (3 children)

We see faces in fucking wall outlets. I could give a pencil a name and the next three people I talk to will feel empathy for it.

People are desperate for connection, and it’s sad.

[–] Hegar@fedia.io 45 points 3 days ago (1 children)

I love this about us.

It's just delightful how many situations our brain is willing to shrug and say "close enough" to. Oh wait the pencil has a name now? I guess it must be basically the same as me.

If I pretend an object is talking, my partner will instantly feel bad for how she's treated it.

It's not sad, it's just how brains are.

[–] MudMan@fedia.io 7 points 3 days ago

I'm perpetually mad at having a global conversation about a thing without understanding how it works despite how it works not being a secret or that complicated to conceptualize.

I am now also mad at having a global conversation about a thing without understanding how we work despite how we work not being a secret or that complicated to conceptualize.

I mean, less so, because we're more complicated and if you want to test things out you need to deal with all the squishy bits and people keep complaining about how you need to follow "ethics" and you can't keep control groups in cages unless they agree to it and stuff, but... you know, more or less.

[–] Denjin@feddit.uk 27 points 3 days ago (1 children)

It's just natural human instinct. We're programmed to look for patterns and see faces. It's the same reason we attribute human characteristics to animals or even inanimate objects.

Add to that the fact that everyone refers to LLM chatbots as if they were human, and this is inevitable.

[–] minorkeys@lemmy.world 54 points 3 days ago (2 children)

People convinced themselves fairies existed. This is exactly what humans would do.

[–] krooklochurm@lemmy.ca 19 points 3 days ago (2 children)
[–] Valmond@lemmy.world 7 points 3 days ago

Elvis is still alive!

[–] ekZepp@lemmy.world 15 points 3 days ago* (last edited 2 days ago) (6 children)

"Wow! This thing say everything I want to hear and validate all my delusions! ... OFC it must be GOD!!!"

[–] postmateDumbass@lemmy.world 1 points 2 days ago

This LLM needs a private jet for GOD, I'd better tithe quick.

[–] CitizenKong@lemmy.world 20 points 3 days ago* (last edited 3 days ago) (4 children)

This just in: Humans very eager to anthropomorphize everything that seems to be even remotely alive. Source: Every owner of a pet ever.

[–] Fedizen@lemmy.world 4 points 2 days ago* (last edited 2 days ago) (1 children)

Y'all never ask people if they believe in ghosts? My mom once saw a light bulb blink and thought it was a dead person trying to talk to her. People seeing this shit in AI are just being gullible.

[–] CitizenKong@lemmy.world 2 points 2 days ago

Yes, exactly my point.

[–] Lucidlethargy@sh.itjust.works 13 points 3 days ago

Yeah, but my dog can actually understand me, and has genuine emotions. These LLMs are just unfeeling self-pleasuring devices at this point.

[–] benignintervention@lemmy.world 30 points 3 days ago (5 children)

I've recently spent a week or so, off and on, screwing around with LLMs and chatbots, trying to get them to solve problems, tell stories, or otherwise be consistent. Generally breaking them. They're the fucking Mirror of Erised. Talking to them fucks with your brain. They take whatever input you give and try to validate it in some way without any regard for objective reality, because they have no objective reality. If you don't provide something that can be validated with some superficial (often incorrect) syllogism, they spit out whatever series of words keeps you engaged. They train you, whether you notice or not, to modify how you communicate to more easily receive the next validation you want. To phrase everything you do as a prompt. AND they communicate with such certainty that if you don't know better you probably won't question it.

Doing so pulls you into this communication style, and your grip on reality falls apart, because this isn't how people communicate or think. It fucks with your natural pattern recognition.

I legitimately spent a few days in a confused haze because my foundational sense of reality was shaken. Then I got bored and realized, not just intellectually but intuitively, that they're stupid machines making it up with every letter.

The people who see personalities and consciousness in these machines go outside and can't talk to people like they used to because they've forgotten what talking is. So, they go back to their mechanical sycophants and fall deeper down their hole.

I'm afraid these gen AI "tools" are here to stay and I'm certain we're using this technology in the wrong ways.

[–] ZDL@lazysoci.al 17 points 3 days ago (1 children)

I'm afraid these gen AI "tools" are here to stay…

This is, thankfully, emphatically not true. There is no economic path that leads to these monstrosities remaining as prominent as they are now. (Indeed, their current prominence, as they get jammed into everything seemingly at whim, is evidence of how desperate their pushers are getting.)

Every time you get ChatGPT or Claude or Perplexity or whatever to do something for you, you are costing the slop pusher money. Even if you're one of those people stupid enough to pay for an account.

If ChatGPT charged Netflix-like fees for access, they'd need well over half the world's population as subscribers just to break even. And unlike every other tech we've created in the past, each new version is more expensive to create and operate than the last, not cheaper.

There's no fiscal path forward. LLMs are fundamentally impossible to scale economically, and no amount of money is going to fix that. They're a massive bubble that will burst, very messily, sooner rather than later.

In a decade there will be business studies comparing LLMs to the tulip craze. Well, at least in the few major cities left in the world that aren't underwater from the global warming driven by all those LLM-spawned data centres.

[–] benignintervention@lemmy.world 6 points 3 days ago (1 children)

I hope you're right, but also that's really bleak. I understand that Nvidia, Microsoft, and OpenAI are essentially passing money around in a circle, and I can only wonder how long they can keep it up. It's not a lossless circuit.

[–] ZDL@lazysoci.al 6 points 3 days ago

The longer they keep up the circlejerk, the worse it will be for the US economy when it fails. (When. Not if.)

[–] daggermoon@lemmy.world 20 points 3 days ago (4 children)

I'm a conscious entity. Talk to me instead.

[–] jj4211@lemmy.world 18 points 3 days ago (1 children)

Sorry, but there's a risk you might disagree with me or fail to flatter me sufficiently, so off to LLM I go.

[–] daggermoon@lemmy.world 13 points 3 days ago

The experience I had with LLMs was arguing with one about copyright law and intellectual property. It pissed me the fuck off, and I felt like a loser for arguing with a clanker, so that was the end of that.

[–] heyWhatsay@slrpnk.net 8 points 3 days ago

The main issue is the general public's level of awareness. When people don't develop into complex personalities themselves, it's easy to mistake a simple LLM for being as complex as a person.

All the hype about the singularity and the steps towards it is jumping the gun. IMO, it's the modern "we almost have cold fusion figured out!"

[–] GraniteM@lemmy.world 9 points 3 days ago (2 children)
[–] Dogiedog64@lemmy.world 16 points 3 days ago* (last edited 3 days ago) (1 children)

Yup, literally seeing human features in random noise. LLMs can't think and aren't conscious; anyone telling you otherwise is either trying to sell you something or has genuinely lost their mind.

[–] Garbagio@lemmy.zip 11 points 3 days ago (2 children)

I don't even think necessarily that they've lost their mind. We built a machine that is incapable of thought or consciousness, yes, but is fine tuned to regurgitate an approximation of it. We built a sentience-mirror, and are somehow surprised that people think the reflection is its own person.

[–] SparroHawc@lemmy.zip 7 points 3 days ago

I'd always thought that philosophical zombies were a fiction. Now we've built them.

[–] BanMe@lemmy.world 7 points 3 days ago

Even more than a sentience mirror, it will lead you into a fantasy realm based on the novels it's trained on, which often include... AI becoming sentient. It'll play the part if you ask it to.

[–] shalafi@lemmy.world 10 points 3 days ago (1 children)

It's baked in.

“Fifty thousand years ago there were these three guys spread out across the plain and they each heard something rustling in the grass. The first one thought it was a tiger, and he ran like hell, and it was a tiger but the guy got away. The second one thought the rustling was a tiger and he ran like hell, but it was only the wind and his friends all laughed at him for being such a chickenshit. But the third guy thought it was only the wind, so he shrugged it off and the tiger had him for dinner. And the same thing happened a million times across ten thousand generations - and after a while everyone was seeing tigers in the grass even when there weren't any tigers, because even chickenshits have more kids than corpses do. And from those humble beginnings we learned to see faces in the clouds and portents in the stars, to see agency in randomness, because natural selection favours the paranoid. Even here in the 21st century we can make people more honest just by scribbling a pair of eyes on the wall with a Sharpie. Even now we are wired to believe that unseen things are watching us.”

― Peter Watts, Echopraxia

[–] Semi_Hemi_Demigod@lemmy.world 2 points 2 days ago

We are utterly doomed.

[–] FriendOfDeSoto@startrek.website 15 points 3 days ago (4 children)

Gosh, are we dumb the world over. Maybe these chatbots are just lowering the threshold for what used to be the "I'm hearing voices or communicating with the supernatural" type of people. Thanks to a chatbot, you can now be certifiable much sooner.

[–] myfunnyaccountname@lemmy.zip 8 points 3 days ago (4 children)

I must be doing something wrong. I have not once used any LLM and thought to myself that it's conscious and I want to be its friend. Am I broken?

[–] Rooster326@programming.dev 7 points 3 days ago

You touched too much grass

[–] SparroHawc@lemmy.zip 3 points 3 days ago

Clearly you haven't been talking to enough blindingly stupid people.

[–] badbytes@lemmy.world 7 points 3 days ago

People also believe the earth is flat.

[–] hendrik@palaver.p3x.de 11 points 3 days ago* (last edited 3 days ago)

unlike anything prior

Anthropomorphism isn't anything new. Even pareidolia is something we do: we see faces and animals in cloud formations, rocks... That's just how our brains work, nothing new here.

[–] ramenshaman@lemmy.world 7 points 3 days ago (1 children)

Most people are pretty stupid.

[–] socsa@piefed.social 6 points 3 days ago

You can tell by the way they are
