this post was submitted on 16 Jun 2025
342 points (98.0% liked)

Fuck AI

3160 readers
364 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Via Xcancel)

[–] Gullible@sh.itjust.works 115 points 4 days ago* (last edited 4 days ago) (3 children)

Two hours to read a book? How long has it been since he touched a piece of adult physical literature?

[–] HenryBenry@piefed.social 37 points 4 days ago

ChatGPT, please tell me if Spot does indeed run.

[–] Wrufieotnak@feddit.org 9 points 4 days ago

And not THAT kind of adult literature.

[–] TheBat@lemmy.world 6 points 4 days ago (1 children)
[–] Almacca@aussie.zone 7 points 4 days ago

Welp, that's gonna fuck up my search algorithm for a while.

"Chuck Tingle". :D

[–] SpaceNoodle@lemmy.world 108 points 4 days ago (3 children)

2 minutes + 58 minutes = 2 hours

Bro must have asked the LLM to do the math for him

[–] Brainsploosh@lemmy.world 29 points 4 days ago

Might be that it takes them an hour to read the summary

[–] pulsewidth@lemmy.world 16 points 4 days ago (1 children)

The additional hour might be the time they have to work so that they can pay for the LLM access.

Because that is another aspect of what LLMs really are, another Silicon Valley rapid-scale venture capital money-pit service hoping that by the time they've dominated the market and spent trillions they can turn around and squeeze their users hard.

The only trouble with fighting this with logic is that the market they're attempting to wipe out is people's ability to assess data and think critically.

[–] PP_BOY_@lemmy.world 5 points 4 days ago

Indeed. Folks right now don't understand that their queries are being 99.9% subsidized by trillions in VC money hoping to dominate a market. Tech tale as old as time, and people are falling for it hook, line, and sinker.

[–] d00ery@lemmy.world 4 points 4 days ago

Impressed that he can think of the information he needs in 2 minutes - why even bother researching if you already know what you need ...

Seriously though, reading and understanding generally just leaves me with more, very relevant, questions and some answers.

[–] ech@lemm.ee 78 points 4 days ago* (last edited 4 days ago) (3 children)

Did they ask an LLM how LLMs work? Because that shit's fucking farcical. They're not "traversing" anything, bud. You get 17 different versions because each model is making that shit up on the fly.

[–] LeninOnAPrayer@lemm.ee 27 points 4 days ago* (last edited 4 days ago) (1 children)

Nah see they read thousands of pages in like an hour. That's why. They just don't need to anymore because they're so intelligent and do it the smart way with like models and shit to compress it into a half a page summary that is clearly just as useful.

Seriously, that's what they would say.

They don't actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that's as good as the software engineer that actually developed the LLMs.

Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn't understand that difference so they get away with it.

It's the equivalent of the dude that trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.

(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)

I really think some parts of our tech world need to be made LESS user friendly. Not more.

[–] Aceticon@lemmy.dbzer0.com 3 points 4 days ago

It's people at the peak of the Dunning-Kruger curve sharing their "wisdom" with the rest of us.

[–] Jesus_666@lemmy.world 9 points 4 days ago

There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by providing a summary) is actually something LLMs are better at than the conversational question answering that's getting all the hype these days.

Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it'd be done in a minute.

And there's still the hallucination issue, especially with everything then getting filtered through another LLM.

So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain...
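
To put rough numbers on the context-length point above, here is a back-of-envelope sketch in Python. Every figure in it (book length, tokens-per-word ratio, per-token price) is an illustrative assumption, not a quote from any real provider:

```python
# Rough back-of-envelope estimate of "send a whole book to 17 models".
# All numbers are illustrative assumptions, not real provider prices.

WORDS_PER_BOOK = 100_000          # a typical novel-length book (assumption)
TOKENS_PER_WORD = 1.3             # common rule-of-thumb ratio for English text (assumption)
MODELS_QUERIED = 17               # the "17 different versions" from the post
PRICE_PER_1K_INPUT_TOKENS = 0.01  # placeholder price in dollars (assumption)

input_tokens = WORDS_PER_BOOK * TOKENS_PER_WORD          # ~130k tokens per book, per model
total_tokens = input_tokens * MODELS_QUERIED             # ~2.2M tokens across all 17 models
cost = total_tokens / 1_000 * PRICE_PER_1K_INPUT_TOKENS  # ~$22 per question, under these assumptions

print(f"~{input_tokens:,.0f} tokens each, ~{total_tokens:,.0f} total, ~${cost:,.2f} per question")
```

Under any plausible pricing, each model needs a context window big enough for the whole book, and the bill scales linearly with the number of models queried.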

[–] some_guy@lemmy.sdf.org 64 points 4 days ago (3 children)

They think this is impressive.

I read books because I want knowledge and understanding. You get bite-sized bits of information. We are not the same.

[–] TwitchingCheese@lemmy.world 22 points 4 days ago (1 children)
[–] LogicalFallacy@lemm.ee 18 points 4 days ago

"hallucinations"

Orwell's Animal Farm is a novella about animal husbandry . . .

[–] brendansimms@lemmy.world 1 points 4 days ago

for a large portion of the population, "if it doesn't make money, then it is worthless" applies to EVERYTHING.

[–] karashta@fedia.io 37 points 4 days ago (1 children)

Imagine being proud of wasting the time drinking coffee instead of reading and understanding for yourself...

Then posting that you are proud of relying on hallucinating, made up slop.

Lmfao.

[–] TonyTonyChopper@mander.xyz 5 points 4 days ago

They also imply that 2+58 minutes is equal to 2 hours

I've seen this at work.

We installed a new water sampler and they sent an official installer to set up and commission the device. The guy couldn't answer a damn question about the product without ChatGPT. When I asked a relatively complex question that the bot couldn't answer (and that was only the third question), I decided I'd had enough and spent an hour reading the manual of the thing. Turns out the bot was making up the answers, and I learned how to commission the device without the "official support".

you read books and eat vegetables like a loser

my daddy lets me play nintendo 64 and eat cotton candy

we are not the same

[–] PP_BOY_@lemmy.world 45 points 4 days ago* (last edited 4 days ago)

This is the same "I'll do my own research, thanks" crowd btw

spoonfeed me harder Silicon Valley VC daddy

[–] kryptonianCodeMonkey@lemmy.world 24 points 4 days ago* (last edited 4 days ago) (1 children)

Imagine thinking "I outsource all of my thinking to machines, machines that are infamous for completely hallucinating information out of the aether or pulling from sources that are blatantly fabrications. And due to this veil of technology, this black box that just spits out data with no way to tell where it came from, and my unwillingness to put in my own research efforts to verify anything, I will never have any way to tell if the information is just completely wrong. And yet I will claim this to be my personal knowledge, regurgitate this information with full confidence and attach my personal name and reputation to its veracity regardless, and be subject to the consequences when someone with actual knowledge fact checks me," is a clever take. Imagine thinking that taking the easy way out, the lazy way, the manipulative way that gets others to do your work for you, is the virtuous path. Modern day Tom Sawyers, I swear. Sorry, AI bros, have an AI tell you who Tom Sawyer is so you can understand the insult.

[–] joyjoy@lemmy.zip 5 points 4 days ago

Obviously it's the fact checkers who are wrong /s

[–] RememberTheApollo_@lemmy.world 23 points 4 days ago (1 children)

“I used many words to ask the AI to tell me a story using unverified sources to give me the answer I want and have no desire to fact check.”

GIGO.

[–] stabby_cicada@slrpnk.net 2 points 3 days ago* (last edited 3 days ago) (1 children)

I mean, how many people fact check a book? Even at the most basic level of reading the citations, finding the sources the book cited, and making sure they say what the book claims they say?

In the vast majority of cases, when we read a book, we trust the editors to fact check.

AI has no editors and generates false statements all the time because it has no ability to tell true statements from false. Which is why letting an AI summarize sources, instead of reading those sources for yourself, introduces one very large procedurally generated point of failure.

But let's not pretend the average person fact checks anything. The average person decides who they trust and relies on their trust in that person or source rather than fact checking themselves.

Which is one of the many reasons why Trump won.

This is a two-part problem. The first is that LLMs are going to give you shoddy results riddled with errors. This is known. Would you pick up a book and take it as the truth if analysis of the author's work said 50% of their facts are wrong? The second part is that the asker has no intent to verify the LLM's output; they likely just want the output and to be done with it. No critical thinking required. The recipient is only interested in a copy-paste way of transferring info.

If someone takes the time to actually read and process a book with the intent of absorbing and adding to their knowledge, they mentally take the time to balance what they read against what they know, hopefully cross-referencing that information internally and gauging it with at least a "that sounds right", but ideally by reading more.

These are not the same thing. Books and LLMs are not the same. Anyone can read the exact same book and offer a critical analysis. Anyone asking an LLM a question might get an entirely different response depending on minor differences in asking.

Sure, you can copy-paste from a book, but if you haven’t read it, then yeah…that’s like copy-pasting an LLM response. No intent of learning, no critical thought, etc.

[–] ideonek@piefed.social 30 points 4 days ago* (last edited 4 days ago) (1 children)

Without the knowledge, you don't even know what precise information you need.

[–] shalafi@lemmy.world 3 points 4 days ago

When I started learning SQL Server, I was so ignorant I couldn't even search for what I needed.

[–] leraje@lemmy.blahaj.zone 25 points 4 days ago (1 children)

You're right OOP, we are not the same. I have the full context, processing time, an enjoyable reading experience, and a framework to understand the book in question and its wider relevance. You have a set of bullet points, a lot of which will be wrong anyway, that you won't be able to talk about when they come up on the mind-numbing men's rights/crypto podcast you no doubt have.

[–] supersquirrel@sopuli.xyz 5 points 4 days ago* (last edited 4 days ago)

spit-takes coffee all over keyboard

I just spent the last 57 minutes drinking that coffee, I was almost done too, thanks a lot.

[–] lowered_lifted@lemmy.blahaj.zone 22 points 4 days ago (1 children)

while you were studying books, he studied a cup of coffee. TBH I can spend an hour reading and drinking coffee at the same time; idk why it's got to be its own thing.

[–] nthavoc@lemmy.today 19 points 4 days ago

After all that long description, AI tells you eating rocks is ok.

[–] NigelFrobisher@aussie.zone 19 points 4 days ago

This is the most Butlerian Jihad thing I’ve ever read. They should replace whatever Terminator-lite slop Brian Herbert wrote with this screengrab and call it Dune Book Zero.

[–] supersquirrel@sopuli.xyz 24 points 4 days ago

2 mins? Sam Altman can spiritually ascend at least 10 divorced dads in that epoch of time.

This is business baby.

[–] lath@lemmy.world 14 points 4 days ago

"I ran this Convo through an LLM and it said i should fire and replace you with an LLM for increased productivity and efficiency.

Oh wait, hold on. I read that wrong, it said I should set you on fire...

Well, LLMs can't be wrong so.."

[–] groucho@lemmy.sdf.org 8 points 4 days ago

Maybe we don't need 30 remedial IQ points from a magic hallucination box?

[–] iAvicenna@lemmy.world 10 points 4 days ago

Oh no, not the reading! Good thing we had AI to create AI so we didn't have to depend on all those computer scientists and engineers whose only skill is to read stuff.

[–] rem26_art@fedia.io 13 points 4 days ago

bro needs 58 minutes to drink coffee

[–] natecox@programming.dev 13 points 4 days ago

They must just have missed training on the book about how many “r”s are in the word “strawberry”.

[–] Protoknuckles@lemmy.world 12 points 4 days ago

We're not the same. I learned something.

[–] Tartas1995@discuss.tchncs.de 4 points 4 days ago

I have read books in which certain words get redefined to be more precise and clear, making the communication less verbose. I don't think an AI summary will reliably introduce me to the definition on page 100 of a book that spent the previous 99 pages setting up the definitions required to understand it.

But I could be wrong.

[–] ZombiFrancis@sh.itjust.works 2 points 4 days ago

my overseer agent

Welp. That's all I need!

[–] phoenixz@lemmy.ca 2 points 4 days ago

Ignoring all the obvious problems with AI, this shows another issue as well.

Reading books is beautiful. It makes you disappear into a world, immerses you, makes your head fantasise about what that world looks like; you go on a long vacation.

You lose all that when you stop using your own brain and outsource all that beauty to a datacenter.

[–] hperrin@lemmy.ca 2 points 4 days ago

In other words: I don’t understand why someone would want to think when being lazy is available to them.
