this post was submitted on 03 Aug 2025
410 points (86.7% liked)
Fuck AI
4289 readers
1238 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.
But also:
Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.
Consider someone who has had some small but valued use of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." is what they hear. And at that moment you've cut that person off from ever seriously considering your opinion again. Even if you're right, that's not healthy.
I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.
You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers, people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water use thing which is closer to myth than fact, or discussions on energy usage in general.
Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can't properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.
This makes me quite uncomfortable because that's the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can't or won't say explicitly isn't tech bros but immigrants and queer people.
The people who hate immigrants and queer people are AI's biggest defenders. It's really no wonder that people who hate life also love the machine that replaces it.
A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater's mind. Thank you for the demonstration.
Thanks for putting a name on that! That's actually one of the few useful purposes I've found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist; the knowledge of it is out there, but you simply don't know the term to search for. IMO this is one of the killer features of LLMs, and it works well because whatever the LLM outputs is simply and instantly verifiable: you describe the characteristics of the thing to the LLM and ask what has those characteristics, and once you have a candidate name, you look it up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing, and I've found LLMs very useful as a reverse dictionary.
Using chatGPT to recall the word 'verisimilar' is an absurd waste of time, energy, and in no way justifies the use of AI.
90% of LLM/GPT use is a waste, or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.
Where is your source? It sounds unbelievable
Source is the commercial and academic uses I've personally seen as an academic-adjacent professional that's had to deal with this sort of stuff at my job.
What data did you see on the volume of requests to LLM vs. non-LLM models and how they relate to utility? I can't figure out what profession has access to this kind of statistic. It would be very useful to know, thx.
I think you've misunderstood what I was saying. I don't have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AIs, and I see what kind of AI they're using, for what purposes, and how well it works or doesn't.
Generally, LLM-based stuff only returns 'useful' results for language-based statistical analysis, which classical NLP handles better, faster, and vastly cheaper. For the rest, they really don't seem to be returning useful results; I typically see a LOT of frustration.
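For concreteness (my own toy sketch, not anything from the commenter's workplace): this is the sort of language-frequency statistic that classical, non-LLM tooling computes in microseconds with nothing but the Python standard library, no GPU or model weights required.

```python
import re
from collections import Counter

def top_terms(text: str, n: int = 3) -> list[tuple[str, int]]:
    """Return the n most common word tokens in the text.

    A toy stand-in for 'language-based statistical analysis':
    tokenize with a regex, count with a hash map. Cheap and exact.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(n)

sample = "the model answered the question but the answer was wrong"
print(top_terms(sample))  # 'the' dominates, as stopwords usually do
```

The point of the toy: for counting, matching, and similar deterministic text statistics, the classical approach is both verifiable and essentially free, which is the comparison being drawn against LLM inference costs.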
I'm not about to give any information that could doxx myself, but the reason I see so much of this is because I'm professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P
Ah ok, that's too bad. Supercomputers typically don't have tensor cores, though, and most LLM use is presumably client use of pre-trained models, which desktop or mobile CPUs can manage now, so it will be impossible to know then.
yyyyes they do have tensor cores? Where did you get such an absurd idea from?
Supercomputers to me refer to the mainframe room-sized monstrosities that were focused on stuff like calculating ballistic trajectories in the Cold War.
These days, they're usually racks and racks of specialized rackmount servers with all kinds of hardware stuffed inside (hilarious amounts of RAM, networked storage, tensor cores, etc.), all networked together via fiber optics to run in parallel as one big computer with many CPUs.