this post was submitted on 03 Aug 2025
410 points (86.7% liked)

Fuck AI

4258 readers
1050 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
Source (Bluesky)

[–] kartoffelsaft@programming.dev 105 points 2 months ago (12 children)

I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

But also:

Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." is what they hear. And at that moment you've cut that person off from ever actually considering your opinion again. Even if you're right, that's not healthy.

[–] BigDiction@lemmy.world 24 points 2 months ago

I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.

[–] azertyfun@sh.itjust.works 11 points 2 months ago (2 children)

You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers: people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water-use thing, which is closer to myth than fact, or discussions on energy usage in general.

Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can't properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.

This makes me quite uncomfortable because that's the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can't or won't say explicitly isn't tech bros but immigrants and queer people.

[–] kopasz7@sh.itjust.works 82 points 2 months ago (2 children)

My issues with gen AI are fundamentally twofold:

  1. Who owns and controls it (billionaires and entrenched corporations)

  2. How it is shoehorned into everything (decision making processes, human-to-human communication, my coffee machine)

I cannot wait until the check is finally due and the AI bubble pops, folding these digital snake oil sellers' house of cards.

[–] BlameTheAntifa@lemmy.world 20 points 2 months ago* (last edited 2 months ago) (2 children)

When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse. The theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness: so many ethical and technical problems.

I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, and possibly both.

[–] iAmTheTot@sh.itjust.works 11 points 2 months ago (13 children)

You really take no issue with how they were all trained?

[–] storm@lemmy.blahaj.zone 6 points 2 months ago (3 children)

Not OP, but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It's very frustrating to see so many people take issue with AI as "theft", as if intellectual property were something that we should support and defend instead of being the actual tool for stealing artists' work ("property is theft" and all that). And obviously data centers are not built to be environmentally sustainable (not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.

[–] ArbitraryValue@sh.itjust.works 47 points 2 months ago (32 children)

Are people expected not to follow anyone they disagree with?

[–] De_Narm@lemmy.world 39 points 2 months ago

Reading other opinions? On my echo chamber platform of choice?! /s

[–] kibiz0r@midwest.social 18 points 2 months ago (1 children)

Follow to expose yourself to different perspectives? Sure.

But it sounds like the users in question are following with the intent to reply “you’re wrong” to everything the OP puts out.

Which… I do, sadly, expect. But I wouldn’t wish for it.

[–] Limonene@lemmy.world 30 points 2 months ago (4 children)

Generative AI and their outputs are derived products of their training data. I mean this ethically, not legally; I'm not a copyright lawyer.

Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It's equivalent to pirating a movie to watch at home.

But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you're failing to attribute the artists whose work was used, without permission, to train the AI. If you generate code, that code infringes the numerous open source licenses of the training data by failing to attribute it.

Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.

[–] Atlas_@lemmy.world 18 points 2 months ago (3 children)

Do y'all hate chess engines?

If yes, cool.

If no, I think you hate tech companies more than you hate AI specifically.

[–] theunknownmuncher@lemmy.world 19 points 2 months ago (4 children)

Yup, as always, none of these problems are inherent to AI itself, they're all problems with capitalism.

[–] Randomgal@lemmy.ca 6 points 2 months ago (3 children)

I had to check to make sure I was in the right app. Rational discussion on my Lemmy? No way.

But yes. The machine can't take responsibility for shit. You hate the people and what they are doing to you. If AI didn't exist, they'd do it some other way.

[–] princessnorah@lemmy.blahaj.zone 17 points 2 months ago* (last edited 2 months ago) (8 children)

The post is pretty clearly about genAI; I think you're just choosing to ignore that part. There's plenty of really awesome machine learning technology that helps with disabilities, doesn't rip off artists, and isn't environmentally deleterious.

[–] vivalapivo@lemmy.today 17 points 2 months ago

First of all, intellectual property rights do not protect the author. I'm the author of a few papers and a book, and I do not have intellectual property rights on any of them; like most authors, I had to give them to the publishing house.

Secondly, your personal carbon footprint is bullshit.

Thirdly, everyone in the picture is an asshole.

[–] dandelion@lemmy.blahaj.zone 16 points 2 months ago (4 children)

I do use AI (mostly like Google), but I don't think it's justified or OK, lol - I'm the problem, and I know it.

[–] AquaTofana@lemmy.world 6 points 2 months ago (1 children)

I've said it before and I'll say it again, one of my favorite things is the AI rp chatbots. They're stories written by me and an AI, for me, however the fuck I want to write them.

I used to do it with other people over the web, including my bestie, who I've been writing with for 20+ years now, but I don't write with other humans anymore.

AI solves the ghosting issue, the "life got in the way" issues, the "I'm just not into it anymore" issues, and the "Oh you wanna make this smutty please for the love of god I hope you're not lying about being 26" issue, and finally, the biggest issue for me: "Please I told you I'm happily married please stop asking for my socials or email. I just wanna write fun angsty romance stories with you."

So I'm with you. I'm also the problem, it's me. But you know what? When I discovered these AI chatbots in February of this year, my doomscrolling was cut down to a third of what it was, and all of a sudden I was sleeping better and less angry.

I'm not gonna stop.

[–] chunkystyles@sopuli.xyz 6 points 2 months ago

I use it to help me solve tech and code issues, but only because searching the web for help has become so bad. LLM answers are almost always better, and I hate it.

Everything is bullshit. Everything sucks. Capitalism has ruined everything.

[–] ruuster13@lemmy.zip 16 points 2 months ago

AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.

[–] theunknownmuncher@lemmy.world 15 points 2 months ago* (last edited 2 months ago) (3 children)

> the fact that it is theft

There are LLMs trained using fully open datasets that do not contain proprietary material... (CommonCorpus dataset, OLMo)

> the fact that it is environmentally harmful

There are LLMs trained with minimal power (typically the same ones as above, as these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave...

> the fact that it cuts back on critical, active thought

This is a use-case problem. LLMs aren't suitable for critical thinking or decision-making tasks, so if it's cutting back on your "critical, active thought", you're just using it wrong anyway...

The OOP genuinely doesn't know what they're talking about and is just reacting to sensationalized rage bait on the internet lmao

[–] csh83669@programming.dev 16 points 2 months ago (7 children)

Saying it uses less power than a toaster is not saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that’s sort of a requirement for toast. That’s still a LOT of electricity. And it’s not required. People don’t need to burn down a rainforest to summarize a meeting. Just use your earballs.

[–] hpx9140@fedia.io 9 points 2 months ago (1 children)

You're implying the edge cases you presented are the majority being used?

[–] theunknownmuncher@lemmy.world 11 points 2 months ago* (last edited 2 months ago) (2 children)

No, and that's irrelevant. Their post is explicitly not about the majority, but about exceptions/edge cases.

I am responding to what they posted (I even quoted them), showing that the position that "there is no ethical use for generative AI" and that there are no exceptions is provably false.

I didn't think it needed to be said because it's not relevant to this discussion, but: the majority of AI sucks on all fronts. It's bad for intellectual property, it's bad for the environment, it's bad for privacy, it's bad for people's brains, and it's bad at what it's used for.

All of these problems are not inherent to AI itself, and instead are problems with the massive short-term-profit-seeking corporations flush with unimaginable amounts of investor cash (read: unimaginable expectations and promises that they can't meet) that control the majority of AI. Once again capitalism is the real culprit, and fools like the OOP will do these strawman mental gymnastics and spread misinformation to defend capitalism at all costs.

[–] hpx9140@fedia.io 5 points 2 months ago (1 children)

I can get behind this clarification, so thanks for that.

I'm a realist. To that end, relevance is assigned less on the basis of pedantic deconstruction of a single post and more on the practical reality of what is unfolding around us. Are there ethical applications for generative AI? Possibly. Will they become the standard? Unlikely, given incumbent power structures that are defining and dictating long-term use.

As with most things stitched into the human experience, gaming human psychology/behavioral mechanics is key to trendsetting. What the majority accepts is what reality re-acclimates to. At the moment, that appears to be mass adoption of unethical AI systems.

I don't disagree that these problems aren't inherent to AI. But that sentiment has the same flavour as the "guns don't kill people" line ammosexuals like to bust out when confronted.

Either way, it's clear you have a good read on what needs to happen to get all this to a better place. Hope you keep fighting to make that happen.

[–] theunknownmuncher@lemmy.world 5 points 2 months ago (1 children)

Yeah, agreed. But that's not what the OOP is saying in their post, and their attitude and language make me believe they're purposely being wrong and outrageous for attention/trolling.

[–] kibiz0r@midwest.social 13 points 2 months ago* (last edited 2 months ago) (5 children)

It’s so surreal when someone posts a meme about That Guy™ doing That Thing™ and then all of a sudden That Guy™ shows up in the comments, doing That Thing™

Like, can I get your autograph? You’re famous, bro!

[–] anarchiddy@lemmy.dbzer0.com 11 points 2 months ago

I sure am glad that we learned our lesson from the marketing campaigns in the 90's that pushed consumers to recycle their plastic single-use products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.

Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.

[–] khaleer@sopuli.xyz 9 points 2 months ago* (last edited 2 months ago)

I would not want to get close to a bike repaired by someone who is using AI to do it. Like what the fuck xd I am not surprised he is unable to make code work then xddd

[–] axEl7fB5@lemmy.cafe 8 points 2 months ago (2 children)

Do people who self-host count? Like ollama? It's not like my PC is going to drain a lake.

[–] Auth@lemmy.world 7 points 2 months ago

To that person, yeah self hosting still counts.

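For anyone wondering what "self-hosting" with Ollama actually looks like, here's a minimal sketch against Ollama's local HTTP API. It assumes an Ollama server is running on the default port 11434 and that you've already pulled a model; the model name `llama3.2` is just an illustrative placeholder.

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint (non-streaming)."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With a local Ollama server running, you would send it like this:
# req = build_generate_request("llama3.2", "Why is the sky blue?")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Everything stays on your own machine: no cloud API, no per-token billing, and the only power draw is your own GPU/CPU.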
[–] BradleyUffner@lemmy.world 6 points 2 months ago (2 children)

The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, not used by or available to the public.

[–] OmegaLemmy@discuss.online 5 points 2 months ago

This is extreme
