this post was submitted on 15 Feb 2026
1514 points (99.5% liked)

Fuck AI

5920 readers
1862 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 

link to archived Reddit thread; original post removed/deleted

top 50 comments
[–] luciferofastora@feddit.org 46 points 3 days ago (1 children)

I'm a data analyst and the primary authority on the data model of a particular source system. Most questions about figures from that system that can't be answered directly and easily in the frontend end up with me.

I had a manager show me how some new LLM they were developing (which I had contributed some information about the model to) could quickly answer questions that I usually have to answer manually, as part of a pitch to make me switch to his department so I could apply my expertise to improving this fancy AI instead of answering questions by hand.

He entered a prompt and got a figure I knew wasn't correct, so I queried my data model for the same info and got a significantly different answer. Given how much said manager leaned on my expertise in the first place, he couldn't very well challenge my results, and he got all sheepish about how the AI was still in development and all.

I don't know how that model arrived at that figure. I don't know if it generated and ran a query against the data I'd provided. I don't know if it just invented the number. I don't know how the devs would figure out the error and how to fix it. But I do know how to explain my own queries, how to investigate errors and (usually) how to find a solution.

Anyone who relies on a random text generator - no matter how complex the generation method that makes it sound human - to generate facts is dangerously inept.

[–] jj4211@lemmy.world 15 points 3 days ago (2 children)

I don’t know how the devs would figure out the error and how to fix it.

This is like the biggest factor that people don't get when thinking of these models in the context of software. "Oh, it got it wrong, but the developers will fix it in an update." Nope. They can fix traditional software mistakes, but not LLM output and machine learning behavior. They can throw more training data at it (which sometimes just changes what it gets wrong) and hope for the best. They can do a better job of curating the context window to give the model the best shot at outputting the right stuff (e.g. the guy who got Opus to generate a slow, crappy, buggy compiler had to traditionally write a filter to find and show only the 'relevant' compiler output back to the model). They can have it generate code to do what you want and make you review the code and correct the issues. But debugging and fixing the model itself... that's just not a thing at all.
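To make the "curating the context window" part concrete: that kind of filter is plain, hand-written code sitting outside the model. A minimal sketch in Python - the function name, regex, and line budget here are invented for illustration, not the actual tooling from that compiler experiment:

```python
import re

# Hypothetical sketch: instead of dumping the full compiler log into the
# model's context window, keep only the lines that are likely relevant
# (errors, warnings, and a line of surrounding context), capped in size.
def filter_compiler_output(raw_output: str, max_lines: int = 40) -> str:
    lines = raw_output.splitlines()
    relevant = []
    for i, line in enumerate(lines):
        if re.search(r"error|warning|undefined reference", line, re.IGNORECASE):
            relevant.extend(lines[max(0, i - 1):i + 2])
    # de-duplicate while preserving order, then cap what goes back to the model
    seen, kept = set(), []
    for line in relevant:
        if line not in seen:
            seen.add(line)
            kept.append(line)
    return "\n".join(kept[:max_lines]) or "(no errors or warnings)"
```

That curation layer is ordinary, debuggable software; the model behind it is the part nobody can step into and fix.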

I was in a meeting where a sales executive was bragging about the 'AI sales agent' they were working on, while admitting frustration with the developers and being a bit confused why the software developers weren't making progress when those same developers had always made decent progress before - and they should be able to move even faster now because they have AI tools to help them... It eternally seemed to be in a state that almost worked but not quite, no matter what model or iteration they switched to or how much budget they allocated; when it came down to the specific facts and figures, it would always screw up.

I cannot understand how these executives can wade in the LLM pool for this long and still believe in capabilities beyond anything anyone has actually experienced.

load more comments (2 replies)
[–] pseudo@jlai.lu 51 points 3 days ago (3 children)

When you delegate - to a person, a tool, or a process - you check the result. You make sure the delegated tasks actually get done, and done correctly, and that the results are what you expected.

Finding out only months later, and by luck, that this wasn't the case shows incompetence. Look for the incompetent.

[–] Tja@programming.dev 7 points 3 days ago

100%

Hallucinations are widely known; this is a collective failure of the whole chain of leadership.

load more comments (2 replies)
[–] MuteDog@lemmy.world 20 points 2 days ago (1 children)

Apparently that reddit post itself was generated with AI. Using AI to bash AI is an interesting flex.

[–] Crozekiel@lemmy.zip 4 points 2 days ago

Have any evidence of that? The only thing I saw was commenters in that thread (who were obvious AI-bros) claiming it must be AI-generated because "it just wouldn't happen"...

[–] excral@feddit.org 247 points 4 days ago (14 children)

I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.

[–] pankuleczkapl@lemmy.dbzer0.com 47 points 4 days ago (3 children)

Thankfully, AI is bad at maths for exactly this reason. You don't have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.

[–] jj4211@lemmy.world 21 points 3 days ago

I've been through the cycle of the AI companies repeatedly saying "now it's perfect", only admitting it was complete trash when they release the next iteration and claim "yeah, it was broken, we admit, but now it's perfect" - so many times now...

Problem being, there's a massive marketing effort to gaslight everyone, so if I point it out in any vaguely significant context, I'm "just not keeping up" and have only dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. Of course, in my company they're all about the Anthropic models, so it's instead Opus 4.5 versus 4.6 now. Even proving the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an "oh, those are probably going to be fixed in 4.7 or 5 or whatever".

Outsiders are used to traditional software that has mistakes, but those are straightforward to address, so close-but-imperfect software can hit the mark in updates. That LLMs don't work that way doesn't make sense to them - they use the same version numbering scheme, after all, so expectations should be similar.

load more comments (2 replies)
[–] 0x0f@piefed.social 25 points 4 days ago* (last edited 4 days ago) (2 children)

My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.

[–] jj4211@lemmy.world 21 points 3 days ago (3 children)

The problem is that we've had a culture of people who don't know things very well control the purse strings relevant to those things.

So we have executives who don't know their work or their customers at all and just try to bullshit, while their people frantically try to repair the damage the executives do in order to preserve their own jobs. Then they see bullshit-generating platforms and recognize a kindred spirit, and set a goal of replacing those dumb employees with a more "executive"-like entity that can also generate reports and code directly. No talking back, no explaining that the request needs clarification or that the data doesn't support their decision, just a "yes, and..." result agreeing with whatever dumbass request they thought would be correct and simple.

Finally, no one talking back to them and making their life difficult and casting doubt on their competency. With the biggest billionaires telling them this is the right way to go, as long as they keep sending money their way.

load more comments (3 replies)
load more comments (1 replies)
load more comments (12 replies)
[–] db_null@lemmy.dbzer0.com 17 points 3 days ago (1 children)

I guarantee you this is how several, if not most, Fortune 500 companies currently operate. The 50k Dow is not just propped up by the circlejerk spending on imaginary RAM. There are bullshit reports being generated and presented every day.

I patiently wait. There is a diligent bureaucrat sitting somewhere going through fiscal reports line by line. It won't add up... receipts will be requested... bubble goes pop.

load more comments (1 replies)
[–] ICastFist@programming.dev 10 points 2 days ago

I-want-to-believe.jpg

[–] Jankatarch@lemmy.world 19 points 3 days ago (1 children)

Tbf, at this point the corporate economy is made up anyway, so as long as investors are gambling away their endless generational wealth, does it matter?

[–] wabasso@lemmy.ca 9 points 3 days ago (1 children)

This is how I’m starting to see it too. Stock market is just the gambling statistics of the ownership class. Line goes down and we’re supposed to pretend it’s harder to grow food and build houses all of a sudden.

[–] jj4211@lemmy.world 6 points 3 days ago

There's a difference. If I go and gamble away my life savings, then I'm on the street. If they gamble away their investments, the government will say 'poor thing' and give them money to keep the economy ok.

[–] mudkip@lemdro.id 21 points 3 days ago

Ah yes, what a surprise. The random word generator gave you random numbers that aren't actually real.

[–] Bubbaonthebeach@lemmy.ca 36 points 3 days ago (5 children)

To everyone I've talked to about AI, I've suggested a test. Take a subject you know you're an expert in. Then ask the AI questions you already know the answers to, and see what percentage it gets right, if any. Often people find that plausible-sounding answers are produced; however, if you know the subject, you know that what comes out isn't quite fact. A recovery time for an injury might be listed as 3 weeks when the average is really 6-8, or similar. Someone who did not already know the correct information could be harmed by the "guessed" response of AI. AI can have uses, but it needs to be heavily scrutinized before you pass on anything it generates. And if you're already good at something, that usually means using AI just wastes your time.
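A minimal sketch of that spot-check, assuming a hypothetical `ask_model` stand-in for whichever chatbot is being tested (the sample question and expected answer are illustrative only):

```python
# Hypothetical spot-check harness: questions you already know the answers to,
# scored by hand against whatever the model says.
def ask_model(question: str) -> str:
    raise NotImplementedError("plug in your chatbot / API call here")

known_facts = [
    ("Typical recovery time for a moderate ankle sprain?", "around 6-8 weeks"),
    # ...add questions from a field you genuinely know well
]

def spot_check() -> None:
    correct = 0
    for question, expected in known_facts:
        answer = ask_model(question)
        print(f"Q: {question}\nExpected: {expected}\nModel said: {answer}\n")
        # you judge correctness yourself - the whole point is that *you* are the expert
        if input("Did the model get it right? [y/N] ").strip().lower() == "y":
            correct += 1
    print(f"{correct}/{len(known_facts)} answered correctly")
```

The scoring is deliberately manual: the test only works because the human already knows the ground truth.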

[–] NABDad@lemmy.world 15 points 3 days ago

I had a very simple script. All it does is trigger an action on a monthly schedule.

I passed the script to Copilot to review.

It caught some typos. It also said the logic of the script was flawed and it wouldn't work as intended.

I didn't need it to check the logic of the script. I knew the logic was sound because it was a port of a script I was already using. I asked because I was curious about what it would say.

After restating the prompt several times, I was able to get it to confirm that the logic was not flawed, but the process did not inspire any confidence in Copilot's abilities.
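For scale, a monthly-trigger script of the kind described is usually only a handful of lines. A hypothetical reconstruction (the scheduled action and path are invented), just to show how little logic there is for a reviewer to get wrong:

```python
import datetime
import subprocess

# Hypothetical sketch of a "trigger an action on a monthly schedule" script:
# a daily cron entry runs this, and the action itself only fires on the 1st.
def run_monthly_action() -> None:
    today = datetime.date.today()
    if today.day == 1:
        subprocess.run(["/usr/local/bin/monthly-report.sh"], check=True)
    else:
        print(f"{today}: not the first of the month, nothing to do")

if __name__ == "__main__":
    run_monthly_action()
```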

load more comments (4 replies)
[–] AllNewTypeFace@leminal.space 32 points 3 days ago (3 children)

My broseph in Christ, what did you think an LLM was?

[–] GalacticSushi@lemmy.blahaj.zone 23 points 3 days ago (1 children)

Bro, just give us a few trillion dollars, bro. I swear, bro. It'll be AGI this time next year, bro. We're so close, bro. I just need some money, bro. Some money and some god-damned faith, bro.

load more comments (1 replies)
load more comments (2 replies)
[–] sp3ctr4l@lemmy.dbzer0.com 54 points 3 days ago* (last edited 3 days ago)

As an unemployed data analyst / econometrician:

lol, rofl, perhaps even... lmao.

Nah though, it's really fine. My quality of life is enormously superior barely surviving off of SSDI and not having to explain data analytics to thumb-sucking morons (VPs, 90% of other team leads), or fix and cover for all their mistakes.

Yeah, sure, just have the AI do it, go nuts.

I am enjoying my unexpected early retirement.

[–] stoy@lemmy.zip 92 points 4 days ago (3 children)

I suspect this will happen all over within a few years: AI was good enough at first, but over time reality and the AI started drifting apart.

[–] Kirp123@lemmy.world 89 points 4 days ago (2 children)

AI is literally trained to get the right answer without actually performing the steps to get to that answer. It's like the people who trained dogs to carry explosives and run under tanks: they thought they were doing great, until the first battle they used them in, when they realized the dogs would run under their own tanks instead of the enemy's - because that's what they had been trained with.

[–] wonderingwanderer@sopuli.xyz 38 points 4 days ago

Holy shit, that's what they get for being so evil that they trained dogs as suicide bombers.

load more comments (1 replies)
[–] Spezi@feddit.org 38 points 4 days ago (1 children)

And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.

[–] whyNotSquirrel@sh.itjust.works 35 points 4 days ago (3 children)

while also blaming employees

load more comments (3 replies)
[–] jj4211@lemmy.world 27 points 4 days ago (3 children)

They haven't drifted apart; they were never close in the first place. People have grown more confident in the models because they've sounded increasingly convincing, but the connection to reality has been tenuous all along.

load more comments (3 replies)
[–] tover153@lemmy.world 48 points 3 days ago (3 children)

Before anything else: whether the specific story in the linked post is literally true doesn’t actually matter. The following observation about AI holds either way. If this example were wrong, ten others just like it would still make the same point.

What keeps jumping out at me in these AI threads is how consistently the conversation skips over the real constraint.

We keep hearing that AI will “increase productivity” or “accelerate thinking.” But in most large organizations, thinking is not the scarce resource. Permission to think is. Demand for thought is. The bottleneck was never how fast someone could draft an email or summarize a document. It was whether anyone actually wanted a careful answer in the first place.

A lot of companies mistook faster output for more value. They ran a pilot, saw emails go out quicker, reports get longer, slide decks look more polished, and assumed that meant something important had been solved. But scaling speed only helps if the organization needs more thinking. Most don’t. They already operate at the minimum level of reflection they’re willing to tolerate.

So what AI mostly does in practice is amplify performative cognition. It makes things look smarter without requiring anyone to be smarter. You get confident prose, plausible explanations, and lots of words where a short “yes,” “no,” or “we don’t know yet” would have been more honest and cheaper.

That’s why so many deployments feel disappointing once the novelty wears off. The technology didn’t fail. The assumption did. If an institution doesn’t value judgment, uncertainty, or dissent, no amount of machine assistance will conjure those qualities into existence. You can’t automate curiosity into a system that actively suppresses it.

Which leaves us with a technology in search of a problem that isn’t already constrained elsewhere. It’s very good at accelerating surfaces. It’s much less effective at deepening decisions, because depth was never in demand.

If you’re interested, I write more about this here: https://tover153.substack.com/

Not selling anything. Just thinking out loud, slowly, while that’s still allowed.

load more comments (3 replies)
[–] Tattorack@lemmy.world 11 points 3 days ago

Leopard meets face.

[–] cronenthal@discuss.tchncs.de 80 points 4 days ago (10 children)

I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.

[–] rozodru@piefed.world 30 points 4 days ago (3 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to present a positive "right" solution without taking the steps required to actually reach a "right" solution, so the majority of the time that solution is going to be a hallucination.

You see it all the time. You can ask it something tech related, and in order to get to that positive right solution it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do. Because logically, to the LLM, this is the positive right solution - WITHOUT taking any steps to confirm that the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate answer, it skipped those steps and decided to provide a positive solution.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all built these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.

load more comments (3 replies)
load more comments (8 replies)
[–] Strider@lemmy.world 36 points 3 days ago

It doesn't matter. Management wants this and will not stop until they run against a wall at full speed. 🤷

[–] Decq@lemmy.world 15 points 3 days ago (1 children)

Surely this is just fraud, right? Seeing as they have a board of directors, they probably have shareholders too? I feel they should at least all get fired, if not prosecuted. This lack of competence seems downright criminal to me.

[–] Jankatarch@lemmy.world 14 points 3 days ago (2 children)

Are you suggesting we hold people responsible?

load more comments (2 replies)
[–] untorquer@lemmy.world 45 points 4 days ago (3 children)

This would suggest the leadership positions aren't required for the function of the business.

[–] PapaStevesy@lemmy.world 23 points 4 days ago

This has always been the case, in every industry.

load more comments (2 replies)
[–] sundray@lemmus.org 66 points 4 days ago
[–] sukhmel@programming.dev 54 points 4 days ago (2 children)

Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all

[–] ivanafterall@lemmy.world 35 points 4 days ago (6 children)

I feel like no analytics is probably better than decisions based on made-up analytics.

load more comments (6 replies)
[–] PhoenixDog@lemmy.world 27 points 4 days ago

I don't need AI to fabricate data. I can be stupid on my own, thank you.

[–] CaptPretentious@lemmy.world 26 points 3 days ago (1 children)

At my workplace, senior management is going all in on Copilot. So much so that at the end of last year they told us to use Copilot for year-end reviews! They even provided a prompt to use and told us to link it to Outlook (not sure why, since our email retention isn't very long)... but whatever.

I tried it, out of curiosity, because I had no faith. It started printing out stats for things that never happened: a 35% increase here, a 20% decrease there, blah blah blah. It didn't actually highlight anything I do or did. And I'm banking on a human at least partially reading my review, not just running it through AI.

If someone read it, I'm good. If AI reads it, I do wonder if I screwed myself. Since senior mgmt is just offloading to AI...

load more comments (1 replies)
[–] Lemminary@lemmy.world 20 points 3 days ago* (last edited 3 days ago) (1 children)

Our AI that monitors customer interactions sometimes makes up shit that didn't happen during the call. Any agent smart enough could probably fool it into giving the wrong summary with the right key words. I only caught on when I started reading the logs carefully, but I don't know if management cares so long as the business client is happy.

load more comments (1 replies)
[–] FlashMobOfOne@lemmy.world 36 points 4 days ago (3 children)

Jesus Christ, you have to have a human validate the data.

[–] 474D@lemmy.world 32 points 4 days ago (5 children)

Exactly. This is like letting Excel auto-fill finish the spreadsheet and going "looks about right".

load more comments (5 replies)
load more comments (2 replies)
[–] wonderingwanderer@sopuli.xyz 35 points 4 days ago

Dumbasses. Mmm, that's good schadenfreude.

[–] titanicx@lemmy.zip 13 points 3 days ago

I fucking love this. It's amazing.

[–] nonentity@sh.itjust.works 18 points 3 days ago

The output from tools infected with LLMs can intrinsically only ever be imprecise, and should never be trusted.
