this post was submitted on 15 Feb 2026
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
I suspect this will happen all over the place within a few years. AI seemed good enough at first, but over time reality and the AI started drifting apart.
AI is literally trained to get the right answer, not to actually perform the steps that lead to the answer. It's like the people who trained dogs to carry explosives and run under tanks: they thought it was going great until the first battle they used them in, when they realized the dogs would run under their own tanks instead of the enemy's, because those were the tanks they had been trained with.
Holy shit, that's what they get for being so evil that they trained dogs as suicide bombers.
It's not trained to get the right answer. It's trained to know what sequence of words tends to follow another sequence of words, and then a little noise is added to that function to make it a bit creative. So, if you ask it to make a legal document, it has been trained on millions of legal documents, so it knows exactly what sequences of words are likely. But, it has no concept of whether or not those words are "correct". It's basically making a movie prop legal document that will look really good on camera, but should never be taken into court.
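To make the "sequence of words plus a little noise" point concrete, here's a minimal sketch of next-token sampling with a temperature parameter (the words and scores below are made up, and no real model's API is being used). Nothing in it ever asks whether a word is correct, only how likely it is to come next.

```python
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Pick the next word from raw model scores, softened by temperature."""
    # Higher temperature flattens the distribution, so less likely words get
    # picked more often -- that's the "noise" that makes the output creative.
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # guard against floating-point rounding

# Hypothetical scores for words that might follow "the party of the first":
print(sample_next_word({"part": 3.1, "instance": 0.5, "banana": -2.0}))
```

The same loop happily picks "part" for a movie-prop contract and for a real one; correctness never enters into it.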
And then the very same CEOs who demanded the use of AI in decision-making will be the ones who blame it for the bad decisions.
while also blaming employees
Of course, it is the employees who used it. /s
Probably more this: the idea to use LLMs was good, but the employees just didn't know how to use them right. To say the LLMs are the problem is to admit being wrong, or worse, being gullible in the face of marketing material. The one thing drilled in as a first principle of business leadership is to never, ever look weak by being wrong or tricked.
What employees?
They haven't drifted apart; they were never close in the first place. People have grown more confident in the models because the models have sounded increasingly convincing, but the connection to reality has been tenuous all along.
Yeah it's not even drift. It's just smoke and mirrors that looks convincing if you don't know what you're talking about. It's why you see writers say "AI is great at coding, but not writing" and then you see coders say "AI is great at writing, but not coding."
If you have any idea what good looks like, you can immediately recognize AI ain't it.
For a fun example, at my company we had a POC done by a very well-known AI company. It was supposed to analyze an MS Project schedule, compare the tasks in that schedule to various data sources related to those tasks, and then flag potential schedule risks. In the demo to the COO, they showed the AI looking at a project schedule and saying, "Task XYZ could be at risk due to vendor quality issues or potential supply chain issues."
The COO was amazed: wow, it looked through all this data and came back with such great insight. Later I dug under the hood and found that it wasn't looking at any data behind the scenes at all. It was just answering the generic question "What could make a project task at risk?" and giving a hypothetical answer.
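As a hedged sketch of what that failure mode looks like in code (the function names and prompts here are hypothetical, not the vendor's actual implementation), the schedule goes in one end and is simply never read:

```python
def flag_schedule_risks(schedule_file, ask_llm):
    # The schedule is accepted but never opened or parsed.
    # Any plausible-sounding risk ("vendor quality issues", "supply chain
    # issues") then looks like insight pulled from the data.
    return ask_llm("What could make a project task be at risk?")

def flag_schedule_risks_grounded(schedule_file, load_tasks, ask_llm):
    # A grounded version has to put the real task data in front of the model.
    tasks = load_tasks(schedule_file)  # e.g. parse the MS Project export
    return ask_llm(
        f"Given these tasks and their vendor and supply data: {tasks}\n"
        "Which specific tasks are at risk, and why?"
    )
```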
Anyone using AI to make any sort of decision is basically doing the equivalent of Googling their issue and taking the top result as gospel. Yeah, that might work for a few basic things, but anything important that requires any thought whatsoever is going to fail spectacularly.
I've always thought of this as being just like Hollywood. If you have expertise in whatever field they're portraying an expert in, it's painful how far off they are, but it looks fine to everyone outside that field.
It wasn't even doing that. It was "looking" at its training data for what an analysis like that might look like, and then inventing a sequence of words that matched that training data. Maybe "vendor quality issues" is something that appears in the training data, so it's a good thing to put in its output.