this post was submitted on 25 Dec 2025
601 points (98.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

cross-posted from: https://fed.dyne.org/post/822710

Salesforce has entered a phase of public reckoning after senior executives publicly admitted that the company overestimated AI's readiness.

[–] ATPA9@feddit.org 16 points 1 day ago (2 children)

Is it a good solution if you have to work hard to find a problem for it to solve?

[–] BradleyUffner@lemmy.world 4 points 1 day ago

I'd say "yes" because it means you're pushing behind the current limits. It becomes bad when you have the manufacture a problem for it to solve.

[–] MimicJar@lemmy.world 3 points 1 day ago (1 children)

Maybe. If a task takes 8 hours, and you have to do that task weekly, how much time should you invest to make that task take less time? What if it's once a month instead? What if it's once a year? What if you can reduce it by an hour? What if you can eliminate the work completely?

Ignoring AI for a moment, you could probably find someone who could estimate, using current tools, an answer to the questions above. If you invest 20 hours to eliminate an 8-hour weekly task, that quickly pays for itself. If you invest 200 hours to reduce an 8-hour yearly task to 4 hours, that will likely never pay for itself before the requirements change.
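As a rough sketch, here is that break-even arithmetic written out, using the hypothetical numbers from the examples above:

```python
def weeks_to_break_even(invested_hours: float, saved_hours_per_week: float) -> float:
    """Weeks until the time invested in speeding up a task is paid back."""
    return invested_hours / saved_hours_per_week

# 20 hours invested to eliminate an 8-hour weekly task:
print(weeks_to_break_even(20, 8))        # 2.5 weeks -- pays for itself quickly

# 200 hours invested to cut an 8-hour yearly task down to 4 hours
# (a saving of 4 hours per year, i.e. 4/52 hours per week):
print(weeks_to_break_even(200, 4 / 52))  # 2600 weeks (~50 years) -- likely never
```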

But AI is a new tool. With any new tool, you have to figure it out before you can make a good estimate of what it's worth. Even worse, AI isn't being estimated at all; it's just being thrown at existing tasks.

Now, IF AI were truly magical and delivered amazing task reduction, then just throwing it at things wouldn't be a terrible idea. IF AI can immediately improve 10 things, even if it fails at a few others, it might be worth it.

AI also has a shitton of money riding on it, so the first entity to figure out how to make money with it wins big.

[–] ZDL@lazysoci.al 10 points 1 day ago (1 children)

> But AI is a new tool. With any new tool, you have to figure it out before you can make a good estimate of what it's worth.

Literally every successful new tool in history was made because there was a problem the tool was meant to solve. (Don't get me wrong, a lot of unsuccessful tools started the same way. They were just ineptly made, or leap-frogged by better tools.) This is such an ingrained pattern that "a solution in search of a problem" is a disparaging way to talk about things that have no visible use.

LLMs are very much a solution in search of a problem. The only "problem" the people who made them and pitched them had was "there's still some money out there that's not in my pocket". They were made in an attempt to get ALL TEH MONEEZ!, not to solve an actual problem that was identified.

Every piece of justification for LLMs at this point is just bafflegab and wishful thinking.

[–] MimicJar@lemmy.world 2 points 23 hours ago

I completely agree that LLMs are a solution in search of a problem. I'm just trying to explain why someone might look at them and think they're worth something.

The biggest reason really is just that a bunch of money is involved. The first entity to find a way to make money with it is going to make a killing. The problem, of course, is that that day may never come.