this post was submitted on 15 Feb 2026
1516 points (99.5% liked)

Fuck AI

5920 readers
2862 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

[–] jj4211@lemmy.world 5 points 3 days ago (1 children)

The problem is that whoever is checking the result in this case had to do the work anyway, and in that case... why bother with an LLM that can't be trusted to pull the data in the first place?

I suppose they could take the facts and figures that a human pulled and have an LLM verbose it up for people who, for whatever reason, want needlessly verbose BS. Or maybe an LLM could review the human-generated report to help spot awkward writing or inconsistencies. But delegating work to an LLM when you have to redo that same work just to double-check it seems pointless.

[–] pseudo@jlai.lu 1 points 3 days ago

Like someone here said, "trust is also a thing". Once you've checked a few times that the process is right and the results are right, you only need to spot-check afterwards. Unfortunately, that's not what happened in this story.