this post was submitted on 13 May 2025
453 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
No the fuck it's not
I'm a pretty big proponent of FOSS AI, but none of the models I've ever used are good enough to work without a human treating them like a tool for automating small tasks. In my workflow there is no difference between LLMs and fucking grep. People who think AI codes well are shit at their job.
Well, grep doesn't hallucinate things that are not actually in the logs I'm grepping, so I think I'll stick to grep.
(Or ripgrep, rather)
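The determinism being pointed at here can be made concrete. A minimal grep-like search, sketched in Python (the function name and sample log are just for illustration), only ever returns lines that literally exist in its input:

```python
import re

def grep(pattern: str, text: str) -> list[str]:
    """Return every line of `text` matching `pattern`, verbatim.

    Unlike a generative model, this can never produce a line
    that isn't actually present in the input.
    """
    regex = re.compile(pattern)
    return [line for line in text.splitlines() if regex.search(line)]

log = "INFO boot ok\nERROR disk full\nINFO shutdown"
print(grep(r"ERROR", log))  # only lines copied straight from the log
```

Every result is a substring of the source by construction, which is exactly the guarantee an LLM summarizing your logs cannot give you.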
Hallucinations become almost a non-issue when working with newer models, custom inference, multi-shot prompting, and RAG
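For what the comment above is claiming, a toy sketch of the two named techniques (all names, documents, and the bag-of-words similarity are illustrative assumptions; real RAG systems use learned embeddings, not word counts):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy bag-of-words vector; stands in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # RAG's retrieval step: pick the k documents most similar to the query.
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

docs = [
    "grep searches plain text for lines matching a regular expression",
    "RAG grounds a model's answer in retrieved documents",
]
context = retrieve("what does grep do", docs)[0]

# Multi-shot prompting: prepend worked examples plus the retrieved
# context, so the model's answer is anchored to real source text.
prompt = (
    "Q: what is ripgrep? A: a fast grep alternative\n"  # shot 1 (example)
    f"Context: {context}\n"
    "Q: what does grep do? A:"
)
print(prompt)
```

The point of contention in the replies stands either way: retrieval constrains what the model sees, but nothing in this pipeline prevents the model from generating text unsupported by the context.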
But the models themselves fundamentally can't write good, new code, even if they're perfectly factual
@vivendi @V0ldek * hallucinations are a fundamental trait of LLM tech; they're not going anywhere
God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.