this post was submitted on 07 Mar 2026
93 points (98.9% liked)
Hacker News
Quite apart from all other considerations, my problem with all this is: LLMs are no longer tools assisting us. We are tools assisting them. I don't want to spend my life as an "LLM output checker".
How long is that even going to work, assuming you can find people willing to do it? Right now it occasionally does, but at what point will the pool of people with the skills required have shrunk, and their abilities degraded, to the point where everything devolves into the blind leading the blind? LLMs have been trained on our code. Now we're being trained on theirs, and it's not going to end well.
It's possible you've read this already, but if not, Cory Doctorow wrote a great piece about this. Some excerpts from it fit really well with what you said:
No, I'd missed that one, so thank you very much for the link. It was, as is typical of Doctorow's musings, a very good read, which I can wholeheartedly recommend to anyone else who's interested.
---
A fudgin dark bloody horrid, indeed...
Yeah. I'm done. If anybody needs me, I'll be over here writing open source code in my spare time without outsourcing my cognitive capability. I guess I'll seek out a new career in public sanitation to pay the rent. At least that way, I'll know I'm making an unambiguously positive contribution to society.