this post was submitted on 21 Mar 2026
Fuck AI
Sure, but the thing is that bad human code also looks plausible and correct if you're not taking the time to carefully analyze it. Bad code can be something as small as a missed comma. It can be writing the correct statements, but putting them in the wrong order. It can be an incorrect indent. It can be 100% correct code that doesn't work because your project is using an older or newer version of a library.
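To make "correct statements in the wrong order" concrete, here's a hypothetical illustration (the function names and numbers are invented, not from any real codebase): two versions of the same billing calculation contain the same two statements, and both read as plausible at a glance.

```python
def balance_correct(principal, rate, fee):
    # Apply interest first, then deduct the flat fee
    balance = principal * (1 + rate)
    balance -= fee
    return balance

def balance_buggy(principal, rate, fee):
    # The same two operations, swapped: now the fee accrues
    # interest too, and nothing about the code *looks* wrong
    balance = principal - fee
    balance *= (1 + rate)
    return balance
```

With `principal=100`, `rate=0.5`, `fee=5`, the first returns 145.0 and the second 142.5. Neither version raises an error or fails to run, so only careful review (or a test) catches the difference.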
The problem with LLMs, a lot of the time, is that they introduce the necessity of a review step - the ubiquitous "always double check the output" - that is so time-consuming, or so thoroughly invalidates the need for the original output, that you might as well skip the LLM and go straight to the double-checking stage. If you're asking an LLM about Brazilian visa policies but can't trust the answer until you've checked it against the Brazilian government website, then you should just check the website and not bother asking the LLM.
But with coding, the review stage is already baked in. All code, human or machine, requires careful review. And all bad code can look like good code if you don't know what you're looking for (and a lot of the time it looks like good code even if you do know what you're looking for. That's why a second set of eyes is so important). So as long as the LLM isn't producing significantly more issues than a human coder would, there's no real downside.
There are still dangers to be aware of, of course. But it's a very different scenario from, say, dispensing medical advice.
There's also the question of how LLMs are used in a project. There's a big difference between firing up Claude and saying "Write a program that will make Windows games run on Linux" and saying "Write a function that checks if an instance of BattleNet is already running." Both the scope and the completeness of your prompt matter. If you are an experienced coder, you will know what information to supply so the LLM can correctly construct the output. If you don't supply it, the LLM will just fill in the blanks.
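As a rough sketch of what that second, narrow prompt is asking for (the process name "Battle.net.exe", the function signature, and the injectable `snapshot` parameter are all my assumptions, not output from an actual Claude session):

```python
import subprocess
import sys

def is_process_running(target, snapshot=None):
    """Return True if a process named `target` appears to be running.

    If no snapshot of process names is supplied, take one with the
    platform's process lister: `tasklist` on Windows, `ps` elsewhere.
    """
    if snapshot is None:
        if sys.platform == "win32":
            out = subprocess.run(
                ["tasklist", "/fo", "csv", "/nh"],
                capture_output=True, text=True,
            ).stdout
            # First CSV field of each row is the image name
            snapshot = [
                line.split('","')[0].strip('"')
                for line in out.splitlines() if line
            ]
        else:
            out = subprocess.run(
                ["ps", "-eo", "comm="],
                capture_output=True, text=True,
            ).stdout
            snapshot = out.splitlines()
    target = target.lower()
    return any(name.strip().lower() == target for name in snapshot)

# e.g. is_process_running("Battle.net.exe")
```

Accepting an explicit `snapshot` is a deliberate choice: it lets you test the matching logic without spawning processes, which is exactly the kind of detail an experienced coder would specify in the prompt and a vague prompt would leave to chance.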
People with more substantial coding knowledge have the ability to be more specific about both what they want and how they want it done, so they will get much more consistent results back. And, of course, they have the skills needed to identify bad results for themselves, as long as they are taking the time to do so.