Quite apart from all other considerations, my problem with all this is: LLMs are no longer tools assisting us. We are tools assisting them. I don't want to spend my life as an "LLM output checker".
How long is that even going to work, assuming you can find people willing to do it? Right now it occasionally does, but at what point will the pool of people with the skills to check LLM output have shrunk, and their abilities degraded, to the point where everything devolves into the blind leading the blind? LLMs have been trained on our code. Now we're being trained on theirs, and it's not going to end well.