> because made-up stats/sources will get their entire grift thrown out if they’re discovered
I don’t believe it’s just that. Making up some of those references as a human, in a way that sounds credible, would require quite a lot of effort and creativity. I think this is a case where the AI actually performs “excellently” at a task that is less than useless in practice.
To me, in terms of the chatbot's role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn't just support this man's delusions about his mother and his ex-girlfriend being after him, but even made up additional delusions on its own, further "incriminating" various people including his mother, whom he eventually killed. On top of that, the chatbot reportedly gave the man a "Delusional Risk Score" of "Near zero".
On the other hand, I'm sure people will come up with excuses even for this, blaming the user, his mental illness, his mother, or even society at large.