this post was submitted on 02 Jan 2026
43 points (100.0% liked)
Hacker News
This has been my biggest concern whenever I hear of generative AI doing these things. Grok is getting the training data from somewhere and has enough of it to generate these images on demand. You can't even get most generative AI models to show you a glass of wine filled to the brim, because they have no training data for such an image, but they can generate CSAM no problem.
There was an article a few weeks ago about a developer who used a standard research AI image training dataset and had his Google account locked when he uploaded it to Google Drive. It turned out the dataset contained CSAM, and it was flagged by Google's systems. The developer reported the dataset to his country's reporting authorities, who investigated it and confirmed it contained images of abuse.
Bet it had full access to the Epstein files.