this post was submitted on 02 Jan 2026
43 points (100.0% liked)

Hacker News

3364 readers
365 users here now

Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 1 year ago
top 4 comments
[–] TommySoda@lemmy.world 10 points 4 days ago (2 children)

I didn’t go looking for evidence for obvious reasons, but I find reports that it’s generating CSAM plausible.

This has been my biggest concern whenever I hear about generative AI doing these things. Grok is getting its training data from somewhere, and it has enough of that data to generate these images on demand. You can't even get most generative AI models to show you a glass of wine filled to the brim, because no such image exists in their training data — yet it can generate CSAM no problem.

[–] jqubed@lemmy.world 5 points 4 days ago

There was an article a few weeks ago about a developer who used a standard research AI image-training dataset and had his Google account locked when he uploaded it to Google Drive. It turned out the dataset contained CSAM, which Google's systems flagged. The developer reported the dataset to his country's reporting authorities, who investigated it and confirmed it contained images of abuse.

[–] meco03211@lemmy.world 1 point 4 days ago

Bet it had full access to the Epstein files.

[–] JokeDeity@sh.itjust.works 1 point 4 days ago

Surprising no one.