[–] jaselle@lemmy.ca 2 points 1 day ago (1 children)

Well, Grok is capable of producing CSAM with a straightforward text prompt, right? That would seem to me to be illegal on X's part, but I could be mistaken.

[–] I_Has_A_Hat@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

TL;DR at the bottom.

It's a bit more complex than that. It isn't a straightforward text prompt, since they did attempt to put filters in place to prevent stuff like this. However, this being a Musk company, those filters are shitty, and people quickly found ways to bypass them, likely through a series of prompts or very carefully tailored prompts.

But that's just the nature of AI. Image generators are never specifically trained on CSAM (at least I really fucking hope not). But neither are they specifically trained to generate giraffes made out of dumplings dancing on the concept of time. Ask for the latter, though, and the model will dutifully spit out some slop that matches. The point is, AI image generators can make ANYTHING, or at least try to. That's what they do. You can build filters and put in restrictions to try to prevent users from asking for certain things, or to stop those things from being delivered, but the underlying ability of the model to make them is still there. And because of the black-box nature of machine learning, it can never actually be removed.
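To make that concrete, here's a rough sketch of what that kind of setup looks like (all the names and the blocklist are made up for illustration, not anything from Grok's actual code): the generator itself will try to render whatever it's asked, and the "safety" is just a keyword check bolted on around it, which rephrasing defeats.

```python
# Hypothetical sketch: the filter is a wrapper, the capability lives in the model.

BLOCKED_TERMS = {"forbidden_topic_a", "forbidden_topic_b"}  # placeholder terms

def unrestricted_generate(prompt: str) -> str:
    """Stand-in for the underlying image model: it will try to render anything."""
    return f"<image rendered from: {prompt}>"

def input_filter(prompt: str) -> bool:
    """Naive keyword check on the prompt. Rephrasing the request slips past it."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def output_filter(image: str) -> bool:
    """Post-generation check. A real system would run a content classifier here."""
    return True  # placeholder: also imperfect in practice

def guarded_generate(prompt: str) -> str:
    if not input_filter(prompt):
        return "Request refused by input filter."
    image = unrestricted_generate(prompt)
    if not output_filter(image):
        return "Output withheld by output filter."
    return image

if __name__ == "__main__":
    print(guarded_generate("a giraffe made of dumplings dancing on the concept of time"))
    print(guarded_generate("forbidden_topic_a"))  # caught only because it matches a keyword exactly
```

Notice that nothing in the wrapper changes what `unrestricted_generate` can produce; removing or fooling the two checks gets you the raw model back.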

Now, there is a VERY big argument to be made against AI as a whole for that reason. If you spend a little while thinking about what it actually means to have something with the ability to create ANYTHING, or at least an approximation of it, you should be scared shitless. The only real safeguards are filters on the input or the output side, and filters can be worked around. You could see it with early versions of things like ChatGPT: a carefully worded prompt could get it to role-play a duplicate version of itself with the filters removed and return a secondary response from that duplicated instance. On normally off-limits topics (like building explosives or committing suicide), you'd get the generic "I'm sorry Dave, I'm afraid I can't do that," followed by another response giving the full, unredacted answer. The model always has the ability to create these things; it's just company-created filters that stop it from showing them.

Anyways, this comment has gotten away from me. The point is, it's not really about Grok. It's not really about CSAM. It's about AI as a whole, but that's too big and abstract of a concept for the masses to grasp. So instead we get articles and legislation specifically dealing with one particular issue from one particular program because that's just the first thing people have become outraged at, without seeing the big picture.

TL;DR: No, it's not as simple as a straightforward prompt, and the problem is far from limited to Grok.

[–] jaselle@lemmy.ca 1 points 21 hours ago

I understand that it's a general-purpose machine for producing images given a prompt/context. I don't feel particularly outraged. I just know that, say, OpenAI has quite a lot of safeguards in place to prevent generating CSAM. Those safeguards may not be perfect, but... it seems like Grok's just aren't good enough?