This post was submitted on 10 Apr 2026
29 points (93.9% liked)

Privacy

The European parliament has blocked the extension of a law that permits big tech firms to scan for child sexual exploitation on their platforms, creating a legal gap that child safety experts say will lead to crimes going undetected.

The regulatory gap has created uncertainty for big tech companies: while scanning for harms on their platforms is now illegal, they remain obliged under a separate law, the Digital Services Act, to remove any illegal content they host. In a joint statement posted on a Google blog, Google, Meta, Snap and Microsoft said they would continue to voluntarily scan their platforms for CSAM.

Privacy advocates argue that big tech scanning messages for child abuse threatens fundamental privacy rights and data security for EU citizens, equating these measures to “chat control” that could lead to mass surveillance and false positives.

“There are claims of surveillance or infringement of privacy,” Swirsky said. “Blocking CSAM is not an invasion of privacy. Free speech does not include sexual abuse of children.”

top 12 comments
[–] Hoimo@ani.social 2 points 2 hours ago (1 children)

I think this is a difficult dilemma. My immediate instinct is that blocking illegal material is obviously an invasion of privacy. It is impossible to block one type of message without first reading all messages and classifying them.
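
For illustration (not any platform's actual pipeline), here's a minimal sketch of server-side matching against a database of known material. The hash set and the `scan_attachment` helper are hypothetical; real deployments such as Microsoft's PhotoDNA use perceptual hashes that survive re-encoding, not exact cryptographic hashes. The point stands either way: every message must be read before any of them can be cleared.

```python
import hashlib

# Hypothetical database of hashes of known illegal material.
# Real systems match against curated databases (e.g. NCMEC's),
# using perceptual rather than exact hashes.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_attachment(data: bytes) -> bool:
    """Return True if the attachment matches a known-bad hash."""
    # Note: every attachment has to be read and hashed before it
    # can be declared harmless -- there is no way to inspect only
    # the "bad" ones.
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES
```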

But on the other hand, we're talking about other people's servers here. They shouldn't have to host illegal material. In fact, it is illegal for them to do so. So it is their right to know what they're hosting and clean it out.

Should we really have any expectation of privacy on big tech platforms? If you're sending obviously illegal material in plain bytes to a Microsoft server, what do you think is going to happen?

[–] sneezycat@sopuli.xyz 1 point 1 hour ago

At the same time, can't bad actors just encrypt whatever they want to share, so that it can't be scanned?
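
For example, with end-to-end encryption the server only ever sees ciphertext. A sketch using the Python `cryptography` package (assuming it is installed; the message content is made up):

```python
from cryptography.fernet import Fernet

# The key is generated and held by the users; the server never sees it.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"anything at all")

# Server-side scanning now only sees opaque bytes, so hash matching
# or classification reveals nothing about the plaintext.
print(ciphertext)
```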

In the end, regular users get the short end of the stick.

[–] FreddiesLantern@leminal.space 1 point 3 hours ago

Oh hey big tech, how about you guys manage to make your products not suck, and maybe then you get to have an opinion on these matters.

Not fooling anyone with this.

[–] onlinepersona@programming.dev 2 points 5 hours ago

Yes, the companies that kowtow to the pedophile with the finger over the red button are upset about child protection.

[–] sp3ctre@feddit.org 22 points 11 hours ago* (last edited 11 hours ago)

They're upset about the EU's privacy laws?

Good. The EU is clearly doing something right.

[–] hendrik@palaver.p3x.de 11 points 11 hours ago
[–] Avicenna@programming.dev 9 points 11 hours ago

Big tech, whose business model partially depends on getting children hooked on low-quality but rapidly changing content of almost infinite variation (at least by the measure of a human life span). Let's start, for instance, by asking which of these platforms allowed Prime drink ads. That already eliminates everything except Microsoft, probably.

[–] baguette@piefed.social 16 points 13 hours ago* (last edited 13 hours ago) (1 children)

I found this article interesting because big tech seems to care so much about scanning that they are actively pushing for it.

Unfortunately, the article doesn't really delve into the privacy precedents and implications it would set, and mostly voices the arguments of big tech.

[–] Valmond@lemmy.dbzer0.com 10 points 11 hours ago (2 children)

They only care about data to train their AI models.

[–] antisoumerde@quokk.au 1 point 7 hours ago

eat my balls

[–] slazer2au@lemmy.world 5 points 11 hours ago (1 children)

Not just models; it's more about getting additional data points for our advertising profiles.

[–] Valmond@lemmy.dbzer0.com 1 point 7 hours ago

Good catch!