The EU built a system called CounterR that essentially performs pre-crime thought surveillance. The TLDR is that an AI company, with direct input from half a dozen European police forces, built a tool that scrapes social media, forums, and other sources to assign citizens a score based on what they think rather than what they've actually done. And the European Commission has not released full details of the project.
The report itself acknowledges that this sort of automated system "can trigger new fundamental rights risks that affect rights different than the protection of personal data and privacy."
The police were active co-developers, sitting in meetings to define the criteria and feeding real, anonymized data from their investigations to train the LLM. So now you have a feedback loop where police define the threat, the LLM learns it, and the police validate the results, with zero external oversight.
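To make the circularity concrete, here's a minimal sketch of what that loop looks like in code. This is purely illustrative: the dataset, labels, and model below are all invented, and nothing here reflects CounterR's actual implementation.

```python
# Minimal sketch of the feedback loop as described: police-labelled data
# trains the model, the model flags new posts, police "validate" the flags,
# and the validated flags go back into the training set. Every identifier
# and data point here is invented -- none of it comes from CounterR itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Round 0: the training set is defined entirely by police annotations.
posts = ["attend the protest on saturday", "lovely weather out today"]
police_labels = [1, 0]  # 1 = "radical", by the police's own criteria

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(posts), police_labels)

def police_review(post: str) -> int:
    """Stand-in for human validation: the reviewers are the same force
    that wrote the original labels, so the loop rarely disagrees with
    itself."""
    return 1  # reviewer confirms the flag

# Round 1: score scraped content, send flags to police, retrain on their verdicts.
for post in ["join us at the demonstration"]:
    score = model.predict_proba(vectorizer.transform([post]))[0, 1]
    if score > 0.5:  # model flags the post...
        posts.append(post)
        police_labels.append(police_review(post))  # ...police confirm the flag

model.fit(vectorizer.fit_transform(posts), police_labels)
# "Threat" is defined by police in round 0 and re-confirmed by police in
# round 1; no signal from outside the loop ever enters the training data.
```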
And of course, it's all shrouded in secrecy. The whole thing is confidential, the source code is proprietary (so even project partners can't audit it), and the ethics board is made up of the same people building the thing. There's no clear requirement to track false positives, so you could be flagged as a potential radical and never know why.
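For contrast, the kind of false-positive accounting the project apparently isn't required to do is trivially easy to build, which is what makes its absence telling. A hypothetical sketch (every name and number here is made up):

```python
# Hypothetical sketch of minimal false-positive accounting: log every flag,
# record whether it was later borne out, and report how often the system
# was wrong. All identifiers and figures are invented.
from dataclasses import dataclass, field

@dataclass
class FlagAudit:
    # (person_id, model_score, flag_later_confirmed)
    flags: list[tuple[str, float, bool]] = field(default_factory=list)

    def record(self, person_id: str, score: float, confirmed: bool) -> None:
        self.flags.append((person_id, score, confirmed))

    def false_positive_share(self) -> float:
        """Share of all flags that turned out to be wrong."""
        if not self.flags:
            return 0.0
        wrong = sum(1 for _, _, confirmed in self.flags if not confirmed)
        return wrong / len(self.flags)

audit = FlagAudit()
audit.record("citizen-417", 0.91, confirmed=False)  # flagged, later cleared
audit.record("citizen-822", 0.87, confirmed=True)
print(f"{audit.false_positive_share():.0%} of flags were false positives")  # 50%
```

A few lines of bookkeeping like this is all it would take for someone flagged in error to ever be counted, let alone notified.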
The cherry on top? The core technology, developed with public funds, was recently acquired by a private company, Logically, which can now sell this dystopian scoring system to whoever it wants.
We, the citizens of the EU, literally paid to build our own panopticon. The whole project normalizes the idea that the state gets to algorithmically monitor and judge your political beliefs before you ever commit a crime.
So what happens when you ask the robot if its penis is metal?