this post was submitted on 16 Sep 2025
Technology

On Tuesday, OpenAI announced plans to develop an automated age-prediction system that will determine whether ChatGPT users are over or under 18, automatically directing younger users to a restricted version of the AI chatbot. The company also confirmed that parental controls will launch by the end of September.

In a companion blog post, OpenAI CEO Sam Altman acknowledged the company is explicitly "prioritizing safety ahead of privacy and freedom for teens," even though it means that adults may eventually need to verify their age to use a more unrestricted version of the service.

"In some cases or countries we may also ask for an ID," Altman wrote. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff." Altman admitted that "not everyone will agree with how we are resolving that conflict" between user privacy and teen safety.

Comments
[–] hendrik@palaver.p3x.de 5 points 1 month ago* (last edited 1 month ago) (3 children)

Oh wow. Their "system tracked 377 messages flagged for self-harm content" and they didn't intervene or suspend the account? Why not? They don't even allow me to write smut... or I'm too stupid to coerce it into doing it. But self-harm is okay? And then the next step is collecting IDs? How about everyone gets 30 self-harm conversations before the account switches over to a safe mode... That'd be a far more straightforward solution.

[–] PhilipTheBucket@piefed.social 6 points 1 month ago (1 children)
  1. I think they're going to continue with their overall strategy of just not really being good at stuff. I don't anticipate much connection between the "safety" mode and actual safety.
  2. I think OpenAI was founded by incredibly smart people but has since succumbed to the San Francisco "tech idiot" culture, so age verification is just what all the cool kids are doing and they have to do it too.
  3. I think the attempt to link the age verification to the chatbot encouraging users to kill themselves is a combination of tech idiocy and a dishonest effort to justify the verification and convince people they're doing big things that will definitely be effective on the whole suicide front.
[–] hendrik@palaver.p3x.de 1 points 1 month ago* (last edited 1 month ago) (1 children)

Yes. Yesterday's hot shit was making everyone enter a phone number to sign up for stuff, which happens to make a nice long-lived tracking ID. Today we're at outright showing IDs. And I guess there aren't many more invasive steps left after that, unless they find a reason to install a camera in my bathroom or something...

There's always a new bandwagon, and now it's age verification. I'm sure it's bound to destroy many more things we have once they roll it out to more services and countries... Not that I think protecting children isn't a noble cause. But I highly doubt that's why they do it. It's more about normalizing absolute control by big tech and the abolition of privacy. That's why they do it, and OpenAI is part of that. Fortunately I don't rely on their services, so I'll just quit. But it's going to be a sad day once they do the same to my Google/YouTube account and a few others.

I mean, it's super obvious that the process of showing an ID card to a camera is probably not going to stop someone from harming themselves... And if they cared about this teenager, maybe they would have done something after the hundredth report? Or the 200th? Or the 300th? Obviously there's already a system in place for this exact thing. But that might not be the goal here.

I'm a bit sad the article doesn't call them out on that and instead engages with OpenAI's (probably fake) arguments.

[–] DreamAccountant@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Wait until they require multiple biological samples for ID. That's the direction we're headed in.
