this post was submitted on 15 Mar 2026
63 points (88.9% liked)
Futurology
He's right that AI shifts the labor-capital balance. The question is how, and that's where admitting the problem is easy while solving it isn't.
When a CEO says "we don't know what to do," that usually means: "we're making money either way, and systemic change costs us leverage." OpenAI is explicitly a for-profit company. Altman's stated preference is regulation, not wealth redistribution. Those aren't compatible.
The real issue is that AI doesn't have to break labor power. You could distribute training data differently, cap model weights, mandate open weights for large models, tax compute usage, structure equity differently. Those are policy choices, not physics.
But those choices require politicians to understand the leverage they have, and tech companies not to control the narrative about what's technically inevitable versus politically chosen. Right now the narrative is "sorry, we can't stop this." It's much harder to get what you want if you have to say "we don't want to."
FTFY