He's right that AI shifts the labor-capital balance. The question is how, and that's where things get uncomfortable: admitting the problem is easy, solving it isn't.
When a CEO says "we don't know what to do," it usually means: "we're making money either way, and systemic change costs us leverage." OpenAI is explicitly a for-profit enterprise. Altman's stated preference is regulation, not wealth redistribution. Those two stances aren't compatible.
The real issue is that AI doesn't have to break labor power. You could distribute training data differently, cap model scale, mandate open weights for large models, tax compute usage, or structure equity differently. Those are policy choices, not physics.
But those choices require politicians to understand the leverage they have, and tech companies not to control the narrative about what's technically inevitable versus what's politically chosen. Right now the narrative is "sorry, we can't stop this." It's much harder to get what you want if you have to say "we don't want to."