Apparently the Nintendo Switch 2 is using the standard already, so it might go over better than it did for Sony.
I think I’m starting to get back into Beehaw, so 👋 It’s been a crazy several years… I was diagnosed with clinical depression, and now I’m better, so that’s good overall.
It’s an interesting and hard problem, because most billionaires don’t own billions in cash - they own companies that are worth billions. And these companies don’t have billions in assets either - they are valued at billions by investors.
The problem is that the Musks and Bezoses of the world didn’t start with billions - they started with millions and lucked out. So to prevent this from happening you need some system that can fairly catch the moment a business becomes too big and do something about it.
You can’t really cut the majority owner out, because, well, they own the company - you can’t just take away what they own. But you can’t really buy them out at some capped price either - you’d just end up making someone else a billionaire.
I don’t know about more emotional on average, but I can totally see how emotional repression can lead to bigger emotional outbursts.
Yup, I’ve been plagued by this bug for a long time. I’m very excited to use this!
Good point! I wonder if we’re spoiled by the invention of the computer, though. It would be interesting to compare pre-WW2 invention rates with today’s. I suspect computers just made everything else easier, but now we’re back to hard problems.
To be fair, there have only been 24 years of the 21st century, and most of the things you have listed happened at the end of the 20th. But the question is also somewhat self-negating - we won’t know what the greatest invention is until we see it working great, and it takes much more than 24 years to take an invention from concept to consumption. For example, computational biology is kicking off: computer-aided DNA design started within the past 24 years, but it’s so new that few people think about it. Just like no one thought of the internet as the greatest invention in the 70s… it was just too new.
Alternatives or not, I think it’d be very beneficial to document the concept of operations (conops) that you want. That way you can either take pieces of that conops and tell the Lemmy devs what you want, or, if you start your own project, it becomes its conops and you can guide developers toward the features you need.
That’s because the full version of that mentality is “Tax me less, don’t use my tax money to subsidize someone else, give that money to my company!” instead.
To be fair, even in Trek there’s a World War 3 driven by pure greed before humanity decides it’s had enough. And the climax of the greed, and that war, starts in 2026… so we might be on course to the utopia… but not before suffering some more.

It’s an interesting perspective, except… that’s not how AI works (even if it’s advertised that way). Even the latest memory approach in ChatGPT is not perfect recall - it’s a glorified search feature. When you type a prompt, the system can choose to search your older chats for related information and pull it into context… what makes that information “related” is the big question here: it uses an embedding model to index and compare your chats. You can think of it as a fuzzy paragraph search - not exact paragraphs, but paragraphs that roughly talk about the same topic…
It’s not guaranteed that if you mention not liking sushi in one chat, talking about your restaurant of choice in another will pull in the sushi chat. And even if it does pull that in, the model may choose to ignore it. And even if the model doesn’t ignore it - you can choose to ignore it. Of course, the article talks about healing, so I imagine instead of sushi we’re talking about some trauma… OK, so you can choose not to reveal details of your trauma to the AI (that’s an overall good idea right now anyway). Or you can choose to delete the chat - it won’t index deleted chats.
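To make the “fuzzy paragraph search” idea concrete, here’s a minimal toy sketch of embedding-based retrieval. Everything in it is made up for illustration - the 3-number “embeddings”, the chat texts, and the 0.8 threshold are all hypothetical (real embedding models produce vectors with thousands of dimensions, and I’m not claiming this is OpenAI’s actual pipeline):

```python
import math

def cosine(a, b):
    # Cosine similarity: near 1.0 means "roughly the same topic",
    # near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend an embedding model has already indexed these past chats
# as tiny 3-d vectors (purely made-up numbers).
indexed_chats = {
    "I don't like sushi": [0.9, 0.1, 0.0],
    "my friend was mean to me": [0.0, 0.2, 0.9],
    "favourite hiking trails": [0.1, 0.9, 0.1],
}

def retrieve(prompt_vec, threshold=0.8):
    # Return past chats similar enough to the new prompt.
    # A deleted chat is simply absent from the index, so it can
    # never be pulled into context.
    return [text for text, vec in indexed_chats.items()
            if cosine(prompt_vec, vec) >= threshold]

# A prompt whose vector lands near the sushi chat pulls it in...
print(retrieve([0.8, 0.2, 0.1]))   # -> ["I don't like sushi"]
# ...but a prompt worded just differently enough may miss everything.
print(retrieve([0.4, 0.5, 0.4]))   # -> []
```

The point of the sketch: retrieval is a similarity threshold, not memory. Whether the sushi chat gets pulled in depends entirely on how close the vectors happen to land, which is why there’s no guarantee either way.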
At the same time, there are just about as many benefits to the model remembering something you didn’t. You can imagine a scenario where you mentioned a friend being mean to you, and later they are manipulating you again - maybe having the model remind you of the last bad encounter is good here? Just remember: AI is a machine, and you control both its inputs and what you do with its outputs.