"You're absolutely right! That was a children's hospital, not a military base. Let's try that again!"
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
Since this article, Anthropic's Claude AI app has claimed the #1 top spot over ChatGPT on both Android and iOS.
Think of the 3rd-grade math and made-up charts they will trot out about #killratios. 'Ask AI about truth' /s
Amazon, Gemini, and Perplexity are also latched onto the US government's baby bottle.
Now imagine my shock when I had done the swap from ChatGPT to Claude the day before the news about Anthropic's (now backpedalled) deal. Anyway, I deleted ChatGPT and Gemini accounts and degoogled my life while I was at it.
The "Cancel ChatGPT movement" doesn't appear to be mentioned in the article, but other outlets say hashtags like #CancelChatGPT are trending on X.
From OpenAI's statement:
We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:
• No use of OpenAI technology for mass domestic surveillance.
• No use of OpenAI technology to direct autonomous weapons systems.
• No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).
It specifically states their AI can't/won't be used for surveillance and autonomous weapons. Of course I'm not saying I trust them, but isn't this the same thing Anthropic says they're against? What's the difference here or what did I miss?
Anthropic put clauses in that were legally enforceable by future administrations. OpenAI says “yea we totally trust you bro”
Sam Altman is the king of "trust me bro" and then backpedaling on it.
Hmm, not to sound smug, but you people need a hashtag to tell which way the wind blows. 'Going viral' is such 'breaking news' clickbait. Windows users need to get a clue, that part is true. I kicked my brother out of my house for repeatedly trying to show me how easy AI made art. Is your OS political now? You can guess his other issues. Rambling now.
Uh ok
Just the little push I needed to close my ChatGPT account
Same! Was planning on doing this today.
What do you plan to switch to? I’m currently thinking a combination of Claude and something else for images if it turns out I really need to pay for it.
I use ppq.ai, which lets you choose which LLM/service you need, and then you are charged per query. It has all the latest models for imaging, text, video, etc., so you get to use the one that fits the task best, with no need to pay for a membership. That's way cheaper if you don't have professional use for heavy models, imo. It also allows privacy payments (Monero and such. I don't have that, just mentioning it's possible :)).
I’m wondering if this is a play for a future bailout. OpenAI knows they are fucked; and instead of just going away like most companies do when they fail, they are embedding themselves in the government to secure a bailout under the guise of a critical defence vendor.
Furthermore, I’m not convinced the researchers and critical personnel will work for a company that does this. I think we’re about to see the biggest jumping of a ship so far in the industry.
That makes a lot of sense
Dude, the only guardrails are:
- No fully automated killings
- No mass surveillance

You could literally do anything else; you could automate killing people with a person approving.
Trump booted Anthropic because they couldn't lift these two guardrails. Fuck me.
The lesser of two evils is still...evil. Anthropic's hands aren't clean either...they're just minimally less caked in blood.
BUT
One can hope that this is the 'turn towards the light side'. If 'don't be evil' can finally be made profitable, well, self interest might actually be a lever for good. Ha.
I wish there were a clearly, unambiguously good guy in the cloud AI space. I don't know how to make that work with economies of scale being what they are. Yes, that includes Lumo, though one has faint hope on that end too.