this post was submitted on 18 Apr 2026
161 points (86.8% liked)

Technology

[–] whotookkarl@lemmy.dbzer0.com 24 points 22 hours ago
  • 4 capitalist executives quoted
  • 0 labor leaders quoted
  • 0 relevant scientists quoted

There is good information in the article showing how executives lied, and are still lying, for profit while centralizing wealth under oligarchs. But without another voice quoted in opposition, it's the article writer directly rebutting the quotes, and that reads more like an editorial than journalism.

[–] hume_lemmy@lemmy.ca 26 points 1 day ago* (last edited 1 day ago)

The article, with the Musk section, points out what nearly everyone else has identified as the primary problem: the people saying that AI will obsolete all workers, and the people saying that those who don't work don't deserve to eat, ARE THE EXACT SAME PEOPLE.

Even the most dumbfuck Magat is going to eventually figure out where that goes and react accordingly.

[–] BarneyPiccolo@lemmy.today 33 points 1 day ago* (last edited 1 day ago) (1 children)

“Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,”

And here is the crux of the problem - they are lying to us. After making it very clear that they wanted us to integrate AI into our jobs, it has also become clear that their ultimate objective is to replace as many jobs as possible with AI, even if the AI's results are substandard, because the AI is so much more profitable.

We KNOW the objective is to fire as many of us as possible, so the general public has become extremely hostile toward AI. Now the AI companies want to re-brand as family friendly assistants to our lives. Too late, assholes, we're already onto you. Tell your lies walking.

It must be awful to have fought to become a billionaire, thinking you could relax on the bodies of your vanquished foes, and enjoy the tranquility that you've earned, only to find out that you have created an endless supply of enemies who want you dead. You have to pay millions for security, only to find that someone can still put a bullet through your front window where you were standing only five minutes before. All that money, and the best it can do is buy you a windowless bunker to cower in.

[–] e461h@sh.itjust.works 17 points 1 day ago (3 children)

Except it’s not profitable at all. It’s a huge bubble waiting to collapse.

[–] sunbeam60@feddit.uk 2 points 6 hours ago (1 children)

They are currently selling it at a huge loss, agreed. They’ve got plenty of runway for specialised hardware prices to come down, for companies to get hooked and plugged into the ecosystem and for real value to be demonstrated.

When this happens they’ll raise prices and companies will gladly pay it.

Profit at this point is not relevant, seen from the perspective of investors.

[–] e461h@sh.itjust.works 1 points 4 hours ago (1 children)

That’s ‘embrace, extend, extinguish’ for you. Question is if there is a profitable model to come. The usual economies of scale don’t seem capable of adding up in this case. Even the maniacs on Wall Street are balking.

[–] sunbeam60@feddit.uk 2 points 3 hours ago* (last edited 3 hours ago) (1 children)

That’s not quite my understanding of EEE.

  • Embrace - adopt something that someone else has done
  • Extend - add proprietary extensions on top of the original, quicker than the original owner can
  • Extinguish - Kill the original owner off by moving quicker, then either slow down or kill your own support for the product

What the AI model owners are doing seems to me just to be normal loss-leading with a view to gain market share.

[–] e461h@sh.itjust.works 1 points 3 hours ago

That’s fair. I think they are trying to utilize EEE to replace search, content creation, and more - everything AI is being shoveled into. But the main goal is just to force utilization through any means necessary and establish a new market & sales model they are unable to define.

[–] BarneyPiccolo@lemmy.today 9 points 1 day ago (2 children)

Not yet, but wait until they've reduced their workforce by 75%, and they can save all those associated expenses.

It won't work, of course, but they've deluded themselves into believing it.

[–] e461h@sh.itjust.works 5 points 1 day ago* (last edited 21 hours ago) (1 children)

Certainly part of the sales pitch. But so far it turns out humans are more efficient (cost less). I think the appeal to companies is the control (and the cost while it’s so heavily subsidized by the industry pushing it). The appeal to the major AI investors and execs is to… privatize the profits and socialize the losses. They will golden parachute themselves and leave the people with their mess.

[–] nile_istic@lemmy.world 3 points 22 hours ago

I think the appeal to companies is the control

This part. Rich people never stopped jerking off over the idea of owning slaves.

[–] ripcord@lemmy.world 3 points 1 day ago (1 children)

The vast majority of the costs are HW and infra

I think they're hoping that reaches more of a steady state

[–] Passerby6497@lemmy.world 7 points 1 day ago (1 children)

I think they're hoping that reaches more of a steady state

With how quickly tech advances and hardware degrades under heavy use, they're going to be pushing that rock up a hill for a good while lol

[–] ripcord@lemmy.world 2 points 1 day ago

Oh, agreed. And other tech companies are 1000% counting on that being true.

[–] eleitl@lemmy.zip 3 points 1 day ago (1 children)

It is very profitable in certain roles in the enterprise. This is orthogonal to it being a massive bubble, about to blow up.

[–] e461h@sh.itjust.works 6 points 1 day ago* (last edited 1 day ago) (1 children)

https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/

It could be, but it doesn’t look promising - and the fact that it’s pretty much impossible to know what the actual costs are is, in itself, very telling.

When you use these services, the company in question then pays for access to the AI models in question, either at a per-million-token rate to an AI lab, or (in the case of Anthropic and OpenAI) whatever cloud provider is renting them the GPUs to run the models. A token is basically ¾ of a word.

As a user, you do not experience token burn, just the process of inputs and outputs. AI labs obfuscate the cost of services by using “tokens” or “messages” or 5-hour-rate limits with percentage gauges, and you, as the user, do not really know how much any of it costs. On the back end, AI startups are annihilating cash, with up until recently Anthropic allowing you to burn upwards of $8 in compute for every dollar of your subscription. OpenAI allows you to do the same, though it’s hard to gauge by how much.
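The subscription-versus-compute math the quote describes can be sketched with a toy calculation. Everything below is an illustrative assumption (the $20 plan, the $16-per-million-token rate, the 10M-token usage), not Anthropic's or OpenAI's actual pricing; it just shows how a flat subscription can conceal a multi-dollar compute loss per subscription dollar, like the "$8 per dollar" figure mentioned above:

```python
# Toy model of per-token billing hidden behind a flat subscription.
# All numbers are illustrative assumptions, not real vendor pricing.

def compute_cost(tokens: int, usd_per_million_tokens: float) -> float:
    """Back-end cost of serving `tokens` at a per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million_tokens

# Assume a user on a $20/month plan burns 10M tokens served at $16/million.
subscription_usd = 20.0
tokens_used = 10_000_000
backend_cost = compute_cost(tokens_used, 16.0)

# Dollars of compute burned per dollar of subscription revenue.
loss_ratio = backend_cost / subscription_usd
print(f"compute cost: ${backend_cost:.2f}, ratio: {loss_ratio:.1f}x")
# → compute cost: $160.00, ratio: 8.0x
```

The user only ever sees "messages remaining" gauges; the $160 of compute behind their $20 plan stays on the vendor's books.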

[–] eleitl@lemmy.zip 2 points 7 hours ago

Some enterprises do run their own hardware (these can ROI in 6 months or so), and there the economics is very well known. For the majority of the current use it's a giant bubble, as Ed Zitron's great analyses keep telling us.

[–] Aatube@lemmy.dbzer0.com 93 points 1 day ago (7 children)

Have the comments here read the article? It's arguing that the CEOs themselves have spread the doomer narrative and are now being molotov'd as a result. The subject of the title is/includes Altman, hence the Altman cover photo. This was way way better than I expected of Gizmodo (bravo Gizmodo), warning us that execs are only toning down their AI dooming for self-protection.

Whatever happens, it feels like the AI executives have painted themselves into a corner. They’ve told everyone their product has the potential to destroy everything. They were the doomers, if we want to call it that, at least when it was convenient. And now we seem to be entering a different era where the same people who told us about the dangers of AI try to get us to look exclusively at what they claim are enormous benefits for society; so far, with little to show.

@gravitas_deficiency@sh.itjust.works @Sundray@lemmus.org

[–] willington@lemmy.dbzer0.com 1 points 17 hours ago

Have the comments here read the article? It's arguing that the CEOs themselves have spread the doomer narrative and are now being molotov'd as a result.

The CEOs were talking to investors. They didn't use to care if the proles overheard all that radical signaling intended for investors. I guess now they have a reason to care.

[–] EvergreenGuru@lemmy.world 35 points 1 day ago

They should’ve chosen a lane. OpenAI was about free LLMs, then they went LLC and decided that AI could make money. It doesn’t make money though, so now we’re watching the idiots realize they have burned all this money investing in AI.

All the experts told us it couldn’t do any of the things sci-fi writers love to write stories about. Nothing changed except perception, and by directing perception they managed to use an old technology to temporarily buttress the economy.

[–] Iconoclast@feddit.uk 10 points 1 day ago* (last edited 1 day ago) (1 children)

Have the comments here read the article?

You serious? Of course not - but they did see the letters "AI" in the title.

[–] ExcessShiv@lemmy.dbzer0.com 6 points 1 day ago (1 children)

Man everyone just hates on Al these days, but I thinks he's a pretty chill dude despite being a little weird.

[–] chunes@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

first thought, but probably

[–] XLE@piefed.social 6 points 1 day ago

It's an understandable conclusion if you only read the title of the article. Surely an AI doomer is someone that thinks it's garbage, right?

But if people familiarize themselves with what professional AI doomers look like, and what AI safety groups look like, it becomes abundantly clear that they are all pro-industry. They will only ever criticize AI in ways that covertly praise its non-existent capabilities.

[–] sundray@lemmus.org 8 points 1 day ago (1 children)

I did. As well written as it is, I don't think the premise of "the REAL doomers were the CEOs!" is going to spread far enough to dethrone the present, much more popular understanding of what an AI doomer is. It didn't seem worth addressing. We'll see though; perhaps every time someone says "AI doomer" on Lemmy, some wag will reply with, "Um a-kually, I think you'll find the tech CEOs are the real doomers, LOL."

As to the notion that the dangers these techbros have released are now coming home to roost: it's overstated. In my opinion, the techbros will continue not to give the merest shit about the harms they've caused, and one misguided soul with a molly isn't going to change that -- or bring back all the dead people LLMs contributed to killing. Will it increase the CEO's feelings of paranoia? My dude, the wealthy are already maximally paranoid.

[–] Aatube@lemmy.dbzer0.com 7 points 1 day ago* (last edited 1 day ago) (1 children)

interesting. I don't think the article is saying "the real doomers are the CEOs", though. what you've written in the second paragraph (and just that is incredibly interesting even if it doesn't have the impact you've outlined. it's incredibly Greek) is fully compatible with agreeing that AI is doomish. I'll also repeat my point that the article advises increased caution more than before of tech's claiming of great AI net benefits.

[–] deathbird@mander.xyz 26 points 1 day ago

"Oh no what if someone believes my hype about building a Torment Nexus and, instead of throwing more money on my money fire, tries setting me on fire instead."

[–] sundray@lemmus.org 35 points 1 day ago
[–] GreenKnight23@lemmy.world 3 points 1 day ago

push your local governments to tax companies that replace workers with AI at a higher percentage.

this tax can then be used to offset the socioeconomic stress that the job losses will impose on your region.

[–] gravitas_deficiency@sh.itjust.works 24 points 1 day ago (1 children)

Fuck you, Gizmodo, and fuck off. There are consequences when you break the societal contract. This is that.

[–] terabyterex@lemmy.world 17 points 1 day ago* (last edited 1 day ago) (1 children)

You love Sam Altman so much that you have an emotional response to Gizmodo giving very valid criticism of him? I'm sorry, Gizmodo is right and Altman is a tool. Please don't worship a man.

[–] atrielienz@lemmy.world 8 points 1 day ago (1 children)

I don't think them saying this has much to do with liking Altman. Rather, I think they are raging at Gizmodo (because well, Gizmodo) and also at the headline of an article they didn't read.

[–] ripcord@lemmy.world 3 points 1 day ago (2 children)

I suspect the person you replied to was also calling them out for not reading the article but nevertheless having very strong opinions about it

[–] terabyterex@lemmy.world 2 points 15 hours ago

Yeah, I was making a point. I think not reading the article is analogous to society as a whole taking everything at face value and not thinking for themselves.

I have this fantasy where the person decides to read it to see why I called him a Sam Altman lover, but I know I'm just a pollyanna.

[–] atrielienz@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Certainly a possibility. Lots of people really dislike Gizmodo as a news outlet for past controversy.

[–] pelespirit@sh.itjust.works 7 points 1 day ago

But it’s hard to take that argument seriously after everything guys like Altman have been saying. It didn’t even start as late as 2022, either. Back in 2015, Altman said, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”
