this post was submitted on 12 Sep 2025
1065 points (98.8% liked)

Technology

Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true, and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And AI-generated code hasn't merely missed Amodei's benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to spew out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.
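To make that concrete, here is a minimal hypothetical sketch in Python, not taken from the study: the classic pattern behind many such flaws is a database query built by pasting user input directly into SQL, shown next to the parameterized version that avoids the injection. The function names and the "users" table are illustrative assumptions.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
        # Vulnerable: the input is pasted straight into the SQL string, so a
        # value like "x' OR '1'='1" turns the WHERE clause into a tautology
        # and dumps every row (SQL injection).
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
        # The old-fashioned fix: a parameterized query. The driver passes the
        # value separately from the SQL text, so it is never parsed as code.
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()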

Those flaws are causing issues at a growing number of companies, opening up never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before the technology came for everyone else.

The fact that AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy, the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

50 comments
[–] Itdidnttrickledown@lemmy.world 8 points 3 weeks ago

If he is wrong about that, then he is probably wrong about nearly everything else he says. They just pull these statements out of their ass and try to make them real. The eternal problem with making something real is that reality can't be changed. The garbage they have now isn't that good, and he should know that.

[–] lustyargonian@lemmy.zip 7 points 3 weeks ago* (last edited 3 weeks ago)

I can say 90% of PRs at my company clearly look AI-generated, or are declared to be, because of the random things that still slip by in the commits, so maybe he's not wrong. In fact, people are looked down upon if they aren't using AI and are celebrated for figuring out how to effectively make AI do the job right. But I can't say if that's the case for other companies.

[–] philosloppy@lemmy.world 7 points 3 weeks ago

The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy's prognostications about his company's products, perhaps they should work on being less credulous.

[–] Simulation6@sopuli.xyz 7 points 3 weeks ago (1 children)

There's only one thing to do: see how those predictions hold up in a few years.

Or, you know, do the sensible thing: call the dude the snake oil salesman he is and run him out of town on a rail.

[–] demizerone@lemmy.world 6 points 3 weeks ago

I was wondering why the context had gotten so bad recently. Apparently they reduced the context and hid the old limit behind a button in Cursor called "Max" that costs more money. This shit is bleeding out.

[–] blockheadjt@sh.itjust.works 6 points 3 weeks ago

Does it really count if most of that "code" is broken and unused?

Churning out 9x as much code as humans isn't really impressive if it just sits in a folder waiting to be checked for bugs

[–] Salvo@aussie.zone 6 points 3 weeks ago

90% of non-functional code, maybe.

[–] petrjanda@gonzo.markets 6 points 3 weeks ago

I agree with everyone else. The only thing that A(Non)I is good for is writing bullshit and making it sound intelligent; deep inside there is no intelligence, it's all artificial. It's semi-useful for background research because of its ability to index huge amounts of data, but ultimately everything it makes has to be verified by a human.

[–] m33@lemmy.zip 5 points 3 weeks ago

That’s 90% true: today AI is writing 90% of all bullshit I read

[–] SaveTheTuaHawk@lemmy.ca 5 points 3 weeks ago

He's as prophetic as Elon Musk.

[–] FlashMobOfOne@lemmy.world 5 points 3 weeks ago

They're certainly trying.

And the weird-ass bugs are popping up all over the place because they apparently laid off their QA people.

[–] renrenPDX@lemmy.world 5 points 3 weeks ago (1 children)

It's not just code, but day-to-day shit too. Lately, corporate communications and even training modules feel heavily AI-generated. Things like unnecessary em dashes (I'm talking as many as 4 out of 5 sentences in a single paragraph) and repeated statements or bullet points in training modules. We're being encouraged to use our "private" Copilot for everyday tasks, and everything is Copilot-enabled.

I don't mind if people use it, but it's dangerous and stupid to think that it produces near-perfect results every time. It's been good enough to work as an early rough draft or something similar, but it REQUIRES scrutiny and refinement by hand. It's like it can get you from nothing to 60-80% of the way there, but never higher. The quality of output can vary significantly from prompt to prompt, in my limited experience.

[–] Bonesince1997@lemmy.world 5 points 3 weeks ago (1 children)

I think we were already supposed to be on Mars by now, too, according to some predictions from years ago. People can't predict these things very well.
