this post was submitted on 30 Sep 2025
1103 points (98.8% liked)

Technology


"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.

top 50 comments
[–] Evotech@lemmy.world 12 points 2 days ago (1 children)

For most large projects, writing the code is the easy part anyway.

[–] Jankatarch@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

Writing new code is easier than editing someone else's code, but editing a portion is still better than rewriting the entire program from start to end.

Then there are LLMs, which force you to edit the entire thing from start to end.

[–] chaosCruiser@futurology.today 9 points 2 days ago* (last edited 2 days ago) (1 children)

About that "net slowdown": I think it's true, but only in specific cases. If the user already knows how to write code well, an LLM might be only marginally useful, or even useless.

However, there are ways to make it useful, though it requires specific circumstances. For example, if you can't be bothered to write a simple loop, you can use an LLM to do it. Hand the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.

Can't be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, and tweak the details. Now you didn't spend 15 minutes reading Stack Overflow posts that don't answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, and that includes the tweaking and troubleshooting.
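
For instance, this is the sort of twice-a-year syntax in question; a tiny sketch (the timestamp and format are made up for illustration):

```python
from datetime import datetime

# The kind of format-code syntax nobody remembers between uses:
# parse a timestamp string, then re-format it for display.
ts = datetime.strptime("2025-09-30 14:05", "%Y-%m-%d %H:%M")
print(ts.strftime("%d %b %Y"))  # → 30 Sep 2025
```

Remembering whether it's `%m` or `%M` is exactly the detail worth delegating and then double-checking.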

If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes depending on the method they choose.

[–] BilboBargains@lemmy.world 9 points 2 days ago

This is a sane way to use an LLM. Also, pick your poison: some bots are better than others at a specific task. It's kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It's a really useful tool for learning the syntax and libraries of unfamiliar languages.

At one extreme of the LLM debate there is insane hype, and at the other a great pessimism, but in the middle is a nice labour-saving educational tool.

[–] Aljernon@lemmy.today 17 points 3 days ago (1 children)

Senior Management in much of Corporate America is a kind of modern nobility, in which looking and sounding the part is more important than strong competence in the field. It's why buzzwords catch like wildfire.

[–] Jankatarch@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

Lmao, calling it nobility would imply we can't vote for our senior management, and that it often ends up being whoever "the king" wants, or one of the king's children.
Wait...

[–] Sibshops@lemmy.myserv.one 129 points 4 days ago (7 children)

I mean... At best it's a Stack Overflow/Google replacement.

[–] Warl0k3@lemmy.world 89 points 4 days ago* (last edited 4 days ago) (16 children)

There are some real perks to using AI to code - it helps a ton with templatable or repetitive code, and with setting up tedious tasks. I hate doing that stuff by hand, so being able to pass it off to Copilot is great. But we already had tools that gave us 90% of the functionality Copilot adds, so it's not super novel, and I've never had it successfully handle anything properly complicated (asking GPT-5 to do your dynamic SQL calls is inviting disaster, for example - it requires hours of reworking just to get close).

[–] Feyd@programming.dev 85 points 4 days ago (1 children)

But we already had tools that gave us 90%

More reliable ones.

[–] MaggiWuerze@feddit.org 20 points 3 days ago

Deterministic ones

[–] Steve@startrek.website 21 points 4 days ago (1 children)

I found that it only does well if the task is already well covered by the usual sources. Ask for anything novel and it shits the bed.

[–] melfie@lemy.lol 15 points 3 days ago* (last edited 3 days ago)

This article sums up a Stanford study of AI and developer productivity. TL;DR - the net productivity boost is a modest 15-20%, and can range from negative to 10% in complex, brownfield codebases. This tracks with my own experience as a dev.

https://www.linkedin.com/pulse/does-ai-actually-boost-developer-productivity-striking-%C3%A7elebi-tcp8f

[–] ChaoticEntropy@feddit.uk 14 points 3 days ago* (last edited 3 days ago) (2 children)

Are you trying to tell me that the people who want to sell me their universal panacea for all human endeavours were... lying...? Say it ain't so.

[–] peoplebeproblems@midwest.social 70 points 4 days ago (15 children)

“No Duh,” say senior developers everywhere.

I'm so glad this was your first line in the post

[–] elbiter@lemmy.world 36 points 3 days ago (2 children)

AI coding is the stupidest thing I've seen since someone decided it was a good idea to measure code by the number of lines written.

[–] ellohir@lemmy.world 12 points 3 days ago

More code is better, obviously! Why else would a website for viewing a restaurant menu be 80 MB? It's all that good, excellent code.

[–] Slotos@feddit.nl 3 points 2 days ago

It did solve my impostor syndrome, though. Turns out a bunch of people I saw as my betters were faking it all along.

[–] altphoto@lemmy.today 8 points 3 days ago (2 children)

It's great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, what you can do is limited. At least with the free stuff you can get without signing up.

[–] Smokeless7048@lemmy.world 7 points 3 days ago

Yeah, I use it for Home Assistant. It's amazingly powerful... and so incredibly dumb.

It will take my if/and statements and shrink them to 1/3 the length while making them twice as robust... while missing that one of the arguments is entirely in the wrong place.

[–] theterrasque 5 points 2 days ago* (last edited 2 days ago) (1 children)

It regurgitates old code, it cannot come up with new stuff.

The trick is, most of what you write is basically old code in new wrapping. In most projects, I'd say the new and novel part is maybe 10% of the code. The rest is things like setting up db models, connecting them to base logic, set up views, api endpoints, decoding the message on the ui part, displaying it to user, handling input back, threading things so UI doesn't hang, error handling, input data verification, basic unit tests, set up settings, support reading them from a file or env vars, making UI look not horrible, add translatable text, and so on and on and on. All that has been written in some variation a million times before. All can be written (and verified) by a half-asleep competent coder.

The actual new interesting part is gonna be a small small percentage of the total code.
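
As a concrete sketch of the "written a million times before" category, here is the settings-from-env-vars chore from the list above (the variable names like DB_URL are invented for illustration):

```python
import os

# Boilerplate settings loader: read from env vars, fall back to defaults.
# Every project grows some variation of this.
def load_settings(env=None):
    env = os.environ if env is None else env
    return {
        "db_url": env.get("DB_URL", "sqlite:///app.db"),
        "debug": env.get("DEBUG", "0") == "1",
        "port": int(env.get("PORT", "8080")),
    }

settings = load_settings({"PORT": "9000", "DEBUG": "1"})
# settings["port"] == 9000, settings["debug"] is True
```

Nothing novel in it, which is exactly why it's the part a half-asleep competent coder (or an LLM, carefully reviewed) can crank out.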

[–] kadaverin0@lemmy.dbzer0.com 35 points 4 days ago

Imagine if we did "vibe city infrastructure". Just throw up a fucking suspension bridge and we'll hire some temps to come in later to find the bad welds and missing cables.

[–] Tollana1234567@lemmy.today 6 points 3 days ago

So is the profit it was foretold to generate; it's actually costing more money than it's generating.

[–] dylanmorgan@slrpnk.net 40 points 4 days ago (3 children)

The most immediately understandable example I heard of this was from a senior developer who pointed out that LLM generated code will build a different code block every time it has to do the same thing. So if that function fails, you have to look at multiple incarnations of the same function, rather than saying “oh, let’s fix that function in the library we built.”
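
A toy illustration of that failure mode (hypothetical code, not from the article): the same email check re-generated inline in two places, versus the single helper a human would factor out.

```python
# LLM-style output: the same intent, re-generated slightly differently each time.
def create_user(email):
    if email.count("@") != 1:
        raise ValueError("bad email")

def invite_user(email):
    # another incarnation of the same check, with different edge cases
    if "@" not in email or email.startswith("@"):
        raise ValueError("bad email")

# Library-style code: one function, so there is exactly one place to fix a bug.
def validate_email(email):
    if email.count("@") != 1 or email.startswith("@"):
        raise ValueError("bad email")
```

When the inline versions disagree, you debug each incarnation separately; with the shared helper, one fix covers every caller.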

[–] badgermurphy@lemmy.world 12 points 3 days ago (5 children)

I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.

Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

So, I guess my question is, "What specific LLM tools are generating profits or productivity at a level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest-in-history tech bubble really composed entirely of unfounded hype?"

[–] SparroHawc@lemmy.zip 22 points 3 days ago (1 children)

From what I've seen and heard, there are a few factors to this.

One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they're at the forefront of the Next Big Thing in order to keep bringing investment money in.

Another is that LLMs are uniquely suited to extending the honeymoon period.

The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.

The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.

It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.

Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.

[–] badgermurphy@lemmy.world 6 points 3 days ago

The upshot of this and a lot of the other replies I see here and elsewhere is that one big difference between this bubble and past ones is that so much of the global economy is now tied to its fate that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.

[–] JcbAzPx@lemmy.world 9 points 3 days ago

This struck upon one of the greatest wishes of all corporations. A way to get work without having to pay people for it.

[–] leastaction@lemmy.ca 8 points 3 days ago

AI is a financial scam. Basically companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.

[–] brunchyvirus@fedia.io 3 points 2 days ago

I think right now companies are competing until only one or two of them clearly own the majority of the market.

Afterwards they will devolve back into the same thing search engines are now: a cesspool of sponsored ads and links to useless SEO blogs.

They'll just become gatekeepers of information again, and the only ones that will be heard are the ones who pay a fee or game the system.

Maybe not, though; I'm usually pretty cynical when it comes to the incentives of businesses.

[–] drmoose@lemmy.world 17 points 3 days ago* (last edited 3 days ago) (9 children)

I code with LLMs every day as a senior developer, but agents are mostly a big lie. LLMs are great as an information index and for rubber-duck chats, which is already the incredible feature of the century, but agents are fundamentally bad. Even for Python they are intern-level bad. I was just trying the new Claude and, instead of using Python's pathlib.Path, it reinvented its own file system path utils - and pathlib is not even some new Python feature; it has been the de facto way to manage paths for at least 3 years now.

That being said, when prompted in great detail with exact instructions, agents can be useful, but that's not what's being sold here.

After so many iterations, it seems like a fundamental breakthrough in AI tech is still needed for agents, as diminishing returns are hitting hard now.
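
For reference, the pathlib idiom in question, next to the kind of hand-rolled reinvention described; the file path is made up, and the reinvented helper is a sketch, not Claude's actual output:

```python
from pathlib import Path

# The stdlib way: pathlib handles separators, stems and suffixes for you.
config = Path("/home/user/.myapp/config.toml")
assert config.stem == "config" and config.suffix == ".toml"

# The kind of string-splitting reinvention an agent may emit instead.
def get_suffix(path_str):
    name = path_str.rsplit("/", 1)[-1]
    return "." + name.rsplit(".", 1)[1] if "." in name else ""

assert get_suffix(str(config)) == config.suffix
```

The hand-rolled version happens to agree here, but it's the sort of code that breaks on edge cases pathlib already solved years ago.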

[–] arc99@lemmy.world 11 points 3 days ago* (last edited 3 days ago) (11 children)

I have never seen AI-generated code that is correct. Not once. I've certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get - most likely a massive bill and code that is horribly broken in some serious and subtle way.

[–] vrighter@discuss.tchncs.de 24 points 3 days ago (1 children)

"It's slowing you down. The solution to that is to use it in even more places!"

Wtf was up with that conclusion?

[–] favoredponcho@lemmy.zip 25 points 4 days ago (1 children)

Glad someone paid a bunch of worthless McKinsey consultants for what I could've told you myself.

[–] StefanT@lemmy.world 16 points 3 days ago (1 children)

It is not worthless. My understanding is that management only trusts sources that are expensive.

[–] sp3ctr4l@lemmy.dbzer0.com 32 points 4 days ago* (last edited 4 days ago)

Almost like it's a desperate bid to blow another stock/asset bubble to keep 'the economy' going, from the C-suite, who all knew the housing bubble was going to pop when this started - and now it is.

Funniest thing in the world to me is high- and mid-level execs and managers who believe their own internal and external marketing.

The smarter people in the room realize their propaganda is in fact propaganda, and are rolling their eyes internally that their henchmen are so stupid as to be true believers.

[–] RagingRobot@lemmy.world 18 points 3 days ago (4 children)

I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty OK game out of it. It's just a simple match-three bubble-pop type of thing, so nothing crazy, but I made a design and I am trying to implement it using mostly vibe coding.

That being said, the code is awful. So many bad choices, and spaghetti code. It also took longer than if I had written it myself.

So now I have a game that's kind of hard to modify, haha. I may try to set up some unit tests and have it refactor against those.

[–] simplejack@lemmy.world 31 points 4 days ago (1 children)

Might be there someday, but right now it’s basically a substitute for me googling some shit.

If I let it go ham, and code everything, it mutates into insanity in a very short period of time.

[–] degen@midwest.social 29 points 4 days ago (5 children)

I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

[–] andros_rex@lemmy.world 9 points 3 days ago (2 children)

So when the AI bubble burst, will there be coding jobs available to clean up the mess?

[–] Alaknar@sopuli.xyz 7 points 3 days ago

There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".

[–] donalonzo@lemmy.world 13 points 3 days ago (5 children)

LLMs work great for asking about tons of documentation and learning more about high-level concepts. They're a good search engine.

The code they produce has basically always disappointed me.

[–] MrScottyTay@sh.itjust.works 14 points 3 days ago

I use AI as an entryway to learning, or for finding the name of a technique I'm thinking of but can't remember, so I can then look elsewhere for proper documentation. I would never have it just blindly write code.

Sadly, search engines getting shittier has sort of forced me to use it to replace them.

It's also good for quickly parsing an error for anything obviously wrong.

[–] Deflated0ne@lemmy.world 8 points 3 days ago* (last edited 3 days ago) (1 children)

According to Deutsche Bank the AI bubble is ~~a~~ the pillar of our economy now.

So when it pops. I guess that's kinda apocalyptic.

Edit - strikethrough

[–] JackbyDev@programming.dev 7 points 3 days ago (3 children)

The people talking about AI coding the most at my job are architects and it drives me insane.
