For most large projects, writing the code is the easy part anyway.
Writing new code is easier than editing someone else's code, but editing a portion is still better than rewriting the entire program from start to end.
Then there are LLMs, which force you to edit the entire thing from start to end.
About that "net slowdown": I think it's real, but only in specific cases. If the user already knows how to write code well, an LLM might be only marginally useful, or even useless.
However, there are ways to make it useful, given the right circumstances. For example, if you can't be bothered to write a simple loop, you can use an LLM to do it. Give the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.
Can't be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, and tweak the details. Now, you didn't spend 15 minutes reading Stack Overflow posts that don't answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, including the tweaking and troubleshooting.
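To give a concrete (made-up) example of that twice-a-year syntax, strptime format codes are the classic case for me; the snippet below is just my own illustration:

```python
# The "syntax you use twice a year" case: strptime format codes.
# An LLM spits this out instantly, and the details are easy to verify by eye.
from datetime import datetime

ts = datetime.strptime("2024-03-07 14:30", "%Y-%m-%d %H:%M")
print(ts.isoformat())  # 2024-03-07T14:30:00
```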
If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes depending on the method they choose.
This is a sane way to use LLMs. Also, pick your poison: some bots are better than others at a specific task. It's kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It's a really useful tool for learning the syntax and libraries of unfamiliar languages.
At one extreme of the LLM debate there is insane hype, and at the other a great pessimism, but in the middle is a nice labour-saving educational tool.
Senior Management in much of Corporate America is like a kind of modern Nobility in which looking and sounding the part is more important than strong competence in the field. It's why buzzwords catch like wildfire.
Lmao, calling it nobility would imply we can't vote for our senior management, and that it often ends up being whoever "the king" wants or one of the king's children.
Wait...
I mean... at best it's a Stack Overflow/Google replacement.
There are some real perks to using AI to code: it helps a ton with templatable or repetitive code and with setting up tedious tasks. I hate doing that stuff by hand, so being able to pass it off to Copilot is great. But we already had tools that gave us 90% of the functionality Copilot adds there, so it's not super novel, and I've never had it successfully handle anything properly complicated (asking GPT-5 to write your dynamic SQL calls is inviting disaster, for example; it requires hours of reworking just to get close).
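To illustrate what I mean by dynamic SQL, here's my own minimal sketch with a made-up users table, not anything GPT-5 produced. The point is how many details (clause joining, parameter ordering, injection safety) there are to silently fumble:

```python
import sqlite3

def find_users(conn, name=None, min_age=None):
    """Build a WHERE clause from optional filters, binding values safely."""
    clauses, params = [], []
    if name is not None:
        clauses.append("name = ?")
        params.append(name)
    if min_age is not None:
        clauses.append("age >= ?")
        params.append(min_age)
    sql = "SELECT id, name, age FROM users"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, age INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'ada', 36)")
print(find_users(conn, min_age=30))  # [(1, 'ada', 36)]
```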
But we already had tools that gave us 90%
More reliable ones.
Deterministic ones
I found that it only does well if the task is already well covered by the usual sources. Ask for anything novel and it shits the bed.
This article sums up a Stanford study of AI and developer productivity. TL;DR: the net productivity boost is a modest 15-20%, or anywhere from negative to 10% in complex, brownfield codebases. This tracks with my own experience as a dev.
Are you trying to tell me that the people who want to sell me their universal panacea for all human endeavours were... lying...? Say it ain't so.
“No Duh,” say senior developers everywhere.
I'm so glad this was your first line in the post
AI coding is the stupidest thing I've seen since someone decided it was a good idea to measure the code by the amount of lines written.
More code is better, obviously! Why else would a website to see a restaurant menu be 80 MB? It's all that good, excellent code.
It did solve my impostor syndrome though. Turns out a bunch of people I saw to be my betters were faking it all along.
It's great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, the stuff you can do is limited. At least with the free stuff you can get without signing up.
Yeah, I use it for Home Assistant. It's amazingly powerful... and so incredibly dumb.
It will take my if/and statements and shrink them to a third of the length, while making them twice as robust... while missing that one of the arguments is entirely in the wrong place.
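Roughly like this contrived Python sketch; set_light and its arguments are hypothetical stand-ins, not real Home Assistant code:

```python
def set_light(brightness, transition):
    print(f"brightness={brightness}, transition={transition}")

# My verbose original:
def on_motion(lux, night_mode):
    if night_mode:
        if lux < 10:
            set_light(30, 2.0)    # dim, slow fade at night
    else:
        if lux < 10:
            set_light(255, 0.5)   # full brightness by day

# The "improved" one-third-the-length rewrite, with the
# brightness and transition arguments silently swapped:
def on_motion_condensed(lux, night_mode):
    if lux < 10:
        set_light(2.0 if night_mode else 0.5, 30 if night_mode else 255)

on_motion(5, True)            # brightness=30, transition=2.0
on_motion_condensed(5, True)  # brightness=2.0, transition=30 -- oops
```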
It regurgitates old code, it cannot come up with new stuff.
The trick is, most of what you write is basically old code in new wrapping. In most projects, I'd say the new and novel part is maybe 10% of the code. The rest is things like setting up db models, connecting them to base logic, setting up views and api endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn't hang, error handling, input data verification, basic unit tests, setting up settings and supporting reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before. All of it can be written (and verified) by a half-asleep competent coder.
The actual new and interesting part is gonna be a small, small percentage of the total code.
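To make that concrete, here's one chore off that list as a minimal Python sketch with made-up names: settings read from a file, overridden by env vars. Nothing novel, written a million times, exactly the kind of thing you can hand off and verify half-asleep:

```python
import json
import os
from pathlib import Path

def load_settings(path="settings.json"):
    """Read settings from a JSON file, then let env vars override them."""
    settings = {}
    p = Path(path)
    if p.exists():                       # file first...
        settings.update(json.loads(p.read_text()))
    for key in ("DB_URL", "LOG_LEVEL"):  # ...env vars win
        if key in os.environ:
            settings[key.lower()] = os.environ[key]
    return settings

print(load_settings())
```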
Imagine if we did "vibe city infrastructure". Just throw up a fucking suspension bridge and we'll hire some temps to come in later to find the bad welds and missing cables.
So is the profit it was foretold to generate; it's actually costing more money than it's generating.
The most immediately understandable example I heard of this was from a senior developer who pointed out that LLM generated code will build a different code block every time it has to do the same thing. So if that function fails, you have to look at multiple incarnations of the same function, rather than saying “oh, let’s fix that function in the library we built.”
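A contrived Python illustration of that point, with hypothetical helpers: the same email check generated inline two slightly different ways across a codebase, versus the one shared function a library gives you:

```python
# Incarnation one, generated in some request handler (naive, for illustration):
def handler_a(email):
    return "@" in email and "." in email.split("@")[-1]

# Incarnation two, generated elsewhere, subtly different
# (rejects "a@@b.c" while handler_a accepts it):
def handler_b(email):
    parts = email.split("@")
    return len(parts) == 2 and "." in parts[1]

# The library version: one incarnation, one place to fix a bug.
def is_valid_email(email):
    local, _, domain = email.partition("@")
    return bool(local) and "." in domain
```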
I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.
Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.
In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.
So, I guess my question is, "What specific LLM tools are generating profits or productivity at a level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest-in-history tech bubble really composed entirely of unfounded hype?"
From what I've seen and heard, there are a few factors to this.
One is that the tech industry right now is built on venture capital. To survive, companies need to act like they're at the forefront of the Next Big Thing to keep investment money coming in.
Another is that LLMs are uniquely suited to extending the honeymoon period.
The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.
The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.
It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.
Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.
The upshot of this and a lot of the other replies I see here and elsewhere seems to be that one big difference between this bubble and past ones is that so much of the global economy is now tied to its fate that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.
This struck upon one of the greatest wishes of all corporations: a way to get work without having to pay people for it.
AI is a financial scam. Basically companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.
I think right now companies are competing until there are only 1 or 2 that clearly own the majority of the market.
Afterwards they will devolve back into the same thing search engines are now. A cesspool of sponsored ads and links to useless SEO blogs.
They'll just become gatekeepers of information again, and the only ones that will be heard are those who pay a fee or game the system.
Maybe not though, I'm usually pretty cynical when it comes to what the incentives of businesses are.
I code with LLMs every day as a senior developer, but agents are mostly a big lie. LLMs are great as an information index and for rubber-duck chats, which is already an incredible feature of the century, but agents are fundamentally bad. Even for Python they are intern-level bad. I was just trying the new Claude, and instead of using Python's pathlib.Path it reinvented its own file system path utils. pathlib is not even some new Python feature; it has been the de facto way to manage paths for at least 3 years now.
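For anyone who hasn't hit this, the gap looks roughly like the sketch below (my own minimal example, not the actual Claude output):

```python
from pathlib import Path
import os

# Idiomatic: pathlib handles separators, home expansion, and suffixes.
config = Path.home() / ".config" / "myapp" / "settings.toml"
backup = config.with_suffix(".bak")

# Reinvented: brittle string slicing for the same thing
# (breaks on extensionless files, among other edge cases).
config_str = os.path.join(os.path.expanduser("~"), ".config", "myapp", "settings.toml")
backup_str = config_str[: config_str.rfind(".")] + ".bak"
```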
That being said, when prompted in great detail with exact instructions, agents can be useful, but that's not what's being sold here.
After so many iterations, it seems like a fundamental breakthrough in AI tech is still needed for agents, as diminishing returns are hitting hard now.
I have never seen AI-generated code that is correct. Not once. I've certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.
I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.
"It's slowing you down. The solution to that is to use it in even more places!"
Wtf was up with that conclusion?
Glad someone paid a bunch of worthless McKinsey consultants for what I could've told you myself.
It is not worthless. My understanding is that management only trusts sources that are expensive.
Almost like it's a desperate bid to blow another stock/asset bubble to keep "the economy" going, from a C-suite who all knew the housing bubble was going to pop when this all started, and now it is.
Funniest thing in the world to me is high and mid level execs and managers who believe their own internal and external marketing.
The smarter people in the room realize their propaganda is in fact propaganda, and are rolling their eyes internally that their henchmen are so stupid as to be true believers.
I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty ok game out of it. It's just a simple match three bubble pop type of thing so nothing crazy but I made a design and I am trying to implement it using mostly vibe coding.
That being said, the code is awful. So many bad choices and so much spaghetti code. It also took longer than if I had written it myself.
So now I have a game that's kind of hard to modify, haha. I may try to set up some unit tests and have it refactor using those.
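The game itself is JavaScript, but the shape of what I'm considering looks something like this Python sketch: characterization tests that pin the current behavior before letting the LLM refactor. match_at and the board layout are hypothetical stand-ins for the real code:

```python
def match_at(board, row, col):
    """True if the cell starts a horizontal run of three equal tiles."""
    return (
        col + 2 < len(board[row])
        and board[row][col] == board[row][col + 1] == board[row][col + 2]
    )

def test_horizontal_match():
    # Pin current behavior so a refactor can be checked against it.
    board = [["r", "r", "r", "g"]]
    assert match_at(board, 0, 0)
    assert not match_at(board, 0, 1)

test_horizontal_match()
```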
Might be there someday, but right now it’s basically a substitute for me googling some shit.
If I let it go ham, and code everything, it mutates into insanity in a very short period of time.
I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.
So when the AI bubble burst, will there be coding jobs available to clean up the mess?
There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".
LLMs work great for asking about tons of documentation and learning more about high-level concepts. It's a good search engine.
The code they produce has basically always disappointed me.
I use AI as an entryway to learning, or for finding the name of a technique I'm thinking of but can't remember, so I can then look elsewhere for proper documentation. I would never have it just blindly writing code.
Sadly, search engines getting shittier has sort of forced me to use it to replace them.
It's also good for quickly parsing an error for anything obviously wrong.
According to Deutsche Bank, the AI bubble is ~~a~~ the pillar of our economy now.
So when it pops... I guess that's kinda apocalyptic.
Edit - strikethrough
The people talking about AI coding the most at my job are architects and it drives me insane.