“Top-down mandates to use large language models are crazy,” one employee told Wired. “If the tool were good, we’d all just use it.”
Yep.
Management is often out of touch and full of shit
You wanna know who really bags on LLMs? Actual AI developers. I work with some, and you've never heard someone shit all over this garbage like someone who works with neural networks for a living.
There's this great rage blog post from 1.5 years ago by a data scientist
Missed this one, thanks
Management: "No, that doesn't work, because employees spend so much time doing the actual work that they lack the vision to know what's good for them. Luckily for them I am not distracted by actual work so I have the vision to save them by making them use AI."
If the tool were good, we’d all just use it.”
Eggs-mothafucking-zackly!!!
There are no daily pressure campaigns to convince you to use a laptop or a smartphone. The value of those is self-evident.
AI on the other hand... -_-
From reading all the comments from the community, it's amazing (yet not surprising) that all these managers have fallen for the marketing of these LLMs. LLMs have gotten people from all levels of society to just accept the marketing without ever considering the actual results for their own use cases. It's almost like the sycophantic nature of LLMs has completely blinded people to rational thought, just because the thing is shiny and spoke to them in a way no one has in years.

On the surface, LLMs are cool, no doubt, and they do have some uses. But past that, everyone needs to accept their limitations. LLMs by nature cannot operate the same way a human brain does. AGI is such a long shot because of this, and marketing LLMs as AGI is a scam. How can we attempt to recreate the human brain as AGI when we are not even close to mapping out how our own brains work in a way we could translate into code, let alone the simpler brains of the animal kingdom?
I agree with almost all of your comment. The only part I disagree on is:
How can we attempt to recreate the human brain as AGI when we are not even close to mapping out how our own brains work in a way we could translate into code, let alone the simpler brains of the animal kingdom?
An implementation of AGI does not need to be inspired by the human brain, or any existing organic brain. Nothing tells us organic brains are the optimal way to develop intelligence. In fact, I'd argue they're not.
That being said, it doesn't change the conclusion: We are nowhere near AGI, and LLMs being marketed as such is absolutely a scam.
I don't think LLMs will become AGI, but... planes don't fly by flapping their wings. We don't necessarily need to know how animal brains work to achieve AGI, and it doesn't necessarily have to work anything like animal brains. It's quite possible if/when AGI is achieved, it will be completely alien.
Aircraft wings operate on pretty much the same principle as bird wings do. We just used a technology we had already developed (fans, essentially) to create the forward movement necessary to create the airflow over the wings for lift. We know how to do it the bird way too, but restrictions in material science at scale make the fan method far easier and less error prone.
I can’t wait until billionaires realize how worthless they actually are without people doing everything for them
They will never realize that; they will naturally blame any failures on others. They truly believe they are better than everyone else, that their superior ability led them to invest in a company that increased in value enough to make them filthy rich.
Surrounded by yes-men and yes-women who agree with everything they say and tell them what a genius they are. Of course any ill outcome isn't their fault.
"All my successes are thanks to my superior intellect and skill! All my failures are the fault of bad serfs who didn't follow my vision!" - Every billionaire
When you think about it, it's not too different from how some people treat the current crop of AI, so it makes sense that they're so hypnotized by the promises.
We can't wait for them to realize this themselves. We need to demonstrate it by actively creating a society that excludes them.
Eh, as the world goes to shit there will always be desperate people willing to work for them, probably cheaper than before even with the AI failures, so they wouldn't care.
Might be a minute. The brain damage that lets them think they've "earned" those billions kinda hides the work of others. Especially the poors.
At work today we had a little presentation about Claude Cowork. And I learned someone used it to write a C (maybe C++?) compiler in Rust in two weeks at a cost of $20k and it passed 99% of whatever hell test suite they use for evaluating compilers. And I had a few thoughts.
I think this is a cool thing in the abstract. But in reality, they cherry picked the best possible use case in the world and anyone expecting their custom project is going to go like this will be lighting huge piles of money on fire.
It's even simpler than that: using an LLM to write a C compiler is the same as downloading an existing open source C compiler from the Internet, but with extra steps. The LLM was fed that code and is just reassembling it, with extra bugs - plagiarism hidden behind an automated text-parrot interface.
A human can beat the LLM at that by simply finding and downloading an implementation of that more-than-solved problem from the Internet, which at worst will take maybe an hour.
The LLM can "solve" simple and well-defined problems because it's basically plagiarizing existing code that solves those problems.
Hey, so I started this comment to disagree with you and correct some common misunderstandings that I've been fighting against for years. Instead, as I was formulating my response, I realized you're substantially right and I've been wrong - or at least my thinking was incomplete. I figured I'd mention it because the common perception is that arguing with strangers on the internet never accomplishes anything.
LLMs are not fundamentally the plagiarism machines everyone claims they are. If a model reproduces any substantial text verbatim, it's because it was overtrained on too small a dataset, and the solution is, somewhat paradoxically, to feed it more relevant text. That has been the crux of my argument for years.

That being said, Anthropic's and OpenAI's products aren't just LLMs. They are backed by RAG pipelines that insert verbatim text into the context when it's relevant to the task at hand. And that fact had been escaping my consideration until now. Thank you.
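(To make the mechanism concrete, here's a toy sketch of what such a retrieval pipeline does. Everything here is a hypothetical illustration, not any vendor's actual code, and the crude keyword-overlap scorer stands in for the embedding similarity real pipelines use. The point is just that retrieved snippets land in the prompt *verbatim*.)

```python
# Toy RAG sketch: rank stored snippets against the query,
# then paste the best ones verbatim into the prompt context.
def score(query, doc):
    # crude relevance: count shared lowercase words
    # (real pipelines use embedding similarity instead)
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query, corpus, top_k=2):
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    context = "\n---\n".join(ranked[:top_k])  # verbatim text, not a paraphrase
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "A lexer splits source text into tokens.",
    "A parser builds a syntax tree from tokens.",
    "Bananas are rich in potassium.",
]
print(build_prompt("how does a parser turn tokens into a tree?", corpus))
```

Whatever the base model did or didn't memorize, the context window ends up holding exact copies of the retrieved text.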
Agree with all points. Additionally, compilers are incredibly well specified via ISO standards etc., and have multiple open source codebases available, e.g. GCC, which is available in multiple builds and implementations for different versions of C and C++, and DQNEO/cc.go.
So there are many fully-functional and complete sources that Claude Cowork would have pulled routines and code from.
The vibe-coded compiler is likely unmaintainable, so it can't be updated when the spec changes, even assuming it did work and was real. So you'd have to redo the entire thing. It's silly.
https://harshanu.space/en/tech/ccc-vs-gcc/ has a good overview of how bad it really is.
A C compiler in two weeks is a difficult, but doable, grad school class project (especially if you use lex and yacc instead of hand-coding the parser). And I guarantee 80 hours of grad student time costs less than $20k.
Frankly, I'm not impressed with the presentation in your anecdote at all.
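(For a flavor of the work involved: below is a toy hand-rolled recursive-descent evaluator for arithmetic, the kind of parsing a compiler course has you scale up to a full grammar - just an illustrative sketch, not anyone's actual coursework, and lex/yacc would generate the tokenizer and parser skeleton for you.)

```python
# Toy recursive-descent evaluator for +, * and parentheses.
# Grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
#          factor := NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[()+*]", src)

def parse_expr(toks):
    val = parse_term(toks)
    while toks and toks[0] == "+":
        toks.pop(0)
        val += parse_term(toks)
    return val

def parse_term(toks):
    val = parse_factor(toks)
    while toks and toks[0] == "*":
        toks.pop(0)
        val *= parse_factor(toks)
    return val

def parse_factor(toks):
    tok = toks.pop(0)
    if tok == "(":
        val = parse_expr(toks)
        toks.pop(0)  # consume the closing ')'
        return val
    return int(tok)

print(parse_expr(tokenize("2+3*(4+1)")))  # 17
```

Tedious, mechanical, well-trodden work - which is exactly why it makes for a doable class project and a flattering demo.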
Also, software development is already the best possible use case for LLMs: you need to build something abiding by a set of rules (as in a literal language, lmao), and you can immediately test if it works.
In e.g. a legal use case instead, you can jerk off to the confident-sounding text you generated, then get chewed out by the judge for hallucinated references. Even if you have a set of rules (laws) as guardrails, you cannot immediately test what the AI generated - and if an expert needs to read and check everything in detail anyway, why not just do it themselves in the same amount of time?
We can go on to business, where the rules the AI can work inside are much looser, or healthcare, where the cost of failure is extremely high. And we are not even talking about responsibility and official accountability for decisions.

I just don't think what is claimed for AI is there. Maybe it will be, but I don't see it as an organic continuation of the path we're on. We might see another dot-com-style bust when investors realize this - LLMs will be here to stay (same as the internet), but they will not become AGI.
99% pass rate? Maybe that’s super impressive because it’s a stress test, but if 1% of my code fails to compile I think I’d be in deep shit.
Also - one of the main arguments of vibe coding advocates is that you just need to check the result several times and tell the AI assistant what needs fixing. Isn't a compiler test suite ideal for exactly that workflow? Why couldn't they just feed the test failures back to the model and tell it to fix them, iterating again and again until it passes 100%?
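(Sketched out, that workflow is just a loop. `run_suite` and `ask_model_to_fix` below are hypothetical stand-ins for a real test harness and a real model API call - nothing here is anyone's actual tooling.)

```python
# Hypothetical "feed the failures back" loop that vibe-coding
# advocates describe: run the suite, hand failing cases to the
# model, repeat until green or you give up.
def iterate_until_green(code, run_suite, ask_model_to_fix, max_rounds=10):
    for _ in range(max_rounds):
        failures = run_suite(code)
        if not failures:
            return code, True       # suite is green, done
        code = ask_model_to_fix(code, failures)
    return code, False              # still failing after max_rounds

# Fake harness and "model" so the loop can actually run:
# the stand-in model "fixes" exactly one failing test per round.
def fake_suite(code):
    return [t for t in ("t1", "t2", "t3") if t not in code]

def fake_model(code, failures):
    return code + " " + failures[0]

final, green = iterate_until_green("", fake_suite, fake_model)
print(green)  # True
```

The catch, of course, is that unlike the fake model above, a real one isn't guaranteed to converge - it can keep breaking old tests while fixing new ones.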
I wanna make sure I got this right. They used $20,000 in fees in 2 weeks to make a compiler? Also, to what end? Like what's the expected ROI on that?
Well it's Anthropic, creators of Claude. It's a way to show off and convince people AI can do it. $20k is what it would cost you or me, but it's just free for them.
I don't even hate AI but it's kinda sickening the way they overstate the capabilities. But let me tell you how excited the top leadership at my company is about this...
I had a meeting with my boss today about my AI usage. I said I tried using Claude 4.5, and I was ultimately unimpressed with the results, the code was heavy and inflexible. He assured me Claude 4.6 would solve that problem. I pointed out that I am already writing software faster than the rest of the team can review because we are short staffed. He suggested I use Claude to review my MRs.
One big problem with management is their inability to listen. Folks say shit over and over but management seems deaf because we're not people to be listened to. We're the help. And management acts like they know better.
If you were so smart you'd have wads of cash like them. They got where they are through sheer grit and bootstraps and a paltry $50 million from their family.
This is a major issue with capitalism. It is a massively inefficient way to organize society. The people with the most money do not necessarily make good decisions. They usually make selfish decisions.
This has been my life for the last nine months. I'm thinking of getting out of software development altogether for fear that no other place will be any different regarding AI.
next time, tell your boss that Claude should replace him, not you.
It's not like MIT and the Harvard Business Review have published studies showing that AI is actually best suited to replacing executives and management in order to flatten organizations. But unfortunately, management and executives make the decisions on who AI replaces, and they don't want to be replaced. Hell, at the company I'm at right now they've been axing low-level workers and bringing on or promoting the ladder climbers (read: AI sycophants who do the least work) to manager or department head roles, saying that us grunts can "10x" to fill the gaps, and that all we need is good and creative leadership to direct our AI use. I could go off for hours on the things this business is doing to shoot itself in the foot, even without mentioning AI.
Ralph Wiggum loop that shit
Numbers go up, Claude won’t bother you 👍🏻
Man, corporate layoffs kill productivity completely for me.
Once you do layoffs, >50% of the job becomes performative bullshit to show you're worth keeping, instead of building things the company actually needs to function and compete.
And the layoffs are random with a side helping of execs saving the people they have face time with.
Yeah, the sellout.
Who?
The original creator of Twitter, and now creator of Bluesky and whatever this thing that's going off the rails is.
Basically another billionaire living in his own little bubble and huffing his own farts too much.
He also had a lot to do with Nostr, early on.
Jack Dorsey has endorsed and financially supported the development of Nostr, donating approximately $250,000 worth of Bitcoin to the project's developers in 2023, as well as a $10 million cash donation to a Nostr development collective in 2025.
he left Bluesky around 2 years ago
That must be why they are doing okay, haha.
Pretty much lol. If I remember correctly his reason for leaving was them adding moderation tools
Uhhh, Block is the parent company of Square (formerly known as Square Up). This is actually a huge company, not some little side thing.
Yeah, not sure why they think Block is "new" - they just renamed because they have a bunch of businesses beyond Square now.
They renamed it to try to ride the blockchain hype. 🙄