No.
bignose
So, it was gambling, then.
Revenue going up, hiring going down, layoffs every quarter and a big push for everyone to use AI. But at the same time basically no real success story from all this increased AI usage. Probably just me, but I just don’t get it.
No, you've got it: revenue increases in the short term when personnel costs are cut through layoffs and hiring freezes.
The story told (“workers must return to the office to sit on teleconferences all day”, prompting more of them to quit, or “your job can be done by robots”, or whatever) only needs to make enough sense that the stockholders are satisfied the executives have a sane explanation for the sudden loss of workers. Otherwise it might look like the executives are panicking!
Stop trying to trap people inside “the app”. If “the app” is designed to keep people inside and not visit other sites, that's a reader-hostile pattern and a publisher-hostile pattern.
The founding model of Reddit is “the front page of the internet”. That model requires that the rest of the internet is still there, not that the rest of the internet gets sucked into Reddit.
10×s developers who could produce 0 code without it
Let me see; ten times nothin', add nothin', carry the nothin'…
The spec is so complex that it’s not even possible to know which regex to use
Yes. Almost like a regex is not the correct tool to use, and instead they should use a well-tested library function to validate email addresses.
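To make that concrete, here's a minimal sketch (the regex and the sample addresses are illustrative examples, not from the original discussion) of how a plausible-looking email regex quietly rejects addresses that are valid under RFC 5321/5322:

```python
import re

# A typical “simple” email regex of the kind people reach for.
# It looks reasonable, but it encodes only a tiny subset of the spec.
NAIVE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

# All of these are syntactically valid per RFC 5321/5322,
# yet the naive regex rejects every one of them:
valid_but_rejected = [
    '"john smith"@example.com',  # quoted local part containing a space
    "user@[192.168.1.1]",        # address-literal domain
    "user@localhost",            # domain with no dot
]

for addr in valid_but_rejected:
    print(addr, "->", bool(NAIVE.match(addr)))  # False for each
```

A well-tested library validator handles these cases (and the many RFC corner cases nobody remembers) so you don't have to maintain the regex yourself.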
Magit is what allowed me to finally commit to switching to Git full time.
It's such an excellent front-end for Git that I've seen numerous workmates learn Emacs just to use Magit.
Now we are beginning to see agents: systems that aspire to greater autonomy and can work in “teams” or use tools to accomplish complex tasks.
Given that an “agent” is something that can be assigned work and carry it out autonomously: no, we are not yet seeing any agents. Every one of these bots requires close attention from a human to weed out the huge quantity of mistakes it generates. That's not an “agent” by any useful definition:
Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks.
Right. So it's a bot which even the vendor recommends you not leave to work autonomously. Not an agent.
In other news: “self driving” that requires continuous human monitoring and intervention, by multiple humans per vehicle, is not self driving.
Just because the hype marketing of tech corporations bleats a term into the media does not mean they've got anything that actually does what they say it does.
No kidding there’s a bubble. When it pops - the tech’s not going anywhere. […] No need for a wall of text, over and over and over.
He's making the point that the entire tech economy is dominated by this bubble, and that gargantuan amounts of money are tied up in it with no hope of yielding any useful or profitable business.
Yet the mainstream press continues to coddle the egos of Musk and Altman and Zuckerberg and Nadella and Bezos and Pichai, as though their business use of this technology is worth the trillions of investment value they have attracted. It's a fantasy. While that continues, the message is not getting through and the bubble continues to inflate.
For as long as the bubble goes on inflating, yes, there is an urgent need to keep repeating that message until the mainstream tech and financial press starts accepting it as reality (because that's what investors read), so that people stop hooking our economies into that bubble.
Except worse: Confluence tries insanely hard to prevent anyone actually getting at the document source code. So you are expected to use the godawful interactive web editor to make any changes.
Despite their great value to society, open source projects are frequently understaffed and underresourced. That’s why GitHub has been advocating for a stronger focus on supporting, rather than regulating, open source projects.
What nice sentiments. Perhaps you, GitHub, could start by insisting that Microsoft cease the un-attributed, non-consensual shovelling of open-source software into their LLM training maw. And turn off the LLM that they're attempting to unilaterally sell based on all that uncompensated labour.
Or does your platitude of “supporting open source projects” fall short of actually respecting what we want and need?
For a system that has such high cost (to the environment, to the vendor, to the end user in the form of subscription), that's a damningly low level of reliability.
If my traditional code editor's code completion feature is even 0.001% unreliable – say it emits a name that just isn't in my code base – that feature is broken and needs to be fixed. If I have to start doubting whether the feature works every time I use it, that's not an acceptable tool to rely on.
Why would we accept far worse reliability in a tool that consumes gargantuan amounts of power, water, and political effort, and comes with a high subscription fee?
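The arithmetic is worth spelling out. As a back-of-the-envelope sketch (the daily completion count and the LLM error rate are assumptions for illustration, not measurements):

```python
# Back-of-the-envelope failure rates. The 0.001% threshold is from the
# argument above; the usage volume and LLM error rate are assumed.
completions_per_day = 200      # assumed: a typical day of autocomplete use
editor_error_rate = 0.00001    # 0.001%, the “already broken” threshold
llm_error_rate = 0.10          # assumed: a conservative LLM mistake rate

editor_bad_per_day = completions_per_day * editor_error_rate
llm_bad_per_day = completions_per_day * llm_error_rate

print(f"traditional editor: {editor_bad_per_day:.3f} bad completions/day")
print(f"LLM assistant:      {llm_bad_per_day:.0f} bad completions/day")
```

Under these assumptions, the tool we'd call broken fails roughly once every 500 working days, while the expensive one fails about 20 times a day.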