Ed Zitron has a pretty good analysis of this, pointing out that Nvidia is “investing” in AI cloud providers who then go and buy Nvidia chips. So they’re essentially inventing their own biggest customers.
And then these providers take out big loans using the chips as collateral. So if they fail, the chips go to the lenders, and there’s really nothing left to come back to Nvidia.
And none of these providers are even close to profitable. So they will probably fail.
And Nvidia makes up 7% of the S&P 500 at this point. So if they lose their biggest customers, and their investments, and the marketing hype they’re capitalizing on by propping up these circular chip sales… there’s a potentially large fallout.
That’s not even counting all the other big tech companies that have placed huge bets on AI, and therefore on Nvidia. The Magnificent 7 all together are more than 30% of the S&P 500. And if Nvidia falls, they probably all fall too.
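Rough arithmetic on what those index weights imply, treating the weights as fixed (a real cap-weighted index rebalances, so this is only first-order) and the drawdowns as hypothetical scenarios:

```python
# Toy arithmetic: direct index impact of a drawdown in heavily weighted names.
# Weights are the figures quoted above; drawdown scenarios are hypothetical.
NVDA_WEIGHT = 0.07   # ~7% of the S&P 500
MAG7_WEIGHT = 0.30   # Magnificent 7 combined, >30%

for drawdown in (0.3, 0.5, 0.7):
    nvda_only = NVDA_WEIGHT * drawdown
    mag7_all = MAG7_WEIGHT * drawdown
    print(f"{drawdown:.0%} drop: NVDA alone drags the index {nvda_only:.1%}, "
          f"the whole Mag 7 drags it {mag7_all:.1%}")
```

Even a 50% fall in the Mag 7 alone knocks roughly 15% off the index before any second-order contagion.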
I wouldn't bet too hard against NVIDIA. Sure, their margins/extortion pricing can come down with fewer customers, but LLMs are here to stay. What's nearly impossible is the datacenters and the mediocre models (OpenAI) getting a good ROI on what they buy from NVIDIA. US models tend to all concentrate on megalithic US military Skynet ambitions, and every release is a step towards Skynet. Open models, mostly from China, tend to be smaller (in GPU/memory requirements) but have better quality/cost ratios, including use on accessible (non-datacenter) hardware.
It's the datacenter GPU customers, and the mediocre software/LLM companies renting or owning them, that are the huge risk. At the same time, US empire bankster allies will keep investing for Skynet.
LLMs don't benefit from economies of scale. Usually, each successive generation of a technology is cheaper to produce, or costs the same but delivers much greater efficiency/power/efficacy. For LLMs, each successive generation costs much more to produce for smaller and smaller benefits.
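A minimal sketch of that diminishing-returns shape, assuming a Chinchilla-style power law where loss falls as compute grows; the exponent here is an illustrative assumption, not a measured fit:

```python
# Illustrative diminishing returns: each generation spends ~10x the compute,
# but a power-law loss curve (loss ~ C^-0.05; the exponent is an assumption)
# yields smaller and smaller absolute improvements per generation.
prev_loss = None
for gen, compute in enumerate((1e23, 1e24, 1e25, 1e26), start=1):
    loss = compute ** -0.05  # hypothetical scaling-law fit
    if prev_loss is not None:
        print(f"gen {gen}: 10x the compute, loss improved by {prev_loss - loss:.4f}")
    prev_loss = loss
```

Each generation costs an order of magnitude more while the absolute improvement shrinks — the inverse of normal economies of scale.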
For training, compute and memory scale does matter, including large networked clusters of GPUs. But no money is made in training. In inference (where money is actually made/charged, or benefits obtained), memory matters more, though compute is still extremely important. At Skynet level, models over 512GB are used. But at consumer level, and every level below, smaller models are much faster. 16GB, 24GB, 32GB, 96GB, 128GB, and 512GB are each somewhat approachable thresholds, and each of them is some version of scale.
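Those memory tiers map roughly onto model sizes via a simple footprint estimate: parameters × bytes per parameter, plus overhead for KV cache and activations. A rough sketch; the 4-bit quantization and ~20% overhead figures are assumptions:

```python
# Rough VRAM footprint: params * bytes/param, plus ~20% overhead (assumed)
# for KV cache, activations, and runtime buffers.
def fits(params_b: float, bits: int, vram_gb: float, overhead: float = 1.2) -> bool:
    needed_gb = params_b * (bits / 8) * overhead
    return needed_gb <= vram_gb

tiers_gb = (16, 24, 32, 96, 128, 512)   # the thresholds listed above
for params_b in (8, 32, 70, 235, 670):  # roughly common open-model sizes
    for vram in tiers_gb:
        if fits(params_b, bits=4, vram_gb=vram):  # 4-bit quantized weights
            print(f"{params_b}B @ 4-bit fits in the {vram}GB tier")
            break
```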
As for the GPU makers' roadmaps (sticking to NVIDIA for simplicity): Rubin will have 5x the bandwidth, double the memory, and at least double the compute, for what is likely 2x the cost and less than 2x the power. A big issue for bubble status is the fairly sharp depreciation this implies for existing leading-edge devices. Bigger memory on a single device is always a faster overall solution than networking devices together.
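Taking those roadmap multipliers at face value (they are the claim above, not verified specs), the perf-per-dollar arithmetic shows why existing hardware would depreciate sharply:

```python
# Relative value of the next generation vs. current, using the multipliers
# claimed above (roadmap claims, not verified specs). Inference is usually
# memory-bandwidth-bound, so bandwidth per dollar is the interesting ratio.
bandwidth_x, memory_x, compute_x, cost_x = 5.0, 2.0, 2.0, 2.0

print(f"bandwidth per dollar: {bandwidth_x / cost_x:.1f}x")  # 2.5x
print(f"memory per dollar:    {memory_x / cost_x:.1f}x")     # 1.0x
print(f"compute per dollar:   {compute_x / cost_x:.1f}x")    # 1.0x
# If the new part delivers 2.5x bandwidth-bound throughput per dollar,
# an old part only matches perf/$ if its price falls to about:
print(f"implied old-gen price: {cost_x / bandwidth_x:.0%} of new-equivalent")
```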
Models with more parameters train slower on the same dataset than models with fewer parameters. Skynet ambitions do involve ever-larger parameter counts, and sure, more training data keeps being added rather than any being removed. There is innovation across generations on the smaller/efficiency side too, though the Skynet funding goes to the former.
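That follows from the standard training-cost rule of thumb, FLOPs ≈ 6 × parameters × tokens (assumed here): on a fixed dataset, training cost scales roughly linearly with parameter count.

```python
# Standard rule of thumb (assumed): training FLOPs ~ 6 * N (params) * D (tokens).
# Fix the dataset and vary only model size; on the same hardware, wall-clock
# training time scales roughly linearly with N.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

tokens = 15e12                    # fixed dataset, e.g. ~15T tokens
small = train_flops(8e9, tokens)  # 8B-parameter model
large = train_flops(70e9, tokens) # 70B-parameter model
print(f"70B takes {large / small:.1f}x the FLOPs of 8B on the same data")
```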
None of the AI providers actually make a profit on their searches, and the marginal cost per user and per search isn't dropping.
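A toy per-query cost model makes the unit-economics point concrete; every number here is a hypothetical placeholder, not any real provider's figure:

```python
# Toy inference unit economics; all inputs are hypothetical placeholders.
gpu_hour_cost = 3.00        # $/GPU-hour, assumed rental rate
queries_per_gpu_hour = 600  # assumed sustained throughput per GPU
cost_per_query = gpu_hour_cost / queries_per_gpu_hour

print(f"marginal cost per query: ${cost_per_query:.4f}")
# On a free tier, revenue per query is ~zero, so every query is a loss.
# A flat $20/month subscription only breaks even below this usage:
print(f"break-even queries/month at $20: {20 / cost_per_query:,.0f}")
```

Unless that marginal cost falls, heavier usage makes the losses bigger, not smaller.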