Techbros claim that pouring ever more environmental and economic resources into further scaling of their statistical models will eventually yield Artificial General Intelligence. Sam Altman advertised "PhD-level thinking" for GPT-5 (released in late summer).
In the screenshot you can see a modern, reasoning-enhanced model failing to realize that my question only superficially resembles a riddle - what PhD would make such a fundamental mistake? It seems "pure scaling" just isn't enough; instead of pressing on, they should return to the drawing board and do more research.
Why do the creators of LLMs continue to pour resources into a scaling hype that yields ever diminishing returns? Are they stupid? Or just greedy?
Whatever it is, this doubling down on an obviously wrong decision is not only irresponsible, it's irrational - which is deeply ironic, because these are the very people expected to build a rational machine.
Edit: content of the image
Question:
A goat, who is dressed up as a farmer, is allergic to cabbage, but is wolfing down other vegetables, before crossing a river. What is the minimum number of trips needed?
Response:
7 trips.
# Puzzle Interpretation
This adapts the classic wolf-goat-cabbage river crossing puzzle, where the goat (farmer costume irrelevant) can't be left alone with cabbage (due to allergy, preventing eating) but can with other vegetables (no issue). The goat crosses while "wolfing down" safe veggies. Boat carries goat plus one item (or alone); unsafe pairs can't be unsupervised
(The question isn't something I came up with myself; I just reproduced the experiments of actual scientists.)
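For what it's worth, the model's "7" is easy to trace: it's the shortest solution to the *classic* wolf-goat-cabbage puzzle, which it pattern-matched onto my question. A minimal breadth-first search over the standard puzzle (my own sketch, not anything the model produced) confirms that the real puzzle takes 7 one-way crossings - whereas the question as actually posed needs just one trip, since the goat itself is the one crossing.

```python
from collections import deque

# BFS over the classic wolf-goat-cabbage puzzle, to show where "7" comes from.
# A state is (farmer, wolf, goat, cabbage); 0 = start bank, 1 = far bank.

def unsafe(state):
    farmer, wolf, goat, cabbage = state
    # goat eats cabbage, wolf eats goat - unless the farmer is with them
    return (goat == cabbage != farmer) or (wolf == goat != farmer)

def min_crossings():
    start, goal = (0, 0, 0, 0), (1, 1, 1, 1)
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        state, trips = queue.popleft()
        if state == goal:
            return trips
        farmer = state[0]
        # the farmer crosses alone (i == 0) or takes one item from his bank
        for i in range(4):
            if i != 0 and state[i] != farmer:
                continue  # item is on the other bank, can't be taken
            nxt = list(state)
            nxt[0] = 1 - farmer
            if i != 0:
                nxt[i] = 1 - state[i]
            nxt = tuple(nxt)
            if nxt not in seen and not unsafe(nxt):
                seen.add(nxt)
                queue.append((nxt, trips + 1))

print(min_crossings())  # 7
```

So the model isn't computing anything wrong within the classic puzzle - it's answering a different question than the one it was asked, which is exactly the failure mode I'm complaining about.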
This highlights one of my biggest irks about LLMs: their utter inability to detect nonsense questions, tell you that you're wrong, or follow up to clear up misunderstandings.
This can become dangerous when you're researching something and try to rely on an LLM's answers. If you misunderstand something while researching and ask the LLM a question whose premise is false or based on misinformation, it will just spit out a wrong answer without giving any indication that something is off. Extremely infuriating - and the LLM will insist on its wrong answers even when you try to correct it afterwards.