We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
(www.technologyreview.com)
Each AI winter marks the end of one generation of AI and the beginning of the next. We are now seeing more progress, and as long as there is no technical limit, it seems that progress will not be interrupted.
What progress are we seeing?
In what area of AI? Image generation is improving in leaps and bounds. Video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is seeing generational improvements almost every year. AI chatbots are getting much, much smarter seemingly every month.
What’s one main application of AI that hasn’t improved?
Which chatbots are getting smarter?
I know AI has potential, but LLMs specifically (which are what most people mean when they talk about AI) seem to have hit their technological limits.
Copilot, ChatGPT, pretty much all of them.
Smarter how? Synthetic benchmarks?
Because I've heard the opposite from users and bloggers.
So you want me to provide some evidence that it's getting smarter, but you can't provide any that it's getting worse other than anecdotal evidence?
What evidence would you accept?
Any proof that we have moved past the current architecture.
What does "architecture" mean in this scenario?
Any significant shift in the model, or a complete restructuring of the approach.
As it is, it won't grow anywhere.
So you’ve got access to all this stuff's source code and know what has and hasn’t changed with every update?
Advanced Reasoning models came out like 4 months ago lol
Advanced reasoning? Having LLM talk to itself?
Yes, which has measurably improved some tasks. ~20% improvement on programming tasks, as a practical example. It has also improved tool use and agentic tasks, allowing the LLM to plan ahead and adjust its initial approach based on later parts of the task.
Having the LLM talk through the task allows it to improve or fix bad decisions made early, based on new realizations at later stages. Sort of like when a human thinks through how to do something.
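To make that concrete, here's a minimal sketch of the kind of self-revision loop being described, assuming a hypothetical llm() completion function (not any particular vendor's API): the model drafts a chain of thought, critiques its own early steps, and revises before answering.

```python
# Minimal sketch of a reasoning loop: draft a chain of thought,
# then let the model critique and revise its own early decisions.
# `llm` is a hypothetical text-completion function, not a real API.

def llm(prompt: str) -> str:
    """Placeholder for any text-completion backend."""
    raise NotImplementedError

def solve_with_reasoning(task: str, max_revisions: int = 2) -> str:
    # First pass: ask the model to think step by step before answering.
    plan = llm(f"Think step by step about how to solve:\n{task}")

    for _ in range(max_revisions):
        # Self-critique: the model re-reads its own plan and flags
        # early decisions that later steps contradict.
        critique = llm(
            f"Task: {task}\nPlan:\n{plan}\n"
            "List any mistakes or early assumptions that later steps "
            "contradict. Reply 'OK' if there are none."
        )
        if critique.strip() == "OK":
            break
        # Revision: rewrite the plan, fixing the flagged steps.
        plan = llm(
            f"Task: {task}\nPlan:\n{plan}\nCritique:\n{critique}\n"
            "Rewrite the plan, fixing the flagged steps."
        )

    # Final answer conditioned on the (possibly revised) reasoning.
    return llm(f"Task: {task}\nReasoning:\n{plan}\nGive the final answer only.")
```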
Lul yes but no. They are clearly better at many types of tasks, though.
For example? Citations?
Pretty sure these "tasks" are meaningless metrics made up by pseudo-scientific grifters.
AlphaFold 3, which can help predict the structure of some proteins. It has limitations, though: it cannot be used in all cases, only in those it can handle reliably.
Small bits of code, language-related tasks, basic context understanding. Not metrics I have literally measured, just improvements I've noticed compared to non-reasoning models in my homelab testing. 🤷♂️