This is a way better explanation of Dynamic Caching than the vague mumbling most techtubers have offered without any clear understanding of it.
It said "on-chip memory" right on their slide, which seemed to be lost on most people: that's SRAM, not DRAM, i.e. the layers of GPU caches/tile memory, and fitting more into them as it's actually needed, which maximizes GPU hit rates and utilization. Most people explained it in some hand-wavy way about Unified Memory and using less RAM.
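The way I understand it (and this is my own hedged illustration, not anything from the keynote), the practical win is occupancy: with static allocation, on-chip memory is reserved per thread for the worst-case path of a shader, even when most threads take a cheap path, so a kernel like the made-up Metal one below would waste registers/cache. With Dynamic Caching the hardware hands out that on-chip memory as it's needed, so more work fits and hit rates go up. All names and constants here are invented for illustration.

```cpp
// Hypothetical Metal compute kernel (Metal Shading Language, a C++ dialect).
// Illustrative only: shows a shader whose two branches need very different
// amounts of on-chip storage, which is where dynamic allocation helps.
#include <metal_stdlib>
using namespace metal;

kernel void shade(device const float* input  [[buffer(0)]],
                  device float*       output [[buffer(1)]],
                  uint                gid    [[thread_position_in_grid]])
{
    float x = input[gid];

    if (x > 0.5f) {
        // "Heavy" path: many live temporaries -> high register/cache pressure.
        float acc = 0.0f;
        for (int i = 0; i < 64; ++i) {
            acc = fma(acc, 1.0001f, x * float(i));
        }
        output[gid] = acc;
    } else {
        // "Light" path: needs almost no on-chip storage.
        output[gid] = x * 2.0f;
    }
}
```

Under a static scheme every thread is charged for the heavy branch; the claim, as I read it, is that Dynamic Caching only charges for what each thread actually uses.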
I feel like we've heard this for the last couple of releases too, a renewed focus on quality, but then the .0 and even .1 releases end up about as buggy as they've been for a while.
Not falling further behind on deploying generative AI makes sense, but after that I'd still really love to see what a full Snow Leopard year across all of their platforms could do: a complete focus on speed, bug fixes, and driving the little things that aren't quite bugs, the annoying visual hitches, lags, and moments where you can't tell whether something is working, down as near to zero as humanly possible.