The meme doesn't make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles reading directly from RAM and not having a cache at all...
I'm hosting a Matrix server with a TURN server and it's fairly easy to self-host. This sounds exaggerated.
One argument that hasn't been discussed here is the fact that Bethesda has been owned by Microsoft since 2021. It's likely that Microsoft had been planning to acquire the company for several years prior to the official purchase too.
This will make that device so much more usable.
On the contrary, the AUR seems to have a lot more binary packages than source packages in my experience. Tons of packages also have a "-bin" version (e.g. yay).
Your "unsupported" comment is a bit weird. It's the AUR user community that supports Arch and makes AUR compatible with it. I don't know why somebody would contemplate the other way around. I mean, it's the while philosophy of the AUR.
I've been using it for the past 12 years and I've rarely had any issues with it. I think you're fear-mongering quite a bit. Sure, you come across some abandoned packages from time to time, and once in a blue moon you get a dependency that doesn't install properly. When that happens, you post a comment on the AUR or flag the package, and it's solved in a matter of days most of the time. It's surprising that such a system would work so well, but it does.
Most people I know just use Arch Linux and the AUR. It seems to be the easiest system around for maximal package support and it's well maintained.
I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.
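The wire-delay argument above can be sketched with a back-of-envelope model. Everything here is an illustrative assumption, not a measurement: it takes a 1 ns latency for a 32 KiB L1 as the reference point, assumes access latency grows roughly with the square root of capacity (signals travel farther across a larger array), and picks 80 ns as a stand-in for off-chip DRAM latency. The point is only the scaling trend, which shows a hypothetical 8 GB on-die SRAM blowing well past DRAM latency:

```python
import math

def sram_latency_ns(capacity_bytes,
                    base_latency_ns=1.0,       # assumed latency of a 32 KiB L1 (illustrative)
                    base_capacity=32 * 1024):  # reference capacity: 32 KiB
    """Crude wire-delay model: latency scales with sqrt(capacity),
    since signal distance grows with the linear dimension of the array."""
    return base_latency_ns * math.sqrt(capacity_bytes / base_capacity)

DRAM_LATENCY_NS = 80.0  # assumed round-trip latency to off-chip DRAM (illustrative)

for size in (32 * 1024, 8 * 1024**2, 8 * 1024**3):  # 32 KiB, 8 MiB, 8 GiB
    t = sram_latency_ns(size)
    verdict = "slower" if t > DRAM_LATENCY_NS else "faster"
    print(f"{size / 1024:>10.0f} KiB SRAM: ~{t:6.1f} ns ({verdict} than DRAM)")
```

Under these assumptions the 8 GiB case lands around 512 ns, several times the assumed DRAM latency, while an 8 MiB cache stays comfortably faster. Real designs also pay for tag lookup and associativity logic on top of this, so the model, if anything, understates the problem.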