Triangle Vomitorium.
Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
The computer that you stick into your other computer.
Strategic Computational Retro Offboard Turbo Encabulator
Parallel Processing Unit: PPU
heh, you said "PP".
I'm old enough to remember when these were called 'Accelerator Cards'.
How are your knees and back treating you, fellow old person?
F
GPUs are specialized to manipulate vectors very quickly, using a principle called Single Instruction, Multiple Data (SIMD): where a CPU would have to operate on each element of a vector individually, a GPU can operate on all the elements in one go.
So maybe you could call it a SIMD card or Vector Accelerator or something like that.
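A toy sketch of the SIMD idea in Python (purely illustrative; real SIMD happens in hardware registers, and the 4-lane width here is an arbitrary assumption):

```python
LANES = 4  # hypothetical SIMD width: four elements per "instruction"

def scalar_add(a, b):
    # CPU-style: one element at a time, one step per element.
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b):
    # SIMD-style: each loop iteration models ONE instruction that
    # adds a whole 4-element chunk in a single go.
    out = []
    for i in range(0, len(a), LANES):
        chunk_a, chunk_b = a[i:i + LANES], b[i:i + LANES]
        out.extend(x + y for x, y in zip(chunk_a, chunk_b))
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
print(simd_add(a, b))  # [11, 22, 33, 44, 55, 66, 77, 88]
```

An 8-element add takes eight scalar steps but only two simulated SIMD steps here; a GPU pushes that width into the thousands.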
Ok you can call it a geometry coprocessor
Expensive card
Computational shotgun.
AIPU. Or “AI stinks” for short.
They are GPUs.
All of them, even the H100, B100, and MI300X, have texture units, pixel shaders, everything. They are graphics cards at a low level. Only the MI300X is missing ROPs; the Nvidia cards have them (and can run real-time games on Linux), and they can all be used in Blender and such.
The compute programming languages they use are, fundamentally, hacked-up abstractions that map onto the same GPU hardware found in consumer stuff.
That's the whole point: they're architected as GPUs so that they're backwards compatible, since everything is built on the era when consumer gaming GPUs were first hacked to be used for compute.
Are there more dedicated accelerators? Yes. They're called ASICs, or application-specific integrated circuits. That's technically a broad term, but its connotation is mostly very purpose-built compute.
The 5090 is missing ROPs too
LMAO
Parallel compute accelerator.
Nobody is gonna say that in full, just like "graphics processing unit" becomes "GPU", so maybe "PCA".
Aka ones 'pecca'
Pissy, eh.
Thinky boi, or computy boi.
Thinky boi is the CPU. GPUs are also thinky, but they work in parallel, so plural: thinky bois.
Mathematical Image Creation Engine.
MICE.
Floating point processor.
Back in the day, you could slap a math coprocessor on your system so it could do floating point maths real gud.
Now, you slap in some card that does floating point maths even guder, but also in parallel in yuge vectors.
So my proposed name is "It's like an old Cray supercomputer but real tiny"
Massively Parallelized Floating-Point Computation Unit.
MPFPCU!
Hehehehehehehe!
Triangle makers
Floating point coprocessor
matrix multiplication unit
We already have MMU for Memory Management Unit. Maybe Matrix Multiplication Accelerator instead?
So MMA? Sounds sporty.
Matrix Accelerator coProcessor card, MAP card
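Whatever the acronym, the operation all these names point at is the same: C = A × B. A minimal pure-Python sketch of the kernel that matrix-multiply hardware exists to accelerate (illustrative only; real units do this on large tiles of elements at once):

```python
def matmul(a, b):
    # Naive triple loop: C[i][j] = sum over k of A[i][k] * B[k][j].
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            for j in range(cols):
                c[i][j] += a[i][k] * b[k][j]
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```

The hardware win comes from the fact that every multiply-add in the inner loops is independent and can run in parallel.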
Probably something like Tensor Processing Unit. That's a specific Google product, but something along those lines.
A Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
Compared to a graphics processing unit, TPUs are designed for a high volume of low-precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, and without hardware for rasterisation/texture mapping.
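To make the "as little as 8-bit precision" point concrete, here is a hypothetical sketch of symmetric int8 quantization, the kind of precision-for-throughput trade such accelerators are built around; the function names and the 127 scale convention are assumptions for illustration, not anything TPU-specific:

```python
def quantize_int8(xs):
    # Map floats into the signed 8-bit range [-127, 127].
    # Assumes xs contains at least one non-zero value.
    scale = max(abs(x) for x in xs) / 127.0
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    # Recover approximate floats; the rounding error is the precision cost.
    return [q * scale for q in qs]

weights = [0.5, -1.0, 0.25, 1.0]
q, s = quantize_int8(weights)
print(q)  # [64, -127, 32, 127]
print(dequantize(q, s))
```

Each value now fits in one byte instead of four, so the same memory bandwidth moves four times as many operands per joule, at the cost of a small rounding error on the way back.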
Carburetor.
It mixes fuel and air in the right ratio, prepping the mixture before it goes into the engine.
Similarly, RAM holds data while it gets adjusted.
It's not a great analogy, but it's pretty much all there is
I think you need to add the exhaust, or at least the catalytic converter, because the RAM stores the results of the computations for further use.
It's mixing the data that goes in to get the result...
Well there's yer problem