rkd

joined 2 months ago
[–] rkd@sh.itjust.works 3 points 3 weeks ago

I can read minds and they're thinking "we better get some money around here, otherwise we're still blaming the immigrants".

[–] rkd@sh.itjust.works 11 points 1 month ago

If they're not great, it's your fault /thread 😅

[–] rkd@sh.itjust.works 1 point 1 month ago

I believe that right now it's also reasonable to ditch NVIDIA at certain budgets. Let's see what can be done with large unified memory, and maybe things will be different by the end of the year.

[–] rkd@sh.itjust.works 2 points 1 month ago

can't have both

[–] rkd@sh.itjust.works 5 points 1 month ago

somebody do something any day now

[–] rkd@sh.itjust.works 6 points 1 month ago

no more fokin ambushes

[–] rkd@sh.itjust.works 1 point 1 month ago

Trump has entered the chat

[–] rkd@sh.itjust.works 32 points 1 month ago (1 child)

His whole existence is a financial demand. I believe Bloomberg calls this "a transactional period". To put it plainly, y'all elected a corrupt president.

[–] rkd@sh.itjust.works 1 point 1 month ago* (last edited 1 month ago)

For some weird reason, in my country it's easier to order a Beelink or a Framework than an HP. They will sell everything else except what you want to buy.

[–] rkd@sh.itjust.works 1 point 2 months ago (1 child)

Remind me: what are the downsides of possibly getting a Framework Desktop for Christmas?

[–] rkd@sh.itjust.works 1 point 2 months ago

That's a good point, but it seems there are several ways to make models fit on hardware with less memory. There aren't many options, though, to compensate for not having the ML data types that allow NVIDIA to be like 8x faster sometimes.
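For context on the "fit in less memory" part: that's mostly quantization, and the napkin math is just parameter count times bytes per weight, plus some runtime overhead. A rough sketch in Python, with illustrative numbers (the 1.2x overhead factor for KV cache and buffers is an assumption, not a measured figure):

```python
# Back-of-envelope memory estimate for quantized model weights.
# All numbers are illustrative assumptions, not benchmarks.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate GB needed to load the weights, with a rough
    (assumed) multiplier for KV cache and runtime buffers."""
    return params_billions * (bits_per_weight / 8) * overhead

for bits in (16, 8, 4):   # fp16, int8, 4-bit quantization
    print(f"70B model @ {bits}-bit: ~{model_memory_gb(70, bits):.0f} GB")
# -> ~168 GB, ~84 GB, ~42 GB
```

Quantization shrinks the footprint, but the speed side is baked into the hardware: the FP8/FP4 tensor cores on recent NVIDIA cards can't be recreated in software on a GPU that lacks them, which is the asymmetry above.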

 

Total noob to this space, correct me if I'm wrong. I'm looking at getting new hardware for inference and I'm open to AMD, NVIDIA or even Apple Silicon.

It feels like consumer hardware gives you comparatively more value generating images than running chatbots. Like, the models you can run at home are just dumb to talk to. But they can generate images of comparable quality to online services if you're willing to wait a bit longer.

Like, GPT-OSS 120B, assuming you can spare 80GB of memory, is still not GPT-5. But Flux Schnell is still Flux Schnell, right? So if diffusion is the thing, NVIDIA wins right now.
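For concreteness on the diffusion side, this is roughly what running Flux Schnell locally looks like with Hugging Face diffusers (the model ID and the 4-step / no-guidance settings come from its model card; the prompt and seed are just placeholders):

```python
# Minimal local FLUX.1-schnell run via Hugging Face diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
# Offloads idle layers to CPU so it fits on smaller cards;
# slower, but it runs. Drop this if you have the VRAM.
pipe.enable_model_cpu_offload()

image = pipe(
    "a photo of a cat reading a newspaper",  # placeholder prompt
    guidance_scale=0.0,       # Schnell is guidance-distilled
    num_inference_steps=4,    # Schnell targets ~4 steps
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-schnell.png")
```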

Other options might even be better for other uses, but chatbots are comparatively hard to justify. Maybe for more specific cases like zero-latency code completion or building a voice assistant, I guess.

Am I too off the mark?
