sverit

joined 1 year ago
[–] sverit@lemmy.ml 4 points 11 months ago

Yadda Yadda Yadda, q.e.d.

[–] sverit@lemmy.ml 9 points 11 months ago (2 children)

ITT people who seemingly haven't used an Android phone in ~10 years

[–] sverit@lemmy.ml 1 points 11 months ago

I lold at that, too :)

[–] sverit@lemmy.ml 33 points 11 months ago (2 children)

10-in-one from hair to car

[–] sverit@lemmy.ml 8 points 11 months ago (1 children)

Which model with how many parameters do you use in ollama? With 8GB you should only be able to run the smallest models, which is faaaar from ideal:

> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
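Those guidelines can be sketched as a small lookup, assuming the quoted minimums (the helper name is hypothetical, not part of ollama):

```python
# Rule of thumb quoted above: minimum system RAM (GB) needed
# per model size (parameter count in billions).
MIN_RAM_GB = {7: 8, 13: 16, 33: 32}

def largest_runnable_model(ram_gb):
    """Return the largest model size (in billions of parameters)
    that fits within the RAM guideline, or None if even 7B won't fit."""
    fitting = [size for size, need in MIN_RAM_GB.items() if ram_gb >= need]
    return max(fitting) if fitting else None
```

So a machine with 8GB tops out at the 7B models, and you'd need 16GB just to step up to 13B.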

[–] sverit@lemmy.ml 5 points 11 months ago
[–] sverit@lemmy.ml 7 points 11 months ago

Amaretto soda

[–] sverit@lemmy.ml 23 points 11 months ago (2 children)

SET BLASTER=A220 I5 D1 H5 P330 T6
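For anyone too young to have typed this into AUTOEXEC.BAT: the DOS `BLASTER` variable packed the Sound Blaster's hardware settings into one string. A rough decoding sketch (the parser is illustrative, not any real tool):

```python
# Field meanings: A = I/O port (hex), I = IRQ, D = 8-bit DMA channel,
# H = 16-bit DMA channel, P = MPU-401 MIDI port (hex), T = card type.
def parse_blaster(value):
    hex_fields = {"A", "P"}  # port addresses are hexadecimal
    return {
        field[0]: int(field[1:], 16 if field[0] in hex_fields else 10)
        for field in value.split()
    }

settings = parse_blaster("A220 I5 D1 H5 P330 T6")
```

`A220 I5 D1` was the near-universal default; get any of them wrong and your game played in glorious PC-speaker silence.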

[–] sverit@lemmy.ml 1 points 11 months ago

Well, 2000€ for a "Pro" model of the 14" MacBook with only 8GB RAM sounds a bit strange, tbf. And +230€ for +8GB is straight-up greedy.

They said "Actually, 8GB on an M3 MacBook Pro is probably analogous to 16GB on other systems", and well, that's definitely not the case for their upcoming AI use cases. Many people seem to overlook that their RAM is shared RAM (or, as they call it, "unified memory"), which means the GPU is limited by those 8GB of (V)RAM, because it can only use whatever the system leaves over.
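The point about unified memory boils down to simple arithmetic, assuming hypothetical usage numbers for illustration:

```python
# With unified memory, CPU and GPU draw from one shared pool:
# whatever the OS and apps occupy is unavailable as "VRAM".
def gpu_available_gb(total_gb, system_usage_gb):
    """GB left for the GPU after system usage (never negative)."""
    return max(total_gb - system_usage_gb, 0.0)

# Hypothetical: macOS plus a browser already holding ~5GB
# leaves only ~3GB of "VRAM" on an 8GB machine.
leftover = gpu_available_gb(8.0, 5.0)
```

A dedicated 16GB-VRAM GPU wouldn't shrink like that under system load, which is why the "8GB ≈ 16GB" claim falls apart for local AI workloads.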
