theterrasque

joined 2 years ago
[–] theterrasque 1 points 1 year ago

It's less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model weights, and that's usually many gigabytes. So the time it takes to stream the model through memory is usually longer than the compute time. GPUs have gigabytes of VRAM with many times the bandwidth of the CPU's RAM, which is the main reason they're faster for LLMs.

Most TPUs don't have much RAM, especially the cheap ones.
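
A back-of-envelope way to see it (the bandwidth and model size figures below are rough illustrative assumptions, not measurements):

```python
# If bandwidth is the bottleneck, every token means streaming all the weights once,
# so tokens/sec is capped at roughly memory_bandwidth / model_size.

def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

print(max_tokens_per_sec(model_gb=8, bandwidth_gb_s=50))    # ~6 tok/s on dual-channel DDR4
print(max_tokens_per_sec(model_gb=8, bandwidth_gb_s=936))   # ~117 tok/s on a 3090's GDDR6X
```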

[–] theterrasque 1 points 1 year ago* (last edited 1 year ago)

Reasonably smart... that would preferably be a 70B model, but maybe Phi3-14B or Llama3 8B could work. They're rather impressive for their size.

For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For 70B you need roughly 40 GB.

And then there's the context. Most models are optimized for around 4k to 8k tokens. One word is roughly 1-2 tokens. The VRAM needed for the context varies a bit, but it's not trivial. For 4k I'd say roughly half a gig to a gig of VRAM.

As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the VRAM cost of the model itself, and you'll need models specifically trained for long context to handle it without going off the rails.
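
Rough numbers, a sketch assuming Llama3-8B-ish architecture values (32 layers, 8 KV heads, head dim 128); actual usage varies by backend and quantization:

```python
# Rough VRAM estimate: weights + KV cache. The architecture defaults below are
# assumptions in the ballpark of Llama3 8B, not exact figures for any backend.

def weight_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(context_tokens: int, n_layers=32, n_kv_heads=8,
                head_dim=128, bytes_per_value=2) -> float:
    # 2x for the K and V tensors, one entry per layer per KV head per token
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * context_tokens / 1e9

print(weight_gb(8, 5))        # ~5 GB for an 8B model at ~5-bit quantization
print(weight_gb(70, 4.5))     # ~39 GB for a 70B model at ~4.5-bit quantization
print(kv_cache_gb(4096))      # ~0.5 GB of KV cache at 4k context
print(kv_cache_gb(32768))     # ~4.3 GB at 32k -- the context starts to dominate
```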

So no, you're not loading all the notes directly, and you won't have a smart model.

For your hardware and use case... try phi3-mini with a RAG system as a start.
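
If it helps, the RAG part looks roughly like this. A minimal sketch assuming sentence-transformers for the embeddings; the notes, question, and model names are just placeholders:

```python
# Minimal RAG sketch: embed your notes, retrieve the most relevant ones for a
# question, and stuff only those into the prompt for a small local model.
from sentence_transformers import SentenceTransformer
import numpy as np

notes = [
    "Monday meeting: we agreed to ship the beta on the 14th.",
    "Grocery list: eggs, coffee, rice.",
    "Project idea: self-hosted note search with a local LLM.",
]  # placeholder notes

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs fine on CPU
note_vecs = embedder.encode(notes, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = note_vecs @ q  # cosine similarity, since the vectors are normalized
    return [notes[i] for i in np.argsort(scores)[::-1][:k]]

question = "When are we shipping the beta?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only these notes:\n{context}\n\nQuestion: {question}\nAnswer:"
# Send `prompt` to whatever you run locally (e.g. phi3-mini in koboldcpp).
print(prompt)
```

The point is the small model only ever sees the handful of notes that matter, not the whole pile, so you stay well inside the 4k-8k context window.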

[–] theterrasque 0 points 1 year ago (1 children)

I'm not saying it's broken, but it has some design choices and features that make even WhatsApp a better choice for privacy-minded people. Like rolling their own crypto and not having E2EE on by default.

[–] theterrasque 1 points 1 year ago

Llama3 70B is pretty good, and you can run that on 2x 3090s. Not cheap, but doable.

You could also use something like RunPod to test it out cheaply.

[–] theterrasque 4 points 1 year ago (3 children)

Koboldcpp is way easier. Download the exe, double-click it, open the GGUF file with the AI model, click start.

Then put on your robe and wizard hat

[–] theterrasque 8 points 1 year ago

So you're saying it's already feature complete with most json libraries out there?

[–] theterrasque -4 points 1 year ago (2 children)

> You realise there is no algorithm behind Lemmy, right?

Of course there is. Even "sort by newest" is an algorithm, and the default view is more complicated than that.

> You aren't being shoved controversial polarizing content subliminally here.

Neither are you on TikTok, unless you actively go looking for it

[–] theterrasque 4 points 1 year ago (3 children)

I've seen Skype do that. It was a weird folder name, but the gallery found it and displayed the images.

Which is how I noticed it in the first place

[–] theterrasque 3 points 2 years ago

I wonder what BPM Moby's "Thousand" starts at... Maybe it can reach both limits

[–] theterrasque 7 points 2 years ago (1 children)

Who are you?

What do you want?

Also, I think good and bad are a bit fluid there. It's just people with different agendas. Well, except Emperor Cartagia. And perhaps Bester.
