this post was submitted on 13 Jul 2023
4 points (83.3% liked)

Free Open-Source Artificial Intelligence


I successfully installed oobabooga on a PC, and using only the CPU I can run Vicuna and WizardLM models. I can't run any of the llama, opt, or gpt-j loaders, which look to be required when trying to train a model through LoRA. Do you have any suggestions?
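For context, the kind of setup I was trying looks roughly like this: a minimal sketch with Hugging Face's transformers and peft (the model path is just a placeholder). The target_modules names are architecture-specific, which I assume is why the trainer cares about llama/opt/gpt-j:

```python
# Minimal LoRA setup sketch (hypothetical model path; needs transformers + peft).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "models/my-llama-7b"  # placeholder: any LLaMA-family checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA injects small trainable adapters into specific attention projections.
# The module names ("q_proj", "v_proj") differ between llama, opt, gpt-j, etc.,
# so the trainer has to know which architecture it's dealing with.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA-style names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```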

top 6 comments
[–] Kerfuffle@sh.itjust.works 6 points 2 years ago (2 children)

It's not really practical to train models on CPU, so basically your options are to get a GPU or to rent time on someone else's server that has the necessary compute. GPU VRAM is the main constraint on the size of model you can train, so renting a server is probably the more flexible option. Training generally needs a lot more VRAM than simply running a model.
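As a back-of-envelope illustration (a rough sketch using the common ~16 bytes/parameter rule of thumb for full fine-tuning with Adam in mixed precision; real usage also depends on batch size, sequence length, and activations):

```python
# Rough VRAM estimate for a 7B-parameter model (rule-of-thumb figures only).
params = 7e9

inference_gb = params * 2 / 1e9   # ~2 bytes/param just to load fp16 weights
# Full fine-tuning with Adam is often estimated at ~16 bytes/param:
# fp16 weights + gradients, plus fp32 master weights and two optimizer moments.
training_gb = params * 16 / 1e9

print(f"inference: ~{inference_gb:.0f} GB")  # ~14 GB
print(f"training:  ~{training_gb:.0f} GB")   # ~112 GB, hence the big rented GPUs
```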

Just an example of a place you can rent GPU time from: https://vast.ai/pricing

Even for a very powerful GPU like an A100 with 80GB VRAM, the prices aren't that bad. It's like $2-3 per hour.

[–] abhibeckert@lemmy.world 2 points 2 years ago* (last edited 2 years ago)

While it won't help OP, there are GPUs that have plenty of VRAM but lack CUDA.

Apple sells some with 192GB of VRAM, for example... and as a Mac user I'm pretty frustrated by all the dependencies on CUDA. The few tools that don't require it run great on my Mac (Stable Diffusion, for example, runs in 30 seconds on my laptop with typical settings).
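For PyTorch-based tools, supporting non-CUDA hardware is mostly a matter of not hard-coding the device. A minimal sketch (`mps` is PyTorch's Apple-GPU backend; this falls back to CPU everywhere else):

```python
import torch

# Pick the best available backend: CUDA on Nvidia, MPS on Apple Silicon,
# plain CPU otherwise. Tools that hard-code "cuda" are what break on Macs.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(512, 512).to(device)  # toy model, just to show the move
x = torch.randn(8, 512, device=device)
print(device, model(x).shape)
```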

[–] Suoko@feddit.it 1 points 2 years ago (1 children)

Thanks but I prefer CPUs

[–] Kerfuffle@sh.itjust.works 2 points 2 years ago (1 children)

Do whatever makes you happy, but don't be too surprised if you find there aren't many options available to support your preference. GPUs are just better suited for the task.

It's kind of like saying "Thanks, but I prefer to use a colander to hold my drinking water". If you really, really want to use a colander to hold your drinking water you'll probably have to learn to construct colanders yourself since you won't really find a colander-style water bottle for sale. Of course, even if you do go to that extent, in the end you'll find all your effort produces inferior results compared to a super cheap water bottle made of some solid material.

[–] Suoko@feddit.it 1 points 2 years ago (1 children)

I know how it works, but my GPU is weak (4GB) while my CPU and RAM are great. Theoretically I should just have somewhat (maybe way) longer waiting times, but in the end I should get the same results.
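In principle it's just a flag; for example, a sketch with the Hugging Face Trainer (assuming that's the training stack; `no_cuda` keeps everything on the CPU):

```python
# Sketch: forcing CPU-only training with the Hugging Face Trainer.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    no_cuda=True,                    # ignore any GPU; train on CPU only
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # trade time for the RAM I do have
)
```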

[–] Kerfuffle@sh.itjust.works 2 points 2 years ago* (last edited 2 years ago)

Well, I hope you succeed in finding a way to do it, but again, it's going to be hard to find tools and information to accomplish this. Since using the CPU for training is so inefficient, most existing software is oriented toward GPU-based training.

You're not wrong that if you could find the software, and if you were willing to wait long enough, you'd get the same results, but getting to that point isn't so easy. Also, it may not be an effective use of time/money: after all, if you could train your model in... I don't know, several weeks on CPU (and it takes a lot of effort to develop/find the resources), or you could spend $4 to rent an A100 for two hours, why take the former approach? You might spend as much just in electricity running all threads on the CPU at 100% for an extended time.
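Back-of-envelope on the electricity point (every input here is an assumption: package power, runtime, and electricity rates all vary):

```python
# Rough electricity cost of weeks of all-core CPU training (assumed figures).
cpu_watts = 200        # assumed draw at 100% on all threads
days = 21              # "several weeks"
usd_per_kwh = 0.30     # assumed residential rate

kwh = cpu_watts / 1000 * 24 * days
print(f"~{kwh:.0f} kWh -> ~${kwh * usd_per_kwh:.0f}")  # ~101 kWh -> ~$30

print(f"A100 rental for two hours: ~${2 * 2.0:.0f}")   # at ~$2/hour
```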
