this post was submitted on 24 Jun 2025
Ollama - Local LLMs for everyone!
A place to discuss Ollama: from basic use, extensions, add-ons, and integrations, to using it in custom code to create agents.
How have you been making those models? I have a 4070, and doing it locally has been a dependency hellscape; I've been tempted to rent cloud GPU time just to save the hassle.
I'm downloading pre-trained models. I had a bunch of dependency issues getting text-generation-webui to work, and I probably installed some useless crap in the process, but I did get it running. LM Studio is much simpler, but offers less customization (or I just don't know how to use everything in LM Studio). But yeah, I'm just downloading pre-trained models and running them in these UIs; right now I have 'deepseek-r1-distill-qwen-7b' loaded in LM Studio. I also have the NVIDIA app installed and make sure my GPU drivers are always up to date.
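
If you want to hit the loaded model from your own code instead of the chat UI, LM Studio can run a local server that speaks the OpenAI chat-completions API. Here's a minimal sketch, assuming the server is enabled on its default port (1234) and that the model name below matches whatever identifier LM Studio shows for your model; it only needs the Python standard library, so no extra dependencies:

```python
# Minimal sketch: query a model loaded in LM Studio through its
# OpenAI-compatible local server (default: http://localhost:1234/v1).
# Assumes the server is enabled in LM Studio and a model is loaded;
# the model name is assumed here, copy yours from the app.
import json
import urllib.request

payload = {
    "model": "deepseek-r1-distill-qwen-7b",  # assumed identifier
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# OpenAI-style response shape: the first choice holds the assistant message
print(body["choices"][0]["message"]["content"])
```

The same pattern should work against Ollama's server on its default port 11434, which also exposes an OpenAI-compatible /v1 endpoint, so you're not locked into either app.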