This post was submitted on 06 Sep 2025
47 points (96.1% liked)

LocalLLaMA

3727 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

I'm curious what the consensus is here on which models people use for general-purpose stuff (coding assistance, general experimentation, etc.).

What do you consider the "best" model under ~30B parameters?

all 14 comments
[–] j4k3@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

Qwen 2.5 VL and Coder. I have a VL model doing image captions for LoRA training running right now. A 14B is okay for basic code. A quantized Q6_K_L GGUF of the 32B Qwen 2.5 Coder runs on 16GB, but at a third of the speed of the 14B in bitsandbytes 4-bit. The latter is reasonably fast enough for a couple of layers of agentic stuff in Emacs with gptel, and hits thinking or function calling out of a llama.cpp server better than 50% of the time.
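For anyone curious what "hitting function calling out of a llama.cpp server" looks like in practice, here's a minimal sketch against the OpenAI-compatible endpoint, assuming a llama-server instance on localhost:8080 with a tool-capable chat template (the `get_weather` tool is made up for illustration, not a real API):

```python
# Minimal sketch: query a local llama.cpp server's OpenAI-compatible
# endpoint and check whether the model actually emitted a tool call.
# Assumes llama-server is running on localhost:8080 with a chat template
# that supports tools; "get_weather" is a made-up example tool.
import requests

payload = {
    "model": "qwen2.5-coder-32b",  # informational; llama-server serves one model
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin right now?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

msg = requests.post(
    "http://localhost:8080/v1/chat/completions", json=payload, timeout=120
).json()["choices"][0]["message"]

# Quantized mid-size models only produce a well-formed tool call part of
# the time, so the agentic layer needs a plain-text fallback branch.
if msg.get("tool_calls"):
    print("tool call:", msg["tool_calls"][0]["function"])
else:
    print("plain answer:", msg.get("content"))
```

The fallback branch is the important part: at these quant levels the model only emits a proper `tool_calls` object a bit over half the time, so the caller has to cope with plain text.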

I haven't tried the new 20B from OpenAI yet.

[–] hendrik@palaver.p3x.de 4 points 1 month ago* (last edited 1 month ago)

I really liked Mistral-Nemo-Instruct for its all-round capabilities, but it's old and hardly the best any more. Lots of newer models feel like sycophants, tuned more for question answering and assistant stuff, and their ability to write long-form prose or role-play as a podcast host hasn't really improved substantially. These days I switch models: I'll use something more creative if I want that, or switch to a model dedicated to coding if I want autocomplete. Though to be honest, coding isn't on my list of requirements any more. I've tried AI coding and it's not really helping with what I do; I regularly waste an extra 30%-100% of my time if I do it with AI, and that's with the huge commercial services like AI Studio or ChatGPT.

[–] Smokeydope@lemmy.world 3 points 1 month ago

I'm a big fan of NousResearch; their DeepHermes release was awesome, and now I'm trying out Hermes 4. On my 8GB 1070 Ti GPU I was able to fully offload a medium quant of Hermes 4 14B with an okay amount of context.

I'm a big fan of the hybrid reasoning models; I like being able to turn thinking on or off depending on the scenario.
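The toggle mechanism differs per family, so here's a rough sketch of what I mean, assuming a local OpenAI-compatible server on localhost:8080. DeepHermes/Hermes-style models switch reasoning on via a special system prompt (paraphrased below, check the model card for the verbatim text), while Qwen3-style chat templates use an `enable_thinking` flag instead:

```python
# Rough sketch of toggling "thinking" on a hybrid reasoning model served
# behind a local OpenAI-compatible endpoint (assumed at localhost:8080).
import requests

URL = "http://localhost:8080/v1/chat/completions"

def ask(question: str, think: bool) -> str:
    messages = []
    if think:
        # Hypothetical paraphrase of the reasoning-mode system prompt;
        # the real wording comes from the model card.
        messages.append({
            "role": "system",
            "content": "You are a deep thinking AI. Reason step by step "
                       "inside <think> tags before giving your final answer.",
        })
    messages.append({"role": "user", "content": question})
    resp = requests.post(URL, json={"model": "hermes-4-14b", "messages": messages})
    return resp.json()["choices"][0]["message"]["content"]

print(ask("How many primes are there below 30?", think=False))  # direct answer
print(ask("How many primes are there below 30?", think=True))   # <think> trace first
```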

I had a vision-model document scanner + TTS pipeline going with a finetune of Qwen 2.5 VL and OuteTTS.

If you care more about character emulation for writing and creativity, then Mistral 2407 and Mistral NeMo are other models to check out.

[–] staph@sopuli.xyz 2 points 1 month ago

The Qwen3-30B-A3B-2507 family is an absolute beast. The reasoning models are seriously chatty in their chain of thought, but the results speak for themselves. I'm running a Q4 on a 5090, and with a Q8 KV quant I can run a 60k-token context entirely in VRAM, which gets me up to 200 tokens per second.
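For a rough sense of why the Q8 KV quant matters, here's a back-of-the-envelope sketch, assuming Qwen3-30B-A3B's published config (48 layers, 4 KV heads via GQA, head dim 128) and treating Q8 as ~1 byte per element:

```python
# Back-of-the-envelope KV cache sizing for Qwen3-30B-A3B. Assumed config:
# 48 layers, 4 KV heads (GQA), head dim 128. Q8 is treated as ~1
# byte/element; real q8_0 adds a small per-block scale overhead.
LAYERS, KV_HEADS, HEAD_DIM, CTX = 48, 4, 128, 60_000

def kv_cache_gib(bytes_per_elem: float) -> float:
    # 2x for the separate K and V tensors in every layer
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem * CTX / 2**30

print(f"fp16 KV cache: {kv_cache_gib(2):.1f} GiB")  # ~5.5 GiB
print(f"q8   KV cache: {kv_cache_gib(1):.1f} GiB")  # ~2.7 GiB
```

Roughly 2.7 GiB for the cache at Q8 instead of ~5.5 GiB at fp16, which is what leaves room next to the Q4 weights on a 32GB card.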

[–] MagicShel@lemmy.zip 2 points 1 month ago

I use Qwen Coder 30B, I'm testing Venice 24B, and I'm also going to play with Qwen embedding 8B and Qwen resorter(?) 8B. All at Q4.

They all run pretty well on the new MacBook I got for work. My Linux home desktop has far more modest capabilities, and I generally run 7B models, though gpt-oss-20B-Q4 runs decently. It's okay for a local model.

None of them really blow me away, though Cline running in VSCode with Qwen 30B is okay for certain tasks. Asking it to strip all of the irrelevant HTML out of a table and format it as Markdown or AsciiDoc had it thinking for about 10 minutes before it asked which one I wanted. My fault, I should've picked one: I wanted Markdown but thought AsciiDoc would reproduce the table with better fidelity (it had embedded code blocks), so I left it open to interpretation.

By comparison, ChatGPT ingested it and popped an answer back out in seconds, and the answer was wrong. So idk, nothing ventured, nothing gained. Emphasis on the latter.

[–] BaroqueInMind@piefed.social 1 point 1 month ago* (last edited 1 month ago) (3 children)

Unlike most of you reading this, I don't allow a corporate executive/billionaire or a distant nation-state to tell me what I'm permitted to say or what my model is allowed to output, so I use an uncensored general model from here (first uncheck the "proprietary model" box).

[–] panda_abyss@lemmy.ca 3 points 1 month ago

That is awesome, thank you for that link!

[–] RheumatoidArthritis@mander.xyz 1 point 1 month ago (1 child)

This leaderboard is a gem! This should be a separate post, thank you!

[–] BaroqueInMind@piefed.social 1 point 1 month ago (1 child)

The votes here seem to disagree with you

[–] RheumatoidArthritis@mander.xyz 2 points 1 month ago

Oh no, I must be wrong then :(

[–] swelter_spark@reddthat.com 0 points 1 month ago

Not sure I want to name any names... 😂