this post was submitted on 12 Oct 2025
LocalLLaMA
I found this project, which is just a couple of small Python scripts gluing various tools together: https://github.com/vndee/local-talking-llm

It's pretty basic, but I couldn't find anything more polished. I did a little "vibe coding" to use a faster Chatterbox fork, stream the output back so I don't have to wait for the entire LLM response before it starts "talking," start recording on voice detection instead of the Enter key, and allow interrupting the agent. But, like most vibe-coded stuff, it's buggy. I was curious whether something better already exists before I commit to actually fixing the problems and pushing a fork.
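For reference, the "start recording on voice detection" part can be as simple as an energy threshold with a short hangover timer. This is a minimal pure-Python sketch, not the project's actual code: the threshold, frame size, and sample values are made-up toy numbers, and real pipelines typically use a model such as Silero VAD instead.

```python
import math

def is_speech(frame, threshold=500):
    """Crude energy-based voice activity detection: treat a frame of
    16-bit PCM samples as speech if its RMS energy exceeds a threshold."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return rms >= threshold

def record_on_voice(frames, hang_frames=3):
    """Start capturing at the first speech frame and stop after
    `hang_frames` consecutive silent frames (a simple hangover timer,
    so short pauses don't cut the recording off)."""
    recording, silent, out = False, 0, []
    for frame in frames:
        if is_speech(frame):
            recording, silent = True, 0
        elif recording:
            silent += 1
            if silent > hang_frames:
                break
        if recording:
            out.append(frame)
    return out

# Toy input: silence, then speech, then trailing silence.
silence = [10] * 160
speech = [2000] * 160
captured = record_on_voice([silence, silence, speech, speech,
                            silence, silence, silence, silence, silence])
print(len(captured))  # → 5 (two speech frames plus the hangover tail)
```

The same loop works on live audio if `frames` is a generator reading fixed-size chunks from the microphone.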

top 6 comments
[–] hendrik@palaver.p3x.de 2 points 3 weeks ago (1 children)

I've got a bonus question... Is there a good end-to-end voice conversation solution? I'd like to try something that directly processes audio and returns audio, rather than the whole pipeline of VAD -> STT -> LLM -> TTS.

[–] lynx@sh.itjust.works 4 points 3 weeks ago (1 children)

There aren't many models that support any-to-any. Currently the best seems to be Qwen3-Omni, but the audio quality is not great and it is not supported by llama.cpp: https://github.com/ggml-org/llama.cpp/issues/16186

[–] hendrik@palaver.p3x.de 2 points 3 weeks ago

Thanks! If anyone has more (good) alternatives or something like a curated list, I'd have a look at that as well... it's always a bit complicated to stay up to date and go through the myriad of options myself.

[–] Smokeydope@lemmy.world 2 points 3 weeks ago

Kobold.CPP has pretty good TTS model integration. I used the OuteTTS model when I played around with it, but there's also API integration with commercial ones like Kokoro.

However, I'm not sure if it's able to stream to the TTS model while the LLM is generating; when I tried it, it waited until the output was finished before sending it to the voice model. You may need to do some documentation reading to see if real-time streaming is possible if you go that route.
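Even when a backend only exposes token streaming, you can fake TTS streaming on the client side by buffering tokens and flushing each completed sentence to the voice model. A minimal sketch, assuming a generic token iterator (the token values and the sentence-splitting regex are illustrative, not any particular API):

```python
import re

def sentence_chunks(token_stream):
    """Accumulate streamed LLM tokens and yield each sentence as soon as
    it is complete, so TTS can start speaking before the LLM finishes."""
    buf = ""
    for tok in token_stream:
        buf += tok
        # Flush on sentence-ending punctuation followed by whitespace.
        while True:
            m = re.search(r'[.!?]["\')\]]*\s', buf)
            if not m:
                break
            yield buf[:m.end()].strip()
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()  # flush whatever remains at end of stream

# Tokens as they might arrive from a streaming completion endpoint.
tokens = ["Hello", " there", ". How", " can I", " help you", " today?"]
print(list(sentence_chunks(tokens)))
# → ['Hello there.', 'How can I help you today?']
```

Each yielded sentence would then be handed to the TTS engine while the next one is still being generated.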

[–] TheLeadenSea@sh.itjust.works 1 point 3 weeks ago

Alpaca on Flathub seems OK.

[–] kata1yst@sh.itjust.works 1 point 3 weeks ago

OpenWebUI has TTS and STT.