this post was submitted on 18 Nov 2025
161 points (95.0% liked)

Linux


A community for everything relating to the GNU/Linux operating system (except the memes!)

 

As Snowden told us, the video and audio recording capabilities of your devices are NSA spying vectors. OSS/Linux is a safeguard against such capabilities. The massive datacenter investments in the US will be used to classify us all into a patriotic (for Israel)/oligarchist social credit score, and every mega tech company can increase profits through NSA cooperation and is legally obligated to comply with all government orders.

Speech to text and speech automation are useful tech, though always-listening devices run by state-sponsored terrorists are a path, requiring no targeted NSA effort, to sweeping future social credit classifications of your past life.

Some small open-source models that can be used for speech to text: https://modal.com/blog/open-source-stt
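
For anyone who wants to try this fully offline, here's a minimal sketch using faster-whisper, one of the open-source options of that kind (the model size and audio file name below are just placeholders):

```python
# Minimal local speech-to-text sketch using faster-whisper.
# Model size and audio file name are placeholders -- pick what fits your hardware.
from faster_whisper import WhisperModel

# "small" runs fine on CPU; use device="cuda" and a bigger model if you have a GPU.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("recording.wav")
print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```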

top 36 comments
[–] grue@lemmy.world 79 points 1 day ago (1 children)

Home Assistant has been heavily working on that sort of functionality lately.

[–] 9point6@lemmy.world 23 points 1 day ago (1 children)

Home Assistant continues to be fantastic. It feels like only fairly recently that all we had was OpenHAB, and although it was fine, it was a bit of an uphill struggle to do anything.

[–] fonix232@fedia.io 7 points 1 day ago (1 children)

There were only about two years between OpenHAB and HA being released: the former debuted in 2011, and HA saw its first release in 2013.

[–] 9point6@lemmy.world 3 points 1 day ago (1 children)

Oh really? I could have sworn HA was a fair bit later than that

I think I used OpenHAB between about 2013 and 2018, then switched to HA after discovering it and reading about it for a couple of weeks.

Must have just had my head in the sand then!

[–] fonix232@fedia.io 2 points 1 day ago

To be fair, in the early days HA wasn't too usable. Even around 2018-19, the integrations were limited and the core logic was quite wonky. I'd say around 2020 it became mature enough for daily use for non-tinkerers.

[–] data1701d@startrek.website 9 points 1 day ago

I need to play with Home Assistant more. My last bit of hesitation was that I was struggling to find a replacement for the announcement and intercom functionality, which is half of what my family uses Alexa for.

It looks like it got announcements with the "broadcast" intent in February; for the intercom, there may be a plugin. This seems like it might have me covered on the intercom front: https://github.com/JoeHogan/ha-intercom
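
For the announcement side, here's a rough sketch of what triggering one from a script could look like via HA's REST service-call endpoint (the assist_satellite.announce service name, entity ID, URL, and token are assumptions/placeholders; check the docs for your HA version):

```python
# Hedged sketch: fire an announcement on a voice satellite through the
# Home Assistant REST API. Service name, entity ID, URL, and token are placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"  # created under your HA user profile

resp = requests.post(
    f"{HA_URL}/api/services/assist_satellite/announce",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "entity_id": "assist_satellite.kitchen",  # placeholder satellite entity
        "message": "Dinner is ready!",
    },
    timeout=10,
)
resp.raise_for_status()
```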

Perhaps I'll mess around with it again once the semester's over; a lot of my family would really like to jump the Amazon ship and would certainly be willing to try it if I give them the option.

[–] brucethemoose@lemmy.world 29 points 1 day ago* (last edited 1 day ago) (1 children)

I mean, there are many. TTS and self-hosted automation are huge in the local LLM scene.

We even have open source "omni" models now that can ingest and output speech tokens directly (which means they pick up more semantic understanding from tone and such, they 'choose' the tone to reply with, and the output is streamable word-by-word). They support all sorts of tool calling.

...But they aren't easy to run. It's still in the realm of homelabs with at least an RTX 3060 + hacky python projects.


If you're mad, you can self-host Longcat Omni

https://huggingface.co/meituan-longcat/LongCat-Flash-Omni

And blow Alexa out of the water with an MIT-licensed model from, I kid you not, a Chinese food delivery company.


EDIT

For the curious, see:

Audio-text-to-text (and sometimes TTS): https://huggingface.co/models?pipeline_tag=audio-text-to-text&num_parameters=min%3A6B&sort=modified

TTS: https://huggingface.co/models?pipeline_tag=text-to-speech&num_parameters=min%3A6B&sort=modified

"Anything-to-anything," generally image/video/audio/text -> text/speech: https://huggingface.co/models?pipeline_tag=any-to-any&num_parameters=min%3A6B&sort=modified

Bigger than 6B to exclude toy/test models.

[–] fonix232@fedia.io 3 points 1 day ago (1 children)

I do wish there were a smaller LongCat model available. My current AI node has a hard 16GB VRAM limit (yay AMD UMA limitations), so 27B can't really fit. An 8B dynamically loaded model would fit and run much better.

[–] brucethemoose@lemmy.world 3 points 23 hours ago* (last edited 23 hours ago) (1 children)

You can do hybrid inference of Qwen 30B omni for sure. Or fully offload inference of Vibevoice Large (9B). Or really a huge array of models.
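
To make "hybrid inference" concrete, here's a rough llama-cpp-python sketch of partial GPU offload (the GGUF file name and layer count are placeholders, and whether a particular omni model's audio path is supported is a separate question):

```python
# Sketch of hybrid (partial GPU offload) inference with llama-cpp-python.
# Offload as many layers as fit in VRAM; the rest run from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="some-30b-model-q4_k_m.gguf",  # placeholder GGUF file
    n_gpu_layers=20,  # tune to your VRAM; remaining layers stay on the CPU
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise today's calendar."}]
)
print(out["choices"][0]["message"]["content"])
```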

...The limiting factor is free time, TBH. Just sifting through the sea of models, seeing if they work at all, testing whether quantization works and such is a huge timesink, especially if you are trying to load stuff with ROCm.

[–] fonix232@fedia.io 2 points 23 hours ago (1 children)

And I am on ROCm - specifically on an 8945HS, which is advertised as a Ryzen AI APU yet is completely unsupported as a target, with major issues around queuing and more complex models (the new 7.0 betas have been promising, but TheRock's flip-flopping with their Docker images has been driving me crazy...).

[–] brucethemoose@lemmy.world 2 points 22 hours ago* (last edited 22 hours ago) (1 children)

Ah. On an 8000-series APU, to be blunt, you're likely better off with Vulkan + whatever omni models GGML supports these days. Last I checked, text generation is faster and prompt processing is close to ROCm.

...And yeah, that was totally misleading advertising on AMD's part. They've completely diluted the term, kinda like TV makers did with 'HDR'.

[–] fonix232@fedia.io 1 points 22 hours ago (1 children)

The thing is, if AMD actually added proper support for it, given it has a somewhat powerful NPU as well... For the total TDP of the package it's still one of the best perf-per-watt APUs; the damn software support just isn't there.

Feckin AMD.

[–] brucethemoose@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

The iGPU is more powerful than the NPU on these things anyway. The NPU is more for 'background' tasks, like Teams audio processing or whatever it's used for on Windows.

Yeah, in hindsight, AMD should have tasked (and still should task) a few engineers on popular projects (and pushed NPU support harder), but GGML support is good these days. It's gonna be pretty close to RAM speed-bound for text generation.
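
(Back-of-envelope, with assumed numbers: every generated token of a dense model has to stream the full weight set from RAM, so dual-channel DDR5 at roughly 80 GB/s and a ~4.5 GB quantized model cap you at about 80 / 4.5 ≈ 17 tokens per second, no matter how fast the compute is.)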

[–] fonix232@fedia.io 1 points 21 hours ago (1 children)

Aye, I was actually hoping to use the NPU for TTS/STT while keeping the LLM systems GPU bound.

[–] brucethemoose@lemmy.world 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

It still uses memory bandwidth, unfortunately. There's no way around that, though NPU TTS would still be neat.

...Also, generally, STT responses can't be streamed, so you might as well use the iGPU anyway. TTS can be chunked I guess, but do the major implementations do that?

[–] fonix232@fedia.io 2 points 21 hours ago (2 children)

Piper does chunking for TTS, and could utilise the NPU with the right drivers.
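
To illustrate what chunking buys you, here's a hedged sketch of the general pattern (synthesize and play are hypothetical stand-ins, not Piper's actual API):

```python
# Sketch of chunked TTS: split the reply into sentences and render/play them
# one at a time, so speech starts before the full reply has been synthesized.
# synthesize() and play() are hypothetical stand-ins for your TTS engine and audio sink.
import re

def speak_chunked(text: str, synthesize, play) -> None:
    # Naive sentence split; a real implementation would handle abbreviations etc.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    for sentence in sentences:
        audio = synthesize(sentence)  # render only this chunk
        play(audio)  # the first sentence plays while later ones are still unrendered
```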

And the idea of running them on the NPU is not about memory usage but about hardware capacity/parallelism. Although I guess it would have some benefits in that I wouldn't have to constantly load/unload GPU models.

[–] brucethemoose@lemmy.world 2 points 15 hours ago (1 children)

Oh, I forgot!

You should check out Lemonade:

https://github.com/lemonade-sdk/lemonade

It supports Ryzen NPUs via two different runtimes... though apparently not the 8000 series yet?

[–] fonix232@fedia.io 1 points 15 hours ago (1 children)

I've actually been eyeing Lemonade, but the lack of Dockerisation is still an issue... guess I'll just DIY it at some point.

[–] brucethemoose@lemmy.world 1 points 10 hours ago* (last edited 10 hours ago) (1 children)

It's all C++ now, so it doesn't really need docker! I don't use docker for any ML stuff, just pip/uv venvs.

You might consider Arch (dockerless) ROCm soon; it looks like 7.1 is in the staging repo right now.

[–] fonix232@fedia.io 2 points 3 hours ago

Since I'm running Unraid on the node in question, I kinda do need Docker. I want to avoid messing with the core OS as much as possible, plus a Dockerised app is always easier to restore.

[–] brucethemoose@lemmy.world 2 points 20 hours ago* (last edited 20 hours ago)

Yeah... Even if the LLM is RAM-speed constrained, simply using another device so as not to interrupt it would be good.

Honestly, AMD's software dev efforts are baffling. They've focused on a few libraries precisely no-one uses, like this: https://github.com/amd/Quark

While ignoring issues holding back entire sectors (like broken flash-attention) with devs screaming about it at the top of their lungs.

Intel suffers from corporate Game of Thrones, but at least they have meaningful contributions in the open source space here, like the SYCL/AMX llama.cpp code or the OpenVINO efforts.

[–] moodwrench@lemmy.world 14 points 1 day ago (2 children)

It's not a lack of software, it's a lack of hardware. Home Assistant is ready, as are others, but there's no good, cheap mic/speaker/ESP-in-a-box hardware.

[–] fonix232@fedia.io 8 points 1 day ago (1 children)

The HA Voice Preview is a pretty solid device, but you're right, there isn't really any ready-made Echo/Google Home Mini replacement device - primarily because all those devices are generally sold at a loss, or at cost at best, and subsidised by your data being sold.

You won't be able to make a Google Home Mini contender for below $50, and at that price most people will just opt for the Google product. Good quality speakers, microphones, and local processing (like the XMOS chip in the Voice Preview) all cost money, and there's no subsidy to be had. Some older Echo devices are rootable, but the hardware tends to be somewhat exotic (meaning no open source support for specialised components), and there's little ongoing third party support (the focus has been on the display-equipped models, and on running Android on them).

All in all, "cheap" and "fully local open source voice assistant" don't really coexist.

[–] TechLich@lemmy.world 1 points 1 day ago (1 children)

The issue with that is there isn't an expensive option either. The only thing close is the Home Assistant Voice Preview, and it's still very "preview". There's not really any way to do it well at any price point right now.

[–] fonix232@fedia.io 1 points 1 day ago

Well yeah, the availability of these more advanced hardware bits is pretty new - for example, all the older GH Minis and Echo devices were running a quite pared down Linux distro with software processing for e.g. wake words.

Transplanting all that to MCUs takes time, but now we have a solid base, a handful of devices/boards that utilise the various XMOS chips, and soon we will be seeing more and more consumer level devices - but again that takes time when there's no big megacorp behind the project pushing it to completion with bottomless finances and hundreds of engineers.

But you're not exactly correct that there are no other options. There's the Satellite1 smart speaker, which might be a DIY kit but it does exist. Then there's the Seeed Studio ReSpeaker Lite w/ ESP32-S3, onto which you can slap a speaker (either directly, or a powered speaker through the audio jack). In fact the ReSpeaker lineup has a handful more options for smart speakers, all utilising the various XMOS chips.

Just keep in mind that these speakers are DIY mainly for two reasons:

  • the technology is pretty new
  • there's no big corpo push behind it to deliver profitable (in some way) consumer products

There WILL be consumer products (hopefully soon) on the market, but again, this is being done by volunteers and small startups with just a handful of people; it takes more time to get them to market than it does for companies the size of Amazon or Google.

[–] Beacon@fedia.io -3 points 1 day ago (3 children)

No, Home Assistant very much is not ready to replace an Alexa device. Home Assistant mainly only does automation of smart devices, and as far as I can see from their website it does nothing else. One of the main things people use Alexa for is to play music from services like Spotify, and Home Assistant doesn't appear to do that.

[–] tyler@programming.dev 2 points 1 day ago

You very clearly don't understand Home Assistant.

[–] moodwrench@lemmy.world 4 points 1 day ago (1 children)

Sorry... my experience has been trying to move my Google Home to something open with no cloud... it's not been perfect for me after moving. Definitely things missing, but lots of things are better. Spotify does work with Home Assistant... maybe look again or send a PR.

[–] Beacon@fedia.io -1 points 1 day ago (1 children)

It isn't listed anywhere on their homepage or example demos or anywhere listing its capabilities, so I did a web search and found that it sorta just kinda can do Spotify, but (1) that isn't listed anywhere on the Home Assistant capability listing pages, which shows just how not ready for the mass market it is, and (2) it takes a ridiculous amount of very techie setup just to get it to work:

https://www.home-assistant.io/integrations/spotify/

And also, out of the box, can I ask it to:

  • tell me the weather?

  • set a timer?

  • set an alarm?

I don't see anything on the website that says it can do these things. And even if it can (which doesn't appear to be the case from their website), the fact that the website doesn't say so is a problem in itself and shows it isn't ready for the mass market.

Just look at the webpage for Alexa vs. Home Assistant and it's clear that Alexa has a very wide variety of abilities and is designed to be easy for anyone to use, while the Home Assistant website only shows it doing smart device automation and looks like it's not for regular folks.

https://www.amazon.com/dp/B0DCCNHWV5

https://www.home-assistant.io/

I would LOVE to replace my Alexa devices with a local FOSS system, but unfortunately home assistant isn't close to being able to do that yet

[–] fonix232@fedia.io 3 points 1 day ago

I'm sorry, what?

Googling "home assistant Spotify" results in the very link you've provided.

And you can hardly expect a project like Home Assistant, with THOUSANDS of first party integrations, to cater to your specific needs, or to provide preferential treatment to companies like Spotify, who provide absolutely no support to the project.

It also doesn't require a "techie setup", just following a quite straightforward guide that culminates in clicking maybe a dozen buttons (most of them being "I accept" on various terms and policies), then copying a handful of readily provided strings into the right fields. It's simple enough that even my tech-illiterate father can do it.

Home Assistant at the end of the day is NOT an Alexa (or other voice assistant) replacement, but a smarthome control hub OS. That it provides a voice assistant interface is quite secondary to its main mission.

[–] possiblylinux127@lemmy.zip 2 points 1 day ago (1 children)

Home Assistant has a voice assistant feature.

[–] Beacon@fedia.io 0 points 1 day ago

It does, but it still has the same limitations as the screen interface.

[–] Integrate777@discuss.online 2 points 1 day ago* (last edited 1 day ago)

We've got local LLMs, we've got local text to speech; isn't it just a matter of time until someone puts in the work to build one? It shouldn't be surprising. I'm surprised there aren't more of them already.

If you follow programming communities, the most popular thing beginners say they want to build these days is "local AI chat assistant" or some variant of the concept.

[–] thatradomguy@lemmy.world 5 points 1 day ago (1 children)

There also used to be an open-source, Alexa-like smart speaker that went by Mycroft AI. They were doing crowdfunding, I believe, but that didn't go anywhere and so they eventually stopped working on it. You can still find their stuff on YouTube though: https://www.youtube.com/@MycroftAIForEveryone/videos

[–] fruitycoder@sh.itjust.works 1 points 22 hours ago

I have one! It was a really cool project! https://www.openvoiceos.org/ is the community fork carrying it forward

[–] rimu@piefed.social 4 points 1 day ago

Time to get a mic for my home server!