this post was submitted on 20 Oct 2025
25 points (93.1% liked)

LocalLLaMA

3820 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people who make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>".

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
top 8 comments
[–] mindbleach@sh.itjust.works 5 points 1 week ago (2 children)

This is the real future of neural networks. Trained on supercomputers - runs on a Game Boy. Even in comically large models, the majority of weights are negligible, and local video generation will eventually be taken for granted.
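That "majority of weights are negligible" claim is the intuition behind pruning. A toy sketch of magnitude pruning, assuming PyTorch (the layer size and sparsity level here are made up for illustration):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Zero out the smallest-magnitude weights; illustrative values only."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4096, 4096)        # stand-in for one layer of a large model
pruned = magnitude_prune(w, 0.9)   # keep only the largest ~10% of weights
print(f"sparsity: {(pruned == 0).float().mean():.2%}")
```

In practice the surprising part is how little quality a big model loses under aggressive pruning or quantization, which is exactly what makes "trained on supercomputers, runs on small hardware" plausible.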

Probably after the crash. Let's not pretend that's far off. The big players in this industry have frankly silly expectations. Ballooning these projects to the largest sizes money can buy has been illustrative, but DeepSeek already proved LLMs can be dirt cheap. Video's more demanding... but what you get out of ten billion weights nowadays is drastically different from six months ago. A year ago, video models barely existed. A year from now, the push toward training on less and running on less will presumably be a lot more pressing.

[–] jwmgregory@lemmy.dbzer0.com 1 points 6 days ago* (last edited 6 days ago) (1 children)

The bubble popping will be a good thing. Henry Ford didn’t come around until after the electrification bubble popped, after all. Bezos didn’t come around until the dotcom bubble burst.

It's after bubbles burst, when the genuinely useful things are most salient and apparent, that the true innovations happen.

[–] mindbleach@sh.itjust.works 1 points 6 days ago

The bubble continuing ensures the current paradigm soldiers on, meaning hideously expensive projects shove local models into people's hands for free, because everyone else is doing that.

And once it bursts, there's gonna be an insulating layer of dipshits repeating "guess it was nothing!" over the next decade of incremental wizardry. For now, tolerating the techbro cult's grand promises of obvious bullshit means the unwashed masses are interpersonally receptive to cool things happening.

Already the big boys have pivoted toward efficiency instead of raw speed at all costs. The closer they get to a toaster matching current tech with a model trained for five bucks, the better. I'd love for VCs to burn money on experimentation instead of scale.

[–] ThorrJo@lemmy.sdf.org 2 points 1 week ago

I'm very interested in this approach because I'm heavily constrained by money. So I'm gonna be looking (in non-appliance contexts) to develop workflows where genAI can be useful when limited to small models running on constrained hardware. I suspect some creativity can yield useful tools within these limits, but I'm just starting out.

[–] hendrik@palaver.p3x.de 3 points 1 week ago* (last edited 1 week ago) (1 children)

The network aspect makes smart appliances smart. For example, I can program the washing machine to have the laundry ready when I get home... I don't think it's super useful to give it a microphone, a computer, and several gigabytes of RAM just so I can go down to the basement and talk to the darn thing... It already has a bunch of LEDs to tell me the status, and I think I'm perfectly fine with that.

I think I'm more for edge computing. Have one smart home hub with the hardware to do voice and AI compute (with a slightly larger AI model on it), and then have one protocol for the appliances to communicate over my own WiFi. Something like what Home Assistant does, just aimed more at general edge compute. I think that's more useful than spending extra money on every item in the household just to replace the touchscreen with a speaker/mic and fit the cheapest conversation agent that's in the budget for that specific device.
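A minimal sketch of that hub-plus-protocol idea, assuming an MQTT broker running on the hub and paho-mqtt 2.x; the topic names and payload format are invented for illustration:

```python
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    status = json.loads(msg.payload)
    # A real hub would feed appliance state plus a transcribed voice
    # request into the local model here and decide what to do.
    if status.get("state") == "finished":
        client.publish("home/notify", json.dumps({"text": f"{msg.topic} is done"}))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_message = on_message
client.connect("localhost", 1883)        # broker on the hub itself
client.subscribe("home/appliances/#")    # washing machine, dryer, oven, ...
client.loop_forever()
```

The point being: the appliances only need a cheap radio and a shared protocol, while the one expensive model lives on the hub.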

AI chips are welcome, though, and I'm sure we'll find some useful applications for them. 1 TOPS isn't much compared to a desktop computer or graphics card, but it's certainly not bad either. I think several single-board computers have similar specs; just not the Raspberry Pi, which comes without an NPU.

[–] fonix232@fedia.io 2 points 1 week ago (1 children)

Doing some level of local computing on certain devices (especially ones you directly interact with, where voice interfacing can matter, like a TV) is useful.

I think the best approach is connected edge computing: combining some local computing with a hub doing the edge computing, and switching which side takes care of business depending on the needs of the task.

Say, having the ability to turn off the oven when you smell smoke (or when you remember you haven't set a timer and the food is ready), simply by talking to your washing machine while you're loading it, is a useful perk. Sure, it's an edge case, but the moment it's needed, even just once, you'll appreciate it.
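A rough sketch of that placement decision, with entirely invented task fields and thresholds, just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    needs_low_latency: bool   # e.g., wake-word detection
    model_size_mb: int        # rough footprint of the model required

DEVICE_BUDGET_MB = 256        # what a cheap appliance NPU might hold

def place(task: Task) -> str:
    # Small, latency-sensitive work runs on the device; anything
    # heavier is forwarded to the shared edge hub.
    if task.needs_low_latency and task.model_size_mb <= DEVICE_BUDGET_MB:
        return "device"
    return "hub"

for t in [Task("wake word", True, 16), Task("full voice assistant", False, 4000)]:
    print(t.name, "->", place(t))
```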

[–] hendrik@palaver.p3x.de 1 points 1 week ago* (last edited 1 week ago)

Sure. I'm all for the usual system design strategy: strong cohesion within one component and loose coupling on the outside to interconnect all of that. Every single household appliance should be perfectly functional on its own, without any hubs or other stuff needed.

For self-contained products, or ones without elaborate features, I kind of hate these external dependencies. I wouldn't want to give up my NAS and how I can access my files from my phone, computer, or TV. But other than that, I think the TV and all other electronics should work without being connected to anything else.

I mean, edge computing is mainly there to save cost and power. It doesn't make sense to fit each device with a high-end computer and maybe half a graphics card so they can all do AI inference; that's expensive, and you can't build battery-powered devices that way. If they need internet anyway (and that's the important requirement), just buy one GPU and let them all use it. They'll fail without the network connection anyway, so it doesn't matter, and this is easier to maintain and upgrade, and probably faster and cheaper.

A bit like me buying one NAS instead of one 10TB hard disk for the laptop, one for the phone, one for the TV... and then I can't listen to a song on the stereo because it was sent to my phone.
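As a sketch of that one-shared-GPU setup: every device sends inference requests to a single local server instead of carrying its own model. This assumes an OpenAI-compatible endpoint like the one llama.cpp's llama-server exposes; the hostname and prompt are placeholders.

```python
import requests

# One GPU box on the LAN serves every appliance on the network.
resp = requests.post(
    "http://gpu-box.lan:8080/v1/chat/completions",
    json={
        "model": "local",  # llama-server accepts an arbitrary model name
        "messages": [{"role": "user", "content": "The dryer finished. Notify me."}],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```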

But my premise is that the voice stuff and AI features are optional. If they're essential, my suggestion wouldn't really work; I just rarely see the need. In your example, the smoke alarm could trigger and Home Assistant would send me a push notification on my phone. I'd whip it out and have an entire screen with status information and buttons to deal with the situation. I think that'd be superior to talking to the washing machine. I don't have a good solution for the timer; one day my phone will do that as well.

But mind that your solution also needs the devices to communicate via one protocol and be connected. The washing machine would need to get informed by the kitchen, be clever enough to know what to do about it, and also tell the dryer next to it to shut up... So we'd need to design a smart home system. If the devices all connect to a coordinator, perfect: that could be the edge-computing "edge". If not, it'd be some sort of decentralized system, and I'm not aware of any in existence. It'd be challenging to design and implement, and such systems tend to be problematic for innovation because everything needs to stay compatible, pretty much indefinitely. It'd be nice, though. And I can see some benefits if arbitrary things could just connect, or stay separate, without buying into an entire ecosystem.
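For what it's worth, the push-notification path described above is a stock Home Assistant pattern. A rough sketch of triggering it from an external script via HA's REST API (the host, token, and notify service name are placeholders; a real setup would more likely use a YAML automation):

```python
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # created in the HA user profile

def notify_phone(title: str, message: str) -> None:
    # Calls HA's notify service for the companion app on one phone.
    requests.post(
        f"{HA_URL}/api/services/notify/mobile_app_my_phone",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": title, "message": message},
        timeout=5,
    ).raise_for_status()

notify_phone("Smoke detected", "Kitchen smoke sensor tripped; check the oven.")
```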

[–] naeap@sopuli.xyz 1 points 1 week ago* (last edited 1 week ago)

Well, the investments are not only about the massive power consumption of serving AI to all the users; they're mostly needed for the research and training.

If you just take the trained model and run it on some capable hardware, of course it will produce results.

I don't even know how this can be a comparison.