this post was submitted on 23 Nov 2024
526 points (95.8% liked)
you are viewing a single comment's thread
I have read the comments here, and what my small brain takes away is that the huge, unnecessary power consumption comes from using big, cloud-hosted models for simple tasks.
So could the on-device NPUs we're getting in flagship phones solve this, since most of those simple tasks could be done offline, on-device?
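For concreteness, "doing the simple task offline" means something like running a small quantized model locally, e.g. with llama-cpp-python, with no network involved at all. A minimal sketch (the model path is just a placeholder for whatever GGUF file you have downloaded):

```python
# Fully offline inference with a small quantized model via llama-cpp-python.
# The model file path below is a placeholder, not a real download.
from llama_cpp import Llama

llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048)

out = llm("Rewrite this sentence more politely: Send me the report now.",
          max_tokens=64)
print(out["choices"][0]["text"])
```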
I’ve run an LLM on my desktop GPU and gotten decent results, albeit not nearly as good as what ChatGPT gives you.
It probably used less than 0.1 Wh per response.
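To be clear, that 0.1 Wh figure is a back-of-the-envelope estimate, not a careful measurement. Here's a rough sketch of how you could estimate it yourself on an NVIDIA card by sampling the GPU's reported power draw while a response generates. It assumes the pynvml module (pip install nvidia-ml-py), GPU index 0, and a `generate()` callable standing in for whatever local runtime you use:

```python
# Sample the GPU's reported power draw while generate() runs, then
# integrate over time to get an energy estimate for that one response.
import threading
import time

import pynvml


def measure_response_energy(generate, interval_s=0.1):
    """Run generate() and return (result, estimated energy in Wh)."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    samples = []  # instantaneous power readings in watts
    done = threading.Event()

    def sampler():
        while not done.is_set():
            mw = pynvml.nvmlDeviceGetPowerUsage(handle)  # milliwatts
            samples.append(mw / 1000.0)
            time.sleep(interval_s)

    t = threading.Thread(target=sampler, daemon=True)
    start = time.time()
    t.start()
    try:
        result = generate()
    finally:
        done.set()
        t.join()
        pynvml.nvmlShutdown()
    elapsed_h = (time.time() - start) / 3600.0

    avg_w = sum(samples) / len(samples) if samples else 0.0
    return result, avg_w * elapsed_h  # watts x hours = Wh


# Hypothetical usage with a local model object called llm:
# result, wh = measure_response_energy(
#     lambda: llm("Summarize this email: ...", max_tokens=128))
# print(f"~{wh:.3f} Wh for this response")
```

Note this counts the whole card's draw, idle baseline included, so if anything it overestimates the per-response cost; subtracting the idle wattage would give a tighter number.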
Is that for inference only, or does it include training?
Inference only. I’m looking into doing some fine-tuning; training from scratch is another story.
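For anyone curious what "some fine-tuning" can look like on a single desktop GPU, here's a minimal sketch using LoRA via Hugging Face's peft library, so only small adapter matrices get trained while the base weights stay frozen. The model name and the my_notes.txt dataset are just placeholders:

```python
# Parameter-efficient fine-tuning (LoRA) of a small causal LM.
# Base model name and training file are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters; the base weights stay frozen.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Tiny toy dataset just to show the shape of the pipeline.
data = load_dataset("text", data_files={"train": "my_notes.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Energy-wise this sits in between: far more than one inference pass, but nothing like a from-scratch training run, since only the adapter weights are updated.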