This post was submitted on 27 Oct 2023
484 points (94.8% liked)

[–] isolatedscotch@discuss.tchncs.de 10 points 2 years ago (16 children)

By "run his own models" he means locally running a text-generation AI on his own computer, because sending all that data to OpenAI is a privacy nightmare, especially if you use it for sensitive stuff.
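
For instance, here's a minimal sketch of what "running it locally" can look like, using the llama-cpp-python bindings (the model path is a placeholder for whatever GGUF file you've downloaded; any similar local runner works). The prompt and the completion never leave your machine:

```python
from llama_cpp import Llama

# Placeholder path -- point this at a GGUF model you've downloaded yourself.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: Summarise the key risks in this contract clause.\nA:", max_tokens=128)
print(out["choices"][0]["text"])  # generated entirely on local hardware
```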

[–] XTornado@lemmy.ml 3 points 2 years ago* (last edited 2 years ago) (11 children)

But that's still confusing, because we already can. Yeah, you might need a bit more hardware, but... nothing that crazy. Plus some smaller models can run on fairly normal hardware.

It might not be easy to set up, that's true.
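
As a sketch of the "normal hardware" case: a small model through the Hugging Face transformers pipeline runs fine on a laptop CPU (distilgpt2 here is just an example of a small model, not a recommendation):

```python
from transformers import pipeline

# device=-1 forces CPU; a model this small needs no GPU at all.
generator = pipeline("text-generation", model="distilgpt2", device=-1)

print(generator("Local inference means", max_new_tokens=40)[0]["generated_text"])
```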

[–] Communist@lemmy.ml 5 points 2 years ago (7 children)

For large-context models the hardware is prohibitively expensive.
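
A back-of-the-envelope calculation shows why context length in particular hurts: the KV cache grows linearly with context. The numbers below are illustrative, roughly Llama-2-70B-shaped, and deliberately ignore grouped-query attention (which real 70B models use to shrink this by about 8x):

```python
# Rough KV-cache size for a decoder-only transformer (illustrative numbers).
n_layers   = 80
n_heads    = 64
head_dim   = 128
bytes_fp16 = 2

def kv_cache_gib(context_tokens):
    # 2x for the separate key and value tensors, per layer, per head, per token.
    b = 2 * n_layers * n_heads * head_dim * bytes_fp16 * context_tokens
    return b / 2**30

for ctx in (2_048, 32_768, 128_000):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gib(ctx):.1f} GiB of KV cache")
# ->   2048 tokens -> ~5.0 GiB
# ->  32768 tokens -> ~80.0 GiB
# -> 128000 tokens -> ~312.5 GiB
```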

[–] supert@lemmy.sdfeu.org 1 points 2 years ago (2 children)

I can run 4-bit-quantised Llama 70B on a pair of 3090s, or rent GPU server time. It's expensive but not prohibitive.
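
The arithmetic checks out, at least for the weights (this ignores quantisation format overhead and the KV cache, so treat it as a lower bound):

```python
# Rough weight-memory estimate for a 4-bit-quantised 70B-parameter model.
params = 70e9
bits_per_weight = 4
weights_gib = params * bits_per_weight / 8 / 2**30
print(f"~{weights_gib:.0f} GiB of weights")  # ~33 GiB

# Two RTX 3090s give 2 x 24 = 48 GiB of VRAM, leaving some headroom
# for the KV cache and activations -- tight, but it fits.
```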

[–] Communist@lemmy.ml 1 points 2 years ago (1 children)

How many tokens of context can you run it with?

[–] supert@lemmy.sdfeu.org 1 points 2 years ago

3k? Can't recall exactly, and I'm getting hardware stability issues.

[–] anotherandrew@lemmy.mixdown.ca 1 points 2 years ago

I'm trying to get to the point where I can locally run a (slow) LLM that I've fed my huge ebook collection to, and can ask it where to find info on $subject, getting title/page info back. The PDFs that are searchable aren't too bad, but finding a way to OCR the older TIFF-scan PDFs, and getting it to "see" graphs/images, are the areas I'm stuck on.
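
The "where do I find $subject" part is basically retrieval over embeddings. Here's a hedged sketch of that side using one possible stack (sentence-transformers + numpy; the titles, pages, and text are made-up placeholders). Text extraction is left as a comment because that's exactly the unsolved part:

```python
# Embed page-sized chunks of the library, then answer queries with
# nearest-neighbour search over the embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

# (title, page, text) tuples -- in practice extracted with e.g. pypdf for
# searchable PDFs, or pdf2image + pytesseract for the old TIFF scans.
pages = [
    ("Ebook A", 12, "An introduction to antenna impedance matching..."),
    ("Ebook B", 87, "Smith charts visualise complex reflection coefficients..."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs on CPU
emb = model.encode([text for _, _, text in pages], normalize_embeddings=True)

def where_to_find(subject, k=3):
    q = model.encode([subject], normalize_embeddings=True)[0]
    scores = emb @ q  # cosine similarity, since embeddings are normalised
    for i in np.argsort(scores)[::-1][:k]:
        title, page, _ = pages[i]
        print(f"{title}, p.{page}  (score {scores[i]:.2f})")

where_to_find("impedance matching")
```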
