this post was submitted on 29 Apr 2024
97 points (87.6% liked)

Technology

77091 readers
2489 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
all 24 comments
[–] storcholus@lemmy.world 37 points 2 years ago (1 children)

Have you read AI stories? They are shit. Current AI doesn't understand the arc that makes a story.

[–] scarilog@lemmy.world 14 points 2 years ago (1 children)

That's not the point of this, I think, lol. It's very impressive as a tech demo that a device even as underpowered as a Pi can run these AI models to a passable degree.

https://github.com/tvldz/storybook
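
To give a sense of what "passable on a Pi" looks like in practice, here's a minimal sketch of the general approach of running a small quantized model with the llama-cpp-python bindings. This is just an illustration, not the repo's actual code, and the model path is a placeholder.

```python
# Illustrative sketch only -- not code from tvldz/storybook.
# Assumes llama-cpp-python is installed and a small quantized GGUF model
# (placeholder path below) has been copied onto the Pi.
from llama_cpp import Llama

# A 4-bit quantized model in the 1-3B parameter range is roughly what a Pi can handle.
llm = Llama(model_path="models/tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx=2048)

prompt = "Write a short bedtime story about a brave little robot."
result = llm(prompt, max_tokens=300, temperature=0.8)

print(result["choices"][0]["text"])
```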

[–] storcholus@lemmy.world 3 points 2 years ago

That's what I meant. AI stories are not passable, and I think if we give them to people who don't know how stories work (children), we are in for a bad time.

[–] Nurse_Robot@lemmy.world 18 points 2 years ago (1 children)

I could see this going hilariously wrong

[–] guyrocket@kbin.social 9 points 2 years ago (1 children)

Or tragically wrong.

I would not want a machine with no moral compass whatsoever telling "stories" to a toddler.

Hi, Susie. Have you ever heard of the Texas Chainsaw Massacre? Columbine? BTK?

[–] redcalcium@lemmy.institute 9 points 2 years ago (1 children)

I mean, have you checked kids' videos on YouTube? I remember being dumbfounded when I watched some of the "stories". An LLM would fit right in.

[–] vext01@lemmy.sdf.org 13 points 2 years ago

So I heard you like generic and predictable stories...

[–] Gutless2615@ttrpg.network 10 points 2 years ago (1 children)
[–] vic_rattlehead@lemmy.world 3 points 2 years ago

Still waiting for my skull gun.

[–] cmnybo@discuss.tchncs.de 9 points 2 years ago (1 children)

I'm surprised that the Pi can even run Stable Diffusion.

[–] erwan@lemmy.ml -1 points 2 years ago (1 children)

More likely running on servers

[–] geophysicist@discuss.tchncs.de 7 points 2 years ago (1 children)

Article clearly stated it's running locally

[–] Gutless2615@ttrpg.network -2 points 2 years ago* (last edited 2 years ago) (1 children)

Which is bullshit, because the Pi categorically cannot do that. More than likely he's running Stable Diffusion locally on the network, though.

Edit: I’m an asshole, and forking impressed.

[–] scarilog@lemmy.world 7 points 2 years ago (1 children)
[–] Gutless2615@ttrpg.network 6 points 2 years ago (1 children)

Well damn, thank you for setting me straight. Impressive, tbh. I am shocked Stable Diffusion XL runs on the Pi 5.

[–] tinsuke@lemmy.world 8 points 2 years ago (1 children)

Boy, are the example story and picture bad.

[–] alb_004@lemm.ee 0 points 2 years ago

Yeah, maybe.

[–] TexasDrunk@lemmy.world 7 points 2 years ago (1 children)

I have what is probably a stupid and misplaced question. The second picture in the article has the phrase "with hope in his heart". That phrase repeatedly pops up in the hilariously bad ChatGPT stories I've seen people generate.

Is there a reason that cheesy phrases that don't get used in real life keep popping up in stories like that?

[–] piyuv@lemmy.world 3 points 2 years ago (1 children)

Those phrases are not common anymore, but they were once very common in the corpus the LLM is trained on (mid-20th-century books).

[–] TexasDrunk@lemmy.world 1 points 2 years ago

I want to preface this by saying I'm not doubting you, I just don't know how it works.

Ok, but wouldn't the training be weighted against older phrases that are no longer used? Or is all training data given equal weight?

Additionally, if the goal is to create bedtime stories or similar, couldn't the person generating it ask for a more contemporary style? Would that affect the use of that phrase and similar cheesy lines that keep appearing?
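
Something like this is what I have in mind, just as a rough sketch (again using llama-cpp-python with a placeholder model path; I have no idea whether it would actually suppress those stock phrases):

```python
# Illustrative sketch only: asking for a contemporary style via a system prompt.
# Assumes llama-cpp-python and a quantized GGUF chat model (placeholder path).
from llama_cpp import Llama

llm = Llama(model_path="models/tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx=2048)

result = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": (
                "Write in a contemporary, conversational style. "
                "Avoid old-fashioned stock phrases such as 'with hope in his heart'."
            ),
        },
        {"role": "user", "content": "Tell a short bedtime story about a brave little robot."},
    ],
    max_tokens=300,
    temperature=0.8,
)

print(result["choices"][0]["message"]["content"])
```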

I would never use an LLM for creative or factual work, but I use them all the time for code scaffolding, summarization, and rubber ducking. I'm super interested and just don't understand why they do the things they do.