this post was submitted on 12 May 2025
89 points (97.8% liked)

A Boring Dystopia


A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn’t really seem like an ideal product. I don’t think that’s what the developers of these LLMs set out to make when they created them, either. However, I’ve seen this behavior to a certain extent in every LLM I’ve interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (like, the actual mountain) for Disneyland. Now, this is something along the lines of what people have been calling “hallucinations” in LLMs, but the fact that it would not admit it was wrong when confronted, and used confident language to try to convince me it was right, is what pushes that particular case across the boundary into what I would call “con-behavior.”

Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and I’m sure other developers) have been training their LLMs to be more “agreeable” and to acquiesce to the user more often. This doesn’t eliminate the con-behavior, though. I’d like to show you another example of it that is much more problematic.

top 7 comments
[–] toy_boat_toy_boat@lemmy.world 19 points 1 week ago (1 children)

LLMs are specifically and exclusively designed to appeal to investors. Once you accept that as fact, the rest just falls into place.

[–] lordnikon@lemmy.world 9 points 1 week ago

Yeah, Gen AI is a great demo with very limited real-world applications. It's like showing a website mockup with pretty graphs and placeholder text: it conveys potential, but in that state it has very limited functionality for real people.

[–] auraithx@lemmy.dbzer0.com 3 points 1 week ago (1 children)

Yes, this was a specific problem with Gemini. They obviously tried to overcorrect for hallucinations and for being too gullible, but it ended up making the model certain of its hallucinations.

Hallucination rate for their latest model is 0.7%

https://github.com/vectara/hallucination-leaderboard

Should be <0.1% within a year
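For context on what a figure like that 0.7% measures: the linked leaderboard has each model summarize a fixed set of source documents and reports the fraction of summaries that a factual-consistency judge flags as unsupported. Below is a minimal sketch of that bookkeeping in Python; the toy word-overlap judge and the sample data are illustrative stand-ins, not the actual evaluator the leaderboard uses.

```python
# Sketch of how a summarization hallucination rate is computed: summarize a
# set of source documents, judge each summary as grounded or not, and report
# the fraction that fails. The judge here is a trivial stand-in, NOT the
# leaderboard's real factual-consistency model.

from dataclasses import dataclass

@dataclass
class Sample:
    source: str   # original document given to the model
    summary: str  # model-generated summary of that document

def is_grounded(sample: Sample) -> bool:
    """Placeholder judge: a summary counts as grounded if every sentence
    shares at least one word with the source. A real evaluation would use a
    trained consistency model instead."""
    source_words = set(sample.source.lower().split())
    for sentence in sample.summary.split("."):
        words = set(sentence.lower().split())
        if words and not (words & source_words):
            return False
    return True

def hallucination_rate(samples: list[Sample]) -> float:
    """Fraction of summaries the judge flags as not grounded in the source."""
    if not samples:
        return 0.0
    flagged = sum(1 for s in samples if not is_grounded(s))
    return flagged / len(samples)

if __name__ == "__main__":
    samples = [
        Sample("Walt Disney opened Disneyland in 1955.",
               "Disneyland opened in 1955."),
        Sample("The Matterhorn is a mountain on the border between Switzerland and Italy.",
               "Walt Disney built it for Disneyland."),  # not supported by the source
    ]
    print(f"hallucination rate: {hallucination_rate(samples):.1%}")
```

A real evaluation swaps the placeholder judge for a trained model and runs over a large set of documents, which is where a number like 0.7% comes from.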

[–] db0@lemmy.dbzer0.com 3 points 1 week ago

Hallucinations when summarizing are significantly lower than when generating code (since the original document would be in context)

[–] Proprietary_Blend@lemmy.world -1 points 1 week ago

Just don't use it. Duh.

[–] smee@poeng.link -2 points 1 week ago (1 children)

It's no more a conman than the average person. The problem is that people consider it an oracle of truth and get shocked when they discover it can be just as deceitful as the next person.

All it takes for people is to run the same question by different AI models and get conflicting answers to see the difference and understand that at least one of the answers is wrong (a rough sketch of that check follows below).

But alas...
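For anyone who wants to try that comparison, here is a minimal sketch assuming two OpenAI-compatible chat endpoints (an API many local and hosted LLM servers expose); the base URLs, ports, and model names are placeholders, not anything referenced in this thread.

```python
# Ask several models the same question and flag disagreement.
# The endpoints and model names below are hypothetical examples; point them
# at whatever OpenAI-compatible servers you actually run.

from openai import OpenAI

ENDPOINTS = [
    # (label, base_url, model) -- placeholders
    ("local-llama", "http://localhost:11434/v1", "llama3.1:8b"),
    ("other-model", "http://localhost:8000/v1", "mistral-7b-instruct"),
]

QUESTION = "Who created the Matterhorn, the mountain on the Swiss-Italian border?"

def ask(base_url: str, model: str, question: str) -> str:
    """Send one question to one OpenAI-compatible endpoint and return the reply."""
    client = OpenAI(base_url=base_url, api_key="not-needed-for-local-servers")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return (response.choices[0].message.content or "").strip()

if __name__ == "__main__":
    answers = {label: ask(url, model, QUESTION) for label, url, model in ENDPOINTS}
    for label, answer in answers.items():
        print(f"[{label}] {answer}\n")
    if len(set(answers.values())) > 1:
        print("Answers disagree -- at least one of these models is wrong.")
```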

[–] baggachipz@sh.itjust.works 1 points 1 week ago

The problem is that people consider it an oracle of truth

Because that’s how it is presented by the con men getting rich off this con.