this post was submitted on 30 Dec 2025
880 points (98.7% liked)

[–] U7826391786239@lemmy.zip 188 points 1 week ago* (last edited 1 week ago) (7 children)

I don't think it's emphasized enough that AI isn't just making up bogus citations to nonexistent books and articles; increasingly, actual articles and other sources are completely AI-generated too. So a reference to a source might be "real," but the source itself is complete AI slop bullshit.

https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

The actual danger of it all should be apparent, especially in any field related to health science research.

And of course these fake papers are then used to further train AI, causing factually wrong information to spread even more.

[–] BreadstickNinja@lemmy.world 77 points 1 week ago (1 children)

It's a shit ouroboros, Randy!

[–] brsrklf@jlai.lu 134 points 1 week ago (9 children)

Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

Arthur C. Clarke was not wrong, but he didn't go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

[–] clay_pidgin@sh.itjust.works 45 points 1 week ago (4 children)

I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built-in instructions?

[–] mushroommunk@lemmy.today 51 points 1 week ago (9 children)

I don't think most people know there are built-in instructions. I think to them it's legitimately a magic box.
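
For anyone curious what the magic box actually does: a minimal sketch of the idea, assuming a chat app that silently wraps your message in hidden instructions before it ever reaches the model. The instruction text here is invented for illustration, not any vendor's real system prompt.

```python
# Toy illustration of a chat app's hidden "system prompt".
# The instruction string is made up, not any vendor's actual prompt.
BUILT_IN_INSTRUCTIONS = (
    "You are a helpful assistant. Be accurate and honest. "
    "Do not fabricate facts, sources, or citations."
)

def build_request(user_message: str) -> list[dict]:
    """Bundle the hidden instructions with whatever the user typed."""
    return [
        {"role": "system", "content": BUILT_IN_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]

print(build_request("Recommend five books about rivers."))
```

The punchline is that "don't make things up" is usually already in there, and the model hallucinates anyway, so typing it again in your own prompt doesn't add a capability the hidden instructions didn't.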

[–] InternetCitizen2@lemmy.world 21 points 1 week ago* (last edited 1 week ago)

Grok, enhance this image

(•_•)
( •_•)>⌐■-■
(⌐■_■)

[–] Wlm@lemmy.zip 10 points 1 week ago (1 children)

Like a year ago, adding "and don't be racist" actually made the output less racist 🤷.

[–] NikkiDimes@lemmy.world 15 points 1 week ago (6 children)

That's more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.
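
A deliberately dumb toy sketch of why that is, assuming the usual picture of the final step: the model scores possible continuations by plausibility learned from text and then samples one. The scores below are invented for illustration, and nothing in the step consults reality.

```python
import math
import random

# Invented plausibility scores (logits) for two continuations of
# "The sequel to Of Mice and Men is ...". Nothing here checks facts.
candidates = {
    "a book that actually exists": 2.0,
    "a plausible-sounding fake title": 1.8,
}

def softmax(scores: dict) -> dict:
    """Turn raw scores into probabilities."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(candidates)
pick = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)  # the fake is almost as probable as the real thing
print(pick)   # and some fraction of the time it simply wins the draw
```

Instructions can nudge those scores a bit (that's the tone part), but there's no "truth" knob in the mechanism to turn up, which is why "don't hallucinate" doesn't do what people hope.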

[–] nulluser@lemmy.world 124 points 1 week ago (1 children)

Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

No, no, apparently not everyone, or this wouldn't be a problem.

[–] FlashMobOfOne@lemmy.world 30 points 1 week ago

In hindsight, I'm really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

[–] SleeplessCityLights@programming.dev 92 points 1 week ago (16 children)

I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward was proof that people have no idea how "smart" an LLM chatbot is. They have probably been using one at work for a year thinking it was accurate.

[–] hardcoreufo@lemmy.world 26 points 1 week ago (11 children)

Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one time out of 20 it turns up what I'm asking for better than a search engine. The rest of the time it runs me in circles that don't work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

[–] MrScottyTay@sh.itjust.works 11 points 1 week ago (1 children)

It's fucking awful, isn't it? Some day soon, when I can be arsed, I'll have to give one of the paid search engines a go.

I'm currently on Qwant, but I've already noticed a degradation in its results since I started using it at the start of the year.

[–] markovs_gun@lemmy.world 13 points 1 week ago (2 children)

I legitimately don't understand how someone can interact with an LLM for more than 30 minutes and come away thinking it's some kind of superintelligence, or that it can be trusted as a means of gaining knowledge without external verification. Do they just never consider the possibility that it might not be fully accurate, and not bother to test it? I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example (I don't know if this is still the case), if you ask ChatGPT who wrote various books of the Bible, it will give not only the traditional view but specifically the evangelical Christian view on most versions of these questions. That makes sense, because evangelical writers are extremely prolific, but it's simply wrong to reply "Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark" when that view hasn't been favored in academic biblical studies for over 100 years, however traditional it is. Similarly, asking about early Islamic history gets you the religious views of Ash'ari Sunni Muslims, not the general scholarly consensus.

[–] SocialMediaRefugee@lemmy.world 12 points 1 week ago

I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. "Notice how it never says where it is taking place? Notice how they never give any specific names?" Fortunately she eventually agrees with me but I feel like I'm teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.

[–] b_tr3e@feddit.org 57 points 1 week ago* (last edited 1 week ago) (5 children)

No AI needed for that. These bloody librarians wouldn't let us have the Necronomicon either. Selfish bastards...

[–] Naevermix@lemmy.world 14 points 1 week ago (4 children)

I swear, librarians are the only thing standing between humanity and true greatness!

[–] RalfWausE@feddit.org 10 points 1 week ago (1 children)

This one is on you. MY copy of the Necronomicon sits firmly in my library in the west wing...

[–] pHr34kY@lemmy.world 52 points 1 week ago* (last edited 1 week ago) (4 children)

There's an old Monty Python sketch from 1967 that comes to mind when people ask a librarian for a book that doesn't exist.

They predicted the future.

[–] palordrolap@fedia.io 19 points 1 week ago

Are you sure that's not pre-Python? Maybe one of David Frost's shows like At Last the 1948 Show or The Frost Report.

Marty Feldman (the customer) wasn't one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests Cleese may have written it, since he'd only have been allowed to take it with him if he had.)

[–] MountingSuspicion@reddthat.com 44 points 1 week ago (1 children)

I believe I got into a conversation on Lemmy where I was saying there should be a big, persistent warning banner on every single AI chat app, something like "the following information has no relation to reality." The other person kept insisting it was not needed. I'm not saying it would stop all of these events, but it couldn't hurt.

[–] glitchdx@lemmy.world 31 points 1 week ago (2 children)

https://www.explainxkcd.com/wiki/index.php/2501:_Average_Familiarity

People who understand the technology forget that normies don't understand the technology.

[–] TubularTittyFrog@lemmy.world 10 points 1 week ago* (last edited 1 week ago) (2 children)

And normies think you're an asshole if you try to explain the technology to them, and cling to their ignorance of it because it's more 'fun' to believe in magic.

[–] eli@lemmy.world 9 points 1 week ago (1 children)

TIL there is a whole-ass MediaWiki for explaining XKCD comics.

[–] zanzo@lemmy.world 32 points 1 week ago (1 children)

Librarian here: the good news is that many libraries are standing up AI literacy programs to show people not only how to judge AI output but also how to get better results. If your local library isn't doing this, ask them why not.

[–] SocialMediaRefugee@lemmy.world 27 points 1 week ago (1 children)

Every time I think people have reached maximum stupidity they prove me wrong.

[–] PetteriSkaffari@lemmy.world 15 points 1 week ago

"Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."

Albert Einstein (supposedly)

[–] panda_abyss@lemmy.ca 18 points 1 week ago* (last edited 1 week ago) (4 children)

I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

It's better, but now I also can't tell when it's making up citations, because it uses Wikipedia to support the worldview from its pretraining instead of reality.

So it's not really much better.

Hallucinations become a bigger problem the more info the model has (that you now have to double-check).
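
For anyone wanting to try the same setup, this is roughly the shape of it: a minimal retrieval-augmented sketch, where `search_wikipedia` and `generate` are hypothetical stand-ins for an offline index and a local model, not any real library's API. The weak point is visible right in the code:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# search_wikipedia() and generate() are hypothetical stand-ins for an
# offline index and a local model, not a real library's API.

def search_wikipedia(query: str) -> list[str]:
    # Stand-in: would return the top passages from an offline index.
    return ["(passage 1 about the topic)", "(passage 2 about the topic)"]

def generate(prompt: str) -> str:
    # Stand-in: would call the local model.
    return "(model output)"

def answer(question: str) -> str:
    passages = search_wikipedia(question)
    prompt = (
        "Answer using ONLY the passages below, and cite them.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    # The catch: "ONLY" is just more text. Nothing forces the model to
    # obey it - it can ignore the passages, or cherry-pick lines that
    # agree with whatever its pretraining already "believes".
    return generate(prompt)

print(answer("Who wrote the Gospel of Mark?"))
```

Grounding narrows what the model sees, but the citation step is still generation, which is exactly the "can't tell when it's making up citations" problem.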

[–] SethTaylor@lemmy.world 18 points 1 week ago

I guess Thomas Fullman was right: "When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle". That's from Automating the Mind. One of his best.

[–] Lucidlethargy@sh.itjust.works 16 points 1 week ago (3 children)

Wait, are you guys saying "Of Mice And Men: Lennie's back" isn't real? I will LOSE MY SHIT if anyone confirms this!! 1!! 2.!

[–] Blackmist@feddit.uk 14 points 1 week ago (6 children)

Luckily, the future will provide not only AI titles, but the contents of said books as well.

Given the amount of utter drivel people are watching and reading of late, we're probably already most of the way there.
