this post was submitted on 26 Jun 2025
8 points (90.0% liked)

dynomight internet forum

top 2 comments
[–] Antsan@lemmy.world 2 points 1 week ago (1 children)

One indication that drawing the boundary is hard is how badly current LLMs hallucinate. An LLM almost never states "I don't know" or "I am unsure", at least not in a meaningful fashion. Ask it about anything that's known to be an unsolved problem, and it'll tell you so; but ask it about anything obscure, and it'll come up with some plausible-sounding bullshit.

And I think that's a failure to recognize the boundary of what it knows vs what it doesn't.

[–] dynomight@lemmy.world 1 points 1 week ago

I think this is a fair argument. Current AIs are quite bad at "knowing whether they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason to believe that, and I agree that my argument fails if it never gets solved.