One indication that drawing the boundary is hard is how badly current LLMs hallucinate. An LLM almost never says "I don't know" or "I'm unsure," at least not in a meaningful fashion. Ask it about something that's widely known to be an unsolved problem and it'll tell you so; but ask it about anything obscure, and it'll come up with some plausible-sounding bullshit.
And I think that's a failure to recognize the boundary between what it knows and what it doesn't.