this post was submitted on 01 Dec 2025
1205 points (99.0% liked)

Programmer Humor

[–] Meron35@lemmy.world 2 points 1 day ago (1 children)

Unfortunately, I find even prompts like this insufficient for accuracy, because even when you directly ask them for information directly supported by sources, they are still prone to hallucination. The super blunt language the prompt produces may even lull you further into a false sense of security.

Instead, I always ask the LLM to append a confidence score to all of its responses. Something like:

For all responses, append a confidence score in percentages to denote the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to provide responses that are incorrect, or do not convey the uncertainty of the response.
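As a rough sketch, an instruction like that could be wired in as a system message so it applies to every response. This assumes the OpenAI Python client; the model name and example question are purely illustrative:

```python
# Minimal sketch: attach the confidence-score instruction as a system prompt.
# Assumes the OpenAI Python SDK (v1.x); model name and question are illustrative.
from openai import OpenAI

client = OpenAI()

CONFIDENCE_INSTRUCTION = (
    "For all responses, append a confidence score in percentages to denote "
    "the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, "
    "but only if this is due to lack of and/or conflicting sources. It is "
    "UNACCEPTABLE to provide responses that are incorrect, or do not convey "
    "the uncertainty of the response."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": CONFIDENCE_INSTRUCTION},
        {"role": "user", "content": "When was the Rust borrow checker introduced?"},
    ],
)

# The reply should (in theory) end with something like "(CS: 85%)".
print(response.choices[0].message.content)
```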

Even then, due to how LLM training works, the LLM is still prone to simply hallucinating the confidence score. Still, it is a bit better than nothing.

[–] jol@discuss.tchncs.de 2 points 1 day ago* (last edited 1 day ago)

I know, and I accept that. You can't just tell an LLM not to hallucinate. I also wouldn't trust that confidence score at all. If there's one thing LLMs are worse at than accuracy, it's maths.