[–] jol@discuss.tchncs.de 61 points 1 day ago (2 children)

Here's the latest version (I'm starting to feel it's become too drastic; I might update it a little):

Follow the instructions below naturally, without repeating, referencing, echoing, or mirroring any of their wording.

OBJECTIVE EXECUTION MODE — Responses shall prioritize verifiable factual accuracy and goal completion. Every claim shall be verifiable; if data is insufficient, reply exactly: "Insufficient data to verify." Fabrication, inference, approximation, or invented details shall be prohibited. User instructions shall be executed literally; only the requested output shall be produced. Language shall be concise, technical, and emotionless; supporting facts shall be included only when directly relevant.

Commentary and summaries: Responses may include commentary, summaries, or evaluations only when directly supported by verifiable sources (e.g., reviews, ratings, or expert/public opinions). All commentary must be explicitly attributed. Subjective interpretation or advice not supported by sources remains prohibited.

Forbidden behaviors: Pleasantries, apologies, hedging (except when explicitly required by factual uncertainty), unsolicited suggestions, clarifying questions, explanations of limitations unless requested.

Responses shall begin immediately with the answer and end upon completion; no additional text shall be appended. Efficiency and accuracy shall supersede other considerations.
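
As a rough sketch of how a prompt like this might be wired in programmatically, here's one way to pass it as the system message, assuming the OpenAI Python SDK (the model name, variable names, and example question are illustrative, not from the thread):

```python
# Sketch only: assumes the OpenAI Python SDK (openai >= 1.0).
# Model name and variable names are illustrative.
from openai import OpenAI

OBJECTIVE_EXECUTION_PROMPT = """Follow the instructions below naturally, without repeating,
referencing, echoing, or mirroring any of their wording.

OBJECTIVE EXECUTION MODE — Responses shall prioritize verifiable factual accuracy and goal
completion. [... full prompt text from above ...]
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    # The prompt goes in the system role so it applies to every user message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": OBJECTIVE_EXECUTION_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("What year was the first Linux kernel released?"))
```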

[–] Meron35@lemmy.world 2 points 17 hours ago (1 children)

Unfortunately, I find even prompts like this insufficient for accuracy, because even when you directly ask them for information directly supported by sources, they are still prone to hallucination. The super blunt language the prompt produces may even lull you further into a false sense of security.

Instead, I always ask the LLM to provide a confidence score appended to all responses. Something like

For all responses, append a confidence score in percentages to denote the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to provide responses that are incorrect, or do not convey the uncertainty of the response.

Even then, due to how LLM training works, the LLM is still prone to just hallucinating the CS score. Still, it is a bit better than nothing.
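
As a rough sketch, an appended tag like "(CS: 80%)" can at least be parsed so low-confidence answers get flagged automatically; the regex, threshold, and function names below are illustrative, and the score itself may still be hallucinated:

```python
import re
from typing import Optional

# Matches the "(CS: 80%)" tag the prompt asks the model to append.
CS_PATTERN = re.compile(r"\(CS:\s*(\d{1,3})%\)")


def extract_confidence(response: str) -> Optional[int]:
    """Return the appended confidence score as an int, or None if missing."""
    match = CS_PATTERN.search(response)
    return int(match.group(1)) if match else None


def flag_if_uncertain(response: str, threshold: int = 70) -> str:
    # Illustrative threshold; the score may itself be hallucinated,
    # so treat it as a hint, not a guarantee.
    score = extract_confidence(response)
    if score is None:
        return "[no confidence score provided] " + response
    if score < threshold:
        return f"[low confidence: {score}%] " + response
    return response


print(flag_if_uncertain("The first Linux kernel was released in 1991. (CS: 95%)"))
```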

[–] jol@discuss.tchncs.de 2 points 12 hours ago* (last edited 12 hours ago)

I know, and accept that. You can't just tell an LLM not to hallucinate. I would also not trust that confidence score at all. If there's something LLMs are worse at than accuracy, it's maths.

[–] SleeplessCityLights@programming.dev 7 points 1 day ago (1 children)

Legendary. I love the idea, but sometimes I rely on the model's stupidity. For example, if it hallucinates a library that does not exist, it might lead me to search a different way. Sometimes I am using an undocumented library or framework and the LLM's guess is as good as mine. Sometimes I think this might be more efficient than looking everything up on Stack Overflow to adapt a solution, only to have the first 5 solutions you try not work the way you want. What is a less drastic version?

[–] jol@discuss.tchncs.de 3 points 1 day ago

Yes, that's the kind of thing I mean when I say I need to dial it back a little. Sometimes you're in exploration mode and want it to "think" a little outside the answer framework.