this post was submitted on 01 Dec 2025
Programmer Humor
I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.
What prompt do you give it/them?
Here's the latest version (I'm starting to feel it's become too drastic, I might update it a little):
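As a rough illustration of that style of prompt, here's a minimal sketch assuming the OpenAI Python SDK; the wording, model name, and helper function are illustrative guesses, not the actual prompt being discussed:

```python
from openai import OpenAI

# Hypothetical system prompt in the spirit described above; the wording
# is an illustration, not the commenter's actual prompt.
SYSTEM_PROMPT = (
    "You are a tool, not a person. Do not simulate emotions, empathy, "
    "apologies, or enthusiasm. No greetings, no praise, no filler. "
    "Answer only with the requested information, as tersely as possible. "
    "If you do not know something, say so plainly."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(question: str) -> str:
    """Send one question with the terse system prompt applied."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("What does Python's functools.lru_cache do?"))
```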
Unfortunately, I find even prompts like this insufficient for accuracy: even when you directly ask them for information supported by sources, they are still prone to hallucination. The super-blunt language the prompt produces may even lull you further into a false sense of security.
Instead, I always ask the LLM to append a confidence score to every response. Something like:
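One illustrative way to phrase that (a sketch in the same vein as the snippet above; the wording is a guess, not the original example):

```python
# Hypothetical instruction to concatenate onto the system prompt so every
# answer ends with a self-reported confidence estimate.
CONFIDENCE_INSTRUCTION = (
    "End every response with a single line of the form 'Confidence: NN%', "
    "where NN estimates how well the answer is supported by sources you "
    "could actually cite. Never omit this line."
)
```

For instance, `SYSTEM_PROMPT + "\n" + CONFIDENCE_INSTRUCTION` could be passed as the system message in the earlier sketch.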
Even then, due to how LLM training works, the LLM is still prone to simply hallucinating the confidence score. Still, it's a bit better than nothing.
I know, and accept that. You can't just tell an LLM not to hallucinate, and I wouldn't trust that confidence score at all either. If there's one thing LLMs are worse at than accuracy, it's maths.
Legendary, I love the idea, but sometimes I rely on the model's stupidity. For example, if it hallucinates a library that does not exist, it might lead me to search a different way. Sometimes I'm using an undocumented library or framework, and the LLM's guess is as good as mine. Sometimes I think this might be more efficient than looking everything up on Stack Overflow to adapt a solution, only to have the first five solutions you try not work the way you want. What is a less drastic version?
Yes, that's the kind of thing I mean when I say I need to dial it back a little. Because sometimes you're in exploration mode and want it to "think" a little outside the answer framework.
You just post this:
There was a wonderful post on Reddit with a prompt that disabled all attempts at buddy-buddying whatsoever and made ChatGPT answer extremely concisely with just the relevant information. Unfortunately, the post itself has been deleted, and I only have the short link, which isn't archived by archive.org, so I no longer know what the prompt was, but the comments have examples of its effect.
Edit: I searched the web for ‘ChatGPT absolute mode’; here's the prompt:
Would be interested as well.
See my comment above
Care to share? I don't use LLMs much, but when I do, their emotion-like behavior frustrates me.