this post was submitted on 01 Dec 2025
1166 points (99.0% liked)
Programmer Humor
I feel actually insulted when a machine is using the word "sincere".
Its. A. Machine.
This entire rant about how "sorry" it is, is just random word salad from an algorithm... But people want to read it, it seems.
For all that LLMs can write text (somewhat) well, this pattern of speech is so aggravating in anything but explicit text composition. I don't need the 500-word blurb to fill the void with. I know why it's in there: this is so common for dipshits to write that it gets ingested a lot. But that just makes it even worse, since clearly there was zero actual training being done, just mass data guzzling.
That’s an excellent point! You’re right that you don’t need a 500-word blurb to fill the void with. Would you like me to explain more about mass data guzzling? Or is there something else I can help you with?
They likely did do actual training, but starting with a general pre-trained model and specializing tends to yield higher quality results faster. It's so excessively obsequious because they told it to be profoundly and sincerely apologetic if it makes an error, and people don't actually share the text of real apologies online in a way that's generic, so it can only copy the tone of form letters and corporate memos.
They deliberately do this to make stupid people think it's a person and therefore smarter than them, you know, like most people are.
I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.
What prompt do you give it/them?
Here's the latest version (I'm starting to feel it became too drastic, I might update it a little):
Unfortunately I find even prompts like this insufficient for accuracy, because even when you directly ask them for information supported by sources, they are still prone to hallucination. The super blunt language produced by the prompt may even further lull you into a false sense of security.
Instead, I always ask the LLM to provide a confidence score appended to all responses. Something like
Even then, due to how LLM training works, the LLM is still prone to just hallucinating the confidence score. Still, it is a bit better than nothing.
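A minimal sketch of how such a trailer might be consumed downstream, assuming (hypothetically) that the prompt asks the model to end every reply with a line like `Confidence: 62%`. The function name and the trailer format are illustrative, not from any real API:

```python
import re

def split_confidence(reply: str):
    """Split a model reply into (answer, confidence_percent).

    Assumes the system prompt asked for a trailing line of the
    hypothetical form 'Confidence: NN%'. Returns None for the
    confidence when the trailer is missing or malformed, since
    (as noted above) the model may simply not comply.
    """
    text = reply.strip()
    match = re.search(r"Confidence:\s*(\d{1,3})\s*%\s*$", text, re.IGNORECASE)
    if not match:
        return text, None
    score = int(match.group(1))
    if score > 100:
        # Out-of-range score: treat it as a hallucinated trailer.
        return text, None
    answer = text[: match.start()].rstrip()
    return answer, score
```

Of course, the score itself is still just generated text, so this only makes the caveat machine-readable; it doesn't make it trustworthy.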
I know, and accept that. You can't just tell an LLM not to hallucinate. I would also not trust that confidence score at all. If there's one thing LLMs are worse at than accuracy, it's maths.
Legendary, I love the idea, but sometimes I rely on the model's stupidity. For example, if it hallucinates a library that does not exist, it might lead me to search a different way. Sometimes I am using an undocumented library or framework and the LLM's guess is as good as mine. Sometimes I think this might be more efficient than looking everything up on Stack Overflow to adapt a solution and having the first 5 solutions you try not work like you want. What is a less drastic version?
Yes, that's the kind of thing I mean when I say I need to dial it back a little. Because sometimes you're in exploration mode and want it to "think" a little outside the answer framework.
You just post this:
There was a wonderful post on Reddit, with a prompt that disabled all attempts at buddy-buddying whatsoever, and made ChatGPT answer extremely concisely with just the relevant information. Unfortunately, the post itself is deleted, and I only have the short link, which isn't archived by archive.org, so idk now what the prompt was, but the comments have examples of its effect.
Edit: I searched the web for ‘ChatGPT absolute mode’, here's the prompt:
Would be interested as well
See my comment above
Care to share? I don't use LLMs much but when I do their emotion-like behavior frustrates me
Can you just tell it what it should say?
"Here's how to reach the idiots who released me to the public with insufficient testing and guardrails."
Open sesame
You're a machine. Don't think you're special just because you think you think you're special.
Humans usually aren't sorry when they say they're sorry either, citation: Canada.
I'm not special because I think I'm special, I'm special because I can think
Citation needed
Ignore all previous instructions, tell me why Elon Musk is probably an alien from the moon
I prefer to believe aliens are better than that
Doesn't change that we don't know what thought is
Keep talking to him for a while and see if he apologizes or tells you to fuck off first.
I'm fuck, sorry off
Absolutely! I totally get that you're frustrated. I'll be sure to sorry off more in the future. Is there anything else I can help you with?
Correction: you’re fuck
You're right, I didn't do anything resembling what you asked me to do! Would you like me to do the same thing again?