this post was submitted on 23 Feb 2026
39 points (86.8% liked)
Programming
25758 readers
284 users here now
Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!
Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.
Hope you enjoy the instance!
Rules
Rules
- Follow the programming.dev instance rules
- Keep content related to programming in some way
- If you're posting long videos try to add in some form of tldr for those who don't want to watch videos
Wormhole
Follow the wormhole through a path of communities !webdev@programming.dev
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Asking questions about Chinese politics and/or Tiananmen Square stops most China based AI models, like Qwen and whatever is on Huawei phones. They aren't that high traffic yet, but are certainly in the list of "all ai models"
Also, you might want to research this Heretic project, which aims to remove safeguards from local models as those might be similar to what's in the larger versions. Figuring out the phrases they test the safeguards with might have some decent results.
Is there likewise something for American AIs?
From my other comment it looks like this dataset contains various strings that trigger refusal: https://huggingface.co/datasets/mlabonne/harmful_behaviors
In similar vein, asking questions about suicide methods might stop most AI models.
Considering how many people have been led to suicide BY AI models that seem to encourage it, doubtful on this one.
I checked Google and ChatGPT. Both refused to answer.
The websites have different (more) safeguards than the APIs do, so bots will operate on different rules.
As a non-AI I would refuse as well.
Boo
No AI has perfect safeguards, but all the mainstream models will generally refuse requests for information about comitting suicide. They might encourage it thru indirect means or a question may avoid the safeguards, though, so it can only be described in general terms - generally they will not answer.