Oh man, I hate all the scary language around jailbreaking.
This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of these advanced LLMs (AI systems that generate human-like text, like OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.
“What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,” he warned.
"Hackers make uncensored AI... only BAD people would want to do this, to use it to do BAD CRIMINAL things."
God forbid I want to jailbreak AI or run uncensored models on my own hardware. I'm just like those BAD CRIMINAL guys.
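For the record, here's what "running uncensored models on my own hardware" actually looks like: a minimal sketch using llama-cpp-python with a locally downloaded GGUF file (the path and file name are placeholders, not a recommendation of any particular model).

```python
# The supposedly sinister activity: loading an open-weights model from
# a local file and asking it a question. Requires `pip install llama-cpp-python`
# and any GGUF model file you've downloaded.
from llama_cpp import Llama

# Hypothetical local path -- substitute whatever GGUF file you have on disk.
llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

out = llm(
    "Q: What does it mean to run an LLM locally? A:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

That's it. No cloud account, no one else's safety policy, just a file on your own machine.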