this post was submitted on 18 Jun 2025

Cybersecurity


c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends'" socials, you're just going to get banned, so don't do that.

Learn about hacking

Hack The Box

TryHackMe

picoCTF

Other security-related communities:

!databreaches@lemmy.zip

!netsec@lemmy.world

!securitynews@infosec.pub

!cybersecurity@infosec.pub

!pulse_of_truth@infosec.pub

Notable mention to !cybersecuritymemes@lemmy.world

[–] thebardingreen@lemmy.starlightkel.xyz 13 points 6 days ago

Oh man, I hate the use of all the scary language around jailbreaking.

This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of these advanced LLMs (AI systems that generate human-like text, like OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.

“What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,” he warned.

"Hackers make uncensored AI... only BAD people would want to do this, to use it to do BAD CRIMINAL things."

God forbid I want to jailbreak AI or run uncensored models on my own hardware. I'm just like those BAD CRIMINAL guys.

[–] atlas@sh.itjust.works 4 points 6 days ago

i bet you're creating cybercrime right this very second!

So much cybercrime. All the cybercrime.
