0xCBE

joined 2 years ago
MODERATOR OF
4
GCP Pentesting Guide (slashparity.com)
submitted 2 years ago by 0xCBE to c/cloudsecurity
5
submitted 2 years ago by 0xCBE to c/cloudsecurity
6
submitted 2 years ago by 0xCBE to c/ai_infosec
 

Not really technical, but gives some pointers to wrap your head around the problem

 

"Toyota said it had no evidence the data had been misused, and that it discovered the misconfigured cloud system while performing a wider investigation of Toyota Connected Corporation's (TC) cloud systems.

TC was also the site of two previous Toyota cloud security failures: one identified in September 2022, and another in mid-May of 2023.

As was the case with the previous two cloud exposures, this latest misconfiguration was only discovered years after the fact. Toyota admitted in this instance that records for around 260,000 domestic Japanese service incidents had been exposed to the web since 2015. The data lately exposed was innocuous if you believe Toyota – just vehicle device IDs and some map data update files were included. "

4
AI Risk Database (airisk.io)
submitted 2 years ago by 0xCBE to c/ai_infosec
 

"database [...] specifically designed for organizations that rely on AI for their operations, providing them with a comprehensive and up-to-date overview of the risks and vulnerabilities associated with publicly available models."

12
Container security fundamentals series (securitylabs.datadoghq.com)
submitted 2 years ago by 0xCBE to c/cloudsecurity
 

This is an excellent series on container security fundamentals by Rory McCune, who is a bit of an authority in this field.

6
submitted 2 years ago by 0xCBE to c/cloudsecurity
 

Very useful collection of security incidents involving public clouds

 

(I am not fond of vendors' blogs, as the signal-to-noise ratio is very low; they are written to please search engines more than engineers... but Scott Piper gets a pass.)

I found this insightful: access keys are such a liability that it's better to tame them as early as possible. Fixing the problem at scale is a lot more challenging.
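On taming keys early: the simplest version of "fixing it at scale" is an inventory job that flags keys past a rotation deadline. A minimal sketch of that idea (the data shape, function name, and 90-day threshold are my own assumptions, not from the linked post):

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys, max_age_days=90):
    """Return the IDs of access keys older than the rotation threshold.

    `keys` is an iterable of (key_id, created_at) pairs, e.g. as pulled
    from a cloud provider's IAM API; created_at must be timezone-aware.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [key_id for key_id, created_at in keys if created_at < cutoff]
```

The hard part in practice isn't finding old keys, it's finding their owners and what breaks when you rotate them, which is why doing this before keys proliferate is so much cheaper.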

7
Growing infosec.pub (self.infosecpub)
submitted 2 years ago by 0xCBE to c/infosecpub
 

@jerry@infosec.pub I took the liberty of promoting this instance a bit here; the post is this one.

I'd like to help grow the community; is there anything we could do?

[–] 0xCBE 4 points 2 years ago

ahah thank you, we shall all yell together then

[–] 0xCBE 4 points 2 years ago (1 children)

This stuff is fascinating to think about.

What if prompt injection is not really solvable? I still see jailbreaks for ChatGPT-4 from time to time.

Let's say we can't validate and sanitize user input to the LLM, so the LLM's output must also be considered untrusted.

In that case security could only sit in front of the APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we would have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed inputs to the APIs... which undercuts the whole AI vision?

I am also curious about the state of the art in protecting against prompt injection; do you have any pointers?
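To make the "deterministic set of allowed inputs" idea concrete, here's a minimal sketch (all names and the JSON action format are hypothetical, just for illustration) of an allowlist gate sitting between the untrusted LLM output and the APIs it can orchestrate:

```python
import json

# Hypothetical allowlist: action name -> per-parameter validators.
# Anything not listed here can never reach a backend API.
ALLOWED_ACTIONS = {
    "get_weather": {"city": lambda v: isinstance(v, str) and len(v) < 64},
    "update_map":  {"region_id": lambda v: isinstance(v, int) and 0 < v < 10_000},
}

def gate(llm_output: str):
    """Treat the LLM output as untrusted: parse it as a proposed action
    and reject it unless the action and its parameters exactly match
    the allowlist. Returns (action, params) or None."""
    try:
        proposal = json.loads(llm_output)
    except (json.JSONDecodeError, TypeError):
        return None  # not even well-formed JSON: drop it
    action = proposal.get("action") if isinstance(proposal, dict) else None
    params = proposal.get("params", {}) if isinstance(proposal, dict) else {}
    schema = ALLOWED_ACTIONS.get(action)
    if schema is None:
        return None  # unknown action
    if set(params) != set(schema):
        return None  # missing or extra parameters
    if not all(check(params[name]) for name, check in schema.items()):
        return None  # a parameter failed validation
    return action, params
```

The point is that the surface exposed to the model is deterministic: whatever an injected prompt convinces the model to emit, only these two calls with these parameter shapes can ever reach the backend. The open question is exactly the one above, whether that kind of gating scales beyond toy action sets.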

[–] 0xCBE 2 points 2 years ago

Ah-a TIL 😄 thank you, fixed

[–] 0xCBE 3 points 2 years ago

to post within a community

(let me edit the post so it's clear)

[–] 0xCBE 5 points 2 years ago

👋 infra sec blue team lead for a large tech company
