This doesn't really solve the problem. Security through obscurity has always been the laziest and least effective way to secure your code.
Reduced staffing always leads to this: less code review happens no matter what, so it's a lose-lose.
I'm very confused, can any developers comment?
Isn't this literally the reason to be open source? That vulnerabilities can be scanned and fixed publicly.
The fact that there is now tooling that can accelerate bug detection at light speed shouldn't change that... or am I missing something?
Cal.com hasn't been truly open source for a while despite the marketing. Just the barest minimum of the app is open source. I imagine this is a way to absolve themselves of maintenance responsibility on the public repo.
AI being used to scan for vulnerabilities shouldn't make a difference. The same AI that black hats can use can be used by white hat security researchers.
Sure, it makes finding vulnerabilities easier in the short term, but the concept of open source makes vulnerabilities less likely in the long term.
This is a bad call which will likely be damaging for them and for the community.
If AI scanning code for vulns is the problem, why don't the developers have AI scan their code for vulns before release?
They do give a clue as to a reason/excuse why not in the article:
"Each [AI security] platform surfaces different vulnerabilities, making it difficult to establish a single, reliable source of truth for what is actually secure."
Also, they come up with so many false positives that it's a huge job to check over the reports for something usable.
That's literally just pen testing, though. You search through tons of holes just to find the tunnel you were going down was blocked and not an issue.
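The "no single source of truth" complaint above is, at its core, a report-merging and triage problem. A minimal sketch of cross-tool deduplication (the report format here is invented for illustration and doesn't match any real scanner's schema): findings flagged by multiple independent tools float to the top, a rough proxy for "probably not a false positive."

```python
def dedupe_findings(reports):
    """Merge findings from several scanner reports, keyed on (file, line, rule).

    Each report is a list of dicts with 'file', 'line', 'rule', and 'tool'.
    Findings confirmed by more tools sort first, so a human triager sees
    the likeliest real issues before the long tail of one-off reports.
    """
    merged = {}
    for report in reports:
        for finding in report:
            key = (finding["file"], finding["line"], finding["rule"])
            # Collect which tools flagged this exact (file, line, rule)
            merged.setdefault(key, set()).add(finding["tool"])
    return sorted(
        (
            {"file": k[0], "line": k[1], "rule": k[2], "tools": sorted(tools)}
            for k, tools in merged.items()
        ),
        key=lambda f: -len(f["tools"]),  # most-confirmed findings first
    )

# Hypothetical output from two different scanners over the same codebase
scanner_a = [
    {"file": "auth.py", "line": 42, "rule": "sql-injection", "tool": "A"},
]
scanner_b = [
    {"file": "auth.py", "line": 42, "rule": "sql-injection", "tool": "B"},
    {"file": "upload.py", "line": 7, "rule": "path-traversal", "tool": "B"},
]

results = dedupe_findings([scanner_a, scanner_b])
```

This doesn't make any single tool a source of truth, but agreement between tools is at least a cheap signal for ordering the triage queue.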