this post was submitted on 18 Apr 2026
26 points (100.0% liked)

Opensource

After five years as open source champions, Cal.com is going closed source. This wasn’t an easy decision, but in the age of AI-driven security threats, protecting customer data has to come first. Cal.diy will continue as an open option for hobbyists.

top 8 comments
[–] Ledivin@lemmy.world 31 points 3 days ago (1 children)

This doesn't really solve the problem. Security through obscurity has always been the laziest and least effective way to secure your code.

[–] Bakkoda@lemmy.world 6 points 3 days ago

Reduced staffing always leads to this. Less code review happens no matter what, so it's a lose-lose.

[–] originalucifer@moist.catsweat.com 32 points 3 days ago (1 children)

I'm very confused, can any developers comment?

Isn't this literally the reason to be open source? That vulnerabilities can be scanned and fixed publicly.

That there is now tooling that can dramatically accelerate the detection of bugs shouldn't change that fact... or am I missing something?

[–] TacoEvent@lemmy.zip 28 points 3 days ago

Cal.com hasn’t been truly open source for a while, despite the marketing. Only the barest minimum of the app is open source. I imagine this is a way to absolve themselves of maintenance responsibility on the public repo.

[–] hitmyspot@aussie.zone 6 points 2 days ago* (last edited 2 days ago)

AI being used to scan for vulnerabilities should not make a difference. The same AI that black hats can use can be used by white hat security.

Sure, it makes finding vulnerabilities easier in the short term, but the concept of open source makes vulnerabilities less likely in the long term.

This is a bad call which will likely be damaging for them and for the community.

[–] the_crotch@sh.itjust.works 13 points 3 days ago (1 children)

If AI scanning code for vulns is the problem, why don't the developers have AI scan their code for vulns before release?

[–] Deebster 10 points 3 days ago (1 children)

They do give a clue as to a reason/excuse why not in the article:

Each [AI security] platform surfaces different vulnerabilities, making it difficult to establish a single, reliable source of truth for what is actually secure.

Also, they come up with so many false positives that it's a huge job to check over the reports for something usable.

[–] Ledivin@lemmy.world 8 points 3 days ago* (last edited 3 days ago)

That's literally just pen testing, though. You search through tons of holes just to find that the tunnel you were going down was blocked and not an issue.