Open Source Isn't Dead (strix.ai)

by bearsyankees 186 comments 356 points


[−] tananaev 30d ago
I have an open source project and started receiving a lot of security vulnerability reports in the last few months. A lot of them are extreme corner cases, but some were legitimate. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
[−] lelanthran 30d ago

> Closed source software won't receive any reports, but it will be exploited with AI.

What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

It's closed to the public, it's not closed to them!

[−] 440bx 30d ago
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit to do anything until they're caught with their pants down.
[−] sdoering 29d ago
Seconded.

Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."

There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on. At some of the biggest companies in this world. They just don't care about updating it, because why should they? There is no incentive. None.

[−] valeriozen 29d ago
Yeah, it's fundamentally an issue of asymmetric economics.

Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.

But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.

[−] njyx 29d ago
In theory, though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, reporting, and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before.

There should be a way to donate your unused tokens each cycle to open source, like rounding up at the checkout!

[−] ValentineC 29d ago
That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions for something like this, especially since some of them bill and refresh their quotas by calendar month.
[−] lelanthran 29d ago
Hang on, why is it costly for in-house to run AI scanners but near zero for threat actors to do the same?

I've seen multiple proprietary shops now include a routine AI scan of their code because it's so cheap and they may as well use up unused tokens at the end of the week.

I mean, it's literally zero because they already paid for CC for every developer. You can't get cheaper than that.

[−] theshrike79 28d ago
If a company specifically doesn't have a dedicated security team (or even a person), this will never get done.

Most software companies sadly don't hire a dedicated (software) security expert.

[−] sevenzero 29d ago
Yup, closed source software is a huge pile of shit with good marketing teams. Always was.
[−] baileypumfleet 30d ago
As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool finds different results from the others, so it's impossible to determine a benchmark of what's secure and what's not.
[−] lelanthran 29d ago

> These AI scanners, including STRIX, don't find everything.

Yeah, but with closed source it's cheaper for the defender than for the attacker - the defender can scan their sources and their PRs as well as the compiled output. The attacker can only scan the compiled output, and they have to perform repeated scans.

[−] topopopo 29d ago
I think it makes it all the more apparent that writing EAL4 code with as little design competence as possible was taking advantage of some strange scarcity economics. It's now even easier to make something with endless technical debt and security-vs-backwards-compatibility liability, but is anyone going to keep paying for things that aren't correct and to the point, once some market participants structure their agent usage toward verifiable quality and no longer carry any extra cost?
[−] dspillett 29d ago
> What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

How many companies take the time to use penetration testing tools that have been available for many years to verify their software (or pay a penetration testing company to do a more thorough job than they have the experience to do internally)?

Some, certainly. Many, possibly. Most, I would wager not.

[−] necheffa 29d ago
The economic motivation simply isn't there. I'm sure we could cherry-pick a few examples of companies where things like quality and security really are part of the culture, not just feel-good lip service. The reality is that companies are in business to make money, and corner-cutting is the easiest way to pad the margins.
[−] ihaveajob 30d ago
More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.
[−] phendrenad2 30d ago
With enough copies of GPT printing out the same bulleted list, all bugs are

1. shallow

2. hollow

3. flat

...

[−] cyanydeez 29d ago
Because they're a company. Even if the bar to entry can fit a normal-sized American, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume it'll happen in a systematic way?
[−] LunicLynx 30d ago
Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.
[−] bluebarbet 30d ago
Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.

[−] lelanthran 29d ago

> Same tools A, B and C, but minus tools D, E and F,

Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?

[−] bluebarbet 29d ago
The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team.
[−] lelanthran 29d ago
There are no more "eyes" though; it's all models, and they are all converging pretty damn fast.
[−] bluebarbet 29d ago
If true then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored. After all, even open-source software is private until it is released.
[−] lelanthran 29d ago

> If true then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored.

I'm struggling to see how it is a level playing field:

1. Closed source: the defender runs LLMs to check the sources for vulns, runs LLMs on each PR, and runs LLMs on the deployed compiled output. The attacker runs LLMs only on the compiled output.

2. Open source: both attacker and defender run LLMs on the source, on PRs, and on the compiled output.

[−] LunicLynx 29d ago
Fair enough
[−] suhputt 29d ago
[dead]
[−] Aurornis 30d ago

> Closed source software won't receive any reports

Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.

Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.

[−] switchbak 30d ago
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).

[−] tananaev 30d ago
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, by both the owners and the community.
[−] LunicLynx 30d ago
But also tools that won't be nice and report security vulnerabilities, but will exploit them instead.

There is no guarantee that open means that they will be discovered.

[−] baileypumfleet 30d ago
That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.
[−] bearsyankees 30d ago
+1. At this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live it will get slammed by the latest AI hackers.
[−] 0x457 29d ago

> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates

So just like pre-AI, or worse?

[−] bmurphy1976 30d ago
You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@, support@, contact@, gdpr@, etc.) with silly non-vulnerabilities, asking for $100. They suck now, but they will get more sophisticated over time.
[−] rd 30d ago
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.

This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by staying open source than it gains from the benevolence of people scanning its code for it.

[−] bigbadfeline 30d ago

> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

Actually the opposite is obvious - the comment you replied to talked about an abundance of good-Samaritan reports - it's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.

> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits

That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.

> any open-source business stands to lose way more

That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?

You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to gaining from shorter time to market.

In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.

[−] sureMan6 30d ago
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.
[−] hardsnow 30d ago
I’ve recently set up nightly automated pentest for my open-source project. I’m considering starting to publish these reports as proof of security posture.

If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.

There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.

[−] baileypumfleet 30d ago
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
[−] giancarlostoro 30d ago

> Closed source software won't receive any reports, but it will be exploited with AI.

This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect as models get better we're going to see companies being hacked at a level never seen before.

Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.

[−] charcircuit 30d ago
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
[−] baq 30d ago
given what the clankers can do unassisted and what more they can do when you give them ghidra, no software is 'closed source' anymore
[−] devstatic 30d ago
i agree with this too,

but with cal.com i don't think this is about security lol

open source will always be an advantage, you just need to decide whether it aligns with your business needs

[−] cm2187 30d ago

> Closed source software won't receive any reports, but it will be exploited with AI

How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.

But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to source code.

[−] kirubakaran 30d ago
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
[−] CodesInChaos 30d ago

> The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale,

That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.

[−] pradn 30d ago
Brilliant piece of content marketing:

1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).

2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.

3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.

4) But at the end of the day, the response aligns perfectly with the product they're promoting (a click away from the homepage!)

This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!

[−] keeda 30d ago
>Security through obscurity is a losing bet against automation

Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic, with its primary function being imposing higher costs on the attacker.

As such, if security comes down to which side spends more tokens, as people are postulating post-Mythos, it is an even more valid strategy to impose asymmetric costs on the attacker.

"With enough AI-balls (heheh) all bugs are shallow."

From a security perspective, the basic calculus of open versus closed comes down to which you expect to be the case for your project: the attention donated by the community outweighs the attention (lowered by openness) invested by attackers, or the attention from your internal processes outweighs the attention costs (increased by obscurity) imposed on attackers. The only change is that the attention from AI is multifold more effective than from humans; otherwise the calculus is the same.

[−] JoshTriplett 30d ago
I wonder whether cal actually has concerns about security (in which case, they're wrong, this argument was false when people made it decades ago), or whether they just took a convenient excuse to do something they wanted to do anyway because Open Source SaaS businesses are hard.
[−] dom96 30d ago
Isn’t the real danger now not the ability to find security vulnerabilities, but rather, the ability of anyone to ask an LLM agent to rewrite your open source project in another language and thus work around whatever license your project has?
[−] janalsncm 30d ago
Reading between the lines, it seems like they were working with cal.com and used red team bots to find vulnerabilities in cal.com’s code. And they probably found bugs a lot faster than cal.com could fix them. So the CEO balked at the estimated cost of fixing and took his ball home.

This article is effectively an announcement that cal.com is riddled with vulnerabilities, which should be easy to find in an archive of their code.

[−] jongjong 30d ago
I decided to not open source my latest project but it has nothing to do with security concerns. My code is perfectly secure and bug-free.

My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.

Also, I'm not really interested in humans anymore. I have human fatigue.

[−] victorbjorklund 29d ago
I don’t believe for a second that the real reason is security by obscurity. They probably believe they can make more money not being open source and this sounds like a better excuse than ”we wanna make more money”.
[−] misiti3780 30d ago
I have a large open source project and noticed the number of LLM-generated PRs is making it unmanageable. Every two weeks I go in and kill all of them, and when someone complains or asks why, I realize it was a real person and then I merge it.

Is anyone else seeing this / has anyone fixed this problem?

[−] ChrisArchitect 30d ago
Related:

Cal.com is going closed source

https://news.ycombinator.com/item?id=47780456

[−] cadamsdotcom 30d ago

> Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.

This feels like the core of the article, but it doesn’t prove the need for open source.
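For what it's worth, the "AI agent attempts to exploit every pull request" idea from the quote sketches out simply as a merge gate. This is a hypothetical sketch: the severity scale, the blocking threshold, and the idea that the agent emits (severity, description) pairs are my assumptions, not details from the article:

```python
def gate_pull_request(findings, block_at="high"):
    """Decide whether a PR may merge, given AI-pentest findings.

    `findings` is a list of (severity, description) tuples assumed to
    come from whatever agent ran against the PR's preview deployment.
    Any finding at or above `block_at` blocks the merge.
    """
    order = ["info", "low", "medium", "high", "critical"]
    threshold = order.index(block_at)
    blocking = [(sev, desc) for sev, desc in findings
                if order.index(sev) >= threshold]
    if blocking:
        summary = "; ".join(f"{sev}: {desc}" for sev, desc in blocking)
        return False, f"merge blocked, {len(blocking)} finding(s): {summary}"
    return True, "no blocking findings"
```

The interesting design question is where to set `block_at`: too low and every PR drowns in half-real junk findings, too high and real authorization bypasses sail through.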

[−] erelong 30d ago
I'll admit that I agree with a lot of the post, but I can't fully wrap my head around the cybersecurity situation today. Is it basically:

- whether code is open source or closed source, AI bots can still look for exploits

- so we need to use AI to build checking programs regardless, to check for currently known and unknown exploits given the current state of AI tools

- we have to keep running AI tools looking for more security issues as AI models become more powerful, which empowers attacking AI bots but also defensive AI bots that find exploits and mitigate them

- so it's an ongoing effort to work on

I understand the logic of closing the source to prevent AI bot scans of the code, but fundamentally people also won't trust your closed source code because it could contain harmful code, which pushes it back toward being open source.

Edit: Another thing that comes to mind: people here often dunk on "vibe coding", but can't we just develop standards/tools to "harden" vibe-coded software, and also to help guide decisions about the architecture of the program, and so on?

[−] dangus 30d ago
First we blamed AI for layoffs, next we are blaming AI for the AI bait and switch.

It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.

[−] tonymet 29d ago

> Closing your source code does not stop an AI from probing your API or finding an authorization bypass in your webhooks.

I see this trope a lot in security discussions. “Obscurity isn’t security” or “since you can’t protect against X you may as well do Y”.

This is a harmful trope, which discourages perfectly good protections. Sure, closing source is not a perfect protection, but it is a defense against a large band of attacks.

Think of the entire field of potential vulnerability probes attackers have. Closing the source closes many of them off, likely most of them.

A pen-tester model with the implementation will be loads more effective than one with only a black box. And that will give cal.com time to run the pen-testing model on the source and address the vulns, hopefully before they are exploited.

I tested this myself, first using black-box model attacks, then using the source code. The model with the source found and exploited the vulns instantly. The model without failed.

The lesson is: obscurity is not security ALONE, but it is a component of security.

[−] dang 29d ago
Related ongoing threads:

Cal.com is going closed source - https://news.ycombinator.com/item?id=47780456

Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089

[−] the_af 30d ago
I'm pro FOSS, militantly so. FSF-style.

But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?

Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.

[−] agentifysh 30d ago
Pretty overreaching claim about another company's internal decisions and open source in general. There is a lot of incentive to stop doing open source these days.

One of which I am experiencing right now: somebody just copied my repo, didn't credit me, didn't even try to change the README. It's pretty discouraging.

The other is security reasons: the premise that volunteers will report vulnerabilities only really matters if you are big enough for a small portion of people to dedicate themselves. For the most part, people take an open source tool, use it, and then forget about it; they only want stuff fixed.

Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and bad-faith actors I had to deal with is exhausting. On top of that there is a constant stream of people just demanding stuff be fixed quickly.

[−] poorcedural 30d ago
The idea of tying source code to sustenance will soon be history. We will all remember the days when adding a few thousand smart lines of code meant you could gain notoriety and, through cheap viral copying, expand those traits into wealth and worth. But software has always just been zeros and ones; the value only happens when interpreted.

The future is sharing. You may not believe it because your income is tied to being clever. Long term we are all more clever because of the sharing, and your contribution sometimes does not add to your personal success. Asking a company or its individuals to forgo their success will not make them add more to our future. But they will add to our future nonetheless, because they all feel what we all do: that adding is what we are all meant to do.

[−] linuxhansl 30d ago
So Cal.com favors security through obscurity.

Open Source was always open to "many eyes" in theory exposing itself to zero-day vulnerabilities. But the "many eyes" go for the good and the bad actors.

As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.

[−] pixel_popping 30d ago
At the same time - I heavily support open source and contribute a lot - I can't agree that security-through-obscurity doesn't play a major role in slowing down attacks. Cloudflare has based its whole security model (for example its anti-bot mechanism) on being closed source and hard to reverse engineer, and they remain leaders as of today with few serious security breaches.

Some things just can't be truly secure, either; DDoS protection is mostly a guessing/preventive game, and exposing your firewall config/scripts will make you more vulnerable, not less.

If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduces the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and all the models.

[−] themafia 29d ago

> In the past, exploiting an application required a highly skilled hacker with years of experience and a significant investment of time to find and exploit vulnerabilities. The reality is that humans don’t have the time, attention, or patience to find everything.

I read this as:

"We figured no one was looking so we just shipped unsafe garbage for years. We never once did an internal audit, never once paid a hacker to try to exploit our product, never thought we'd get caught with our substandard products."

If a guy in his basement with $200 can ruin your company, then you were trading on vapor the entire time. I'm sorry you had to find out this way.

[−] Prunkton 30d ago
I'm hopeful the article is right about its prediction, although I'm under the impression the attacker/defender dynamic is asymmetric, with the defender on the losing end. I hope someone can prove me wrong though...

Assume that the same amount of money needed to exploit a critical vulnerability is also required to find and fix it.

Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
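That thought experiment can be made concrete with a toy model. The exponential diminishing-returns curve and the constant `k` below are my assumptions, purely for illustration:

```python
import math

def find_probability(tokens, k=0.5):
    # Toy model: the probability of finding a given module's
    # vulnerability grows with spend, but with diminishing returns.
    return 1 - math.exp(-k * tokens)

BUDGET = 100   # arbitrary units, identical for both sides
MODULES = 100

# Defender spreads the budget evenly over all 100 modules.
p_def = find_probability(BUDGET / MODULES)

# Attacker concentrates the same budget on 10 modules (10x per module).
p_att = find_probability(BUDGET / 10)

# Chance that at least one of the 10 focused modules yields a vuln
# the attacker finds but the defender missed.
p_win = 1 - (1 - p_att * (1 - p_def)) ** 10
```

Under these assumptions the concentrating attacker almost always comes out ahead of the evenly spread defender, which is exactly the asymmetry the question worries about; the defender's only structural answer is to spend more per module than any attacker plausibly would, or to shrink the attack surface.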

[−] Divs2890 30d ago
Closing your source doesn't close your attack surface; it just closes off the community that would have helped you defend it. Security through obscurity is a tradeoff, not a strategy... at least that's how I feel.
[−] shay_ker 30d ago
It's a good question - is blackbox hacking as effective as whitebox hacking, for AI agents? I've gotta assume someone at Anthropic is putting together an eval as we speak.
[−] Talderigi 30d ago
feels like people are arguing the wrong axis tbh

- it's not open vs closed anymore, it's more like bug finding going from a few devs poking around to basically infinite parallel scanners

- so now you don't get a couple of thoughtful reports, you get a pile of edge cases and half-real junk. fixing capacity didn't change though

- closing the repo doesn't really save you, it just switches from white-box to black-box… and that's getting pretty damn good anyway

real problem is: vuln discovery scaled, patching didn't. now everything is a backlog game

[−] RRRA 30d ago
How long before LLMs perform perfect disassembly exploitation...
[−] edmondx 29d ago
I’m afraid the balance of open source wasn’t broken today. It happened quite a long time ago. It’s just not something people usually talk about. Companies have been using open source code for years to build paid products without giving anything back. Take PHP as an example: a language widely used across the internet, yet with a very limited budget.
[−] 6thbit 30d ago
Great PR piece by Strix, but I find the messaging mixed.

Cal.com folks are getting a red team for free; wouldn't that further convince them their closed source software is strong enough?

And isn't Strix's business companies paying for scans, regardless of whether the scanned software is open source or closed?

[−] bzmrgonz 30d ago
Strix was so close to being the hero we deserve. I think blue teams like Strix should offer their services for free to open source ships out at sea. There are three wins here: global goodwill, testimonials and reviews, and market loyalty.
[−] phkahler 30d ago
Can any of the AI systems read binary yet? Perhaps generate source code from an object file? If so, that would make access to source redundant for that type of analysis.
[−] simonreiff 30d ago
Is there any recent research on whether open or closed-source projects are more secure? I am genuinely curious if anyone has studied the question.
[−] skal9606 30d ago
Seems like flimsy reasoning from the Cal.com CEO. How should we think about Strix vs. foundational model releases like Mythos?
[−] cold_tom 29d ago
feels like the real shift is not open vs closed, but reaction time. AI attackers don't need perfect access anymore, just enough surface and time. So the question becomes: can you detect+respond faster than they can iterate? in that sense, open source might even help - more eyes reduces time to fix, not just time to find
[−] harlequinetcie 29d ago
We need to be more bullish as a community.

So many people are discussing things like UBI, yet we selfishly create our own little projects all the time.

We need to center our shared efforts, and open source is a step on that path.

Nowadays, every closed source solution should be seen as a 'you are the product' type of deal.

[−] yc-kraln 29d ago
every line of code is a liability. open, closed, doesn't matter. companies will have to start treating it that way--which means actual engineering--or they will get burnt, and hard.
[−] dzonga 30d ago
a lot of the vulnerabilities in web apps come from people trying to be too smart for their own good.

use battle-tested frameworks such as Rails or Django and you won't make rookie security mistakes.
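A minimal sketch of the kind of rookie mistake those frameworks make hard to write, using plain sqlite3 rather than an actual Rails or Django app (the table and the payload here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Rookie mistake: interpolating user input into SQL lets an
# attacker-controlled string rewrite the query itself.
payload = "' OR '1'='1"
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'"
).fetchall()
print(len(leaked))   # matches every row despite the bogus name

# What framework ORMs do under the hood: parameterized queries
# keep the data out of the SQL grammar entirely.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()
print(len(safe))     # 0 rows; the payload is just a literal string
```

The point isn't that sqlite3 is special; it's that ActiveRecord and the Django ORM route every query through the parameterized path by default, so you have to go out of your way to write the vulnerable version.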

[−] Bridged7756 30d ago
It's just an excuse. Classic open source rug pull here.
[−] reenorap 30d ago
All content is going to go behind paywalls.

There is zero incentive or reason for content creators to let AI slurp their content for free and distribute it and get all the money from it.

Everything new will be licensed and if AI companies want access to it, they will need to pay for it, just like we will.

[−] wg0 30d ago

> Today, Cal.com announced they are transitioning their core codebase away from open source. The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale, making code scanning and exploitation "near zero-cost". In this new world, they argue, "transparency becomes exposure."

Laughable and hilarious. Extremely short sighted. I can show you code generated by Claude Opus 4.6 at the highest compute intensity that lacks even basic input validation checks that were clearly provided in the spec.

There's no point in arguing with crypto and AI bros. They are the same tribe. The AI crowd, however, might learn its lessons sooner because the universe isn't forgiving or flexible.

Note: I use AI code generators all the time, but I treat them as very, very dumb transpilers no matter how expensive their input/output pricing is, and I learned that the hard way.

PS: Edit to fix typos.

[−] bobkb 29d ago
Security by obscurity is flawed.
[−] themafia 30d ago

> The real solution: fight fire with fire

Which works if you assume that AI can find 100% of your bugs.

It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.

You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.

[−] shevy-java 30d ago
"Open Source Isn't Dead."

Well ...

Open Source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years. Private entities with a commercial interest, have been flexing their muscles. Microsoft - also known as Microslop these days - with Github is probably the most famous example still, but you can see other examples. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...

"Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "

There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100,000 download barrier; this was why I retired from using rubygems.org, and then the year afterwards ruby core purged numerous developers. The writing on the wall is soooooo clear that Shopify flexed their muscles here).

I think we need to make open source development more accessible to everyone, not just corporations throwing their money around to gain influence and leverage. I don't have a great idea for how to make this model work; economic incentives kind of have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem. We can also see this with age verification (see the article pointing at Meta orchestrating influence and lobbyism) and many more changes.

Something has to change. Hopefully some people cleverer than me can come up with models that are actually sustainable, even if it may not necessarily be "fund an open source developer for a year". There could be a more widespread "achieve xyz" or some other lower-finance effort - but again, I don't have a good suggestion here. Hopefully something improves, though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" it. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.

[−] julianozen 30d ago
There is another product I use that has a freemium model. They hope to monetize a paid tier for users who use the product a lot.

In order to build trust, they open sourced their product. I forked it and removed the freemium restrictions in 15 minutes using Claude Code. I never published the code to anyone else, just used it myself.

Unfortunately, I don't think it is going to be tenable for systems to be fully open sourced going forward.

[−] avivo 29d ago
Open source is one of the main reasons I recommended cal.com to everyone — I just did so yesterday again in fact!

I'm disappointed to hear this especially since I don't think the rationale makes sense, from what I understand of the security landscape, and it also makes me a little more skeptical of cal.com in general.

[−] funvill 30d ago
This is just an excuse to close source their project while blaming AI. Spineless bullshit excuse instead of owning your choices.

Shame

[−] daytonix 30d ago
I can't believe we still have people out there buying this baby-brain idea of "if muh code is open then people will find vulns!!" This has been disproven for 20+ years; catch up.

AI generated bullshit PRs are clearly the bigger issue in the OSS space.