This is a perfect illustration of what cracks me up about the hyperbolic reactions to Mythos. Yes, increased automation of cutting-edge vulnerability discovery will shake things up a bit. No, it's nowhere near the top of what should be keeping you awake at night if you're working in infosec.
We've built our existing tech stacks and corporate governance structures for a different era. If you want to credit one specific development for making things dramatically worse, it's cryptocurrencies, not AI. They've turned the cottage industry of malicious hacking into a multi-billion-dollar enterprise that's attractive even to rogue nations such as North Korea. And with this much at stake, they can afford to simply buy your software dependencies, or to offer one of your employees some retirement money in exchange for making a "mistake".
We know how to write software with very few bugs (although we often choose not to). We have no good plan for keeping big enterprises secure in this reality. Autonomous LLM agents will be used by ransomware gangs and similar operations, but they don't need FreeBSD exploit-writing capabilities for that.
> We know how to write software with very few bugs (although we often choose not to)
Do we, really? Because a week doesn’t go by when I don’t run into bugs of some sort.
Be it in PrimeVue (even now the components occasionally have bugs; they keep putting out new major versions, but none are truly stable and bug-free), or Vue (their SFCs did not play nicely with complex TS types), or the greater npm ecosystem, or Spring Boot or Java in general, or Oracle drivers, or whatever unlucky thread-pooling solution has to manage those Oracle connections, or kswapd acting up in RHEL-compatible distros and eating CPU to the degree that it freezes the whole system instead of just doing OOM kills, or Ansible failing to reload systemd service definitions, or llama.cpp speculative decoding not working for no good reason, or Nvidia driver updates bringing the whole VM down after a restart, or Django having issues with MariaDB, or just general weirdness around Celery and task management, and a million other things.
No matter where I look, up and down the stack, across different OSes and tech stacks, there are bugs. If truly bug-free code (or something close to it) exists, it must be in planes or spacecraft, because in the kind of development I do, bug-free code might as well be a myth. I don't think everyone made a deliberate choice here - most are simply unable to write code without bugs, often due to factors outside of their control.
> And with this much at stake, they can afford to simply buy your software dependencies, or to offer one of your employees some retirement money in exchange for making a "mistake".
LAPSUS$ became prolific simply by bribing employees who had admin access. This is far from theoretical. Just imagine the kind of money your average nation state has lying around to bribe someone with internal access.
"It resolved its C2 domain through an Ethereum smart contract, querying public blockchain RPC endpoints. Traditional domain takedowns would not work because the attacker could update the smart contract to point to a new domain at any time."
Does this mean firewalls now have to block all Ethereum endpoints?
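For anyone wondering what that resolution step actually looks like on the wire: it is a plain `eth_call` JSON-RPC request POSTed to any public RPC endpoint, which is why blocking one provider (or even many) achieves little. A minimal sketch in Python; the contract address and four-byte function selector below are hypothetical placeholders (a real selector is the first four bytes of the Keccak-256 hash of the contract method's signature):

```python
import json

# Hypothetical 4-byte selector for a contract view method that returns
# the current C2 domain. A real implant derives this from the actual
# method signature; the value here is a placeholder for illustration.
SELECTOR = "0x20965255"

def build_c2_lookup(contract_address: str) -> str:
    """Build the eth_call JSON-RPC request an implant would POST to a
    public blockchain RPC endpoint to read its current C2 domain."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [
            {"to": contract_address, "data": SELECTOR},
            "latest",
        ],
    }
    return json.dumps(request)

# Any of the many public RPC endpoints can answer this request, and the
# contract owner can repoint the returned domain at any time.
payload = build_c2_lookup("0x" + "ab" * 20)
```

Blocking every endpoint that answers such requests is closer to blocking a protocol than taking down a domain, which is why egress filtering tends to matter more here than blocklists.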
> but they don't need FreeBSD exploit-writing capabilities for that.
That's a solid point. There was a piece the other day in the Register [1] about how studying supply chains for cost-benefit-risk analysis is how some of these groups increasingly operate. And, well, why wouldn't they, if they're rational (an assumption that is debatable, of course)?
[1] https://www.theregister.com/2026/04/11/trivy_axios_supply_ch...
Yeah, I tend to agree. For me, Mythos' principal risk is saturation: being able to do bad things faster. Vulnerabilities are found and fixed - that's life. What is a problem is identifying and prioritising vulnerabilities; a miscategorisation or misidentification may extend a vulnerability's attack window. If a cloud provider, or multiple cloud providers, are exposed to something, then everyone is in trouble. That's a pretty big nightmare scenario for me where I currently am.
> This is a perfect illustration of what cracks me up about the hyperbolic reactions to Mythos.
The hyperbole was press-released and consciously engineered. It comes entirely from the company that made Mythos, the usual captured media outlets who follow the leader, and the usual suspects on social media.
The reaction to it as if it is meaningful just fluffs it up more.
These are unprofitable companies trying to suck up maximum possible investment until they become something that the government can justify bailing out with tax money when they fail. Once you've crossed that line, you've won.
Some model that is super good at finding vulnerabilities will be run against software by the people trying to close those vulnerabilities far more often than by anyone trying to exploit them.
Whenever I look at a web project, it starts with "npm install" and literally dozens of libraries get downloaded.
The project authors probably don't even know what libraries their project requires, because many of them are transitive dependencies. There is zero chance that they have checked those libraries for supply chain attacks.
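To see how large the unreviewed surface is, you only have to diff what a lockfile will install against what the project actually declares. A minimal sketch against a synthetic npm lockfile (v2/v3 `packages` format); the package names are made up, and real lock files routinely show one direct dependency fanning out into dozens of transitive ones:

```python
import json

# Synthetic package-lock.json (v2/v3 "packages" map) for illustration.
LOCKFILE = json.loads("""
{
  "name": "example-app",
  "packages": {
    "": {"dependencies": {"left-pad": "^1.3.0"}},
    "node_modules/left-pad": {"version": "1.3.0"},
    "node_modules/ansi-styles": {"version": "4.3.0"},
    "node_modules/chalk": {"version": "4.1.2"}
  }
}
""")

def installed_packages(lock: dict) -> set[str]:
    """Every package the lockfile will actually install, including
    transitive dependencies the project never names directly."""
    return {
        path.split("node_modules/")[-1]
        for path in lock["packages"]
        if path  # skip the root "" entry
    }

def direct_dependencies(lock: dict) -> set[str]:
    """Only the packages the project explicitly declares."""
    return set(lock["packages"][""].get("dependencies", {}))

installed = installed_packages(LOCKFILE)
direct = direct_dependencies(LOCKFILE)
transitive = installed - direct
# Here one direct dependency pulls in two packages the author never chose.
```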
The supply chain attack surface in WordPress plugins has always been particularly dangerous because the ecosystem encourages users to install many small single-purpose plugins from individual developers, most of whom aren't security-focused organizations. Buying out an established plugin with a large install base is a clever approach because you inherit years of user trust that took the original developer a long time to build.
The deeper structural issue is that plugin update notifications function as an implicit trust signal. Users see "update available" and click without questioning whether the author is still the same person. A package signing and transfer transparency system similar to what npm has been working toward would help here, but the WordPress ecosystem has historically moved slowly on security infrastructure.
https://fair.pm/
FAIR has a very interesting architecture, inspired by atproto, that I think has the potential to mitigate some of the supply-chain attacks we've seen recently.
In FAIR, there's no central package repository. Anyone can run one, like an atproto PDS. Packages have DIDs, routable across all repositories. There are aggregators that provide search, front-ends, etc. And, like Bluesky, there are "labelers", separate from repositories and front-ends, so organizations like Socket can label packages with their analysis in a first-class way, visible to the whole ecosystem.
So you could set up your installer to ban packages flagged by Socket, or ones recently published by a new DID, etc. You could run your own labeler with AI security analysis on the packages you care about. A specific community could build its own lint rules and label based on those (like e18e in the npm ecosystem).
Not perfect, but far better than centralized package managers that only get the features their owner decides to pay for.
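FAIR's actual label schema isn't specified in the comment above, so treat this as a sketch of the policy idea only: the label records, the `socket.dev` labeler name, and the package DIDs below are all hypothetical. The installer-side check refuses any package that a trusted labeler has flagged with a banned label:

```python
# Hypothetical label records, modeled loosely on atproto-style labeling;
# FAIR's real schema may differ.
LABELS = {
    "did:plc:abc123/accordion-slider": [
        {"labeler": "socket.dev", "value": "malware", "date": "2026-02-01"},
    ],
    "did:plc:def456/safe-widget": [],
}

# Installer policy: which labels block installation, and whose labels we trust.
POLICY = {
    "banned_labels": {"malware", "obfuscated-code"},
    "trusted_labelers": {"socket.dev"},
}

def allowed(package_did: str) -> bool:
    """Refuse installation if any trusted labeler applied a banned label."""
    for label in LABELS.get(package_did, []):
        if (label["labeler"] in POLICY["trusted_labelers"]
                and label["value"] in POLICY["banned_labels"]):
            return False
    return True
```

The point of the design is that the policy lives with the installer, not the repository: two users can consume the same packages under entirely different trust rules.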
I think the main problem here is the ideology of software updating. Updates represent a tradeoff: On one hand there might be security vulnerabilities that need an update to fix, and developers don't want to receive bug reports or maintain server infrastructure for obsolete versions. On the other hand, the developer might make decisions users don't want, or turn evil temporarily (as in a supply chain attack) or permanently (as in selling off control of a Wordpress extension).
In the case of small Wordpress extensions from individual developers, I think the tradeoff is such that you should basically never allow auto-updating. Unfortunately, wordpress.org runs a Wordpress extension marketplace that doesn't work that way, and worse. I think that, other than a small number of high-visibility, long-established extensions, you should basically never install anything from there, and if you want a Wordpress extension you should download its source code and install it manually as an unpacked extension.
(This is a comment that I wrote about Chrome extensions, where I replaced Chrome with Wordpress, deleted one sentence about Google, and it was all still true. https://news.ycombinator.com/item?id=47721946#47724474 )
Hear me out. Mergers and acquisitions that substantially lessen market competition can be blocked by governments, or even require approval in certain jurisdictions.
https://en.wikipedia.org/wiki/Mergers_and_acquisitions
Maybe mergers or acquisitions that substantially impact security should require approval by marketplaces (industry governance), and perhaps notification to, or approval by, governments as well?
If the plugins were bought for six figures, the scheme must be incredibly lucrative. How on earth could they make that money back? Is injecting spam into Google results THAT lucrative?
It seems obvious to me that there should now be a concerted and open effort to detect malware in supply chains based on AI-based scanning. Sure, there will be an arms race in malware obfuscation, but that was coming anyway. Manual review is useless at this scale - it is just not happening.
This is probably a controversial opinion, but this case is yet another example of why it should be prohibited to sell repositories and storefronts in secret. If you want to take over someone else's user base, you should be forced to display a message to the users and actively ask them whether they trust the new owner as well. Simply passing the whole thing on to someone else who could then compromise the WordPress plugin, a browser extension, or something similar should not be allowed.
A tale as old as time. And hard to defend against. Did the sellers know their plugins were going to be abused? Is there some kind of seller liability in cases like this?
Personally, I've found that nowadays the README.md file of most projects is more useful than the code. With the code I inherit their dependency chain and all of that, but with an LLM I can rewrite most of these things myself. This doesn't hold universally yet: I still use ratatui, for instance. But I don't use a worktree manager or a Claude coordinator from other people - I just have my own. I also don't use OpenClaw - I have my own.
Looking at the list of plugins, I'd probably write accordion-and-accordion-slider and so on myself (meaning Claude Code and Codex would do most of the work). I think the future of software is like that: there is no reason to use most dependencies and so we'll likely tend towards our own library of software, with the web of trust unnecessary because all we need are other people's ideas, not their software.
All my sites got pwned through this. Attempts to restore from backup just got pwned again in minutes. Ended up using Claude to create static sites from the database and the assets.
I'm never using Wordpress again and I strongly suggest nobody else does either.
I wrote a deeper breakdown of this, including the Smart Slider 3 Pro attack that hit the same week (different vector, same structural gap), if anyone wants more context: https://decodedreport.substack.com/p/43-of-the-internet-runs...
I can foresee a modern code-signing regime with paid gatekeepers emerging to mitigate the risk of supply chain attacks. Imagine the purported strength of Mythos automating scans of PRs or releases, with some manner of indelible and traceable certification. Some industrious company - a modern Verisign of old - will attempt to drop in a layer of $250-500-per-year fees for that service and capture the app stores to require it. Call me a cynical bastard, but "I was there, Gandalf".
One interesting note is the plugins were acquired on Flippa, which is a general marketplace to buy/sell software businesses, not limited to WP plugins.
What I worry about are the long tail of indie apps/extensions/plugins that can get acquired under good intentions and then weaponized. These apps are probably worth more to a threat actor than someone who wants to operate the business genuinely.
This looks to be more than just a security bug; it's an incentive problem. You can buy trust with plugin install numbers and reputation, but there's no mechanism to reprice that trust after ownership changes. Attackers just buy the distribution and monetize it later. That makes this kind of attack economically rational, so it gets reproduced often.
It's been a while, but what struck me about Wordpress plugins is how many have almost no value add over the "manual" way, even ignoring the security aspect. Like wrappers around Stripe.
Deno can whitelist outbound connections to certain hosts or refuse them altogether. If the average backend service is locked down this way, will the supply chain economy survive?
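In Deno itself this is just a startup flag (`deno run --allow-net=api.example.com app.ts` permits outbound connections only to that host). As a language-agnostic sketch of the same idea, here is an egress check against an allowlist; the hostnames are hypothetical:

```python
from urllib.parse import urlparse

# Hosts this service is allowed to reach; everything else is refused.
# (Hypothetical hosts; same idea as `deno run --allow-net=...`.)
ALLOWED_HOSTS = {"api.example.com", "db.internal"}

def check_egress(url: str) -> None:
    """Raise PermissionError for any outbound host not on the allowlist."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"outbound connection to {host!r} refused")

check_egress("https://api.example.com/v1/items")  # allowed host, passes
# check_egress("https://evil-c2.example.net/")    # would raise PermissionError
```

A compromised dependency running under such a policy can still corrupt data it can reach, but it cannot phone home to an arbitrary C2 host, which removes much of the payoff.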
In browser plugins and mobile apps (and maybe WordPress plugins?), it's pretty well known that malware attackers buying those is a frequent thing, and a serious threat. So:
1. Is there an argument to be made that a developer/publisher/marketplace selling such software, after it has established a reputation and an installed base, has an obligation to make some level of effort not to sell out their users to malware/criminals?
2. Do we already have some parties developing software with the intention of selling it to malware/criminals, planning that selling it will insulate them from being considered a co-conspirator or accessory?
I see a future where there are LLM-vetted repos for Java, Python, Go, etc., and it will cost $1 to submit a release candidate (even for open source).
edit: The idea is that the $1 goes towards the tokens required for an LLM to scan the source code, not a dollar charged for no reason other than raising the bar.
The first submission gets a full code scan; for incremental releases the scanner focuses on the diffs.
> They've turned the cottage industry of malicious hacking into a multi-billion-dollar enterprise
Thank you for this insight! Crypto truly is the financialization of crime.
> In 2017, a buyer using the alias “Daley Tias” purchased the Display Widgets plugin (200,000 installs) for $15,000 and injected payday loan spam.
Is that it? Going through all that trouble just for some spam? Surely more lucrative criminal actions can be imagined with a compromised WP plugin?
Ban crypto and both industries will become way, way smaller.
https://news.ycombinator.com/item?id=41821336
WordPress is now a dangerous ecosystem because of the plugins and their current security model.
I moved to Hugo and encourage others to do so - https://ashishb.net/tech/wordpress-to-hugo/