For CVE-2026-0755, that's a vulnerability in gemini-mcp-tool. gemini-mcp-tool's Github repo says "This is an unofficial, third-party tool and is not affiliated with, endorsed, or sponsored by Google." but this list shows the Google logo next to the vulnerability.
The first link claims the 6-hour outage wiped 99% of order volume. I went to the "source" and found an (AI generated?) ad by a company that wants to sell a product, where I cannot find the 99% number.
This whole website and everything around it are almost ironic.
This site, especially if you look at all the previous posts from this domain, is almost assuredly AI generated.
Yeah, I was about to comment the same thing. I've noticed a lot of people weaponizing others' hatred of AI/slop and rage-baiting to drive views. No doubt someone would have taken that "Amazon lost 6M orders due to slop!" entry at face value and come away thinking it was true.
The same reason some use crimes committed by illegal immigrants to push for action, while ignoring that citizens are more likely, percentage-wise, to commit those same crimes. It's confirmation bias at best and intellectual dishonesty at worst; either way, they want their worldview validated.
I know this is extremely off topic, but illegal immigrants are far more likely to commit crimes than citizens, not that this has anything to do with software bugs...
The only way your statement holds up is if you treat the act of existing while undocumented as a crime for this comparison, in which case sure - it's a tautology.
First of all, the link you provided mixes illegal migration with legal migration, a classic trick trying to downplay the effects of illegal immigration.
Second, it compares murder rates only, in the state of Texas, a state well known to have extreme amounts of legal guns. You can hardly generalise from this data.
> First of all, the link you provided mixes illegal migration with legal migration
No it doesn't. I chose that article specifically because it provides figures for native-born citizens, legal immigrants and illegal immigrants:
> Over the 10-year period from 2013 to 2022, the homicide conviction rate in Texas for illegal immigrants was 2.2 per 100,000, compared to 3.0 per 100,000 for native-born Americans. The homicide conviction rate for legal immigrants in Texas was 1.2 per 100,000.
I accept that the figures in other countries may not work out the same way as figures in the USA.
I probably won't comment further, since as you said this is very off-topic (I only meant to draw out an analogy as to why discussions about AI tend to be ideologically skewed), but every statistic I've seen shows far lower crime rates among illegal immigrants versus citizens (aside from the statutory crime of being in the country illegally).
Why is the LiteLLM incident on there? The linked article for that one is a 404.
It seems like blogspam. It's curated, according to a comment from the author, but it treats entries verified by a security organization (like Vite's) the same as entries sourced from a blog post about Claude running a Terraform command. And this is on a site that appears to sell other AI-generated content for a subscription.
Edit: it appears the traditional content is free. What is paid is an AI interview pack, which is basically content with some tokens in order to present the content. They could be cheap Haiku tokens. Also it isn't a subscription, it's one-time purchase of packs. My bad.
Thought experiment: what about the bugs that humans have written? (I'm not excusing AI coding or claiming it's better.) At one point we shamed companies for sloppy engineering practices. Then, over the last 10 years, we started accepting companies' excuses of "oh well, we don't care" (think Amazon's tone-deaf documentation and surprise bugs, Google's head-scratching disconnect from its users, and so on).
But I think this is a great thing to show that they're pushing to outsource coding to a bot and to shame them that their plan isn't working out so well as they're trying to force people to believe.
I think it may help if we start personalizing these trends with the people who are amplifying it. I.e. Jassyslop, Siemiatbot (Klarna CEO was bold to brag he dropped 80% of a role for AI) etc.
I agree with you. However, business leaders have decided that they're "a better judge" of our practices, and they've used financial pressure, legal pressure, and coercion to get their way.
“Vibe coded”? I doubt that there is the documentary evidence that the code in these systems was never touched by a human. At best this is a list of code where AI tools were used in development. To be honest if you just created a list of all outages in all companies and systems you’d probably have a better list since AI tools are ubiquitous.
Only among people who don't value the quality of their output. There are, fortunately, many who do value quality and are not using AI tools until they get to the point where they can usefully contribute.
Have you used a state-of-the-art tool (e.g. Claude Code) in the past 6 months? If you've only tried free tools, or last tried a year ago, you really need to check again.
AI tools can absolutely contribute usefully, I can't keep count of the times where an AI pointed to an edge case I didn't think about, then helped me write the fix and the test for the issue.
I'm not vibe coding, as I'm reviewing the code, but saying they can't be useful means you haven't taken the time to look at the state of them recently.
Isn't it odd that you wrote your comment with AI then!?
Ha, gotcha, AI slop poster!
I know you didn't, but this is where we'll end up if people just write off everything as 'bad because AI' instead of critically assessing the quality of something on its own merit rather than the (very ironic) 'vibe' that it was generated rather than written.
Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.
Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted) but more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+) my understanding drops off pretty quickly.
A lot of bad software today is attributed to "vibecoding" even though these trends existed before LLMs. People complained about Windows for decades before AI came on the scene; these days the same issues get attributed to vibecoding.
I feel people are just lumping two things they don't like together because they are plausibly related, but without any real proven causality between them. Is this site any different?
How often does software fail in production with human-written code? How many times has a production failure been avoided because an LLM didn't make a typo or mistake that a human would have?
This is pushing an agenda. It's not measuring anything meaningful.
Half this list is bad attribution. LiteLLM was a supply chain attack — stolen PyPI credentials, nothing to do with vibe coding. The Amazon outage number comes from a vendor blog pushing their own product. Nobody else reported it.
But the "where's your control group" take bugs me too. It's not that AI writes buggier code line for line. The gaps are just in different places. Devs who've shipped real apps add rate limiting, auth middleware, proper CORS — because they got burned before. AI skips all of it because nobody prompted for it.
I read through about 80 AI-generated repos a few weeks ago. Code looked decent. The missing stuff was always the same list — no auth on admin routes, API keys hardcoded in client JS, CORS wide open, debug endpoints still live in prod. Over and over.
Nothing there makes a wall of shame. Nothing's exploded yet. But it's the kind of stuff that does.
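For what it's worth, the missing guardrails are usually only a few lines. Here's a stdlib-only WSGI sketch of the two I saw skipped most often; everything in it is illustrative (the paths, env var name, and origin are made up, not taken from any repo I reviewed):

```python
# Illustrative only: the kind of auth + CORS middleware that tends to be
# missing. Pure stdlib (WSGI); no real project or route names assumed.
import os

def require_auth(app, protected_prefix="/admin", token_env="ADMIN_TOKEN"):
    """Reject requests under the protected prefix without the expected bearer token."""
    def wrapped(environ, start_response):
        if environ.get("PATH_INFO", "").startswith(protected_prefix):
            expected = "Bearer " + os.environ.get(token_env, "")
            supplied = environ.get("HTTP_AUTHORIZATION", "")
            if not os.environ.get(token_env) or supplied != expected:
                start_response("401 Unauthorized", [("Content-Type", "text/plain")])
                return [b"unauthorized"]
        return app(environ, start_response)
    return wrapped

def cors(app, allowed_origin="https://app.example.com"):
    """Echo one allowed origin instead of the wide-open '*'."""
    def wrapped(environ, start_response):
        def sr(status, headers):
            if environ.get("HTTP_ORIGIN", "") == allowed_origin:
                headers = headers + [("Access-Control-Allow-Origin", allowed_origin)]
            return start_response(status, headers)
        return app(environ, sr)
    return wrapped

def app(environ, start_response):
    # Stand-in for the actual application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

secured = cors(require_auth(app))
```

The point isn't that this code is hard to write; it's that nothing in a prompt ever asks for it.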
This is definitely the right question. A list of failures without any baseline won't tell you anything. You would need the same exercise for human-written code at a comparable scale before drawing any conclusions at all. Without it, it's just confirmation bias.
I love dunking on vibe coding as much as the next guy, but is there actual evidence for most of the entries that vibe coding was the cause? IMO that would make the point even stronger.
In my experience over the last couple years, lists like this won’t move the needle at all. The AI zealots reject anything that calls into question the AI stuff, usually appealing to “just wait, better models/agents/guardrails imminent” and claiming that anecdotal productivity gains are worth the risk. The people concerned about AI already are concerned and just fall back to “I told you so”. Unfortunately the decision makers seem to still be following the zealots promising wondrous productivity, profit, and a future full of flying cars.
'vibe coding' is too loose a term. Everything will be generated by AI in the very near future, and it will range from 'fancy auto complete' to 'entirely autonomously generated' with many nuances and subtleties in between.
So this is a list of incidents where random people on the internet speculated about rumors that AI was to blame. The companies typically deny it. Insiders who know the details are generally unable to comment due to how large companies manage PR.
AI might have been an opportunity to take engineer hubris down a notch, to reassess the excesses (bad performance, bad UX, poor reliability, costly development and operations, etc.). Instead of reflecting, we decided to shame AI as vibe coding.
How much abysmal code and how many abysmal products have we all shipped? Exploitative, clumsy, dangerous, vulnerable? What was our excuse?
I find the entire anti-vibe coding movement to be terribly tacky and judgmental.
We have an incredible tool that could 10-100x productivity. We should be using it to fix all the terrible software we've made over the past 20 years. Instead there are three camps: people building stuff, people hyping AI, and people shaming the first two.
Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.
https://github.com/jamubc/gemini-mcp-tool
Disclosure: I work at Google, but not on anything related to this.
One of the "fun" hallmarks of many of these LLM assisted websites is that they seem to completely disregard basic accessibility (especially Web Content Accessibility Guidelines [1]). That small dark gray subtext on a black background is just horrific.
[1] - https://webaim.org/resources/contrastchecker
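For anyone curious, the failure is easy to quantify. Here's a short Python sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas; the #555555-on-#000000 palette is my assumed stand-in for "dark gray subtext on black", not measured from the site:

```python
# WCAG 2.x contrast check. The colors used below are assumed examples
# (dark gray on black), not measured from the actual site.

def srgb_to_linear(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an RRGGBB hex color (no leading '#')."""
    r, g, b = (srgb_to_linear(int(hex_color[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), ranging from 1 to 21."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("555555", "000000"), 2))  # 2.82
print(round(contrast_ratio("ffffff", "000000"), 1))  # 21.0, the maximum
```

WCAG AA requires at least 4.5:1 for normal body text, so a roughly 2.8:1 pairing like this fails outright.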
IDK why people act as if vibe coding invented the software bugs that lead to vulnerabilities, as if human programmers weren't already producing those.
Here's one set of numbers from the CATO institute: https://www.cato.org/policy-analysis/illegal-immigrant-murde...
Here is some interesting data. https://en.wikipedia.org/wiki/Crime_in_Denmark
FWIW I don’t live in the USA.
I didn't read any credible arguments suggesting that was caused by vibe coding. They had their PyPI publishing credentials stolen thanks to an attack against a CI tool they were using.
Plus the linked article for the Amazon outage is https://d3security.com/blog/amazon-lost-6-million-orders-vib... which appears to be some other vendor promoting their product without providing any details on what happened at Amazon.
> Why is the LiteLLM incident on there? The linked article for that one is a 404.
-> [Endor Labs] https://www.endorlabs.com/learn/teampcp-isnt-done
-> On March 24, 2026, Endor Labs identified that litellm versions 1.82.7 and 1.82.8 on PyPI contain malicious code not present in the upstream GitHub repository. litellm is a widely used open source library with over 95 million monthly downloads. It lets developers route requests across LLM providers through a single API.
Barely anything on the site makes sense if you look at them closely.
We call that "slop", the last time I checked.
You shouldn't blame people for failures caused by a bad environment, and that makes it hard to say when blame is justified.
> Only among people who don't value the quality of their output.
I value the quality of my output and I make extensive use of AI tools.
That's why the original definition of "vibe coding" is useful: creating code with AI tools without reviewing or caring about the quality of that code.
It's also possible to use AI tools as part of a responsible engineering process that is intended to produce production quality software.
So basically Reddit.
Sad, really.