Already enough comments about base rate fallacy, so instead I'll say I'm worried for the future of GitHub.
Its business is underpinned by pre-AI assumptions about usage that, based on its recent instability, I suspect are being invalidated by surges in AI-produced code and commits.
I'm worried, at some point, they'll be forced to take an unpopular stance and either restrict free usage tiers or restrict AI somehow. I'm unsure how they'll evolve.
Do people really put weight in stars? It seems completely unrelated to anything but, well, popularity. Even when I modify other people's code I fork to a private repo and maintain my changes separately, and I'm fairly certain I have never starred a repo.
Just to clarify as OP, the point here is not that Claude is not contributing to serious work, just that the dashboard suggests a lot of usage in public GitHub repos seems to be tied to low attention, high LOC repos. This is at least something to keep in mind when considering the composition of coding agent usage, and when assessing the sustainability of current trends.
In hindsight the headline was a bit more sensational than I meant it to be!
The base rate argument here is the right one. I maintain a solo project with 3,800+ tests and 92% coverage — zero stars for months because I never promoted it. Stars measure marketing, not quality.
What's more interesting to me is that Claude dramatically lowers the barrier to _testing_, not just writing code. I can mass-generate edge case tests that I'd never bother writing manually. The result is higher-quality solo repos that look "abandoned" by star count.
Is anyone tracking test coverage or CI pass rates for AI-assisted repos vs traditional ones? That seems like a much more useful signal than stars.
I have many GH repos, most have no stars. Probably because most of what I write is not very useful to other people due to quality or use case. I would say this is true of most fully human-created repos on GitHub.
It looks like my one-star repository [1] came close to making this person's leaderboard for number of commits (currently 5,524 since January, all by Claude Code). I'm not sure what that means, though. Only a small percentage of those commits are code. The vast majority are entries for a Japanese-English dictionary being written by Claude under my supervision. I'm using Github for this personal project because it turned out to be more convenient than doing it on my local computer.
I'm one of those zero star repos. I've been using Claude Code for some weeks now and built a personal knowledge graph with a reasoning engine, belief revision, and link prediction. None of it is designed for stars; it's designed for me. The repo exists because git is the right tool for versioning a system that evolves every day.
The framing assumes github repos are supposed to be products.
I used Claude code to build a custom notes application for my specific requirements.
It’s not perfect, but I barely invested 10 hours in it and it does almost everything I could have asked for, plus some really cool stuff that mostly just works after one iteration. I’ll probably open source the code at some point, and I fully expect the project to have less than two stars.
Still, I have my application.
For anyone that’s interested in taking a look, my terrible landing page is at rayvroberts.com
Auto updates don’t work quite right just yet. You have to manually close the app after the update downloads, because it is still sandboxed from when I planned to distribute via the Mac App Store. Rejected in review because users bring their own Claude key.
I cannot overstate how much of an improvement that is. If I had a dollar for all the shit I made myself, the old fashioned way, that got 0 attention at all? I'd have enough for a month or two of Claude.
Maybe because people are using Claude to write code for themselves, to scratch their own itch, and upload it to the world just because. The value of code can't be measured in star counts.
One downstream effect of "agents can publish code" is that the trust signals we've relied on for years (stars, maintainer reputation, issue history, etc.) got noisier. I don't think that means the ecosystem collapses, but it could mean we need to separate provenance from popularity.
If an automated system is going to generate and then publish artifacts at scale, you're gonna want a verifiable chain of custody: which principal authorized the publication, what policy constraints applied (license scanning, dependency allowlists, etc.), and what checks passed (tests, static analysis, supply-chain provenance). Without this, the default consumer posture becomes "treat everything as untrusted," which is expensive and slows adoption of legitimate work too.
I suspect we end up with something like "signed build receipts" becoming normal for small projects as well, not because everyone loves ceremony, but because the alternative is an arms race of spam and counterfeit maintainers.
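A minimal sketch of what such a "build receipt" could look like. This is hypothetical, not any real standard, and it uses a shared HMAC key only for brevity; real systems would use asymmetric signatures and attestation formats like in-toto or Sigstore.

```python
import hashlib
import hmac
import json

# Placeholder key; a real deployment would fetch this from a KMS or use
# asymmetric keys so consumers can verify without the signing secret.
SIGNING_KEY = b"demo-key"

def make_receipt(artifact: bytes, principal: str, checks: list) -> dict:
    """Record who authorized the publication and which checks passed."""
    body = {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "authorized_by": principal,
        "checks_passed": checks,  # e.g. tests, license scan, static analysis
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return body

def verify_receipt(artifact: bytes, receipt: dict) -> bool:
    """Check both the signature and that the artifact hash still matches."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    ok_sig = hmac.compare_digest(expected, receipt["signature"])
    ok_hash = body["artifact_sha256"] == hashlib.sha256(artifact).hexdigest()
    return ok_sig and ok_hash

receipt = make_receipt(b"compiled output", "ci-bot@example", ["tests", "license-scan"])
print(verify_receipt(b"compiled output", receipt))   # True for the untampered artifact
print(verify_receipt(b"tampered output", receipt))   # False once the artifact changes
```

The point is the shape of the record, not the crypto: a consumer can refuse anything without a verifiable receipt instead of manually auditing every repo.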
The idea with Claude writing code for most part is that everyone can write software that they need. Software for the audience of one. GitHub is just a place for them to live beyond my computer.
I think the value right now in LLM code assist tools is in small projects: small reusable libraries or proof of concept “I want this app even though almost no one else does” types of projects.
For libraries: still probably mostly useful for personal code bases, but for developers with enough experience to modularize development efforts even for personal or niche projects.
I am bothered by huge vibe coded projects, for example like OpenClaw, that have huge amounts of code that has not undergone serious review, refactoring, etc.
At a glance this may read as "most of this code isn't valuable to others," but the reality is probably complected with "this type of code is reducing the need for shared libraries."
Even if that stat were compared directly to the base rate (human output), it could easily be explained by Claude usage skewing strongly toward new repos, which haven't had time to accumulate stars.
I have 90 GitHub repos going back well over 10 years. One of them has over 5 stars (50 stars and 30 forks, in fact), since it's a semi-popular niche application with a complicated install path.
Two have over 2 stars, one of which is vibe-engineered, the other is older than my children and the service it's an API for hasn't existed in half a decade.
I hate everything about this headline and metric. As a lifelong graphics programmer from Pentium U/V pipeline assembly optimisation days: so fucking what.
I have never cared about LinkedIn or GitHub stars or any of those bullshit metrics (obviously because I don't score very highly in them), and am enjoying exploring a million things at the speed of thought; get left outside, if it suits you. Smart and flexible people have no trouble using it, and it's amazing.
Rather measure how much I've learnt and created recently compared to before, and get ready for some sobering shit because us experienced old dudes can judge good code from bad pretty well.
This is just base rate neglect though. Something like 98% of all GitHub repos have <2 stars regardless of how they were made. If 90% of Claude repos have <2 stars that actually means they're outperforming the baseline...
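The arithmetic behind this (and behind the "5x" quip elsewhere in the thread) is simple to check. Note the 98% baseline is the commenter's estimate, not verified data:

```python
# Base-rate check: what matters is the share of repos that *clear* the
# 2-star bar, compared to the baseline, not the raw "90% have <2 stars".
baseline_under_2 = 0.98   # estimated share of all GitHub repos with <2 stars
claude_under_2 = 0.90     # reported share of Claude-attributed repos with <2 stars

baseline_hit_rate = 1 - baseline_under_2   # ~2% of all repos reach >= 2 stars
claude_hit_rate = 1 - claude_under_2       # ~10% of Claude repos reach >= 2 stars

ratio = claude_hit_rate / baseline_hit_rate
print(round(ratio, 1))  # ~5: Claude repos clear the bar ~5x as often
```

Under these (unverified) numbers, Claude-attributed repos would be about five times more likely to reach 2 stars than the average repo, which is the opposite of the headline's implication.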
Yeah, but all these internal and not so internal tools I baked with it are great - they solve my own problems - and without LLMs I would never have a chance to implement even 20% of that.
How long does it normally take projects to get stars though? You're not going to have a project with 100+ stars overnight or even within a month, you have to promote the project?
Stars ceased to be relevant a long time ago, around the time Github went from a beloved pillar of the open-source community to just another facet of the Microsoft behemoth.
I mean, most of the code that I have pushed to GitHub with normal human intelligence also goes to repos with less than two stars. They're usually repos that I create and no one else touches.
The HN headline is at least misleading, because I suspect a majority of Claude usage is at the enterprise level (deep pockets), which goes to private GitHub repos.
Is this surprising in any way? People who let Claude Code attribute commits to itself are probably vibe coders who delegate all the work. It's expected that there will be a growing number of new projects.
What percentage of GitHub activity goes to GitHub repos with less than 2 stars? I would guess it's close to the same number.
At 2 months old: nearly a 1 GB repo, 24M LOC, 52K commits.
https://github.com/thomaspryor/Broadwayscore
Polished site: https://broadwayscorecard.com/
[1] https://github.com/tkgally/je-dict-1
Why would I want to promote it or get stars?
I asked him, how many people are using any of them? He told me it's just him.
- 98% of humans' repos have <2 stars
Claude is 5 times smarter than humans!
The math is a bit of a stretch, but the correlation still holds up.
Am I an AI?
It is interesting to see a flip in attitude toward GitHub.
I disabled all the attribution. I find it noisy and I'm not blaming claude, I'm blaming someone if something is broken.
GitHub stars are very much the textbook example of where you'd expect to find a Pareto distribution.
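A toy simulation makes the point: even with no quality signal at all, a heavy-tailed Pareto distribution of attention reproduces the "almost everything has <2 stars" headline. The shape parameter here is arbitrary, chosen for illustration, not fit to real GitHub data:

```python
import random

random.seed(0)

# Illustrative only: draw star counts from a Pareto distribution, shifted
# so the floor is 0 stars (paretovariate always returns values >= 1).
ALPHA = 3.5  # assumed shape parameter, not derived from real star data
stars = [int(random.paretovariate(ALPHA)) - 1 for _ in range(100_000)]

frac_under_2 = sum(s < 2 for s in stars) / len(stars)
print(f"share of simulated repos with <2 stars: {frac_under_2:.0%}")
```

With a tail this steep, the vast majority of simulated "repos" sit below 2 stars while a tiny fraction hoard nearly all of them, which is exactly what makes the raw <2-star percentage uninformative on its own.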