What CI looks like at a 100-person team (PostHog) (mendral.com)

by shad42 30 comments 56 points

[−] sd9 60d ago
It just seems weird to me to throw all these stats together. Putting 75GB of logs in the same category as managing the compute for this many parallel workflows, and so on, seems like lumping together problems on totally different scales.

Unfortunately I didn’t really get the point of the article after being bombarded with stats, except that the authors have an AI tool to sell.

[−] joncrane 60d ago
We get it! They have 22,477 tests with a 99.98% pass rate, ship 65 commits to main daily, and keep 98 engineers productive on a single monorepo.

I thought the repetition of these statistics was a little tired, but overall that's an impressive solution. Also totally get that the hardest part is log ingestion and indexing.

[−] Havoc 60d ago
To me that reads more like monorepo is a central point of failure and they’re scrambling to bandaid the consequence of that decision. And the bandaids aren’t gonna scale to 1000 people

I guess they’re missing whatever Google has to make their monorepo scale

[−] dpark 60d ago
Problems don’t go away with fractured repos. They just change shape. Many repos maybe get you more reliable CI, but you pay for it with increased cost of integrating dependencies and increased complexity with debugging breaks in production (assuming many repos mean many services).

In my experience, multiple small repos don’t even have better CI reliability than a monorepo, as less is invested because it affects fewer people. 10-person repos regularly have flaky tests that never get addressed because “we’ll deal with it later”. The tolerance for flakiness goes up when you can attribute it to a close teammate you know is heads down on something critical, instead of it feeling like a random test you don’t even care about.

[−] Havoc 60d ago

> Problems don’t go away with fractured repos.

Not the problems, but the part where broken CI causes everything to stop.

Fractured repos have their own downsides, but the chance of literally everyone sitting around waiting is greatly reduced.

[−] shad42 60d ago
Mendral co-founder here. What happens at PostHog is not uncommon. While building Mendral, we talked to hundreds of teams and they all have a similar situation. Initially they come to us to make their CI pipelines faster, but as the agent dives in, the urgency becomes keeping all pipelines reliable. It comes from growing a code base along with its test suite. Of course it has to change eventually: splitting the test suite, running specific parts of the CI depending on the code that changed, etc. But the situation described in the article is widespread among products that grow quickly.
[−] simianwords 60d ago
interesting that they have an agent that is triggered on flaky CI failures. but it seems far too specific -- you could trigger it on many other events.

there doesn't seem to be any upside to having it only for flaky tests, because the workflow is really agnostic to the context.
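(For illustration only -- this is not Mendral's actual agent, just a generic sketch of the core of any flaky-test workflow: rerun a failing test and label it flaky when the outcomes disagree. All names here are hypothetical.)

```python
def classify(test_fn, reruns=5):
    """Rerun a test and classify it: 'pass', 'fail', or 'flaky'.

    A test that both passes and fails across identical reruns is,
    by definition, flaky.
    """
    outcomes = set()
    for _ in range(reruns):
        try:
            test_fn()
            outcomes.add(True)
        except AssertionError:
            outcomes.add(False)
    if outcomes == {True}:
        return "pass"
    if outcomes == {False}:
        return "fail"
    return "flaky"


# A hypothetical test that fails on every odd invocation:
calls = {"n": 0}

def sometimes_fails():
    calls["n"] += 1
    assert calls["n"] % 2 == 0
```

Calling `classify(sometimes_fails)` returns `"flaky"`, since the reruns produce a mix of passes and failures. Nothing in the classifier cares *why* the test was rerun, which is the point: the same workflow could be hung off any CI trigger, not just flaky failures.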

[−] SirensOfTitan 60d ago
I don't really think this is anywhere near the quality bar for posts here. This is obviously AI-slop -- why should I invest more time reading your slop than you took to write it?

Even so, at what point do we consider the LLM-ification of all of tech a hazard? I've seen Claude go and lazily fix a test by loosening invariants. AI writes your code, AI writes your tests. Where is your human judgment?

Someone is going to lose money or get hurt by this level of automation. If the humans on your team cannot keep track of the code being committed, then I would prefer not to use your product.

[−] jofzar 60d ago

> These are not the numbers of a team with a CI problem. These are the numbers of a team that moves extremely fast and takes testing seriously.

Please no AI slop, write your own bloody blog posts.

[−] IshKebab 60d ago

> Every commit to main triggers an average of 221 parallel jobs

Jesus, this is why Bazel was invented.

[−] elteto 60d ago
I think this is the first article that truly gave me “slop nausea”. So many “It’s not X. It’s Y.” Do people not realize how awful this reads? It’s not a novel either, just a few thousand words, just fucking write it and edit it yourself.
[−] zeristor 60d ago
I'm guessing they have a workflow for blog posts too. With 100k workflows, I was wondering -- something seems a bit weird.
[−] zX41ZdbW 60d ago
Two problematic statements in this article:

1. A test pass rate of 99.98% is not good - the only acceptable rate is 100%.

2. Tests should not be quarantined or disabled. Every flaky test deserves attention.

[−] lab14 60d ago
a test pass rate of 100% is a fairy tale. maybe achievable on toy or dormant projects, but real world applications that make money are a bit more messy than that.
[−] alkonaut 60d ago
I definitely have a 100% pass rate on our tests for most of the time (in master, of course). By "most of the time" I mean that on any given day, you should be able to run the CI pipeline 1000 times and it would succeed on all of them, never hitting a flaky test in any run.

In the rare case that one is flaky, it's addressed. During the days when there is a flaky test, of course you don't have 100% pass rate, but on those days it's a top priority to fix.

But importantly: this is library and thick client code. It should be deterministic. There are no DB locks, docker containers, network timeouts or similar involved. I imagine that in tiered application tests you always run the risk of various layers not cooperating. Even worse if you involve any automation/ui in the mix.

Obviously there are systems it depends on (Source control, package servers) which can fail, failing the build. But that's not a _test_ failure.

If the build fails, it should be because a CI machine or a service the build depends on failed, not because an individual test randomly failed due to a race condition, timeout, test run order issue, or similar.

[−] salomonk_mur 60d ago
If one is flaky, then you are below 100% friend.
[−] lab14 60d ago
"most of the time" != 100% pass rate