The interesting question to me is not whether the system can generate a plausible PR-time test, but whether the useful ones survive after the PR is gone. If Canary catches a real regression, how often can that check be promoted into a stable long-lived regression test without turning into a flaky, environment-coupled browser script? That conversion rate feels closer to the real moat than the generation demo.
Good point. To keep the regression tests reliable as the app evolves, we run a reliability cascade. First, we generate and execute deterministic Playwright tests from the codebase. If execution fails, we fall back to the DOM and ARIA tree. If that still fails, we fall back to vision agents that verify what the user actually sees before flagging a drift in application behavior.
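For concreteness, a minimal sketch of what that cascade could look like inside a single check. The selectors, the expected text, and the `verifyWithVisionAgent` helper are all hypothetical placeholders, not Canary or Playwright APIs.

```typescript
import { Page, expect } from '@playwright/test';

type CheckResult = { passed: boolean; tier: 'playwright' | 'aria' | 'vision' };

async function runRegressionCheck(page: Page): Promise<CheckResult> {
  // Tier 1: deterministic Playwright assertion generated from the codebase.
  try {
    await expect(page.getByTestId('query-card')).toBeVisible({ timeout: 5_000 });
    return { passed: true, tier: 'playwright' };
  } catch {
    // Selector drifted or timed out; fall through to the accessibility tree.
  }

  // Tier 2: check the ARIA tree instead of brittle DOM selectors.
  try {
    const ariaTree = await page.locator('body').ariaSnapshot();
    if (ariaTree.includes('query card')) {
      return { passed: true, tier: 'aria' };
    }
  } catch {
    // Accessibility snapshot unavailable; fall through to vision.
  }

  // Tier 3: hand a screenshot to a vision agent and ask whether the user
  // actually sees the expected state.
  const screenshot = await page.screenshot({ fullPage: true });
  const passed = await verifyWithVisionAgent(screenshot, 'query card list is visible');
  return { passed, tier: 'vision' };
}

// Hypothetical hook for the vision step; a real implementation would call a
// multimodal model with the screenshot and the expectation.
async function verifyWithVisionAgent(image: Buffer, expectation: string): Promise<boolean> {
  throw new Error(`vision check not implemented: ${expectation}`);
}
```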
If that's what you guys are bringing, you should put that more up front; focus on making it clear you're providing ingredients that Claude et al will not be providing on their own without Real Actual Software to do it.
The system focuses on going beyond the happy path, generating edge-case tests that try to break the application. For example, when a Grafana PR added visual drag feedback to query cards, the system came up with an edge case like: does drag feedback still work when there's only one card in the list, with nothing to reorder against?
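A hedged sketch of what that generated edge-case test might look like as a Playwright test. The route, the `query-card` test id, and the `dragging` class are illustrative guesses, not actual Grafana or Canary identifiers.

```typescript
import { test, expect } from '@playwright/test';

test('drag feedback still renders with a single query card', async ({ page }) => {
  await page.goto('http://localhost:3000/explore'); // hypothetical local Grafana instance

  // Precondition for the edge case: exactly one card, nothing to reorder against.
  const cards = page.getByTestId('query-card');
  await expect(cards).toHaveCount(1);

  const card = cards.first();
  const box = await card.boundingBox();
  if (!box) throw new Error('query card is not visible');

  // Start a drag without dropping, then check that visual feedback still appears.
  await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2);
  await page.mouse.down();
  await page.mouse.move(box.x + box.width / 2, box.y + box.height / 2 + 20, { steps: 5 });

  // Assumed class name for the drag-feedback state.
  await expect(card).toHaveClass(/dragging/);

  await page.mouse.up();
});
```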
We see this as different from review: the system generates tests to catch second-order effects and executes them against the live application to expose bugs.
- what is your differentiator?