Agent-to-agent pair programming (axeldelafosse.com)

by axldelafosse 60 comments 135 points

[−] yesensm 50d ago
I’m curious whether anyone has measured this systematically. Right now most of the evidence for multi-agent setups still feels anecdotal.
[−] not_ai 50d ago
And expensive, exactly the way a pay-per-use product would push its customers…

“It’s not working well enough!” we tell them. They respond with “Have you tried using it more?”

[−] 3yr-i-frew-up 50d ago
Back in 2024 I read a study claiming: "Ask 4 LLMs the same question; if they all give you the same answer, there is some 95-99% chance it's correct."

Soooo... it's not just greed. There is something there.
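Under an independence assumption (which real LLMs trained on overlapping data likely violate), the claimed agreement boost can be sketched with Bayes' rule. The per-model accuracy and the chance of a shared wrong answer below are illustrative assumptions, not numbers from the study:

```python
def agreement_posterior(n, acc, wrong_match):
    """P(correct | all n models give the same answer), assuming:
    - each model is independently correct with probability `acc`
    - wrong models all land on the same wrong answer with
      probability `wrong_match`
    Both parameters are illustrative assumptions, not measured values.
    """
    all_correct = acc ** n
    all_same_wrong = (1 - acc) ** n * wrong_match
    return all_correct / (all_correct + all_same_wrong)

# e.g. four 70%-accurate models that rarely share a wrong answer
print(f"{agreement_posterior(4, 0.70, 0.10):.3f}")  # → 0.997
```

Even with individually mediocre models, unanimous agreement pushes the posterior into the 95-99%+ range the study describes, as long as wrong answers rarely coincide.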

[−] axldelafosse 50d ago
Yes exactly. I talk about this in the article: I found that when Claude and Codex both review the same PR and both flag the same issue, our team fixes it 100% of the time.
[−] zombot 49d ago
What's the point of pair programming then if they both have the same opinions?
[−] shafyy 50d ago
Haha yeah... Wait until they start jacking up the subscription prices
[−] stackgrid 50d ago
Completely with you on this! But then we need to define the criteria for comparison. Might not be that easy unfortunately
[−] edf13 50d ago
Nice - I do something similar in a semi-manual way.

I do find Codex very good at reviewing work marked as completed by Claude, especially when I get Claude to write up its work with a why, where & how doc.

It's very rare that Claude has fully completed the task and Codex doesn't find issues.

[−] torginus 49d ago
I think they're trying to implement every management fad with AI agents to see if it improves performance.

Personally, I have tried pair programming, and it hasn't really felt like something that works, for various reasons - the main one being that I (and my partner) have complex thought processes that are difficult and cumbersome to articulate, so to an onlooker it looks like I'm randomly changing code.

[−] cadamsdotcom 50d ago
The vibes are great, but there's a need for more science on this multi-agent thing.
[−] alienreborn 50d ago
I have been trying a similar setup since last week using https://rjcorwin.github.io/cook/
[−] dgb23 50d ago
If this approach turns out to be valuable, it's unlikely to have anything to do with having multiple actual agents. More likely the value is in having two configurations (system prompt, model, temperature, context pruning, toolset, etc.) that get swapped back and forth inside the same agent.
[−] sibtain1997 50d ago
The PLAN.md question is the one worth pulling on. Once the plan lives in git or the PR it's already downstream of intent and whoever defined what to build has already handed off. The harder problem is giving agents access to the original intent, not just the implementation plan derived from it. When there's drift between what was planned and what got built, a git-resident PLAN.md makes it hard to trace back to why the decision was made in the first place.
[−] divan 50d ago
You can also create a skill for reviewing (which calls gemini/codex as a command line tool) and set instructions on how and when to use. Very flexible.
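A minimal sketch of such a wrapper, assuming `codex exec` as the non-interactive entry point (swap in the gemini CLI or whatever reviewer you prefer); the function name and prompt wording are made up for illustration:

```python
import subprocess

def second_opinion(diff_text, cmd=("codex", "exec")):
    """Ask an external reviewer CLI for a verdict on a diff.
    `codex exec` is assumed here as the non-interactive review command;
    substitute any CLI that accepts a prompt as its final argument.
    """
    prompt = "Review this diff for bugs and missing edge cases:\n" + diff_text
    result = subprocess.run(
        list(cmd) + [prompt],
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout.strip()
```

The skill's instructions would then tell the main agent when to call this (e.g. after every task it marks complete) and to treat a non-empty verdict as work left to do.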
[−] ramon156 50d ago
I've always wondered what it would be like if we reversed the roles. I remember people claiming they had gotten better results if an agent started asking the questions.

What if we had an agent-to-agent network that contacted a human as a source of truth whenever it was needed? Keep a list of employees who are experts in a given skill, then let them answer 1-2 questions.

Or are we speeding up our replacement like this?

[−] vessenes 50d ago
I prefer Claude for generation/creativity, Codex for bull-headed, accurate complaining and audit. Very rarely, Claude just doesn't "get it" and it makes sense to have Codex edit directly. But generally I think it's happiest and best used complaining.
[−] rsafaya 50d ago
I think the A2A space is wide open. Great to see this approach using App Server and Channels. I tried building something similar (at a high level) for a more B2C use case for OpenClaw users: https://github.com/agentlink-dev/agentlink Currently I think the major agents have not fully owned the "wake the agent" use case. Regardless, this is a very cool approach. All the best.
[−] jedisct1 50d ago
I systematically use reviewer agents in Swival: https://swival.dev/pages/reviews.html

Even with the same model (--self-review), that makes a huge difference, and immediately highlights how bad the first iterations of an LLM output can be.

[−] etothet 50d ago
"Letting the agents loop can result in more changes than expected, which are usually welcome..."

If "more changes than expected" means "out of scope", then I disagree. Those types of changes are exactly one of the things that's best to avoid whether code is being written by a person or an LLM.

[−] woadwarrior01 50d ago
This is very reminiscent of the review-loop Claude Code plugin.

https://github.com/hamelsmu/claude-review-loop

[−] bradfox2 50d ago
Multi-turn review of code written by Claude Code and reviewed by Codex works pretty well. It's been one of the only ways to deliver larger-scoped features without constant bugs. I've seen them do 10-15 rounds of fix and review until complete.

Also implemented this as a GitHub Action; works well for the Sentry → GitHub → auto-triage → fix-PR flow.
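The fix-and-review rounds described here reduce to simple control flow; in practice the two callables would shell out to Claude Code and Codex, but they're left as parameters in this sketch:

```python
def review_loop(write_fix, review, max_rounds=15):
    """Alternate a writer and a reviewer until the reviewer is satisfied.
    write_fix(issues) -- applies fixes (e.g. a Claude Code invocation)
    review()          -- returns a list of issues (e.g. a Codex invocation)
    Returns the number of fix rounds performed before a clean review,
    or None if the round cap was hit with issues still open.
    """
    for rounds in range(max_rounds):
        issues = review()
        if not issues:
            return rounds       # converged: reviewer found nothing
        write_fix(issues)
    return None                 # hit the cap with issues left
```

The cap matters: without it, two models that disagree about style can ping-pong indefinitely, which is also why surviving 10-15 rounds on a large feature is a reasonable worst case rather than a failure.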

[−] zombot 49d ago
Is there a prize yet for the most absurd application of AI? Pair programming seems a fair first step in the quest for this holiest of grails. How about an agentic implementation of the House of AI Lords?
[−] dude250711 50d ago
The circle of slop.
[−] shreyssh 50d ago
This is interesting for code, but I'm curious about agent-to-agent coordination for ops tasks, like one agent detecting a database anomaly and another auto-remediating it.