The thing about how merges are presented seems orthogonal to how to represent history. I also hate the default in git, but that is why I just use p4merge as a merge tool and get a proper 4-pane merge tool (left, right, common base, merged result) which shows everything needed to figure out why there is a conflict and how to resolve it. I don't understand why you need to switch out the VCS to fix that issue.
<<<<<<< left
||||||| base
def calculate(x):
a = x * 2
b = a + 1
return b
=======
def calculate(x):
a = x * 2
logger.debug(f"a={a}")
b = a + 1
return b
>>>>>>> right
With this configuration, a developer reading the raw conflict markers could infer the same information provided by Manyana’s conflict markers: that the right side added the logging line.
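For reference, the marker style in the example above is git's diff3 conflict style, and the p4merge setup the parent describes is a couple of config lines (assuming the p4merge binary is on your PATH):

```shell
# Show the common-ancestor ("|||||||") section in conflict markers,
# as in the example above
git config --global merge.conflictStyle diff3

# Register p4merge as the merge tool (git has built-in support for it)
git config --global merge.tool p4merge

# After a conflicted merge, launch the 4-pane view with:
git mergetool
```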
You can't use CRDTs for version control; having conflicts is the whole point of version control. Sometimes two developers will make changes that fundamentally try to change the code in two different ways, and a merge conflict then leaves it up to the developer who is merging/rebasing to make a choice about the semantics of the program they want to keep. A CRDT would just produce garbage code; it's fundamentally the wrong solution. If you want better developer UX for merge conflicts, there is both a bunch of tooling on top of Git and other version control systems that try to present it in a better way, but that has very little to do with the underlying data structure. The very fact that cherry-picking and reverting become difficult with this approach should show you that it's the wrong approach! Those are really easy operations to do in Git.
Is it a good thing to have merges that never fail? Often a merge failure indicates a semantic conflict, not just "two changes in the same place". You want to be aware of and forced to manually deal with such cases.
I assume the proposed system addresses it somehow but I don't see it in my quick read of this.
The semantic problem with conflicts exists either way. You get a consistent outcome and a slightly better description of the conflict, but in a way that possibly interleaves changes, which I don't think is an improvement at all.
I am completely rebase-pilled. I believe merge commits should be avoided at all costs, every commit should be a fast forward commit, and a unit of work that can be rolled back in isolation. And also all commits should be small. Gitflow is an anti-pattern and should be avoided. Long-running branches are for patch releases, not for feature development.
I don't think this is the future of VCS.
Jujutsu (and Gerrit) solves a real git problem - multiple revisions of a change. That's one that creates pain in git when you have a chain of commits you need to rebase based on feedback.
This is sort of a revival and elaboration of some of Bram’s ideas from Codeville, an earlier effort that dates back to the early 2000s Cambrian explosion of DVCS.
Codeville also used a weave for storage and merge, a concept that originated with SCCS (and thence into Teamware and BitKeeper).
Codeville predates the introduction of CRDTs by almost a decade, and at least on the face of it the two concepts seem like a natural fit.
It was always kind of difficult to argue that weaves produced unambiguously better merge results (and more limited conflicts) than the more heuristically driven approaches of git, Mercurial, et al, because the edit histories required to produce test cases were difficult (at least for me) to reason about.
I like that Bram hasn’t let go of the problem, and is still trying out new ideas in the space.
In case the name doesn't jump out at you, this is Bram Cohen, inventor of BitTorrent, and of the Chia proof-of-space cryptocurrency (probably better descriptions are available). https://en.wikipedia.org/wiki/Bram_Cohen
It's not the same as capturing it, but I would also note that there are a wide variety of ways to get 3-way merges / 3-way diffs from git too. One semi-recent submission (from 2022, discussing a 2017 post) covered diff3 and has some excellent comments (https://news.ycombinator.com/item?id=31075608), including a fantastic, incredibly wide-ranging round-up of merge tools (https://www.eseth.org/2020/mergetools.html).
This thing is really short. https://github.com/bramcohen/manyana/blob/main/manyana.py is 473 lines of dependency-free Python (that file only imports difflib, itertools and inspect) and of that ~240 lines are implementation and the rest are tests.
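Not having read the implementation closely, difflib alone gets you surprisingly far for this kind of tool; here's a minimal sketch (names and structure are mine, not Manyana's) of extracting what each side did relative to a base:

```python
import difflib

def edits(base, side):
    """Return (tag, base_range, new_lines) ops transforming base into side."""
    sm = difflib.SequenceMatcher(a=base, b=side, autojunk=False)
    return [(tag, (i1, i2), side[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

base  = ["def calculate(x):", "    a = x * 2", "    b = a + 1", "    return b"]
right = ["def calculate(x):", "    a = x * 2",
         '    logger.debug(f"a={a}")', "    b = a + 1", "    return b"]

# One insert op: the logging line added after base line 2
print(edits(base, right))
```

Computing this per side against the common base is the raw material for the "what each side did" style of conflict presentation.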
I think something like this needs to be born out of analysis of gradations of scales of teams using version control systems.
- What kind of problems do 1 person, 10 person, 100 person, 1k (etc) teams really run into with managing merge conflicts?
- What do teams of 1, 10, 100, 1k, etc care the most about?
- How does the modern "agent explosion" potentially affect this?
For example, my experience working in the 1-100 regime tells me that, for the most part, the kind of merge conflict being presented here is resolved by assigning subtrees of code to specific teams. By and large, merge conflicts don't happen, because teams coordinate (in sprints) to make orthogonal changes, and long-running stale branches are discouraged.
However, if we start to mix in agents, a 100-person team could quickly jump to the scale of a 1000-person team, especially if each person is using subagents making micro-commits.
It's an interesting idea definitely, but without real-world data, it kind of feels like this is just delivering a solution without a clear problem to assign it to. Like, yes merge-conflicts are a bummer, but they happen infrequently enough that it doesn't break your heart.
At this point, if your VCS isn't a layer above git plumbing, nobody is going to waste time using it. Especially if the improvements are minor enough that it could reasonably be just a wrapper and still have 90% of the improvements.
> Two opaque blobs. You have to mentally reconstruct what actually happened.
Did you not discover what git diff does? It's clearer than the presented improvement!
Plenty of 3-way merge tools are supported by git too. Sure, it's an external tool, but it's adding one tool rather than upending the workflow.
> Conflicts are informative, not blocking. The merge always produces a result. Conflicts are surfaced for review when concurrent edits happen “too near” each other, but they never block the merge itself. And because the algorithm tracks what each side did rather than just showing the two outcomes, the conflict presentation is genuinely useful.
Git's merge cache (git rerere) is good enough. The only problem is that it isn't shared, but that could possibly be done within the git format itself if someone really wanted to.
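For anyone who hasn't turned it on, rerere is two config lines; the sharing gap the parent mentions is real, since the recorded resolutions live only in the local .git directory:

```shell
# Record conflict resolutions and replay them automatically
git config --global rerere.enabled true
git config --global rerere.autoUpdate true

# Recorded resolutions live here; there's no built-in way to share
# this directory, though copying it between clones does work
ls .git/rr-cache
```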
I'm struggling to understand the problem this solves for me. I can see in the abstract why this might be useful, but in practice I don't see the problems.
For me, jj represents a massive step forward from git in terms of usability, usefulness, and solving problems I actually have.
I think the next step forward for version control would be something that works at a lower level, such as the AST. I'd love to see an exploration of what versioning looks like when we don't have files and directories, and a piece of software is one whole tree that can be edited at any level. Things like LightTable and Dark have tried bits of this, it would be good to see a VCS demo of that sort of thing.
Interesting idea. While conflicts can be improved, I personally don't see it as a critical challenge with VCS.
What I do think is the critical challenge (particularly with Git) is scalability.
Size of repositories and rate of change of repositories are starting to push the limits of git, and I think this needs revisiting across the server, client, and wire protocols.
What exactly, I don't know. :) But I do know that my current employer (a mid-size, well-known tech company) is hitting these limits today.
> the key insight is that changes should be flagged as conflicting when they touch each other
Not really. Changes should be flagged as conflicting when they conflict semantically, not when they touch the same lines. A rename of a variable shouldn't conflict with a refactor that touches the same lines, and a change that renames a function should conflict with a change that uses the function's old name in a new place. I don't think I would bother switching to a new VCS that didn't provide some kind of semantic understanding like this.
I made foo.py in ymywkkys, the base change. I added the logging line in ystzrmlq. For the other branch, I ran jj edit ym and changed the function body to just return 42 in vxuxqtnu. Finally, I generate the merge commit with jj new ym vx. The graph looks like this:
@ qttvouvl gcr@hackerne.ws 2026-03-24 10:02:56 (conflict)
├─╮ (empty) Some merge commit
│ ○ ystzrmlq gcr@hackerne.ws 2026-03-24 10:02:54
│ │ Add logging line
○ │ vxuxqtnu gcr@hackerne.ws 2026-03-24 10:02:54
├─╯ Just return 42
○ ymywkkys gcr@hackerne.ws 2026-03-24 10:02:49
│ Base function
◆ zzzzzzzz root() 00000000
In jj, merges and rebases always succeed, but they may generate conflicts which are first-class objects in the repository alongside files, changes, directories, and so on. Having a structured way of representing conflicts allows for a more structured vocabulary. For instance, "conflict markers" don't live in the file itself, they're just rendered out to the working copy whenever the working copy gets updated.
I personally find this diff harder to read than the proposed format in the post, but the same information is there:
def calculate(x):
<<<<<<< conflict 1 of 1
+++++++ vxuxqtnu "Just return 42"
return 42
%%%%%%% diff from: ymywkkys "Base function"
\\\\\\\ to: qttvouvl "Some merge commit" (rebased revision)
a = x * 2
+ logger.debug(f"a={a}")
b = a + 1
return b
>>>>>>> conflict 1 of 1 ends
What CRDTs solve is conflicts at the system level, not at the semantic level. Two or more engineers setting a var to different values cannot be handled by a CRDT.
Engineer A intended value = 1
Engineer B intended value = 2
CRDT picks 2
The outcome could be semantically wrong. It doesn't reflect the intent.
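The scenario above maps directly onto a last-writer-wins register, one of the simplest CRDTs: the merge is deterministic and convergent, which is exactly why it can be semantically wrong. A minimal illustrative sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LWWRegister:
    """Last-writer-wins register: merge keeps the write with the larger
    (timestamp, actor) pair. Deterministic and convergent, but intent-blind."""
    value: int
    stamp: tuple  # (timestamp, actor_id); actor_id breaks timestamp ties

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        return self if self.stamp >= other.stamp else other

a = LWWRegister(value=1, stamp=(100, "engineer_a"))
b = LWWRegister(value=2, stamp=(101, "engineer_b"))

# Both merge orders give the same answer: B's later write wins,
# regardless of what A intended
print(a.merge(b).value, b.merge(a).value)
```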
I think the primary issue with git and every other version control system is the terrible names for everything: pull, push, merge, fast forward, stash, squash, rebase, theirs, ours, origin, upstream, and that's just a subset. And the GUIs. They're all very confusing, even to engineers who have been doing this for a decade. On top of this, conflict resolution is confusing because you don't have any prior warning.
It would be incredibly useful if before you were about to edit a file, the version control system would warn you that someone else has made changes to it already or are actively working on it. In large teams, this sort of automation would reduce conflicts, as long as humans agree to not touch the same file. This would also reduce the amount of quality regressions that result from bad conflict resolutions.
Shameless self plug: I am trying to solve both issues with a simpler UI around git that automates some of this and it's free. https://www.satishmaha.com/BetterGit
CRDTs actually have a long history in version control.
- The original 1977 version control system, SCCS, was a CRDT: https://braid.org/meeting-60/sccs-is-a-time-collapse
- It called its data structure a "weave"
- Bram's old project "Codeville" used a weave for version control
- But then git blew up in popularity.
- The project Darcs tried to make a robust "theory of patches," and eventually led to the development of Pijul
- Pijul is a VCS that is a CRDT: https://pijul.org
It starts with "based on the fundamentally sound approach of using CRDTs for version control". How on earth is a CRDT a sound base for a version control system? This makes no sense fundamentally: you need to reach a consistent state that is what you intended, not what some CRDT decided, and jj shows you can do that without blocking on merges, with first-class conflicts that need to be resolved. AI and language-aware merge drivers are helping so much here that I really wonder if the world these "replace version control" projects were made for still exists at all.
Bram Cohen is awesome, but this feels a little bare. I've put much more thought into version control ([1]), including the use of CRDTs (search for "# History Model" and read through the "Implementing CRDTs" section).
This seems like an excellent idea. I'm sure a lot of us have been idly wondering why CRDTs aren't used for VCS for some time, so it's really cool to see someone take a stab at it! We really do need an improvement over git; the question is how to overcome network effects.
This is just CRDT merges and better diffs?? I think the future of version control is much, much weirder than this. Like if you have CRDTs why not have ephemeral branches with real-time collaborative editing and live CI as you type
This is cool, and I keep thinking about CRDTs as a baseline for version control, but CRDTs have some major issues, mainly the fact that most of them are strict and "magic" in the way they actually converge (like the joke: CRDTs always converge, but to what?).
I didn't check whether he's using some special CRDT that might solve for that, but I think that for agentic work especially this is very interesting.
I think there are still strong advantages to the centralized locking style of collaboration. The challenge is that it seems to work best in a setting where everyone is in the same physical location while they are working. You can break a lock in 30 seconds with your voice. Locking across time zones and date lines is a nonstarter by comparison.
I'm confused about what this solves. They give the example of someone editing a function and someone deleting the same function, claim that the merge never fails, and then go on to demonstrate that indeed the merge still fails: there are still merge markers in the sources. What is the improvement exactly?
See vim-mergetool[1]. I use it to manage merge conflicts and it's quite intuitive. I've resolved conflicts that other people didn't even want to touch.
I used to think the future of version control was semantic: E.g. I renamed a method, while someone else concurrently added another call to that (now differently named) method. Git doesn't catch this, nor would this new system. The solution seems obvious to a human: Use the new name at the new call-site too. But it requires operating at the level of the semantic meaning of a change, and not just the dumb textual changes. I used to think this would require a new version control system that encodes the semantics of the changes in the commits, in order to have them available at merge-time. But these days, it seems much more realistic to stick to git, but loop in LLMs when merging, to re-create the semantics from the textual changes.
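Git's custom merge driver mechanism is already enough plumbing for that experiment: the driver receives the ancestor and both sides, so an external command (the llm-merge name below is hypothetical) can attempt a semantic merge and signal conflict by exiting nonzero:

```shell
# In .gitattributes, route source files through a custom merge driver:
#   *.py merge=llm

# Driver config: %O = ancestor, %A = ours (the driver writes the result
# here), %B = theirs. "llm-merge" is a hypothetical external command;
# exit code 0 means merged cleanly, nonzero leaves a conflict.
git config merge.llm.name "LLM-assisted semantic merge"
git config merge.llm.driver "llm-merge %O %A %B"
```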
Disagree. We all are — or should be — Linux kernel developers. What's more, we should align to a specific and singular VCS worldview informed by BitKeeper, which no longer exists, whether or not we used it. Therefore Git. Thank you for your attention to this matter!
Jujutsu honestly is the future, IMO. It already does what you have outlined, just solved in a different way: it'll let you merge, but flag that you have conflicts that need to be resolved, for instance.
It's been amazing watching it grow over the last few years.
A suggestion: is there any info to put in diffs that is faster to parse than "left" and "right"? Can the system have enough data to print "bob@foo.bar changed this"?
I recently found a project called sem[1] that does git diffs but is aware of the language itself, giving feedback like "function validateToken added", "variable xyzzy removed", ...
I think that's where version control is going. It's especially useful with agents and CI.
I've tested out jj a bit, and doesn't it solve the issues presented at the link already? I don't work on a team where I need VC better than git, so I just stick with it for my own private use, but I did test jj out of curiosity, and I could've sworn this is basically the same pitch as switching to jj (but for the CRDT under the hood).
> Conventional rebase creates a fictional history where your commits happened on top of the latest main
This is not fiction though. If someone added a param to the functions you’re modifying on your branch, rebasing forces you to resolve that conflict and makes the dependency on that explicit and obvious.
My issue with git is handling non-text files, which is a common issue with game development. git-lfs is okay but it has some tricky quirks, and you end up with lots of bloat, and you can't merge. I don't really have an answer to how to improve it, but it would be nice if there was some innovation in that area too.
Doesn't the side-by-side view in the GitHub diff solve this?
Conflict-free merging sounds cool, but doesn't that just mean that a human review step is replaced by "changes become intervals rather than collections of lines" and "last set of intervals always wins"? It seems to make sense when conflicts are resolved instantaneously during live editing, but does it still make sense with one-shot code merges over long intervals of time? Today's systems are "get the patch right" and then "get the merge right"... can automatic intervalization be trusted?
Edit: actually really interesting if you think about it. CRDTs have been proven with character-at-a-time edits and use of the mouse select tool... these are inherently intervalized (select) or easy (character at a time). How does it work for larger patches that can have loads of small edits?
I don't quite understand how CRDTs should help with merges. The difficult thing about merges is not that two changes touch the same part of the code; the difficult thing is that two changes can touch different parts of the code and still break each other - right?
This is a bad idea. I spent a lot of time thinking about git's snapshot system vs. the merge-based systems promoted by functional programming fans. Auto-merging systems are bad for a good reason: we care about features, which are properties of snapshots, not diffs.
If you have a diff that adds a button and a diff that turns existing buttons blue, the merge of those diffs doesn't necessarily add a button and make all buttons blue, because it may not make the new button blue.
Features like "all buttons are blue" are properties of snapshots. Snapshot-based revision control, like git, is better for that reason.
People are still having a problem with distributed version control, because some people want to force ”the server’s” history down the throats of all coworkers.
This can not be solved with tech, it’s a people problem.
Conflicts between branches are only a symptom of conflicts between people. Some want individual freedom to manage branches in whatever way (and these people are usually very open to other people managing branches in another way), but some people are against this freedom and think branches should be managed centrally by an authority (such people usually have a problem working on their own).
However/alas, git 2.35's (2022) fabulous zdiff3 doesn't seem to have had any big discussions. Other links welcome, but perhaps https://neg4n.dev/blog/understanding-zealous-diff3-style-git...? It works excellently for me; enthusiastically recommended!
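For reference, enabling it is one line; zdiff3 is diff3 with lines that both sides agree on hoisted out of the conflict region:

```shell
# "Zealous" diff3 conflict style (requires git >= 2.35)
git config --global merge.conflictStyle zdiff3
```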
> ... CRDTs for version control, which is long overdue but hasn’t happened yet
Pijul happened and it has hundreds - perhaps thousands - of hours of real expert developer's toil put in it.
Not that Bram isn't one of those, but the post reads like, well, you know what.
Of significance here because the merge resolution strategy was at the heart of the disagreement between Bram and Linus.
https://web.archive.org/web/20110728005409/http://www.wincen...
[1]: https://gavinhoward.com/uploads/designs/yore.md
[1]: https://mergiraf.org/
It’s an awesome weekend project, you can have fun visualizing commits in different ways (I’m experimenting with shaders), and importantly:
This is the way forward. So much software is a wrapper around S3 etc.; now is your chance to make your own toolset.
I imagine this appeals more to DIYer types (I use Pulsar IDE lol)
[1]: https://github.com/samoshkin/vim-mergetool
> [CRDT] This means merges don’t need to find a common ancestor or traverse the DAG. Two states go in, one state comes out, and it’s always correct.
Well, isn't that what the CRDT does in its own data structure?
Also keep in mind that syntactic correctness doesn't mean functional correctness.
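For state-based CRDTs, the quoted claim is essentially the definition: merge is a pure join of two states (commutative, associative, idempotent), with no ancestor lookup. A grow-only counter is the classic minimal example (illustrative only; this is not Manyana's structure):

```python
def merge(a: dict, b: dict) -> dict:
    """Join two G-Counter states: per-actor max of increment counts.
    Commutative, associative, and idempotent, so no common ancestor
    is needed for replicas to converge."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

# Two replicas that diverged without coordination
left  = {"alice": 3, "bob": 1}
right = {"alice": 2, "carol": 5}

assert merge(left, right) == merge(right, left)   # order doesn't matter
assert merge(left, left) == left                  # idempotent
print(merge(left, right))
```

Convergence is guaranteed, but "always correct" here means correct with respect to the data structure's join, not with respect to what either author meant.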
[1] https://ataraxy-labs.github.io/sem/
> [CRDT] This means merges don’t need to find a common ancestor or traverse the DAG. Two states go in, one state comes out, and it’s always correct.
Funny, there was a post just a couple of days ago about how this is false.
https://news.ycombinator.com/item?id=47359712
What counts as advertise vs spam? They seem like nearly identical posts and both projects really exist, separate authors.
Why are random posts marked Dead on this platform? Seems like outright censorship
Why aren't AI companies touting huge merge conflicts being resolved "zero-shot" by LLMs?