> the application of E1 to E2 attaches E2 to the root of E1 on the right.
It’s completely unclear to me what this means. The literal meaning is obviously wrong, because attaching a tree to a root that already has two child nodes would result in a ternary node, but apparently all trees in tree calculus are binary.
This link gives a better description of it, along with the definitions of the reduction rules (which I got from further down in this thread): https://olydis.medium.com/a-visual-introduction-to-tree-calc...
But what I believe was meant by the above was: "delta E1 E2" creates a new "reduction tree" (my own made-up term) with E1 being the left child of this new root node and E2 being the right child, and reduction then begins on this newly constructed tree.
Overall the concept seems pretty interesting - and it's nice to see someone come up with something both novel in the space and at the same time seemingly "applicable".
I'm actually going to reply to myself with my understanding of what's written here, because the more I've looked at it, the cooler I think it is.
I happened to read this link [1], so that's what I'm basing it off of. Also note, I don't know this at all, so I may be flat out wrong, but hopefully this gives someone a good intuition. I would recommend glancing at the image at the top of that link (https://olydis.medium.com/a-visual-introduction-to-tree-calc...) for reference, to follow along with my interpretation.
To me it overall seems similar to a Lisp, but focused on binary trees. There are rules 0a, 0b, 1, 2, 3a, 3b, 3c. And the concept is actually pretty simple. Rules 0 are construction rules. Rules 1 & 2 are combinators. And rule 3 is pattern matching, which allows for reflection / acting on the shape of code (according to the docs).
First, I think of there as being 3 main constructions (and by analogy I'll use Lisp terms; this clearly is not Lisp and will not directly translate, but I think that will make communication easier than using unfamiliar terms).
Being binary trees, there are 3 basic shapes. There is nil/false/empty, represented by a single node; a "boxed expression" (a single root node with a left child); and a "paired expression" (a root node with left and right children). (In the diagram in the link, w, x, y, z ... represent "unknown" expressions, or maybe arguments if you want to think of them that way.) '@' is used as the "apply" operator, but I'm going to call it "cons" even though it is not the "cons" from Lisp.
Here is my view:
Construction Rules:
0a - If I cons false/nil with an unknown expression, what do I get? I.e. what is "(cons 'nil z)", where z is an unknown expression? The answer is '(z) (or equivalently (z . 'nil)), what I'll call a boxed expression.
0b - If I cons a "boxed expression" with an unknown expression, what do I get? What is "(cons (y . 'nil) z)"? Answer: a boxed pair: (y . z).
1 - [Note this rule does double duty and starts a new pattern that's kinda cool]. So continuing the pattern, if I cons a boxed pair with an unknown expression (cons (x . y) z) what do I get? Well it depends on the shape of the boxed pair. Specifically what is the shape of x? If it's 'nil then apply rule 1.
(cons ('nil . y) z) results in y.
Detour: it turns out (looking from the perspective of y and z, with 'cons' being an operator/combinator) that this just happens to also be the 'K' combinator.
2 - So in this same boxed pair from above, (cons (x . y) z), we said apply rule 1 if x is 'nil. But we've had a consistent pattern (nil, boxed expr, paired expr) here, so let's follow it now on the shape of x. So if x isn't 'nil but is instead a boxed expression (x . 'nil), then use rule 2. Or to be explicit:
(cons ((x . 'nil) . y) z) reduces to (cons (cons x z) (cons y z)). Here it is helpful to think of cons as the "apply" they define, not my made-up 'cons', because then it's easy to see that this is the S combinator, i.e. (apply (apply x z) (apply y z)).
3 - Ok, so we have one stage left of our pattern for (cons (x . y) z). We said if x is 'nil you get rule 1, and if x is a boxed expression you get rule 2. So if x is a paired expression, what do you get? You get a flavor of rule 3. Which flavor? That depends on the shape of z, and if you've been following, there are 3 choices for the shape of z. The same 3: nil, boxed expression and paired expression. You'll also note that this is the only time a transformation depends on anything in z (the second argument to cons). Up until now z has always been a black box and had no effect on our rules -- the first arg to cons always decided what we did. If you think of z as the data argument to cons and the first arg to cons as the code, this allows the code we execute to structurally depend on the data/argument. I.e. this is your functional pattern matching behavior. Again, thinking of 'cons' as 'apply' helps here.
So let's walk through them. Where we left off, we said that in (cons (x . y) z), x could be nil and we get rule 1; x could be a boxed expression and we get rule 2; and if x is a paired expression we get rule 3. So a paired expression replacing the unknown x above looks like (cons ((w . x) . y) z).
[A note you can skip: To be consistent with the image in the link I've reused the letter 'x', but the x in (cons ((w . x) . y) z) is not the x in (cons (x . y) z). The 'x' in (cons (x . y) z) is equivalent to (w . x) in (cons ((w . x) . y) z). Think hygienic syntax replacement for my own convenience. You can also ignore this whole bracketed aside if it didn't make sense.]
Continuing rule 3, a paired expression replacing the unknown x above looks like (cons ((w . x) . y) z). Whether you apply rule 3a, 3b or 3c depends on the shape of the "argument" z. If z is nil (or a base case like 0), you get the fixed expression 'w'. If z is a boxed expression (u . 'nil), you get (cons x u), and if it's a paired expression (u . v) you get (cons (cons y u) v). Ignoring the details, it will conditionally apply the "code" w, x or y to the argument z, depending on the shape of z. In my view that is essentially equivalent to functional programming style pattern matching - they describe it as reflection.
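To make sure I had the rules straight, here's a small executable sketch of my reading of them. The tuple encoding and the eager reduction strategy are my own assumptions, not anything from the article, so take it as illustration only:

```python
# Assumed encoding (mine, not the article's): a tree is the tuple of its
# children, so () is nil, (x,) is a boxed expression, (x, y) a paired one.
def cons(a, b):
    """Apply tree a to tree b (the '@' operator), reducing as we go.
    Note: reduces eagerly and may not terminate on divergent terms."""
    if a == ():                  # 0a: nil applied to b boxes b
        return (b,)
    if len(a) == 1:              # 0b: a box applied to b forms a pair
        return (a[0], b)
    x, y = a                     # a = (x . y); dispatch on the shape of x
    if x == ():                  # 1: the K behavior, b is discarded
        return y
    if len(x) == 1:              # 2: the S behavior
        return cons(cons(x[0], b), cons(y, b))
    w, x2 = x                    # 3: triage, dispatch on the shape of b
    if b == ():                  # 3a: nil argument selects w
        return w
    if len(b) == 1:              # 3b: boxed argument (u . 'nil) gives (cons x u)
        return cons(x2, b[0])
    u, v = b                     # 3c: paired argument gives (cons (cons y u) v)
    return cons(cons(y, u), v)

nil = ()

# Rule 1 in action: building ('nil . y), then applying it to z, returns y.
y, z = (nil,), (nil, (nil,))         # two arbitrary distinct trees
assert cons(cons(cons(nil, nil), y), z) == y

# Rule 3 in action: ((w . x) . y) pattern-matches on the shape of its argument.
code = ((nil, (nil,)), nil)          # w = nil, x = (nil,), y = nil
assert cons(code, nil) == nil                 # 3a: returns w
assert cons(code, ((nil,),)) == (nil, (nil,)) # 3b: (cons x u)
assert cons(code, (nil, nil)) == (nil, nil)   # 3c: (cons (cons y u) v)
```

This conflates construction and reduction into one eager 'cons'; a faithful implementation would keep unreduced applications distinct, but for small ground examples like these the results line up with hand reduction.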
Overall it's a pretty cool system: 3 "basic shapes", a few basic construction rules, 2 combinators and a case for pattern matching. In my view they have an operator that combines cons and apply in an elegant way, and can do pattern matching on it. It seems to get to the essence of a lot of ideas in the space, very concisely and without many assumptions or overhead. And note, while I used names like nil and cons, when serialized this is all represented by a single symbol plus open and close parens for grouping. It's all just valueless binary trees.
Tree Calculus is an alternative to lambda calculus that is capable of doing meta-theory without having to construct or bolt on something else entirely.
If lambda calculus provides a theoretical foundation for a language like Lisp, tree calculus provides a theoretical foundation for a Lisp whose macro system is fundamentally part of the core calculus.
You don’t have to write parsers and other machinery to do metaprogramming. It’s fundamentally built in, and the paper I posted above (https://dl.acm.org/doi/pdf/10.1145/3704253.3706138) explores how to construct type systems as a library, not as something outside the runtime environment.
Here’s what’s really cool about it too: just like lambda calculus, you can evaluate tree calculus with pencil and paper. It’s very slick.
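For a taste of what that looks like, here is a short hand reduction, a sketch using the Lisp-style notation and rule numbering from the walkthrough elsewhere in this thread (the step annotations are mine):

```
(cons (cons (cons 'nil 'nil) y) z)
-> (cons (cons ('nil . 'nil) y) z)   ; rule 0a: 'nil boxes its argument
-> (cons ('nil . y) z)               ; rule 0b: a box and an argument form a pair
-> y                                 ; rule 1: the K behavior, z is discarded
```

Three rewrites, each small enough to do by hand.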
My friend, this ain't memes or celebrity news, it's theoretical computer science. You'll get out of it roughly what you're ready to put into it. If you're short on time, the grammar is like 5 characters and the proof of the halting problem isn't much bigger. If you don't already know why that is interesting, though, you really might have to read a bit to find out.
That's my complaint, this page is essentially a meme.
And yes, I know enough about computer science to know that making an axiomatic system with a short grammar and a proof that the halting problem is undecidable isn't particularly noteworthy by itself. I highly doubt the reason people are interested in this is just code golfing a proof of the undecidability of the halting problem.
Tree calculus is more expressive, can define more functions, than lambda calculus or traditional combinatory logic. This theorem is so startling, so outrageous, that mere proof, even a formal proof, is not enough to get the message across. This is no one’s fault. To help, I am currently writing a series of conversations, where the characters come at these ideas from many points of view, letting it all hang out. It doesn’t help that the current formal proof of the theorem is indirect. So here is a challenge for you all: find a direct proof that the node operator of tree calculus is not definable in traditional combinatory logic. That is, no SK-combinator satisfies the equations of tree calculus. Conversely, trying to find such a combinator may give you a better feel for intensionality.
For people wondering what this is and whether to take this seriously, I’ll try to provide some context:
Tree Calculus is a novel alternative to lambda calculus as a minimal model of computation. Unlike most minimal systems, tree calculus is fundamentally capable of being fully reflective.
If you were ever interested in creating a programming language that could fully reflect and enhance itself with libraries, this is one of very few known minimal systems you can use as a starting point. Think of it as a lambda calculus with macros built into the underlying calculus, not bolted on afterward based on a partially implemented meta-theory.
If you’re into formal proofs, you can find Rocq proofs of his work in his repo: https://github.com/barry-jay-personal/tree-calculus
Personal context: Barry Jay is a respected academic and researcher who’s collaborated with people like Simon Peyton Jones and Eugenio Moggi. His PhD advisor was Joachim Lambek (of the Curry-Howard-Lambek correspondence). He’s not a random professor with a neat toy; Barry’s been working with many of the best minds on the foundations of computation since long before most of us knew category theory existed. He’s been formalizing and defining pattern matching and higher-order patterns, and separating what is truly essential from what is not, for decades.
Seriously, look at his research history on Google Scholar.
I think it will take the rest of us a while to understand and unpack the insight he’s already imbued into such a small and simple calculus.
The reduction rules seem kind of arbitrary to me. At that point why don't you just use combinators instead of defining a set of 5 ways their operator can be used?
> f = λa λb concat ["Hello ",a," ",b,"!"]
> f "Jane" "Doe"
Hello Jane Doe!
then,
> g = f "Admiral"
> invert g "Hello Admiral Alice!"
Alice
He's really into the graphical representation of Turing machines and multiway Turing machines.
> Tree calculus is minimal, Turing-complete, reflective, modular
Ok. But what is it?
The Tao giveth △ (false)
△ giveth △ △ (true)
△(△, △) giveth rise to all things computable
(just kidding, I am totally lost to this)