This is really interesting. At first glance, I was tempted to say "why not just use sqlite with JSON fields as the transfer format?" But everything about that would be heavier-weight in every possible way - and if I'm reading things right, this handles nested data that might itself be massive. This is really elegant.
While this is a neat feature, it means this is not in fact a drop-in replacement for JSON.parse, as you will be breaking any code that relies on that result being a mutable object.
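To illustrate the kind of code that would break (a generic sketch, not RX's actual behavior — any parser returning a frozen object or a write-rejecting Proxy has the same issue):

```typescript
// Plenty of existing code mutates the result of JSON.parse:
const config = JSON.parse('{"retries": 3, "tags": []}');
config.retries = 5;        // fine: JSON.parse returns plain mutable objects
config.tags.push("prod");  // fine

// A parser that instead returns a read-only view breaks that same code:
const readonlyConfig = Object.freeze({ retries: 3, tags: [] as string[] });
// readonlyConfig.retries = 5; // TypeError in strict mode, silent no-op otherwise
```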
I love these projects, and I hope one of them someday emerges as the winner. As it clearly motivates all these libraries' authors, there's so much low-hanging fruit: changing JSON's wire format wins a lot for free while keeping the "Good Parts", like the dead-simple generic typing.
XML has EXI (Efficient XML Interchange) for precisely the reason of getting wins over the wire but keeping the nice human readable format at the ends.
Interesting. I've heard about cursors in reference to a Rust library that was mentioned as being similar to protobuf and cap'n proto.
Does this duplicate the name of keys? Say if you have a thousand plain objects in an array, each with a "version" key, would the string "version" be duplicated a thousand times?
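For what it's worth, a common trick in binary formats (I don't know whether RX actually does this) is to intern keys in a string table: each unique key is stored once and objects reference it by index. A minimal sketch:

```typescript
// Hypothetical sketch of key interning, not RX's actual encoding:
// each unique string goes into a table once; objects store indices.
function encodeWithStringTable(objs: Record<string, unknown>[]) {
  const table: string[] = [];
  const index = new Map<string, number>();
  const intern = (s: string): number => {
    let i = index.get(s);
    if (i === undefined) {
      i = table.length;
      table.push(s);
      index.set(s, i);
    }
    return i;
  };
  const encoded = objs.map(o =>
    Object.entries(o).map(([k, v]) => [intern(k), v] as const)
  );
  return { table, encoded };
}

// A thousand objects with a "version" key store that string only once:
const objs = Array.from({ length: 1000 }, (_, i) => ({ version: i }));
const { table, encoded } = encodeWithStringTable(objs);
// table is ["version"]: one copy, not a thousand
```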
Another project a lot of people aren't aware of, even though they've benefited from it indirectly, is the binary format for OpenStreetMap. It allows reading the data without loading much of it into memory, and is a lot faster than using sqlite would be.
JSON's dominance is one of the most accidental success stories in computing.
Douglas Crockford didn't design it — he said he "discovered" it. It was already there in JavaScript's object literal syntax, which itself traces back to Brendan Eich's 10-day sprint in 1995.
A data format that conquered the internet was a side effect of a language built under absurd time pressure.
Every attempt to replace it has to overcome that kind of accidental ubiquity, which is much harder than overcoming a technical limitation.
A tiny note on the speed comparison: The 23,000x faster single-key lookup seems a bit misleading to me.
Once you have a computational-complexity advantage, you can make the speedup as many times larger as you want just by growing the input. In these cases, small instances matter for judging the constants, and for the average (mean?) user, mean instance sizes matter.
I'm not sure how to sell the advantage succinctly, though. Maybe just focus on "real-world" scenarios, but there's no footnote with details on the comparison.
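To make the asymptotics point concrete (a toy illustration of the cost model, not RX's implementation): a single-key lookup through JSON.parse pays for the whole document every time, while anything that keeps per-key offsets pays a one-time indexing cost and then answers each lookup from a table, so the measured speedup grows with document size instead of being a fixed constant.

```typescript
// Toy illustration of why "23,000x faster" depends on input size.
const big = JSON.stringify(
  Object.fromEntries(Array.from({ length: 100_000 }, (_, i) => [`k${i}`, i]))
);

// Baseline: each lookup re-parses the entire document, O(n) per lookup.
function lookupViaParse(doc: string, key: string): number {
  return (JSON.parse(doc) as Record<string, number>)[key];
}

// Indexed: parse once up front, then each lookup is one hash probe, O(1).
const index = new Map<string, number>(
  Object.entries(JSON.parse(big)) as [string, number][]
);

const a = lookupViaParse(big, "k42"); // touches the whole ~megabyte of text
const b = index.get("k42");           // one Map probe
```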
The documentation references a “decode” function, and it’s imported in the example code, but it’s never called. I’m not sure what the API is after reading the examples.
Could this be useful for embedding info in server-generated web pages that is then picked up by JavaScript? E.g. a tom-select country picker that gets its data from an embedded RX structure?
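One plausible shape for that (a sketch only — `decode` here stands in for whatever RX's real entry point is, which I haven't checked): the server base64-encodes the binary payload into a script tag, and the client turns it back into bytes before decoding.

```typescript
// Server side would emit something like:
//   <script id="countries" type="application/octet-stream;base64">AQID...</script>

// Client side: base64 -> bytes, then hand the bytes to the (hypothetical)
// RX decode function. Only the byte conversion is shown concretely.
function base64ToBytes(b64: string): Uint8Array {
  const raw = atob(b64); // available in browsers and Node 16+
  return Uint8Array.from(raw, c => c.charCodeAt(0));
}

// In the page, you'd read the tag's text and decode:
//   const el = document.getElementById("countries")!;
//   const data = decode(base64ToBytes(el.textContent!.trim()));

const bytes = base64ToBytes("AQID"); // bytes 1, 2, 3
```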
I recently created my own low-overhead binary JSON because I did not like Mongo's BSON (too hacky, not mergeable). It took me maybe half a day, including the spec (thanks, Claude). First I implemented the critical feature I actually need, then made all the other decisions in the least surprising way.
At this point we probably have to think about how to classify all the "JSON alternatives", because it's getting difficult to remember them all.
My one eyebrow-raise: is there no binary format specification? https://github.com/creationix/rx/blob/main/rx.ts#L1109 is pretty well commented, but you can't call it a JSON alternative without having some kind of equivalent to https://www.json.org/ in all its flowchart glory!
One old version that is meant to be more human readable/writable is jsonito
https://github.com/creationix/jsonito
I'll add similar diagrams and docs for the format itself here.
https://github.com/creationix/rx/blob/main/docs/rx-format.md
Railroad diagrams will come later when I have more time.
This did catch my eye, however: https://github.com/creationix/rx?tab=readme-ov-file#proxy-be...
Edit: the rust library I remember may have been https://rkyv.org/
Even a technically superior format struggles without that ecosystem.
Docs are super unclear.
Is it versioned? Or does it need to be?
Why is it called RX?
The viewer is cool; it took me a while to find the link to it, though. Maybe add a link in the readme next to the screenshot.
Is RX a subset, a superset or bijective to JSON?
https://github.com/gritzko/librdx/tree/master/json