The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
They even directly conclude at the end of the article that improvements in algorithm are more important than the choice of language:
> Algorithmic complexity improvements dominate language-level optimisations. Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.
Yet they still chose to put the “Rust rewrite” part in the title. I'd almost call it clickbait.
Yeah the algorithmic fix is doing most of the work here. But call that parser hundreds of times on tiny streaming chunks and the WASM boundary cost per call adds up fast. Same thing would happen with C++ compiled to WASM.
O(N²) -> O(N) was 3.3x faster, but before that, eliminating the boundary (replacing wasm with JS) led to speedups of 2.2x, 4.6x, 3.0x (see one table back).
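A minimal TypeScript sketch of what statement-level caching in a streaming parser might look like (parser details invented, not from the article; assumes statements end at `;`). Each new chunk only scans the unconsumed tail, so total work over the whole stream is O(N) instead of re-parsing the full buffer on every chunk:

```typescript
// Hypothetical sketch: cache completed statements so each streaming chunk
// only scans/parses the incomplete tail, turning O(N^2) total work into O(N).

type Statement = { text: string };

// Stand-in for an expensive per-statement parse.
function parseStatement(text: string): Statement {
  return { text };
}

class StreamingParser {
  private buffer = "";
  private parsed: Statement[] = [];
  private consumed = 0; // chars already turned into cached statements

  // Called for every incoming chunk; only the unconsumed tail is scanned.
  push(chunk: string): Statement[] {
    this.buffer += chunk;
    let start = this.consumed;
    for (let i = start; i < this.buffer.length; i++) {
      if (this.buffer[i] === ";") {
        this.parsed.push(parseStatement(this.buffer.slice(start, i + 1)));
        start = i + 1;
      }
    }
    this.consumed = start;
    // Complete statements come from the cache; only the tail is pending.
    return this.parsed;
  }
}

const p = new StreamingParser();
p.push("let a = 1;let b");
const stmts = p.push(" = 2;");
console.log(stmts.length); // 2 complete statements, each parsed exactly once
```

The naive version re-parses the whole accumulated buffer on every chunk, which is where the quadratic blowup comes from.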
It looks like neither is the "real win": both the language and the algorithm made a big difference, as you can see in the first column of the last table. Going to WASM was a big speedup, and improving the algorithm on top of that was another big speedup.
That's a pretty big claim. I don't doubt that a lot of uv's benefits are algo. But everything? Considering that running non IO-bound native code should be an order of magnitude faster than python.
Kinda is. We came up with abstractions to help reason about what really matters. The more you need to deal with auxiliary stuff (allocations, lifetimes), the more likely you are to miss the big issue.
"We rewrote this code from language L to language M, and the result is better!" No wonder: it was a chance to rectify everything that was tangled or crooked, avoid every known bad decision, and apply newly-invented better approaches.
So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.
This article is obviously AI generated and besides being jarring to read, it makes me really doubt its validity. You can get substantially faster parsing versus JSON.parse() by parsing structured binary data, and it's also faster to pass a byte array compared to a JSON string from wasm to the browser. My guess is not only this article was AI generated, but also their benchmarks, and perhaps the implementation as well.
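For illustration, a tiny sketch of the binary-layout idea (the record layout here is entirely made up, not from the article): reading fixed offsets with `DataView` skips string scanning altogether, which is why it can beat `JSON.parse()` for structured data.

```typescript
// Hypothetical fixed binary layout: 4-byte uint "id" + 4-byte float "score".
const buf = new ArrayBuffer(8);
const dv = new DataView(buf);
dv.setUint32(0, 7, true);    // "id" field, little-endian
dv.setFloat32(4, 1.5, true); // "score" field

// Decoding is a pair of fixed-offset reads; no tokenizing, no allocation
// beyond the result object.
function readRecord(view: DataView): { id: number; score: number } {
  return { id: view.getUint32(0, true), score: view.getFloat32(4, true) };
}

console.log(readRecord(dv)); // { id: 7, score: 1.5 }
```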
This is why, when a programming language already has compiler tooling, be it ahead-of-time or dynamic, it pays off to validate algorithms and data structures first, before a full rewrite.
Additionally, even after those options are exhausted, only a few key parts might need a rewrite, not the whole thing.
However, I wonder how many care about actually learning about algorithms, data structures and mechanical sympathy in the age of Electron apps.
Quite often it feels like a rewrite is chosen because actually applying those skills is the kind of CS knowledge many think isn't worth learning.
Not directly related to the post but what does OpenUI do? I'm finding it interesting but hard to understand. Is it an intermediate layer that makes LLMs generate better UI?
The WASM story is interesting from a security angle too. WASM modules inheriting the host's memory model means any parsing bugs that trigger buffer overreads in the Rust code could surface in ways that are harder to audit at the JS boundary. Moving to native TS at least keeps the attack surface in one runtime, even if the theoretical memory safety guarantees go down.
It's also worth underlining that it's not just that "the parsing computation is fast enough that V8's JIT eliminates any Rust advantage": specifically, this kind of straightforward code over well-defined data structures and mutation, without any strange eval paths or global access, is going to be JITed to near-native speed relatively easily.
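To illustrate (hypothetical code, not from the article): a hot loop like this, where every object literal has the same fields in the same order and property access stays monomorphic, is exactly the kind of code V8's optimizing tiers handle well.

```typescript
// All tokens share one object shape (same fields, same order), so every
// property load below hits a single hidden class and gets inlined.
interface Token { kind: number; start: number; end: number }

function tokenize(src: string): Token[] {
  const out: Token[] = [];
  for (let i = 0; i < src.length; i++) {
    const c = src.charCodeAt(i);
    // Identical literal shape on every iteration => one hidden class.
    out.push({ kind: c === 32 ? 0 : 1, start: i, end: i + 1 });
  }
  return out;
}

function countWords(tokens: Token[]): number {
  let words = 0;
  let inWord = false;
  for (const t of tokens) {      // monomorphic load of t.kind
    if (t.kind === 1 && !inWord) words++;
    inWord = t.kind === 1;
  }
  return words;
}

console.log(countWords(tokenize("fast well typed loops"))); // 4
```

Mix in `eval`, `with`, or objects of varying shape and the same loop falls off the fast path.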
I'm more of a dabbler/script guy than a dev, but every. single. thing. I ever write in JavaScript ends up being incredibly fast. It forces me to think in callbacks and events and promises.
Python and C (or async!) seem easy and sorta lazy in comparison.
JS and WASM share the main ArrayBuffer. It's just very un-JavaScript-like to try to use an ArrayBuffer heap, because then you don't have strings or objects, just (index, size) pairs into that ArrayBuffer.
Anyway, Javascript is no stranger to breaking changes. Compare Chromium 47 to today. Just add actual integers as another breaking change, then WASM becomes almost unnecessary.
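A sketch of the (index, size) heap style described above, using a plain `Uint8Array` as a stand-in for the WASM heap (helper names hypothetical):

```typescript
// Strings live as UTF-8 bytes inside one big heap buffer; consumers only
// ever see (offset, length) pairs, and JS decodes at the boundary.
const heap = new Uint8Array(1024); // stand-in for the WASM linear memory
const encoder = new TextEncoder();
const decoder = new TextDecoder();

// "Allocate" a string into the heap, returning an (offset, length) pair.
function putString(offset: number, s: string): [number, number] {
  const bytes = encoder.encode(s);
  heap.set(bytes, offset);
  return [offset, bytes.length];
}

const [off, len] = putString(0, "hello wasm");
// Decoding happens only when JS actually needs the string.
console.log(decoder.decode(heap.subarray(off, off + len))); // "hello wasm"
```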
Is this an outlier, or has Rust started to become part of the establishment and be 'old', so that people want to share their "moving away from Rust" stories?
I don't mind reading articles that aren't about how great Rust is in theory (and maybe practice).
I hope we can still get to a point where wasm modules can directly access the web platform APIs and get JS out of the picture entirely. After all, those APIs themselves are implemented in C++ (and maybe some Rust now).
One thing I noticed was that they time each call and then use a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.
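A sketch of one common mitigation (my example, not from the article): browsers deliberately reduce the resolution of `performance.now()` to blunt timing attacks, so timing one fast call per sample mostly measures the clock. Timing a batch and dividing amortizes the coarsening:

```typescript
// Time a batch of calls and report the mean per-call cost. Individual
// calls are never timed alone, so clock coarsening is spread over
// `iters` invocations instead of dominating each sample.
function bench(fn: () => void, iters = 10_000): number {
  const t0 = performance.now();
  for (let i = 0; i < iters; i++) fn();
  return (performance.now() - t0) / iters; // mean ms per call
}

const perCall = bench(() => JSON.parse('{"a":1}'));
console.log(perCall >= 0); // true; a tiny but now-measurable number
```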
> Title undersells the more interesting engineering imo.
Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.
This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.
https://open-ui.org/
Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.
Looks inside
“The old implementation had some really inappropriate choices.”
Every time.
AFAIK, you can create a shared memory block between WASM <-> JS:
https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
Then you'd only need to parse the SharedArrayBuffer at the end on the JS side
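A minimal sketch of that setup (runs in Node too; in a real app a WASM module would be instantiated with this memory as an import): a `WebAssembly.Memory` created with `shared: true` is backed by a `SharedArrayBuffer` that both sides can view without copying.

```typescript
// Shared linear memory: both WASM and JS see the same bytes, no copies.
// `shared: true` requires a `maximum` and yields a SharedArrayBuffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
console.log(memory.buffer instanceof SharedArrayBuffer); // true

// JS side writes through a typed-array view...
const view = new Int32Array(memory.buffer);
Atomics.store(view, 0, 42);

// ...and a WASM module importing this memory would read the same bytes
// at offset 0; here we just read them back from JS.
console.log(Atomics.load(view, 0)); // 42
```

Note that in browsers, `SharedArrayBuffer` requires cross-origin isolation (COOP/COEP headers).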
Claude tells me this is https://www.fumadocs.dev/
It was able to beat XZ at its own game by a good margin:
https://github.com/mohsen1/fesh
In their worst case it was just 5x. We've clearly made some progress here.