The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
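To make the caching point concrete, here is a minimal sketch of what "statement-level caching during streaming" could look like. Everything here (the `StreamingParser` class, `;` as a statement terminator, `parseStatement` as a stand-in for the real parser) is invented for illustration, not taken from the article's code:

```typescript
// Hypothetical sketch: instead of re-parsing the entire buffer on every
// streamed chunk (O(N²) total work), cache results for statements that
// are already complete and only re-parse the trailing partial statement.

type Statement = { text: string };

function parseStatement(text: string): Statement {
  return { text }; // stand-in for the real parser
}

class StreamingParser {
  private buffer = "";
  private cached: Statement[] = []; // completed statements, parsed once
  private consumed = 0;             // offset of the first unparsed char

  push(chunk: string): Statement[] {
    this.buffer += chunk;
    // Only text after `consumed` is ever re-scanned, so total work
    // across all chunks stays O(N).
    let end: number;
    while ((end = this.buffer.indexOf(";", this.consumed)) !== -1) {
      const text = this.buffer.slice(this.consumed, end + 1);
      this.cached.push(parseStatement(text));
      this.consumed = end + 1;
    }
    // The trailing partial statement is re-parsed on each push, but its
    // length is bounded by one statement, not the whole document.
    const partial = this.buffer.slice(this.consumed);
    return partial
      ? [...this.cached, parseStatement(partial)]
      : [...this.cached];
  }
}
```

The naive version re-parses the whole accumulated buffer on every chunk, which is where the quadratic blowup comes from during streaming.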
"We rewrote this code from language L to language M, and the result is better!" No wonder: it was a chance to rectify everything that was tangled or crooked, avoid every known bad decision, and apply newly-invented better approaches.
So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.
This article is obviously AI generated, and besides being jarring to read, it makes me really doubt its validity. You can get substantially faster parsing versus JSON.parse() by parsing structured binary data, and it's also faster to pass a byte array than a JSON string from wasm to the browser. My guess is that not only was this article AI generated, but so were their benchmarks, and perhaps the implementation as well.
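For what the binary-parsing claim means in practice: decoding a known fixed layout with `DataView` skips text parsing entirely. The record layout below (little-endian u32 id + f64 value) is invented for illustration, not anything from the article:

```typescript
// Sketch: decode a fixed binary record layout instead of parsing JSON
// text. A wasm module could write these bytes into a shared buffer and
// JS would decode them directly, with no string in between.

interface Rec { id: number; value: number }

const RECORD_SIZE = 12; // 4 bytes u32 + 8 bytes f64

function encodeRecords(records: Rec[]): ArrayBuffer {
  const buf = new ArrayBuffer(records.length * RECORD_SIZE);
  const view = new DataView(buf);
  records.forEach((r, i) => {
    view.setUint32(i * RECORD_SIZE, r.id, true);        // little-endian
    view.setFloat64(i * RECORD_SIZE + 4, r.value, true);
  });
  return buf;
}

function decodeRecords(buf: ArrayBuffer): Rec[] {
  const view = new DataView(buf);
  const out: Rec[] = [];
  for (let off = 0; off + RECORD_SIZE <= buf.byteLength; off += RECORD_SIZE) {
    out.push({
      id: view.getUint32(off, true),
      value: view.getFloat64(off + 4, true),
    });
  }
  return out;
}
```

Whether this beats JSON.parse() for a given workload depends on the data shape; the win is largest for flat, homogeneous records.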
This is why, when a programming language already has compiler tooling, be it ahead-of-time or dynamic, it pays off to first validate algorithms and data structures before committing to a full rewrite.
Additionally, even after those options are exhausted, only a few key parts might need a rewrite, not the whole thing.
However, I wonder how many care about actually learning about algorithms, data structures and mechanical sympathy in the age of Electron apps.
Quite often it feels like a rewrite is chosen because actually applying those skills is the CS stuff many think isn't worth learning.
Not directly related to the post but what does OpenUI do? I'm finding it interesting but hard to understand. Is it an intermediate layer that makes LLMs generate better UI?
The WASM story is interesting from a security angle too. WASM modules inheriting the host's memory model means any parsing bugs that trigger buffer overreads in the Rust code could surface in ways that are harder to audit at the JS boundary. Moving to native TS at least keeps the attack surface in one runtime, even if the theoretical memory safety guarantees go down.
It's also worth underlining that it's not just that "the parsing computation is fast enough that V8's JIT eliminates any Rust advantage", but specifically that this kind of straightforward, well-defined data structure manipulation, without any strange eval paths or global access, is going to be JITed to near-native speed relatively easily.
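A small illustration of the kind of code that hits the JIT's fast path (the `Point`/`sumX` names are just an example): a hot loop over objects that all share one stable shape, with no dynamic property access or eval.

```typescript
// V8-friendly code: every object in the array has the same hidden
// class, so the property load `p.x` stays monomorphic and the inline
// cache hits on every iteration. This is the pattern that gets
// compiled down to near-native machine code.

interface Point { x: number; y: number }

function sumX(points: Point[]): number {
  let total = 0;
  for (const p of points) total += p.x; // single shape -> IC hit
  return total;
}
```

Mixing object shapes at the same call site (e.g. sometimes `{x}`, sometimes `{y, x}`) would make the site polymorphic and noticeably slower, which is exactly the kind of hazard the parser rewrite apparently avoids.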
I’m more of a dabbler dev/script guy than a dev but Every. single. thing I ever write in javascript ends up being incredibly fast. It forces me to think in callbacks and events and promises.
Python and C (or async!) seem easy and sorta lazy in comparison.
JS and WASM share the main arraybuffer. It's just very not-javascript-like to try to use an arraybuffer heap, because then you don't have strings or objects, just index,size pairs into that arraybuffer.
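What those (index, size) pairs look like on the JS side, as a sketch: the wasm module writes UTF-8 bytes into its linear memory and hands JS an offset and a length, and JS decodes a view on demand. Here a plain `ArrayBuffer` stands in for wasm linear memory:

```typescript
// Decoding an (offset, length) pair out of a shared buffer. No string
// exists until JS asks for one; until then it's just bytes.

const decoder = new TextDecoder();

function readString(memory: ArrayBuffer, offset: number, length: number): string {
  // Uint8Array here is a view, not a copy, of the underlying buffer.
  return decoder.decode(new Uint8Array(memory, offset, length));
}

// Simulate the wasm side writing "hello" at offset 8:
const memory = new ArrayBuffer(64);
new Uint8Array(memory).set(new TextEncoder().encode("hello"), 8);
```

This is exactly the "not-javascript-like" part: every string access goes through a decode step instead of just holding a JS string.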
Anyway, Javascript is no stranger to breaking changes. Compare Chromium 47 to today. Just add actual integers as another breaking change, then WASM becomes almost unnecessary.
Is this an outlier, or has Rust become part of the establishment, 'old' enough that people want to share their "moving away from Rust" stories?
I didn't mind reading articles that are not about how Rust is great in theory (and maybe practice).
I hope we can still get to a point where wasm modules can directly access the web platform APIs and get JS out of the picture entirely. After all, those APIs themselves are implemented in C++ (and maybe some Rust now).
This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.
https://open-ui.org/
Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.
Looks inside
“The old implementation had some really inappropriate choices.”
Every time.
AFAIK, you can create a shared memory block between WASM <-> JS:
https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
Then you'd only need to parse the SharedArrayBuffer at the end on the JS side.
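A minimal sketch of that idea, with the wasm/worker side simulated inline. Note that `SharedArrayBuffer` only exists on cross-origin-isolated pages (COOP/COEP headers), which is a real deployment constraint:

```typescript
// One shared block of memory visible to both sides; no copying or
// serialization at the boundary. Here both "sides" run in one thread
// purely for illustration.

const shared = new SharedArrayBuffer(1024);
const ints = new Int32Array(shared);

// "wasm/worker side": write results directly into shared memory.
Atomics.store(ints, 0, 42); // Atomics for cross-thread visibility
Atomics.store(ints, 1, 7);

// "JS side": read the same memory without any postMessage copy.
const total = Atomics.load(ints, 0) + Atomics.load(ints, 1);
```

With a real worker or wasm thread you'd also need `Atomics.wait`/`Atomics.notify` (or message passing) to signal when the data is ready.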
Claude tells me this is https://www.fumadocs.dev/
It was able to beat XZ at its own game by a good margin:
https://github.com/mohsen1/fesh
In their worst case it was just 5x. We clearly have some progress here.