We rewrote our Rust WASM parser in TypeScript and it got faster (openui.com)

by zahlekhan 212 comments 293 points
Read article View on HN

212 comments

[−] blundergoat 57d ago
The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
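As I read it, the statement-level caching fix amounts to remembering already-completed statements so each new streaming chunk only rescans the unfinished tail, instead of reparsing the whole buffer every time. A toy sketch of that idea (names and the `;` delimiter are mine, not the article's):

```typescript
// Hypothetical sketch: cache parsed statements so each streaming chunk
// only reparses the unfinished tail, turning O(N^2) total work into O(N).
type Statement = { text: string };

class StreamingParser {
  private parsed: Statement[] = []; // cache of completed statements
  private tail = "";                // unfinished trailing input

  // Feed one chunk; only tail + chunk is rescanned, not the whole buffer.
  push(chunk: string): Statement[] {
    const input = this.tail + chunk;
    const parts = input.split(";");
    this.tail = parts.pop() ?? ""; // last piece may be incomplete
    for (const p of parts) {
      if (p.trim().length > 0) this.parsed.push({ text: p.trim() });
    }
    return this.parsed;
  }
}
```

The naive version reparses the full accumulated string on every chunk, which is where the quadratic blowup comes from.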
[−] zahrevsky 56d ago
They even directly conclude at the end of the article that improvements in algorithm are more important than the choice of language:

> Algorithmic complexity improvements dominate language-level optimisations. Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.

Yet they still chose to put the “Rust rewrite” part in the title. It's practically clickbait.

[−] nulltrace 56d ago
Yeah the algorithmic fix is doing most of the work here. But call that parser hundreds of times on tiny streaming chunks and the WASM boundary cost per call adds up fast. Same thing would happen with C++ compiled to WASM.
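One common mitigation (not something the article does, as far as I can tell) is to buffer tiny chunks on the JS side and cross the expensive boundary once per batch:

```typescript
// Hypothetical sketch: amortize a per-call boundary cost by buffering
// small streaming chunks and calling into the (imagined) WASM parser
// once per flush instead of once per chunk.
type ParseFn = (input: string) => unknown;

function makeBatcher(parseInWasm: ParseFn, flushSize: number) {
  let buffer = "";
  const flush = () => {
    if (buffer.length === 0) return;
    parseInWasm(buffer); // one boundary crossing covers many chunks
    buffer = "";
  };
  const push = (chunk: string) => {
    buffer += chunk;
    if (buffer.length >= flushSize) flush();
  };
  return { push, flush };
}
```

The trade-off is latency: batching delays when parsed output becomes visible, which may defeat the point of streaming in the first place.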
[−] hrmtst93837 56d ago
[flagged]
[−] azakai 56d ago
O(N²) -> O(N) was 3.3x faster, but before that, eliminating the boundary (replacing wasm with JS) led to speedups of 2.2x, 4.6x, 3.0x (see one table back).

It looks like neither is the "real win". Both the language and the algorithm made a big difference, as you can see in the first column of the last table - going to wasm was a big speedup, and improving the algorithm on top of that was another big speedup.

[−] hrmtst93837 56d ago
[flagged]
[−] socalgal2 57d ago
Same for uv, but no one takes away that message. They just think "rust rulez!" and ignore that all of uv's benefits are algo, not lang.
[−] estebank 57d ago
Some architectures are made easier by the choice of implementation language.
[−] coldtea 56d ago
Just the fact that I can install a single binary is 10x better than an equally fast Python implementation.
[−] rowanG077 56d ago
That's a pretty big claim. I don't doubt that a lot of uv's benefits are algo. But everything? Considering that running non IO-bound native code should be an order of magnitude faster than python.
[−] catlifeonmars 56d ago
You’re not wrong, but that win would not get as many views. It’s not clickbaity enough
[−] wolvesechoes 56d ago

> The real win here isn't TS over Rust

Kinda is. We came up with abstractions to help us reason about what really matters. The more you need to deal with auxiliary stuff (allocations, lifetimes), the more likely you are to miss the big issue.

[−] sroussey 57d ago
Yeah, though the n^2 is overstating things.

One thing I noticed was that they time each call and then use a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.
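For what it's worth, the usual workaround for coarsened browser timers is to time a batch of many iterations per sample and divide, rather than timing individual calls; a sketch:

```typescript
// Sketch: estimate per-call cost despite coarse timer resolution by
// timing many iterations per sample instead of a single call. With
// performance.now() rounded to ~0.1ms or worse, a single sub-millisecond
// call often measures as 0.
function timePerCall(fn: () => void, iterations: number): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsed = performance.now() - start;
  return elapsed / iterations; // average milliseconds per call
}
```

Taking a median over single-call samples just gives you the median of quantization noise.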

[−] adastra22 56d ago
No AI generated comments on HN please.
[−] Aurornis 57d ago

> Title undersells the more interesting engineering imo.

Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.

[−] shmerl 57d ago
More like a misleading clickbait.
[−] nine_k 57d ago
"We rewrote this code from language L to language M, and the result is better!" No wonder: it was a chance to rectify everything that was tangled or crooked, avoid every known bad decision, and apply newly-invented better approaches.

So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.

[−] slopinthebag 56d ago
This article is obviously AI generated, and besides being jarring to read, it makes me really doubt its validity. You can get substantially faster parsing than JSON.parse() by parsing structured binary data, and it's also faster to pass a byte array than a JSON string from wasm to the browser. My guess is that not only was the article AI generated, but the benchmarks were too, and perhaps the implementation as well.
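To make the binary point concrete: for fixed-layout records you can decode fields straight out of bytes with a DataView, no JSON text involved. A tiny sketch (the record layout here is made up):

```typescript
// Sketch: decode a made-up fixed binary record (u32 id + f64 value)
// directly from bytes with a DataView, skipping string parsing entirely.
function decodeRecord(buf: ArrayBuffer): { id: number; value: number } {
  const view = new DataView(buf);
  return {
    id: view.getUint32(0, true),     // little-endian u32 at offset 0
    value: view.getFloat64(4, true), // little-endian f64 at offset 4
  };
}
```

The same 12 bytes as JSON would be a ~30-character string that has to be tokenized and number-parsed.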
[−] spankalee 57d ago
I was wondering why I hadn't heard of Open UI doing anything with WASM.

This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.

https://open-ui.org/

Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.

[−] moomin 56d ago
“We saw huge speed-ups when changing technology.”

Looks inside

“The old implementation had some really inappropriate choices.”

Every time.

[−] pjmlp 56d ago
This is why, when a programming language already has compiler tooling, be it ahead-of-time or dynamic, it pays off to first validate algorithms and data structures before committing to a full rewrite.

Additionally, even after those options are exhausted, only key parts might need a rewrite, not the whole thing.

However, I wonder how many care about actually learning about algorithms, data structures and mechanical sympathy in the age of Electron apps.

It often feels like a rewrite is chosen because knowing how to actually apply those skills is the CS stuff many think isn't worth learning.

[−] gavinray 56d ago
Why weren't you able to use WASM shared heaps to get zero-copy behavior?

AFAIK, you can create a shared memory block between WASM <-> JS:

https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...

Then you'd only need to parse the SharedArrayBuffer at the end on the JS side
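For reference, the JS side of that handshake looks roughly like this (a sketch, not the article's code):

```typescript
// Sketch: a shared WebAssembly.Memory whose backing buffer is a
// SharedArrayBuffer, visible to both JS and a WASM module instantiated
// with the same memory, with no copying across the boundary.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4, shared: true });
const bytes = new Uint8Array(memory.buffer); // view over the shared buffer
bytes[0] = 7; // a write here is visible to WASM code using the same memory
```

Caveat: shared memory requires cross-origin isolation headers (COOP/COEP) in browsers, which may be why it wasn't an option for them.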

[−] vmsp 56d ago
Not directly related to the post but what does OpenUI do? I'm finding it interesting but hard to understand. Is it an intermediate layer that makes LLMs generate better UI?
[−] envguard 56d ago
The WASM story is interesting from a security angle too. WASM modules inheriting the host's memory model means any parsing bugs that trigger buffer overreads in the Rust code could surface in ways that are harder to audit at the JS boundary. Moving to native TS at least keeps the attack surface in one runtime, even if the theoretical memory safety guarantees go down.
[−] athrowaway3z 56d ago
It's also worth underlining that it isn't just that "the parsing computation is fast enough that V8's JIT eliminates any Rust advantage"; it's specifically that this kind of straightforward, well-defined data-structure manipulation, with no strange eval paths or global access, gets JITed to near-native speed relatively easily.
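A concrete version of that point: if every parse node has the same fields, in the same order, with the same types, the engine can specialize property access for one hidden class. A toy sketch (not the article's code):

```typescript
// Sketch: shape-stable nodes (same fields, same order, same types) are
// the kind of code V8 compiles down to fast monomorphic property access.
interface ParseNode { kind: string; start: number; end: number }

function makeNode(kind: string, start: number, end: number): ParseNode {
  return { kind, start, end }; // every node gets the same hidden class
}

function totalSpan(nodes: ParseNode[]): number {
  let sum = 0;
  for (const n of nodes) sum += n.end - n.start; // monomorphic field reads
  return sum;
}
```

Mix in nodes with extra or reordered fields and the same loop goes polymorphic and slows down.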
[−] horacemorace 56d ago
I’m more of a dabbler dev/script guy than a dev but Every. single. thing I ever write in javascript ends up being incredibly fast. It forces me to think in callbacks and events and promises. Python and C (or async!) seem easy and sorta lazy in comparison.
[−] Dwedit 56d ago
JS and WASM share the main arraybuffer. It's just very not-javascript-like to try to use an arraybuffer heap, because then you don't have strings or objects, just index,size pairs into that arraybuffer.
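To illustrate the (index, size) style: all string data lives in one byte heap, and JS only materializes an actual string on demand. A sketch:

```typescript
// Sketch: keep strings in one shared byte heap and pass around
// (offset, length) pairs instead of JS string objects. This mirrors how
// a WASM module would hand string references to JS.
const heap = new Uint8Array(1024);
let top = 0;

function intern(s: string): { offset: number; length: number } {
  const bytes = new TextEncoder().encode(s);
  heap.set(bytes, top);
  const ref = { offset: top, length: bytes.length };
  top += bytes.length;
  return ref;
}

function read(ref: { offset: number; length: number }): string {
  // Decoding is the only point where a real JS string is created.
  return new TextDecoder().decode(heap.subarray(ref.offset, ref.offset + ref.length));
}
```

Which is exactly the ergonomics problem: every comparison, slice, or log call forces a decode back into string-land.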

Anyway, Javascript is no stranger to breaking changes. Compare Chromium 47 to today. Just add actual integers as another breaking change, then WASM becomes almost unnecessary.

[−] bulbar 56d ago
Is this an outlier, or has Rust started to become part of the establishment, 'old' enough that people now want to share their "moving away from Rust" stories?

I don't mind reading articles that aren't about how great Rust is in theory (and maybe in practice).

[−] dmix 57d ago
That blog post design is very nice. I like the 'scrollspy' sidebar which highlights all visible headings.

Claude tells me this is https://www.fumadocs.dev/

[−] mohsen1 56d ago
When there is a solid test harness, AI Coding can do magic!

It was able to beat XZ on its own game by a good margin:

https://github.com/mohsen1/fesh

[−] mwcampbell 56d ago
I hope we can still get to a point where wasm modules can directly access the web platform APIs and get JS out of the picture entirely. After all, those APIs themselves are implemented in C++ (and maybe some Rust now).
[−] fHr 56d ago
I almost can't believe that swc, for example, is 80x faster than babeljs.
[−] gettingoverit 56d ago
In ye olden days, when WASM had just been added to browsers, the difference between native JS and boost::spirit in WASM was 200x.

In their worst case it was just 5x. We clearly have made some progress here.

[−] LunaSea 56d ago
This has been known by Node.js developers for a while, with many C++ core and NPM modules having been rewritten in JavaScript to improve performance.
[−] caderosche 57d ago
What is the purpose of the Rust WASM parser? Didn't understand that easily from the article. Would love a better explanation.
[−] hackwaly_new 54d ago
You don't need a rewrite if you're using MoonBit. It gives you wasm, wasm-gc, and js at once.