- The docs.rs docs are still building, but the docs from the recent RC are available [0]
- The Slint project has an example of embedding Servo into Slint [1], which is a good example of how to use the embedding API and should be relatively easy to adapt to any other GUI framework that renders using wgpu.
- Stylo [2] and WebRender [3] have both also been published to crates.io and can be useful standalone (Stylo has actually been getting monthly releases for about a year, but we never really publicised that).
- Ongoing releases on a monthly cadence are planned
Tangent, but Slint is a really cool project. Not being able to dynamically insert widgets from code was the only thing that turned me off of it for my use case.
Agreed, I find Slint really interesting. To me, the biggest pain point is the very limited theming support. It's virtually impossible to make a custom theme without re-implementing most widget logic, which is a shame.
Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc… instead of fighting the borrow checker gatekeepers.
It is the style I prefer when using Rust. Coming from Python, TypeScript, and even Java, even this high-level Rust already yields an incredible improvement.
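For illustration, the style being described looks roughly like this; a made-up sketch (types and names are invented, not taken from the actual source):

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Owned value types instead of borrowed lifetimes: every field owns its data,
// so structs stay 'static and are easy to pass around.
#[derive(Clone)]
struct Page {
    url: String,
    title: String,
    headers: HashMap<String, String>,
}

// Shared ownership via Rc<T> (or Arc<T> across threads) sidesteps the
// "who holds the original, and for how long?" questions entirely.
struct History {
    entries: Vec<Rc<Page>>,
}

impl History {
    fn push(&mut self, page: &Page) {
        // Cloning liberally: a few extra allocations in exchange for never
        // having to thread lifetimes through the API.
        self.entries.push(Rc::new(page.clone()));
    }

    fn titles(&self) -> Vec<String> {
        self.entries.iter().map(|p| p.title.clone()).collect()
    }
}

fn main() {
    let mut history = History { entries: Vec::new() };
    let page = Page {
        url: "https://servo.org".to_string(),
        title: "Servo".to_string(),
        headers: HashMap::new(),
    };
    history.push(&page);
    println!("{:?}", history.titles());
}
```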
> Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc… instead of fighting the borrow checker gatekeepers.
Yeah that tracks because the AI is dumb as a bag of bricks. It can apply patterns off stackoverflow, but can hardly understand the borrow checker.
It depends on stuff like SpiderMonkey so not pure Rust.
It should be able to render JavaScript but I've seen it throw bugs on simple pages, no doubt because my vibe-coded thing is crap not because Servo itself can't handle them.
I have been building/vibecoding a similar tool and unfortunately came to the conclusion that, in practice, there are just too many features dependent on the full Chrome stack, so it's more pragmatic to use a real Chromium installation despite the file size. Performance/image generation speed is still fine, though.
I think you could in theory have a similar WebKit-based stripped-down headless crate that might have a good tradeoff of features, performance, and size.
That's pretty cool. I'm guessing it would need some tweaking to handle things like cookies, or does it just need a pointer to the cookie jar? I'm not too familiar with Servo.
This should be the real benchmark of AI coding skills: how fast do we get the safe, modern infrastructure and tooling that everyone agrees we need but whose development nobody can fund?
If Anthropic wants marketing for Mythos without publishing it, show us a Servo contrib log or something like that. It aligns nicely with their fundamental infrastructure safety goals.
I'd trust that way more than an x% increase on y bench.
Hire a core contributor on Servo or Rust, give them unlimited model access, and let's see how far we get with each release.
As I see it, the focus should not be on the coding, but on the testing, and particularly the security evaluation. Especially for critical infrastructure, I would want us to have a testing approach so reliable that it wouldn't matter who or what wrote the code.
I have been thinking about that lately, and isn't testing and security evaluation a much harder problem than designing and carefully implementing new features? I think vibecoding automates the easiest step in software development while making the more challenging/expensive steps harder. How are we supposed to debug complex problems in critical infrastructure if no one understands the code? It is possible that in the future agents will be able to do that, but it feels to me that we are not there yet.
AI as advanced fuzz-testing is ridiculously helpful though - hardly any bug you can find in this sort of advanced system is a specification logic bug. It's low-level security-based stuff: finding ways to DDoS a local process, working around OS-level security restrictions, etc.
I'm kind of doubtful that AI is all that great at fuzz testing. Putting that aside though, we are talking about web browsers here. Security issues from bad specification or misunderstanding the specification are relatively common.
I disagree. Thorough testing provides some level of confidence that the code is correct, but there's immense value in having infrastructure which some people understand because they wrote it. No amount of process around your vibe slop can provide that.
That's just the status quo, which isn't really holding up in the modern era IMO.
I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.
I somewhat agree, but even then I would argue that the proper level at which this understanding should reside is the architecture and data-flow invariants, rather than the code itself. And these can actually be enforced quite well as tests against human-authored diagrammatical specs.
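As a hypothetical sketch of what "enforcing an architectural invariant as a test" could look like: a plain Rust test that checks a dependency-direction rule over the source tree. The layer names (src/ui, src/storage) and the rule itself are invented for illustration:

```rust
// Hypothetical architecture test: "nothing under src/ui/ may depend on src/storage/".
// The invariant comes from a human-authored architecture spec; the test just
// enforces it mechanically over whatever code (human- or machine-written) lands in the tree.
use std::fs;
use std::path::{Path, PathBuf};

fn source_files(dir: &Path, out: &mut Vec<PathBuf>) {
    for entry in fs::read_dir(dir).expect("readable dir") {
        let path = entry.expect("dir entry").path();
        if path.is_dir() {
            source_files(&path, out);
        } else if path.extension().map_or(false, |e| e == "rs") {
            out.push(path);
        }
    }
}

#[test]
fn ui_layer_does_not_depend_on_storage_layer() {
    let mut files = Vec::new();
    source_files(Path::new("src/ui"), &mut files);
    for file in files {
        let text = fs::read_to_string(&file).expect("readable file");
        assert!(
            !text.contains("crate::storage"),
            "architecture violation: {} references the storage layer",
            file.display()
        );
    }
}
```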
If you don't fully understand the code, how do you know it implements your architecture exactly, and without doing it in a way that has implications you hadn't thought of?
As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps my prompting didn't explain things well enough, but how was I to know I had failed without reading the code?
Exactly. We do not have any artifact other than code that can be deterministically converted into a program. That is the reason we still have to read the code. The prompt is not the final product of the development process.
Well if the big players want to tell me their models are nearly AGI they need to put up or shut up. I don't want a stochastically downloaded C compiler. I want tech that improves something.
> We do not need vibe-coded critical infrastructure.
I think when you have virtually unlimited compute, it affords the ability to really lock down test writing and code review to a degree that isn't possible with normal vibe code setups and budgets.
That said for truly critical things, I could see a final human review step for a given piece of generated code, followed by a hard lock. That workflow is going to be popular if it already isn't.
It might when an individual function has 50 different models reviewing it, potentially multiple times each.
Perhaps part of a complex review chain for said function that's a few hundred LLM invocations total.
So long as there's a human reviewing it at the end and it gets locked, I'd argue it ultimately doesn't matter how the code was initially created.
There's a lot of reasons it would matter before it gets to that point, just more to do with system design concerns. Of course, you could also argue safety is an ongoing process that partially derives from system design and you wouldn't be wrong.
I do not care how strong your vibes are and how many claudes you have producing slop and reviewing each others' slop. I do not think vibe coding is appropriate for critical infrastructure. I don't understand why you think telling me you'd have more slop would make me appreciate it more.
The problem with such infrastructure is not the initial development overhead.
It's the maintenance. The long term, slow burn, uninteresting work that must be done continually. Someone needs to be behind it for the long haul or it will never get adopted and used widely.
Right now, at least, LLMs are not great at that. They're great for quickly creating smaller projects. They get less good the older and larger those projects get.
Replicating Rust would also be a good one. There are many Rust-adjacent languages that ought to exist and would greatly benefit mankind if they were created.
For those of you using a browser to generate PDFs, the Rust crate you should look into is Typst [1]. Regardless of your application language, you can use their CLI.
It takes some time to get used to their DSL to write PDFs, but nowadays with AI that shouldn't take too long.
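For example, if your application language happens to be Rust, shelling out to the Typst CLI is only a few lines. A rough sketch (file names and document content are made up; it assumes a `typst` binary on PATH and uses the `typst compile <input> <output>` form of the CLI):

```rust
use std::fs;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // A minimal Typst document (Typst markup, not HTML/CSS).
    let doc = "\
= Monthly report

Generated without a browser engine.
";
    fs::write("report.typ", doc)?;

    // Invoke the Typst CLI; assumes `typst` is installed and on PATH.
    let status = Command::new("typst")
        .args(["compile", "report.typ", "report.pdf"])
        .status()?;
    assert!(status.success(), "typst compile failed");
    Ok(())
}
```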
Is there a table of implemented RFCs? Something similar to http://caniuse.com where we can see which HTML/JS/CSS standards and features are implemented? If it exists, I can't seem to find it. The closest thing seems to be the "experimental features" page, but it's not quite detailed enough.
So, since this is the top post on Hacker News, and the website's description is a bit too high level for me, what does Servo let me do? By "web technologies", does it mean "put a web browser inside your desktop app"?
> As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo
Wait, crate versions go up to 1.0?
EDIT: Sorry, while crate stability may be an interesting conversation, this isn't the place for it. But I can't delete this comment. Please downvote it. Mods feel free to delete or demote it.
I was a little curious to see if there was any Tauri integration, and it looks like there is (tauri-runtime-verso) ... Not sure where that comes out size-wise compared to Electron at that point though. My main desire there would be for Linux/flathub distribution of an app I've been working on.
It's a great move. The early development of Rust aimed to support Servo. However, it's still disappointing that the script engine uses SpiderMonkey, which is purely C++.
[0]: https://docs.rs/servo/0.1.0-rc2/servo
[1]: https://github.com/slint-ui/slint/tree/master/examples/servo
[2]: https://docs.rs/stylo
[3]: https://docs.rs/webrender
Do you know if Servo is 100% Rust with no external system dependencies? (i.e., can it get away with rustls only?)
Can this do JavaScript? (Edit: Rendering SPAs / JavaScript-only UX would be useful.)
Edit 2: Can it do WebGL? Same rationale for ThreeJS-style apps and 3D renders. (This in particular is right up my use case's alley.)
In Rust, the chromiumoxide crate is a performant way to interface with it for screenshots: https://crates.io/crates/chromiumoxide
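For reference, taking a screenshot with chromiumoxide looks roughly like the sketch below. This is from memory of its async API, so treat the exact module paths and builder options as assumptions; it also assumes chromiumoxide is built with its tokio runtime feature and that a local Chrome/Chromium install is available:

```rust
use chromiumoxide::browser::{Browser, BrowserConfig};
use chromiumoxide::cdp::browser_protocol::page::CaptureScreenshotFormat;
use chromiumoxide::page::ScreenshotParams;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Launch a headless Chromium; the handler must be polled to drive CDP messages.
    let (mut browser, mut handler) =
        Browser::launch(BrowserConfig::builder().build()?).await?;
    let driver = tokio::spawn(async move { while handler.next().await.is_some() {} });

    // Navigate and capture a full-page PNG.
    let page = browser.new_page("https://servo.org").await?;
    page.save_screenshot(
        ScreenshotParams::builder()
            .format(CaptureScreenshotFormat::Png)
            .full_page(true)
            .build(),
        "servo.png",
    )
    .await?;

    browser.close().await?;
    let _ = driver.await;
    Ok(())
}
```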
> there are just too many features dependent on the full Chrome stack
Do you mind elaborating on what features are missing?
At some point security becomes: the program does the thing the human asked for, which they didn't realize they didn't actually want.
No amount of testing can fix logic bugs due to bad specification.
Each of the last 4 comments in your thread (including yours) is conflating what it means by AI.
But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.
And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.
It occurred to me there's some recent prior art here:
https://news.ycombinator.com/item?id=47721953
It's probably fair to say the Linux kernel is critical infra, or at least a component piece in a lot of it.
In the not so distant future you'll probably be one of the few who haven't had their actual coding skills atrophy, and that's a good thing.
Hiring a few core devs to work on it should be a rounding error to Anthropic and a huge flex if they are actually able to deliver.
> show us a Servo contrib log or something like that
Servo may not be the best project for this experiment, as it has a strict policy that no AI contributions are allowed.
I read the link twice and no AI or LLM is mentioned. I don't know why people are so eager to chime in and try to steer the conversation towards AI.
[1]: https://crates.io/crates/typst