This would be _extremely_ valuable for desktop dev, where you don't have a DOM or "accessibility" layer to interrogate. Think e.g. a drawing application. You want to test that after the user starts the "draw circle" command and clicks two points, there is actually a circle on the screen. No matter how many abstractions you build over your domain model and rendering, you can't actually test that "the user sees a circle". You can verify your drawing contains a circle object. You can verify your renderer was told to draw a circle. But fifty things can go wrong before the user actually agrees he saw a circle (the color was set to transparent, the layer was hidden, the transform was incorrect, the renderer didn't swap buffers, ...).
This is a good point. For anything without a DOM, screenshot diffing is basically your only option. Mozilla did this for Gecko layout regression testing 20+ years ago and it was remarkably effective. The interesting part now is that you can feed those screenshots to a vision model and get semantic analysis instead of just pixel diffing.
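Even the plain pixel-diff half of that takes only a few lines; here's a minimal sketch with Pillow (the library choice and the change threshold are my assumptions, not anything proofshot does):

```python
# Minimal screenshot-diff gate: fail if the new render strays too far from the baseline.
# Assumes Pillow is installed and both images have the same dimensions.
from PIL import Image, ImageChops

def screenshots_match(baseline_path: str, current_path: str, max_changed_ratio: float = 0.001) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.diff(baseline, current)
    # Count pixels where any channel differs from the baseline.
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    return changed / (diff.width * diff.height) <= max_changed_ratio

if __name__ == "__main__":
    print(screenshots_match("baseline.png", "current.png"))
```

The vision-model step would then only run on screenshots that fail (or pass suspiciously) this cheap deterministic check.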
I had claude build a backdoor command port in the Godot application I'm working on. Using commands, Claude can interact with the screen, dump the node tree, and take screen shots. It works pretty well. Claude will definitely iterate over layout issues.
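A rough sketch of what the agent-facing side of such a command port could look like; the port number, wire format, and command names here are purely illustrative, not the commenter's actual protocol:

```python
# Hypothetical client for a debug "command port" exposed by a Godot game.
# The game would listen on localhost and answer newline-delimited JSON commands;
# port 6007 and the command names are made up for illustration.
import json
import socket

def send_command(command: str, **args) -> dict:
    with socket.create_connection(("127.0.0.1", 6007), timeout=5) as sock:
        sock.sendall((json.dumps({"cmd": command, "args": args}) + "\n").encode())
        return json.loads(sock.makefile().readline())

tree = send_command("dump_node_tree")          # inspect the scene graph
send_command("click", x=320, y=240)            # simulate input
send_command("screenshot", path="frame.png")   # capture what is actually rendered
```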
Yes agree. Web only for now since it runs on headless Chromium. Desktop and mobile are the #1 request though. For mobile the path would be driving an iOS Simulator or Android emulator. For native desktop, probably accessibility APIs or OS-level screenshots. Definitely on my radar, will see if anyone wants to contribute since I am doing this on my free time.
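For the mobile path, the raw screenshot plumbing already exists in the stock tooling; a hedged sketch using only the standard simulator/emulator CLIs (not something proofshot ships today):

```python
# Grab screenshots from a booted iOS Simulator and a running Android emulator,
# using only the stock `xcrun simctl` and `adb` command-line tools.
import subprocess

def ios_simulator_screenshot(path: str) -> None:
    subprocess.run(["xcrun", "simctl", "io", "booted", "screenshot", path], check=True)

def android_emulator_screenshot(path: str) -> None:
    with open(path, "wb") as out:
        subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=out, check=True)

ios_simulator_screenshot("ios.png")
android_emulator_screenshot("android.png")
```

The harder part is driving interaction (taps, text entry), which is where the simulator/emulator automation layers would come in.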
I've always found screenshots on PRs incredibly helpful as a reviewer. Historically I've had mixed success getting my team to consistently add screenshots to PRs, so this tool would be helpful even for human code.
At work, we've integrated claude code with gitlab issues/merge requests, and we get it to screenshot anything it's done. We could use the same workflow to screenshot (or in this case, host a proofshot bundle of) _any_ open PR. You would just get the agent to check out any PR, get proofshot to play around with it, then add that as a comment. So not automated code reviews, which are tiresome, but more like a helpful comment with more context.
Going to try out proofshot this week, if it works like it does on the landing page it looks great.
I built something like this for native application, so that I could get automated feedback loop for the agent instead of making screenshots manually etc. Problem I found is that AI agent understands nothing of the UI. If you tell it "Make buttons evenly spaced", sure it will space them evenly, but without care for the context they are placed in. You have to describe the image yourself and still you'll find it having hard time understanding what's going on. I very much abandoned the idea of AI driven UI development as it is not there yet. I tried with GPT 5.2. Maybe newer models have improved.
I'm going the opposite way of what everyone else is saying.
This is sick, OP. Based on what's in the document, it looks really useful when you need to quickly fix something and validate that nothing has changed in the UI/workflow except what you asked for.
Also looks useful for PRs: a before and after of what changed.
I usually ask Claude Code to setup a software stack that can build/run whatever I am working on. Then I let it browse a website or navigate through screens. I also use Playwright to get screenshots of the website I am building. For e.g. apps or whatever application you are building, there should be a way to get screenshots too I guess.
An added benefit is that when Claude navigates and finds a bug, it will either add it to a list for human review or fix it automatically.
Pretty much a loop where building and debugging work together ;-)
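A minimal version of that Playwright screenshot step; the localhost URL and routes are placeholders for whatever the agent is building:

```python
# Walk a few routes of a locally running web app and save a screenshot of each,
# so the agent can look at what it actually built. URL and routes are placeholders.
from playwright.sync_api import sync_playwright

ROUTES = ["/", "/settings", "/profile"]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    for i, route in enumerate(ROUTES):
        page.goto(f"http://localhost:3000{route}")
        page.screenshot(path=f"shot_{i}.png", full_page=True)
    browser.close()
```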
I'm currently experimenting with running a web app "headless" in Node.js by implementing some of the DOM JS functions myself, then writing mocks for keyboard input, etc. Then I have the code agent run the headless client, which also starts the tests. In my experience the coding agents are very bad at detecting UX issues; they can, however, write the tests for me if I explain what's wrong. So I'm the eyes and it's my taste, and the agent writes the tests and the code.
Everyone is comparing this to Playwright but it's solving a different problem. Playwright checks structural properties, like does element X exist, is it visible, etc. That's useful but it can't tell you whether the page actually looks right.
I built something similar that takes a screenshot and uses a multi-modal LLM to evaluate it against a design mock. It catches a completely different class of error. The DOM can be structurally perfect and still look nothing like what was intended. Colors wrong, layout shifted, spacing off, components overlapping. No amount of DOM assertions will catch that.
These are two different kinds of gates: structural checks, which are fast and deterministic, and stochastic visual checks, which are slow but catch a completely different class of problem. There is very little overlap between the issues they find, and you want to catch both.
That way I can invest a lot of time getting the mock just right, then let the agents "make it so".
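A hedged sketch of that mock-vs-screenshot check, using the OpenAI Python SDK as a stand-in for whichever multi-modal model you prefer; the model name, prompt wording, and pass/fail convention are all my assumptions:

```python
# Ask a vision model whether the rendered screenshot matches the design mock.
# Model name, prompt, and the PASS convention are illustrative, not a real product's API.
import base64
from openai import OpenAI

def as_data_url(path: str) -> str:
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Image 1 is the design mock, image 2 is the rendered page. "
                                     "List layout, color, or spacing differences, or reply PASS."},
            {"type": "image_url", "image_url": {"url": as_data_url("mock.png")}},
            {"type": "image_url", "image_url": {"url": as_data_url("render.png")}},
        ],
    }],
)
print(response.choices[0].message.content)
```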
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.
I give the agent either a simple browser or Playwright access to a proper browser to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.
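A small sketch of that feedback loop: collect console errors while the page loads and pair them with a screenshot for the agent (the localhost URL is a placeholder):

```python
# Give the agent both the pixels and the console: capture JS errors while the
# page loads, then screenshot it. The localhost URL is a placeholder.
from playwright.sync_api import sync_playwright

errors = []
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("console", lambda msg: errors.append(msg.text) if msg.type == "error" else None)
    page.on("pageerror", lambda exc: errors.append(str(exc)))
    page.goto("http://localhost:3000", wait_until="networkidle")
    page.screenshot(path="state.png")
    browser.close()

print("\n".join(errors) or "no console errors")
```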
I've been using playwright-cli (not the MCP) for this same purpose. It lacks the video feature, I guess, but at least it's local and doesn't depend on even more third parties (in your case, Vercel). Perhaps you could allow a local solution as an alternative as well?
It lets agents drive terminal apps via a pty, take screenshots, send keystrokes, and record everything as an asciicast.
Basically the same idea as proofshot, but for TUI/CLI apps instead of browser UI. I've been using it to have agents prove their work actually works when submitting PRs.
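The terminal flavour of the idea is easy to sketch with pexpect; the program being driven and the strings expected/sent below are just placeholders:

```python
# Drive a terminal UI over a pty: launch it, wait for it to draw, send keystrokes,
# and capture the raw output an agent (or a terminal renderer) can inspect.
import pexpect

child = pexpect.spawn("vim", dimensions=(24, 80), encoding="utf-8", timeout=10)
child.expect("VIM")            # wait for the intro screen to render
screen = child.before + child.after
child.send(":q\r")             # drive it with keystrokes, here: quit
child.expect(pexpect.EOF)
print(screen[:400])            # raw terminal output captured so far
```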
I'd love to see an agent doing work, then launching app on iOS sim or Android emu to visually "use" the app to inspect whether things work as expected or not.
This is really cool. Have you thought of maybe accessing the screen through accessibility APIs? For Android mobile devices I have a skill I created that accesses the screen XML dump as part of feature development, and it seems to work much better than screenshots/videos. Is this scalable to other OSes?
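For reference, that Android screen XML dump is uiautomator under the hood; a minimal sketch of pulling it over adb:

```python
# Dump the current Android screen as an XML view hierarchy and pull it to the host,
# so an agent can read structure instead of pixels. The device path is the uiautomator default.
import subprocess

def dump_android_ui(local_path: str = "window_dump.xml") -> str:
    subprocess.run(["adb", "shell", "uiautomator", "dump", "/sdcard/window_dump.xml"], check=True)
    subprocess.run(["adb", "pull", "/sdcard/window_dump.xml", local_path], check=True)
    with open(local_path, encoding="utf-8") as f:
        return f.read()

print(dump_android_ui()[:300])
```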
How do you handle logged in sessions/user authentication?
I built something much much more primitive, but I have it actually storing session data in the local project folder and then re-using those cookies so the agent can log in without issue.
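A minimal sketch of that cookie-reuse pattern, assuming a Playwright-based setup; the file name, URLs, selectors, and credentials are placeholders:

```python
# Log in once, persist cookies/localStorage next to the project, and reuse that state
# on later runs so the agent never has to re-authenticate. Paths and selectors are placeholders.
import os
from playwright.sync_api import sync_playwright

AUTH_FILE = "auth_state.json"

with sync_playwright() as p:
    browser = p.chromium.launch()
    if os.path.exists(AUTH_FILE):
        context = browser.new_context(storage_state=AUTH_FILE)   # reuse the saved session
    else:
        context = browser.new_context()
        page = context.new_page()
        page.goto("http://localhost:3000/login")
        page.fill("#email", "agent@example.com")
        page.fill("#password", "correct horse battery staple")
        page.click("button[type=submit]")
        page.wait_for_url("**/dashboard")
        context.storage_state(path=AUTH_FILE)                    # persist cookies + localStorage
    page = context.new_page()
    page.goto("http://localhost:3000/dashboard")
    page.screenshot(path="dashboard.png")
    browser.close()
```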
Great to see this but exe.dev (not sponsored but they are pretty cool and I use them quite often, if they wish to sponsor me that would be awesome haha :-]) actually has this functionality natively built in.
But it's great to see some other open-source alternatives in this space as well.
This is actually interesting. Feels like we’re moving from “generate UI” to “validate UI,” which is a completely different problem. Curious how you handle edge cases where something looks correct but breaks in interaction?
This is basically what Antigravity (Google's Windsurf) ships with. Having more options to add this functionality to opencode / Claude Code for local models is really awesome. MIT license too!
I am fed up with getting gaslit by coding assistants. "Your AI agent says it's done" really is a problem! Nice packaging here.
I built something similar[0] a few months ago but haven't maintained it because Codex UI and Cursor have _reasonable_ tooling for this themselves now IMO.
That said there is still a way to go, and space for something with more comprehensive interactivity + comparison.

[0] - https://magiceyes.dev/
Not to pile on, but I was using Claude Code through the native application and it started doing exactly this on its own, side by side with my prompt, running the server and taking screenshots in the native app. Claude also just launched its own browser control, and while it will take time to mature, I assume any AI company will have this feature in their crosshairs.
From a product design perspective, this looks pretty cool!
https://github.com/microsoft/playwright-cli
I don't think you need either, though, because agent-browser itself has a skill for this: https://github.com/vercel-labs/agent-browser/blob/main/skill...
Maybe the author would like to compare the three.
Can anyone recommend a browser-based instant-preview site for web UI design with a more artistic/experimental bent?
https://simonwillison.net/2026/Feb/10/showboat-and-rodney/
My Claude drives its own Brave browser autonomously, even for UI?