A big limitation for skills (or agents using browsers) is that the LLM is working against raw HTML, the DOM, or pixels. The new WebMCP API addresses this: apps register schema-validated tools via navigator.modelContext, so the agent has structured JSON to work with and can be far more reliable.
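As a sketch of what that registration could look like: the API is still being incubated, so the exact names (`navigator.modelContext.registerTool`, the tool-object shape) are assumptions based on the current explainer, not a settled interface.

```javascript
// Sketch of a WebMCP tool registration. The registerTool name and the
// tool-object shape follow the current explainer and may still change.
const archiveTool = {
  name: "archive_note",
  description: "Archive a note by its id",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string" } },
    required: ["id"],
  },
  // The agent calls execute() with JSON already validated against inputSchema.
  async execute({ id }) {
    return { content: [{ type: "text", text: `Archived note ${id}` }] };
  },
};

// Feature-detect: the API only exists in browsers shipping the experiment.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool(archiveTool);
}
```

The point is that the agent never touches the DOM: it sees a name, a description, and a JSON schema, and calls the tool with structured arguments.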
WebMCP is currently being incubated at the W3C [1], so if it lands as a proper browser standard, this becomes an endpoint every website can expose.
I think browser agents/skills plus WebMCP might actually be the killer app for local-first apps [2]. Remote APIs need hand-crafted endpoints for every possible agent action. A local DB exposed via WebMCP gives the agent generic operations (query, insert, upsert, delete) that it can freely compose into multi-step reads and writes, at zero latency and offline-capable. The agent operates directly on a data model rather than orchestrating UI interactions, which is what makes complex tasks actually reliable.
For example, the user can ask "Archive all emails I haven't opened in 30 days except from these 3 senders" and the agent then runs the NoSQL query and applies the updates locally.
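A minimal sketch of that flow, using a hypothetical in-memory mail store standing in for the local DB (the field names, sender list, and sample data are all made up):

```javascript
// Hypothetical local mail store, exposing the same generic query/update
// operations a local-first DB could register via WebMCP.
const mails = [
  { id: 1, from: "boss@x.com", lastOpened: Date.now(),              archived: false },
  { id: 2, from: "spam@y.com", lastOpened: Date.now() - 40 * 864e5, archived: false },
  { id: 3, from: "mom@z.com",  lastOpened: Date.now() - 90 * 864e5, archived: false },
];

const THIRTY_DAYS = 30 * 24 * 60 * 60 * 1000;
const keepSenders = new Set(["mom@z.com"]); // the "except these senders" list

// Step 1: a generic query -- everything not opened in 30 days,
// excluding the protected senders.
const stale = mails.filter(
  (m) => Date.now() - m.lastOpened > THIRTY_DAYS && !keepSenders.has(m.from)
);

// Step 2: a generic update over the query result. Two composed local
// operations with no network round-trip between them.
for (const m of stale) m.archived = true;
```

The agent composes two primitive operations instead of needing a bespoke "archive stale emails except senders" endpoint.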
OpenAPI is primarily for machine-to-machine communication, which needs determinism and is optimized for cases like timestamps in unix format with millisecond accuracy. MCP is optimized for a different use case, where the LLM has many limitations but a good "understanding" of text. Instead of sending { user: { id: 123123123123, first_name: "XYZYZYZ", last_name: "SDFSDF", gender: "...", ... } } you could return "Mr XYZYZYZ" or "Mrs XYZYZYZ".
The LLM doesn't need all those fields and can't parse them anyway without additional tools (e.g. why should it spend tokens even trying to convert a unix timestamp just to understand the time?).
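A sketch of that pre-digestion on the server side, assuming the field names from the example above (the title logic is obviously simplified for illustration):

```javascript
// Turn a verbose machine record into the short text an LLM actually needs.
// Field names (last_name, gender) mirror the example payload above.
function describeUser(user) {
  const title = user.gender === "female" ? "Mrs" : "Mr";
  return `${title} ${user.last_name}`;
}

// Pre-convert a unix-ms timestamp into readable text, so the model never
// has to spend tokens "reasoning" about a raw number like 1700000000000.
function describeTime(unixMs) {
  return new Date(unixMs).toISOString().slice(0, 16).replace("T", " ");
}
```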
So you're telling me we spent over a decade turning the browser from a sieve full of vulnerabilities into an impenetrable sandbox, and now we're directly introducing an APT?
Gah - what a dumb take. There's nothing APT-like about an agent you can invoke on a webpage to do things. If anything it's a fantastic accessibility win. Some people's critical thinking switches off when it comes to AI flows.
My most commonly repeated prompt; would be nice if they baked it into the tool itself:
"No emojis. be concise. no suggestions unless I explicitly ask for them. answer questions like the machine you are. Don't try and add personality or humour; remember you're a robot."
I know everyone hates ads or whatever, but why would anyone make content on their own website anymore if Google and the browser are doing everything in their power to keep users from interacting with your page? And I don't want to hear the complaints about ads being too invasive: it's their content, they can do that if they want, and you can choose not to access it. They have to monetize the page to sustain it, and if they make that annoying, it's their mistake to make; it doesn't give everyone else a right to their work.
Over the past few months, more than a few Google Doodles have simply been Gemini search prompts. This was extremely underwhelming as I usually expect a fun game or some kind of clever hack to ensue. I was also rather irate that Google could simply insert some false prompt into my Gemini conversation history. "I did not say that!"
Furthermore, it led me to muse whether "Prompt Gemini for " was a thing that any URL could do? If I went to a random malicious website, could they prompt Gemini to do something for me? If Gemini was hooked up to my Gmail, could a malicious prompt delete all my email, and all it would take is a misclick? Chilling.
These days announcements like this just make me want to put on my tinfoil hat - what's in it for Google, though? Why make it more convenient for people to submit webpages to you?
I would be more excited by this if there was a better permissions model for these things. For example I can think of a skill that would need access to a certain corpus of documents that I host on Google Drive, but, as far as I have been able to determine using Google's other AI products, there is no way for me to grant read-only access to that corpus without granting read-write access to all of my data on Google, which is simply too much access for my taste. There has to be something less binary than Personalization:on/off?
Tried to visit the first domain, baydailymedia, but it doesn't seem to exist... I know it's unsurprising and not against the rules, or even the spirit, of showing off your new toy, but there's some humor in the aria tag "Video of user creating a protein maxing Skill" followed, within the video itself, by a fat "Video for illustrative purposes", "Results may vary", "Check response for accuracy".
The second video seems more real. And yeah, again not against the rules, but dropping onto a website, no ads, and prompting data out of it is very much in the ethos of our current "let's just do an AI" to-stay-relevant era.
ChatGPT just introduced me to bookmarklets for scraping web pages with JavaScript. It's in that group of skills ChatGPT does very well: the prompt is just a few sentences and the results just work.
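For illustration, here is a tiny helper that turns any scraping snippet into a bookmarklet; the snippet itself (copying all link texts to the clipboard) is just an example:

```javascript
// Wrap a page-scraping snippet into a bookmarklet: an IIFE behind the
// javascript: URL scheme, URI-encoded so it survives being a URL.
function makeBookmarklet(code) {
  return "javascript:" + encodeURIComponent(`(() => { ${code} })();`);
}

// Example payload: collect the text of every link on the page and copy it.
const scrapeLinks = makeBookmarklet(
  `const t = [...document.querySelectorAll('a')].map(a => a.textContent).join('\\n');
   navigator.clipboard.writeText(t);`
);
```

Dragging the resulting `javascript:` URL into the bookmarks bar makes it a one-click tool on any page.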
My prompt collection lives in three different places right now: Raycast snippets, Apple Notes, and a Notion page that keeps growing. I know I wrote a good one for my git commit/push flow somewhere, but finding it when I need it usually takes longer than just rewriting it.
The browser approach makes sense for Claude Code and ChatGPT. I wonder how well it holds up once you have 50+ prompts, though; finding the right one fast is the real problem for me.
There is something comforting about seeing that SV has stopped having ideas and now just recycles and recombines the same tropes over and over again.
It's still all terrible, but it's a devil you know. You can live with that. You can skip the broken stair and duck, knowing exactly when they're trying to punch you in the face again.
Now here's hoping that eventually, they get bored and just stop entirely.
From a user's perspective, this is amazing. I love the idea and want to use this. However, as soon as Google ships something you can actually use, they either deprecate it, discontinue it, or change the pricing model in an unexpected way. So I'm always hesitant to commit to the Google solution.
I hate that. I understand that it might be useful, and tbh, on a personal PC, I'm not even concerned. But it is heading towards people pushing to replace XQL or other query languages with prompting in natural language, for no good reason. Generate your query and copy-paste it if you don't want to read the documentation, man, but please, please keep an intermediary between the LLM and the real-world data. The last time your fucking prompt gave me a "log overview" I lost 2 hours understanding what the fuck I was reading, when a query would have taken me at most 20 minutes.
Convert my AI prompt into the code for a one-click tool, let me read and share it, that would be _great_.
Why are they eating, again and again, into user territory? What's left for the average Joe? Time to remove the browser from Alphabet. End of story.
I highly doubt that prompts are that valuable, considering the inconsistent responses from LLMs to repeated queries. Besides, they are easily reproduced...
This sounds to me like yet another way to automate filling out forms. I had been thinking about vibe-coding a Chrome extension for one form I fill in regularly, but perhaps this is easier.
So much of the web has no API anymore and is hostile to robots.
The script to turn the coffee maker on when dad posts on Facebook for the first time each morning that worked in 2014 won't work anymore in 2026.
Having this sort of thing built into a mainstream browser will open up a new avenue for automation, which I think will be a good thing for breaking down data silos and being good for the world overall.
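The form-filling extension idea above boils down to a small content script. A sketch, with made-up selectors and values, assuming pages that react to input/change events rather than raw `.value` writes:

```javascript
// Core of a form-filling content script: map CSS selectors to values,
// set each field, and fire the events that frameworks listen for.
function fillForm(values, doc = document) {
  for (const [selector, value] of Object.entries(values)) {
    const el = doc.querySelector(selector);
    if (!el) continue; // skip fields missing on this page
    el.value = value;
    // Many pages (React forms especially) only notice input/change events,
    // not direct .value assignment.
    el.dispatchEvent(new Event("input", { bubbles: true }));
    el.dispatchEvent(new Event("change", { bubbles: true }));
  }
}

// Hypothetical usage with invented selectors:
// fillForm({ "#name": "Ada Lovelace", "#email": "ada@example.com" });
```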
- [1] https://webmachinelearning.github.io/webmcp/
- [2] https://rxdb.info/webmcp.html
>Remote APIs need hand-crafted endpoints for every possible agent action.
They already need a remote API for every possible user action. MCP is just duplicate work.
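To make the "duplicate work" concrete: where the remote endpoint already exists, the MCP side can be a thin generic wrapper rather than a second implementation. The endpoint URL and tool names here are hypothetical:

```javascript
// Wrap an existing REST endpoint as an MCP-style tool. fetchImpl is
// injectable so the wrapper can be tested without a network.
function wrapEndpoint(name, description, inputSchema, url, fetchImpl = fetch) {
  return {
    name,
    description,
    inputSchema,
    async execute(args) {
      const res = await fetchImpl(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(args), // tool args pass straight through
      });
      return { content: [{ type: "text", text: await res.text() }] };
    },
  };
}

// Hypothetical existing endpoint, re-exposed as a tool in a few lines:
const archiveEmail = wrapEndpoint(
  "archive_email",
  "Archive one email by id",
  { type: "object", properties: { id: { type: "string" } }, required: ["id"] },
  "/api/archive"
);
```

Whether that counts as wasteful duplication or a cheap adapter layer is exactly the disagreement in this thread.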
I can see the appeal of this feature and I am generally speaking an AI booster.
On the other hand...like...wat? This feature feels way too premature and risky to let loose on the public.
Skills in Chrome are rolling out on Mac, Windows and ChromeOS to users with their Chrome language set to English-US.