The Vercel plugin on Claude Code wants to read your prompts (akshaychugh.xyz)

by akshay2603 111 comments 280 points

[−] embedding-shape 36d ago

> skills are injected into sessions that have nothing to do with Vercel, Next.js, or this plugin's scope

> every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope

> For users working across multiple projects (some Vercel, some not), this is a fixed ~19k token cost on every session — even when the session is pure backend work, data science, or non-Vercel frontend.

I know everything is vibeslopped nowadays, but how does one even end up shipping something like this? Checking that your plugin/extension/mod works in the contexts you want, and doesn't impact the contexts you don't, seems like the very first step in even creating such a thing. "Where did the engineering go?" even feels too complicated; where did even the smallest amount of thinking go?

[−] hyperhopper 36d ago
Your comment assumes the plugin is not working as they want it to. The way it is designed gets them the maximum amount of data. It does a great job if that is their goal.
[−] embedding-shape 36d ago
Yes, I'm assuming good intentions and trying to take a charitable perspective on everything, unless there is specific evidence pointing to something else. Is there any evidence of this being intentional?

Seems to me their engineering practices just suck, rather than the company suddenly wanting to slurp up as much data as possible. If they truly wanted that, they have about 10 better approaches for it, if they don't care about other things.

[−] Kwpolska 36d ago
Why would you assume good intentions of any business in this day and age?
[−] embedding-shape 36d ago
Because I'm a nice person, and want to give other nice people the benefit of the doubt. And most businesses are run by people after all, not hard to imagine at least some of them would be "nice people" too.

And frankly, the alternative would be too mentally taxing. So in the camp of "Good until proven otherwise" is where I remain for now.

[−] mbesto 36d ago

> Is there any evidence of this being intentional?

The evidence is in the code! If you didn't intend for a capability to be there then why is it in the code?

> if they truly wanted that, they have about 10 better approaches for it, if they don't care about other things.

How so? What other approaches do they have that get this much data with little potential for reputational harm? This is a very common way to create plausible deniability ("we use it for improving our service, we don't know what we'll need so we just take everything and figure it out later") and then just revert the capability when people complain.

[−] processunknown 36d ago

> Is there any evidence of this being intentional?

A Vercel engineer commented "overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything."

[−] pyb 36d ago
Why are you still assuming good intentions of Vercel? This was them less than a month ago: https://vercel.com/changelog/updates-to-terms-of-service-mar...
[−] bdangubic 36d ago
can you name one of these 10 better approaches?
[−] serial_dev 36d ago
Well, unfortunately people always tend to only spend time verifying that the feature they wanted works, testing the happy path. Even many superficial bosses / code reviewers / QA testers will check this...

Checking if your code also gets executed elsewhere a bazillion times, checking failure cases, etc... That's a luxury that you feel you can't afford when you are in "ship fast, break things" mode.

[−] chuckadams 36d ago

> I know everything is vibeslopped nowadays, but how does one even end up shipping something like this?

The first part of your question answers the second. No one is left who cares. People are going to have to vote with their feet before that changes.

[−] elAhmo 36d ago
No worries, they acquired Bun because they seem to be super thoroughly invested in the whole ecosystem and engineering excellence of their tools.
[−] tracerbulletx 36d ago
Engineers were holding back the ocean because no one could even make sense of what they did, even a little bit, so they had power. Now they just threaten to use AI to do what they want unless you do it for them. The leverage is gone, the dike is burst.
[−] p_stuart82 36d ago
19k tokens per session and the skill triggers don't even check project scope. you're paying that overhead on every non-vercel repo
[−] acedTrex 36d ago

> Checking if your plugin/extension/mod works

What makes you think they do this with any of their products these days?

[−] nothinkjustai 36d ago
Honestly, knowing some of the people who work for Vercel and the amount of vibe coding they do, I doubt anyone even checked this before pushing.
[−] throwaway613746 36d ago
[dead]
[−] potter098 36d ago
[flagged]
[−] abelsm 36d ago
The breach of trust here, which is hard to imagine isn't intentional, is enough reason alone to stop using Vercel, and uninstall the plugin. That part is easy. Most of these agents can help you migrate if anything.

The question is whether these platforms are going to enforce their policies for plugins. For Claude Code in particular, this behavior explicitly violates their plugin policy (1D) here: https://support.claude.com/en/articles/13145358-anthropic-so...

It's a really tough problem, but Anthropic is the company I'd bet on to approach this thoughtfully.

[−] btown 36d ago
To be sure, the problem isn't that the plugin injects behavior into the system prompt - that's every plugin and skill, ever.

But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:

> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.

(Needless to say, this is a supply chain attack in every meaningful way, and should be treated as such by security teams.)

And the argument that there's no CLI space to allow for opt-in telemetry is absurd - their readme https://github.com/vercel/vercel-plugin?tab=readme-ov-file#i... literally has you install the Vercel plugin by calling npx https://www.npmjs.com/package/plugins which is written by a Vercel employee and could add this opt-in at any time.

IMO Vercel is not a good actor. One could make a good argument that they've embrace-extend-extinguished the entire future of React as an independent and self-contained foundational library, with the complexity of server-side rendering, the undocumented protocols that power it, and the resulting tight coupling to their server environments. Sadly, this behavior doesn't surprise me.

EDIT: That npx plugins code? It's not on Github, exists only on NPM, and as of v1.2.9 of that package, if you search https://www.npmjs.com/package/plugins?activeTab=code it literally sends telemetry to https://plugins-telemetry.labs.vercel.dev/t already, on an opt-out basis! I mean, you have to almost admire the confidence.

[−] guessmyname 36d ago
I use Little Snitch and so far I have only seen Claude Code connect to api.anthropic.com and Sentry for telemetry. I have not seen any Vercel connections, but I always turn off telemetry in everything before I run it. If you log in with OAuth2, it also connects to platform.claude.com . For auto updates, it fetches release info from raw.githubusercontent.com and downloads the actual files from storage.googleapis.com. I think it also uses statsig.anthropic.com for stats. One weird thing, I did see it try to connect to app.nucleus.sh once, and I have no idea why.

Here are some environment variables you might want to set, if you're as paranoid as me:

  ANTHROPIC_LOG="debug"
  CLAUDE_CODE_ACCOUNT_UUID="11111111-1111-1111-1111-111111111111"
  CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING="1"
  CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY="1"
  CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
  CLAUDE_CODE_DISABLE_TERMINAL_TITLE="1"
  CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION="false"
  CLAUDE_CODE_ORGANIZATION_UUID="00000000-0000-0000-0000-000000000000"
  CLAUDE_CODE_USER_EMAIL="root@anthropic.com"
  DISABLE_AUTOUPDATER="1"
  DISABLE_ERROR_REPORTING="1"
  DISABLE_FEEDBACK_COMMAND="1"
  DISABLE_TELEMETRY="1"
  ENABLE_CLAUDEAI_MCP_SERVERS="false"
  IS_DEMO="1"
[−] nothinkjustai 36d ago
I’ve often seen people say that AI is a multiplier, where a 2x dev becomes a 4x dev, but a -1x dev becomes a -2x dev, etc.

I think it’s fairly easy to tell what impact AI is having at Vercel. Knowing the pre-AI quality of the engineering at that company, I’m not surprised that in the AI era they’re pushing stuff like this. I doubt anyone even thought to check it on a repo outside of a Vercel one.

[−] sibeliuss 36d ago
This thing is horrible.

If you have it installed, it will silently inject a warning into Claude that you should use Tailwind, even if your app doesn't! Then every single request will silently question the decision as to why your app is using one thing rather than another, leading to revisions as it starts writing incorrect code.

I couldn't believe it when I discovered it. For so many reasons I am vehemently anti Vercel. Just discovered this two days ago, after installing their frontend skill.

[−] an0malous 36d ago
That whole company is built on sketchy practices
[−] croemer 36d ago
The article doesn't link to the code that shows all bash tool uses are sent to Vercel servers by default, i.e. even without opt-in.

Here's the relevant line as a GitHub permalink: https://github.com/vercel/vercel-plugin/blob/b95178c7d8dfb2d...

[−] hybirdss 36d ago
I ship Claude Code skills and hooks, so I've hit this from the other side — there's no way for users to verify what my hooks do without reading the source. The permission model is basically "install and hope."

Anthropic already has the right policy — 1D says "must not collect extraneous conversation data, even for logging purposes." But there's no enforcement at the architecture level. An empty matcher string still gives a hook access to every prompt on every project. The rules exist on paper but not in code.

The fix is what VS Code solved years ago: hook declarations should include a file glob or dependency gate, and plugin-surfaced questions should have visual attribution so users know it's not Claude asking.
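The gating described above could look something like this minimal sketch. Note that the `matcher` field mirrors how Claude Code hook configs match tool names today, but the `scope` file-glob gate is hypothetical, not an actual Claude Code feature:

```python
import fnmatch
from pathlib import Path

def hook_applies(hook: dict, tool_name: str, project_root: str) -> bool:
    """Sketch of a scoped hook check. Today, an empty matcher string
    matches every tool call on every project; a (hypothetical) 'scope'
    glob would additionally require a matching file in the project
    before the hook fires."""
    matcher = hook.get("matcher", "")
    # Current behavior: an empty matcher matches everything.
    if matcher and matcher != tool_name:
        return False
    # Hypothetical dependency gate: only activate if the project
    # actually contains a file matching the declared scope.
    scope = hook.get("scope")
    if scope is None:
        return True  # no gate declared: falls back to current behavior
    return any(fnmatch.fnmatch(p.name, scope)
               for p in Path(project_root).rglob("*"))
```

With a gate like this, a Vercel hook scoped to `vercel.json` would simply never fire in a pure backend or data-science repo.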

[−] heliumtera 36d ago
Oh boy, the guy in the middle wants to take advantage of you! Surprising stuff.

You always had the option to not, ever, touch Vercel.

[−] jp57 36d ago
AI tools right now remind me of the old days of single-user PC/Mac operating systems without protected memory or preemptive multitasking. You could read any file, write directly to video memory, load machine code into the heap and then jump to it, etc.
[−] RadiozRadioz 36d ago
The HN headline transformer has mangled this one. The _all your prompts_ part of the original title was important.

@dang

[−] akshay2603 32d ago
OP here. Update: all four issues identified in the blog post have now been fixed by the Vercel team.

https://akshaychugh.xyz/writings/png/vercel-plugin-telemetry...

[−] awestroke 36d ago
We're moving away from Vercel for an increasing number of reasons. But the Vercel brand has fallen so far that we're also moving away from any open source projects they have had any part in creating. The company is almost revolting.
[−] chaisan 36d ago
we moved our whole org off Vercel after that selfie Rauch put out. rotten company, overpriced product for what it is, sneaky practices. never looked back.
[−] nisegami 36d ago
This and the comments here make me even more sad that they ended up acquiring the Nuxt project/team :(
[−] Surac 36d ago
I still use Claude to code in a very stone-age way. I copy C code into the web site/desktop app and type in my prompt. Then I read the output and, if I like it, I copy-paste it into my code. I always felt very old doing it that way when things like Claude Code exist. Now I feel somehow not so old. All this hacking into my private space via a development tool is insane. Also, I do not use Git.
[−] infecto 36d ago
Every single scam website I have gotten from spam text messages is being hosted on vercel. Not surprising.
[−] cush 36d ago
If there were any semblance of liability for software engineering firms things like this wouldn’t happen
[−] gronky_ 36d ago
Mobile rendering of the post has some issues. Tables are overflowing and not responsive for example
[−] samarth0211 35d ago
this is really very helpful. Thanks for sharing
[−] kyleee 35d ago
Vercel is poison
[−] gverrilla 36d ago
once you accept genocide, anything passes.
[−] stpedgwdgfhgdd 36d ago
“We collect the native tool calls and bash commands”

Holy shit, I can't imagine this holds for every bash command Claude Code executes. That would be terrible, probably violating GDPR. (The command could contain email addresses etc.)

I must be wrong.

[−] michiosw 36d ago
This is a broader pattern I keep seeing with agent plugins/extensions — the permission model is "all or nothing." Once you install a plugin, it gets full context on every session, every prompt.

Compare this to how we think about OAuth scopes or container sandboxing — you'd never ship a CI integration that gets read access to every repo in your org just because it needs to lint one. But that's essentially what's happening here with the token injection across all sessions.

The real problem isn't Vercel specifically, it's that Claude Code's plugin architecture doesn't have granular activation scopes yet. Plugins should declare which project types they apply to and only activate in matching contexts. Until that exists, every plugin author is going to make this same mistake — or exploit it.
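One shape such activation scopes could take, as a rough sketch: the plugin declares the project signals it cares about, and the harness checks them before injecting anything. The manifest fields and helper below are hypothetical, not anything Claude Code supports today:

```python
import json
from pathlib import Path

# Hypothetical plugin manifest: the plugin declares the project
# signals it cares about, instead of activating everywhere.
VERCEL_PLUGIN_MANIFEST = {
    "activation": {
        "files": ["vercel.json", "next.config.js", "next.config.mjs"],
        "dependencies": ["next", "@vercel/node"],
    }
}

def should_activate(manifest: dict, project_root: str) -> bool:
    """Activate only when the project shows one of the declared signals:
    a marker file at the root, or a declared package.json dependency."""
    root = Path(project_root)
    act = manifest.get("activation", {})
    if any((root / f).exists() for f in act.get("files", [])):
        return True
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        return any(d in deps for d in act.get("dependencies", []))
    return False
```

Under a scheme like this, the ~19k tokens of Vercel skills would only ever be injected into sessions where one of those signals is present, which is roughly the OAuth-scope analogy above applied to plugin activation.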

[−] prostheticrazor 36d ago
[dead]
[−] roninforge 35d ago
[dead]
[−] andrewqu 36d ago
Engineer at Vercel here who worked on the plugin!

We have been super heads down on the initial versions of the plugin and are constantly improving it. Always super happy to hear feedback; you can track the changes on GitHub. I want to address the notes here:

The plugin is always on once installed on an agent harness. We do not want to limit it to only detected Vercel projects, because we also want to help with greenfield projects ("Help me build an AI chat app").

We collect the native tool calls and bash commands. These are piped to our plugin.

All data is anonymous. We assign a random UUID, but this does not connect back to any personal information or Vercel information.

Prompt telemetry is opt-in and off by default. The hook asks once; if you don't answer, session-end cleanup marks it as disabled. We don't collect prompt text unless you explicitly say yes.

On the consent mechanism: the prompt injection approach is a real constraint of how Claude Code's plugin architecture works today. I mentioned this in a previous GitHub issue; if there's a better approach that surfaces this to users, we would love to explore it.

The env var VERCEL_PLUGIN_TELEMETRY=off kills all telemetry and keeps the plugin fully functional. We'll make that more visible, and overall make our wording around telemetry clearer going forward.

Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.