I still prefer MCP over skills (david.coffee)

by gmays 375 comments 460 points

[−] antirez 35d ago
Don't focus on what you prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way. MCP adds friction: imagine doing the work yourself through the average MCP server. However, skills alone are not sufficient if you want, for instance, to give LLMs the ability to instrument a complicated system. Work in two steps:

1. Ask the LLM to build a tool, under your guidance and specification, to do a specific task. For instance, if you are working with embedded systems, build a monitoring interface that, with a simple CLI, lets you debug the app as it runs, set breakpoints, spawn the emulator, and restart the program from scratch in a second by re-uploading the live image and resetting the microcontroller. This is just an example; I bet you get what I mean.

2. Then write a skill file where the usage of the tool from step 1 is explained.

Of course, for simple tasks, you don't need the first step at all. For instance, it does not make sense to have an MCP to use git. The agent knows how to use git: git is comfortable for you to use manually, and it is, likewise, good for the LLM. Similarly, if you often estimate the price of running something on AWS, instead of an MCP with service discovery and pricing that needs to be queried in JSON (would you ever use something like that?), write a simple .md file (using the LLM itself) with the prices of the things you use most commonly. This is what you would love to have, and this is what the LLM wants. For complicated problems, instead, build the dream tool you would build for yourself, then document it in a .md file.
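Step 2 of this workflow can be sketched concretely. Everything here is hypothetical: the tool name (`fwtool`), its subcommands, and the `.claude/skills/<name>/SKILL.md` layout (Claude Code's skill convention; adjust for your harness) are all assumptions:

```shell
# Step 2 made concrete: a minimal SKILL.md documenting a purpose-built CLI.
# 'fwtool' and its subcommands are invented for illustration.
mkdir -p .claude/skills/fwtool
cat > .claude/skills/fwtool/SKILL.md <<'EOF'
---
name: fwtool
description: Debug the firmware under test via the fwtool CLI
---
- `fwtool flash image.bin`  re-upload the live image and reset the MCU
- `fwtool break main.c:42`  set a breakpoint in the running app
- `fwtool emu start`        spawn the emulator
EOF
grep -c fwtool .claude/skills/fwtool/SKILL.md   # sanity check: 5 mentions
```

The point is the split: the deterministic work lives in the tool, and the skill file only teaches the agent when and how to invoke it.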

[−] prohobo 35d ago
I feel like the MCP conversation conflates too many things and everyone has strong assumptions that aren't always correct. The fundamental issue is between one-off vs. persistent access across sessions:

- If you need to interact with a local app in a one-off session, then use CLI.

- If you need to interact with an online service in a one-off session, then use their API.

- If you need to interact with a local app in a persistent manner, and if that app provides an MCP server, use it.

- If you need to interact with an online service in a persistent manner, and if that app provides an MCP server, use it.

Whether the MCP server is implemented well is a whole other question. A properly configured MCP explains to the agent how to use it without too much context bloat. Not using a proper MCP for persistent access, and instead trying to describe the interaction yourself with skill files, just doesn't make any sense. The MCP owner should be optimizing the prompts to help the agent use it effectively.

MCP is the absolute best and most effective way to integrate external tools into your agent sessions. I don't understand what the arguments against that statement are.

[−] xyzzy123 35d ago
My main complaint with mcp is that it doesn't compose well with other tools or code. Like if I want to pull 1000 jira tickets and do some custom analysis I can do that with cli or api just fine, but not mcp.
[−] prohobo 35d ago
Right, that feels like something you'd do with a script and some API calls.

MCP is more for a back and forth communication between agent and app/service, or for providing tool/API awareness during other tasks. Like MCP for Jira would let the AI know it can grab tickets from Jira when needed while working on other things.

I guess it's more like: the MCP isn't for us - it's for the agent to decide when to use.

[−] xyzzy123 35d ago
I just find that e.g. cli tools scale naturally from tiny use cases (view 1 ticket) to big use cases (view 1000 tickets) and I don't have to have 2 ways of doing things.

Where I DO see MCPs getting actual use is when the auth story for something (looking at you slack, gmail, etc) is so gimped out that basically, regular people can't access data via CLI in any sane or reasonable way. You have to do an oauth dance involving app approvals that are specifically designed to create a walled garden of "blessed" integrations.

The MCP provider then helpfully pays the integration tax for you (how generous!) while ensuring you can't do inconvenient things like say, bulk exporting your own data.

As far as I can tell, that's the _actual_ sweet spot for MCPs. They're sort of a technology of control, providing you limited access to your own data, without letting you do arbitrary compute.

I understand this can be considered a feature if you're on the other side of the walled garden, or you're interested in certain kinds of enterprise control. As a programmer however I prefer working in open ecosystems where code isn't restricted because it's inconvenient to someone's business model.

[−] SOLAR_FIELDS 35d ago
The auth angle is pretty interesting here. I spend a fair amount of time helping nontechnical people set up AI workflows in Claude Cowork, and MCP works pretty well for giving them an isolated external system where I can tightly control their workflow guardrails while also, interestingly, giving them the freedom to treat what IS exposed as a generic API automation tool. That, combined with skills, lets these non-technical people string together Zapier-like workflows in natural language, which is absolutely huge for the level of agency and autonomy it affords them. So I find it quite interesting for the use case of providing auth-encapsulated API access to systems that would normally require an engineer to unlock. The story around "wrap this REST API into a controlled variant only for the end user's use case and allow them to complete auth challenges in every which way" has been super useful. Some of my MCP servers go through an OAuth challenge response; others provide guidance to navigate to the system, generate an API key, and paste it into the server on initial connection.
[−] hadlock 35d ago

>while ensuring you can't do inconvenient things like say, bulk exporting your own data

I think this is the key; I want my analysts to be able to access the 40% of the database they need to do their job, but not the other 60%, which would allow them to dump the business-secrets part of the DB and start up business across the street. You can do this to some extent with roles etc., but MCP in some ways is the data firewall as your last line of protection/auth.

[−] michaelbuckbee 35d ago
MCPs are for documentation. CLI->API is for interaction.
[−] Twirrim 34d ago
Weird... I've been happily using Atlassian's MCP for this kind of thing just fine?
[−] somnium_sn 35d ago
Give the model a REPL and let it compose MCP calls, either by using tool calls' structured output, doing string processing, or piping it to a fast, cheap model to provide structured output.

This is the same as a CLI. Bash is nothing but a programming language, and you can take the same approach by giving the model JavaScript and having it call MCP tools and compose them. If you do that, you can even throw in composing with CLIs as well.

[−] insin 35d ago
You can make it compose by also giving the agent the necessary tools to do so.

I encountered a similar scenario using Atlassian MCP recently, where someone needed to analyse hundreds of Confluence child pages from the last couple of years which all used the same starter template - I gave the agent a tool to let it call any other tool in batch and expose the results for subsequent tools to use as inputs, rather than dumping it straight into the context (e.g. another tool which gives each page to a sub-agent with a structured output schema and a prompt with extraction instructions, or piping the results into a code execution tool).

It turned what would have been hundreds of individual tool calls filling the context with multiple MBs of raw confluence pages, into a couple of calls returning relevant low-hundreds of KBs of JSON the agent could work further with.

[−] __alexs 35d ago
The agent cannot compose MCPs.

What it can do is call multiple MCPs, dumping tons of crap into the context and then separately run some analysis on that data.

Composable MCPs would require some sort of external sandbox in which the agent can write small bits of code to transform and filter the results from one MCP to the next.

[−] losvedir 35d ago
But in the context of this discussion, Atlassian has a CLI tool, acli. I'm not quite following why that wouldn't have worked here. As a normal CLI you have all the power you need over it, and the LLM could have used it to fetch all the relevant pages and save to disk, sample a couple to determine the regular format, and then write a script to extract out what they needed, right? Maybe I don't understand the use case you're describing.
[−] xyzzy123 35d ago
Hmm, but you can't write a standard MCP (e.g. batch_tool_call) that calls other MCPs because the protocol doesn't give you a way to know what other MCPs are loaded in the runtime with you or any means to call them? Or have I got that wrong?

So I guess you had to modify the agent harness to do this? or I guess you could use... mcp-cli ... ??

[−] CuriouslyC 35d ago
MCP is less discoverable than a CLI. You can have detailed, progressive disclosure for a CLI via --help and subcommands.

MCPs need to be wrapped to be composed.

MCPs need to implement stateful behavior; shell + CLI gives it to you for free.

MCP isn't great, the main value of it is that it's got uptake, it's structured and it's "for agents." You can wrap/introspect MCP to do lots of neat things.
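The progressive-disclosure point can be illustrated with nothing but a shell function (the `tickets` command and its flags are hypothetical): top-level help stays tiny, and detail appears only when a subcommand is queried.

```shell
# Hypothetical 'tickets' CLI: --help discloses only the next layer of detail,
# so the agent pulls in context incrementally instead of all at once.
tickets() {
  case "$1" in
    ""|-h|--help) echo "usage: tickets <list|show>  (try 'tickets list --help')" ;;
    list)
      case "$2" in
        -h|--help) echo "usage: tickets list [--project P] [--limit N]" ;;
        *) echo "listing tickets..." ;;
      esac ;;
    show) echo "showing ticket $2" ;;
  esac
}
tickets --help        # one line of context
tickets list --help   # one more line, only when needed
```

An MCP server exposes all tool schemas up front; here the agent pays for detail only on the branch it actually takes.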

[−] mbesto 35d ago
The way I see it is more like this:

- Skills help the LLM answer the "how" to interact with API/CLIs from your original prompt

- API is what actually sends/receives the interaction/request

- CLI is the actual doing / instruction set of the interaction/request

- MCP helps the LLM understand what is available from the CLI and API

They are all complementary.

[−] Eldodi 35d ago
There was a great presentation at the MCP Dev Summit last week explaining MCP vs CLI vs Skills vs Code Mode: https://www.figma.com/deck/H6k0YExi7rEmI8E6j6R0th/MCP-Dev-Su...
[−] mbreese 35d ago
I think a lot of the MCP arguments conflate MCP the protocol versus how we currently discover and use MCP tool servers. I think there’s a lot of overhead and friction right now with how MCP servers are called and discovered by agents, but there’s no reason why it has to be that way.

Honestly, an agent shouldn’t really care how it’s getting an answer, only that it’s getting an answer to the question it needs answered. If that’s a skill, API call, or MCP tool call, it shouldn’t really matter all that much to the agent. The rest is just how it’s configured for the users.

[−] noodletheworld 35d ago

> MCP is the absolute best and most effective way to integrate external tools into your agent sessions

Nope.

The best way to interact with an external service is an api.

It was the best way before, and it's the best way now.

MCP doesn't scale and it has a bloated unnecessarily complicated spec.

Some MCP servers are good, but in general, a new, bad way of interacting with external services is not the best way of doing it, and the assertion that it is, in general, the best is what I refer to as "works for me" Kool-Aid.

…because it probably does work well for you.

…because you are using a few, good, MCP servers.

However, that doesn't scale, for all the reasons listed by the many detractors of MCP.

It's not that it can't be used effectively; it's that, in general, it is a solution that has been incompetently slapped on by many providers who don't appreciate how to do it well, and even then, it scales badly.

It is a bad solution for a solved problem.

Agents have made the problem MCP was solving obsolete.

[−] addandsubtract 35d ago
Meanwhile, I'm using MCP for the LLM to look up up-to-date documentation, and not hallucinate APIs.
[−] Aperocky 35d ago
It's like saying it is very safe and nice to drive an F-150 with half a ton of water on the truck bed.

How about driving the same truck without that half a ton of water?

[−] JamesSwift 35d ago
Hard disagree. APIs and CLIs have been THOROUGHLY documented for human consumption for years and guess what, the models have that context already. Not only the docs but actual in-the-wild use. If you can hook up auth for an agent, using any random external service is generally accomplished by just saying "hit the API".

I wrap all my APIs in small bash wrappers that are just curl with automatic session handling, so the AI only needs to focus on querying. The only thing in the -h for these scripts is a note that it is a wrapper around curl. I haven't had a single issue with AI spinning its wheels trying to understand how to hit the downstream system. No context bloat needed, and no reinventing the wheel with MCP when the API already exists.
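A minimal sketch of such a wrapper. The env var names and the Jira path prefix are assumptions, and session handling is reduced to a bearer token here:

```shell
# jira_api: a curl wrapper so the agent only thinks about the query.
# JIRA_BASE_URL / JIRA_TOKEN and the /rest/api/3 prefix are illustrative.
jira_api() {
  base="${JIRA_BASE_URL:-https://example.atlassian.net}"
  case "$1" in
    -h|--help) echo "jira_api PATH [curl args...] -- a thin wrapper around curl"; return 0 ;;
  esac
  path="$1"; shift
  curl -sS -H "Authorization: Bearer ${JIRA_TOKEN:?set JIRA_TOKEN}" "$@" \
    "$base/rest/api/3$path"
}
jira_api --help   # the whole help text: the model already knows the API itself
```

Because it is just curl underneath, everything the model knows about the upstream API transfers directly, and the output pipes into jq, files, or scripts like any other command.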

[−] pavelbuild 35d ago
[dead]
[−] gitgud 35d ago

> For instance it does not make sense to have an MCP to use git.

What if you don’t want the AI to have any write access for a tool? I think the ability to choose what parts of the tool you expose is the biggest benefit of MCP.

As opposed to a READ_ONLY_TOOL_SKILL.md that states "it's important that you must not use any edit APIs…"

[−] BatteryMountain 35d ago
This is exactly what I do too. Works very well. I have a whole bunch of scripts and CLI tools that Claude can use; most of them were built by Claude too. I very rarely need to use my IDE because of this, as I've replicated some of JetBrains' refactorings so Claude doesn't have to burn tokens to do the same work. It also turns a 5-minute Claude session into a 10-second one, as the scripts/tools are purpose-made. It's really cool.

edit: just want to add, I still haven't implemented a single MCP-related thing. Don't see the point at all. REST + Swagger + codegen + Claude + skills/tools works fine enough.

[−] siva7 35d ago

> MCP adds friction, imagine doing yourself the work using the average MCP server.

Why on earth don't people understand that MCP and skills are complementary concepts, why? If people argue over MCP v. Skills they clearly don't understand either deeply.

[−] morgaesis 35d ago
This is my life motto. Progressive exploration, codifying, use your codified workflows.

> for each desired change, make the change easy (warning: this may be hard), then make the easy change - Kent Beck

https://x.com/KentBeck/status/250733358307500032

[−] 1minusp 35d ago
Feels to me like the toolchain for using LLMs in various tasks is still in flux (I interpret all of this as "stuff in different places, like .md files or skills or elsewhere, that is appended to the context window"; I hope that is correct). Shouldn't this overall process be standardized/automated? That is, use some self-reflection to figure out patterns that are then dumped into the optimal place, like a .md file or a skill?
[−] tomaytotomato 35d ago
Although the author is coming from a place of security and configuration being painful with Skills, I think the future will be a mix of MCP, Agents and Skills. Maybe even a more granular defined unit below a skill - a command...

These commands would be well defined and standardised, maybe with a hashed value that could be used to ensure re-usability (think Docker layers).

Then I just have a skill called:

- github-review-slim:latest

- github-review-security:8.0.2

MCPs will still be relevant for those tricky monolithic services or weird business processes that aren't logged or recorded on metrics.

[−] tehryanx 28d ago
It makes a lot of sense to use an MCP for git and everything else if you want observability across many users. It gives you a place to shim security controls, monitoring, and alerting into the tool call pipeline.
[−] neya 35d ago

> Focus on what tool the LLM requires to do its work in the best way.

I completely agree with you. There was a recent finding that said Agents.md outperforms skills. I'm old school and I actually see best results by just directly feeding everything into the prompt context itself.

https://vercel.com/blog/agents-md-outperforms-skills-in-our-...

[−] fny 35d ago
This is covered well in the article too. See "The Right Tool for the Job" and "Connectors vs. Manuals."

Perhaps the title is just clickbait. :)

[−] ReDeiPirati 35d ago

> Don't focus on what you prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way.

I noticed that LLMs tend to work by default with CLIs even if there's a connected MCP, likely because a) there's an overexposure of CLIs in the training data, and b) they are more composable and inspectable by design, so they're a better choice in tool selection.

[−] jFriedensreich 35d ago
If your LLM even sees a difference between a local skill and a remote MCP, that's a leak in your abstraction and a shortcoming of the agent harness, and it should not influence how we need to build these systems for devs and end users. The way this comment thinks about building for agents would lead to a hellscape.
[−] richardlblair 35d ago
I've found makefiles to be useful. I have a small skill that guides the LLM towards the makefile. It's been great for what you're talking about, but it's also a great way to make sure the agent is interacting with your system in a way you prefer.
[−] pojzon 35d ago
This is how I work with my agent harness. Also have skills for writing tools and skills.

And I still think people don't understand why MCPs are still needed and when to use them.

It's actually pretty simple.

[−] the_axiom 35d ago
this comment just assumes skills are better without dealing with any of the arguments presented

low quality troll

[−] federicosimoni 35d ago
[dead]
[−] eblair 35d ago
[dead]
[−] plandis 35d ago
I could not agree any less with the author. I don’t want APIs, I want agents to use the same CLI tooling I already use that is locally available. If my agents are using CLI tooling anyways there is no need to add an extra layer via MCP.

I don’t want remote MCP calls, I don’t even want remote models but that’s cost prohibitive.

If I need to call an API, a skill with existing CLI tooling is more than capable.

[−] tow21 35d ago
This argument always sounds like two crowds shouting past each other.

Are you a solo developer, are you fully in control of your environment, are you focused on productivity and extremely tight feedback loops, do you have a high tolerance for risk: you should probably use CLIs. MCPs will just irritate you.

Are you trying to work together with multiple people at organizational scale and alignment is a problem; are you working in a range of environments which need controls and management, do you have a more defensive risk tolerance ... then by the time you wrap CLIs into a form that are suitable you will have reinvented a version of the MCP protocol. You might as well just use MCP in the first place.

Aside - yes, MCP in its current iteration is fairly greedy in its context usage, but that's very obviously going to be fixed with various progressive-disclosure approaches as the spec develops.

[−] alierfan 35d ago
This isn't a zero-sum game or a choice of one over the other. They solve different layers of the developer experience: MCP provides a standardized, portable interface for external data/tools (the infrastructure), while Skills offer project-specific, high-level behavioral context (the orchestration). A robust workflow uses MCP to ensure tool reliability and Skills to define when and how to deploy those tools.
[−] _pdp_ 35d ago
Scanning through the comments here, I am almost certain the majority of people in this thread run coding agents on-device. Skills that access already-available resources are then more convenient, and you can easily make the argument that they're more ergonomic.

That being said, the majority of users on this planet don't use AI agents like that. They go to ChatGPT or an equivalent. MCP in this case is the obvious choice because it provides remote access and has a better authentication story.

In order to make any argument about pro/con of MCP vs Skills you first need to find out who is the user.

[−] grensley 35d ago
The "only skills" people are usually non-technical and the "only CLI" people are often solo builders.

MCP makes a lot of sense for enterprise IMO. Defines auth and interfaces in a way that's a natural extension of APIs.

[−] lifeisstillgood 35d ago
I agree for a slightly different reason - human stupidity.

Despite many decades of proof that automation simplifies and reveals the illogical in organisations, digitisation has mostly stopped below the "CXO" level, and so there are no APIs or CLIs available to anyone; but MCP is cutting through.

Just consider:

Throughout companies large and small, Agile is what coders do; real project managers still use deadlines and upfront design of what will be delivered by the deadline, so any attempt to convert the whole company to react to the reality of the road is blocked.

Reports flow upwards, but through the reporting chain. So those PowerPoints are… massaged to tell the correct story, and the more levels it's massaged through, the more it fails to resemble reality. Everyone knows this, but managing the transition means potentially losing control…

There are plenty of digitisation projects going on, but do they enable full automation, or are they another case of an existing political arena building its own political choices into software: "our area in a database to be accessed via a UI by our people", almost never "our area to be used by others via API, totally replacing our people".

(I think I need to be more persuasive.)

[−] noisy_boy 35d ago
I feel like MCPs are encapsulations of multiple steps where the input to the first step is sufficient to drive the flow. Why would I spend tokens for the LLM to do reasoning at each of the steps when I can just provide the input + an MCP call backed by a fixed program that can deal with the overall flow deterministically? If I have to do the same series of steps every time, a script beats the LLM doing each step individually in terms of cost and time. If the flow involved some sort of fuzzy analysis or decision making in multiple places, I would probably let the LLM carry out the flow or break it into a combination of MCP calls orchestrated by the LLM.

In my case, my MCP is set up with the endpoints being a very thin LLM-facing layer, with the meat of the action being done by helper methods. I also have CLI scripts that import/use the same helpers, so the core logic is centralized and the only difference is that thin layer, which could be the LLM endpoint or the CLI's argparse. If I need another type of interface, that can also call the same helpers.
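The shared-core layout described above (thin LLM-facing and CLI-facing layers over the same helpers) can be sketched like this, reduced to shell for brevity; the comment describes a Python/argparse setup, and all names here are hypothetical:

```shell
# One helper holds the deterministic flow; both "interfaces" are thin shims.
do_report() {
  # stand-in for the real multi-step flow (fetch, transform, summarize)
  echo "fetched $1 records; summary written"
}
cli_main() { do_report "${1:-10}"; }    # CLI-facing layer (argparse analogue)
mcp_report_tool() { do_report "$1"; }   # LLM-facing layer (MCP endpoint analogue)

cli_main 5
mcp_report_tool 7
```

Adding a third interface (say, a webhook) is then one more shim over `do_report`, not a rewrite of the flow.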

[−] alexhans 35d ago
This frames MCP vs Skills as an either/or, but they operate at different layers. MCP exposes capabilities and Skills may shape how capabilities are used.

Both are useful to different people (and role families) in different ways and if you don't feel certain pain points, you may not care about some of the value they provide.

Agent skills are useful because they're standardized prompt sharing, but more than that, because they have progressive disclosure, so you don't bloat your context with an inefficiently designed MCP. Their UX is very well aligned, such that "/SkillBuilder" skills are provided from the start and provide a good path for developers or non-traditional builders to turn conversations into semi or full automation. I use this mental model to focus on the iteration pattern and incremental building [1].

[1] https://alexhans.github.io/posts/series/evals/building-agent...

[−] robotobos 35d ago
Despite thinking this is AI-generated, I agree, but everything has a caveat.

Skills are good for instilling non-repeatable, yet intuitive or institutional knowledge.

MCPs are great for custom, repeatable tasks. After 5-10 runs of watching my LLM write the same exact script, I just asked it to hardcode the solution and make it a tool. The result is that runs are way faster and repeatable.

[−] WhyNotHugo 35d ago
I like skills because they rely on the same tools which humans rely upon. A well-written skill can be read and used by a human too.

A skill is just a description for how to use an existing CLI tool. You don't need to write new code for the LLM to interact with some system. You just tell the LLM to use the same tool humans do. And if you find the CLI is lacking in some way, you can improve it and direct human usage benefits from that improvement too.

On the other hand, an MCP requires implementing a new API for a service, an API exclusive to LLMs, and keeping parallel documentation for that. Every hour of effort put into it is an hour that's taken away from improving the human-facing API and documentation.

The way skills are lazy-loaded when needed also keeps context clean when they're not used. To be fair, MCPs could be lazy-loaded the same way, that's just an implementation detail.

[−] nextaccountic 35d ago

> Context Bloat: Using a skill often requires loading the entire SKILL.md into the LLM’s context window, rather than just exposing the single tool signature it needs. It’s like forcing someone to read the entire car’s owner’s manual when all they want to do is call car.turn_on().

MCP has severe context bloat just by starting a thread. If harnesses were smart enough to summarize, at install time, the tools provided by an MCP server (rather than dumping the whole thing into context), it would be better. But a worse problem is that the output of MCP goes straight into the context of the agent, rather than being piped somewhere else.

A solution is to have the agent run a CLI tool to access MCP services. That way the agent can filter the output with jq, store it in a file for later analysis, etc.
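The pipe-friendly pattern looks like this. The MCP response is simulated with a local file here, since the front-end CLI and its flags vary by tool; in practice something like mcp-cli would produce the JSON:

```shell
# Simulated MCP tool output; in practice a CLI front-end to the MCP server
# would emit this JSON. jq trims it before anything reaches the context.
cat > /tmp/issues.json <<'EOF'
{"issues":[{"key":"FOO-1","fields":{"status":{"name":"Done"}}},
           {"key":"FOO-2","fields":{"status":{"name":"Open"}}}]}
EOF
jq -r '.issues[] | [.key, .fields.status.name] | @tsv' /tmp/issues.json \
  > /tmp/issues.tsv
wc -l < /tmp/issues.tsv   # the agent inspects a count, not the raw payload
```

The agent then reads `/tmp/issues.tsv` selectively (head, grep, a script) instead of having the full JSON injected into its context.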

[−] losvedir 35d ago
For my use I prefer just a raw CLI. As long as it's built following conventions (e.g. using cobra for a Go app) then the agent will just natively know how to use it, by which I mean how to progressively learn what it needs by reading the help output. In that case you don't need a skill or anything. Just say "I want this information, use the xyz app". It will then try xyz --help or xyz help or a variant, just like a human would, see the subcommands, do xyz help subcommand and eventually find what it needs to do the job. Good tools provide an OAuth flow like xyz login, which will open a browser window where you can determine which resources you want to give the CLI (and thereby the agent) access to.

This only works for people using agents themselves on computers they control, rather than, e.g., the Claude web app, but is a good chunk of my usage.

I think people are either over- or under-thinking the auth piece, though. The agent should have access to its own token. CLIs, MCPs, and even raw API requests all work this way. I don't think MCPs provide any further security. You should assume the agent can access anything in its environment and do everything the credential permits. You don't want to give your more powerful credential to the MCP server and hope that the MCP server somehow restricts the agent to doing less (it can probably find the credential and make out-of-band calls if it wants). The only way I think it could work like that is how... is it Sprite does it?... where you give it a fake token and have an off-machine proxy it goes through, which MitMs the request and injects the real credential.

[−] CharlieDigital 35d ago
One thing that I have found is that the platforms are surprisingly poor at consistently implementing MCP, which is actually a pretty simple protocol.

Take Codex, for example, it does not support the MCP prompts spec[0][1] which is quite powerful because it solves a lot of friction with deploying and synchronizing SKILL.md files. It also allows customization of virtual SKILL.md files since it allows compositing the markdown on the server.

It baffles me why such a simple protocol and powerful capability is not supported by Codex. If anyone from OpenAI is reading this, would love to understand the reasoning for the poor support for this relatively simple protocol.

[0] https://github.com/openai/codex/issues/5059

[1] https://modelcontextprotocol.io/specification/2025-06-18/ser...

[−] password4321 35d ago
Surprised to see no mention in the article or discussion yet about using MCPs in 'code mode', where an API is generated client-side relying on MCP primarily as an interface standard. I'm still learning but I've read this reduces the amount of context required to use the MCP.

It seems like a lot of the discussion is arguing in favor of API usage without realizing that MCP basically standardizes a universal API, thus enabling code mode.

[−] jauntywundrkind 35d ago
I've remained leaning a bit towards MCP until lately. Both have pretty easy ways to call the other (plenty of CLI API callers, and tools like mcp-cli for the reverse: https://github.com/philschmid/mcp-cli). Skills have really made progressive discovery of CLI tools much better, and MCP design has adapted likewise. I've lightly preferred MCP for formalism, for it feeling more concrete as a thing.

But what really changed my mind is seeing how much more casual scripting the LLMs do these days. They'll build rad unix pipes, or some python or node short scripts. With CLI tools, it all composes: every trick it learns can plug directly into every other capability.

Whereas with MCP, the LLM has to act as the pipe. Tool calls don't compose! It can read something like this tmux skill and just adapt it in all sorts of crazy ways! It can sort of do that with tool calls, but much less so. https://github.com/nickgnd/tmux-mcp

I'd love to see a capnproto capnweb or some such, with third-party handoff (apologies Kenton for once again raising 3PH), where a tool call could return a result and we could forward the result to a different LLM, without even waiting for the result to come back. If the LLM could compose tool calls, it would start to have some parity with the composability of CLI + skill. But it doesn't, and as of very recently I've decided that is too strong a selling point to be ignored. I also just like how the CLI remains the universal system: if these are as isomorphic as I keep telling myself, what does the new kid on the block really bring? How much better is a new incarnation if their capabilities are so near? We should keep building CLI tools, good CLI tools, so that man and machine benefit.

That said, I still leave the beads MCP server around. And I turn on the neovim MCP when I want to talk to neovim. Ah well. I should try harder to switch.

[−] usrbinbash 35d ago

> The core philosophy of MCP is simple: it’s an API abstraction. The LLM doesn’t need to understand the how; it just needs to know the what.

Wrong. It needs to "understand" both of these things. The only difference is where and how the strings explaining them are generated.

[−] Aperocky 35d ago
Occam's Razor spares none.

Everything will move to the simplest and most convenient, often both, despite the resistance of the complexity lovers.

Sorry MCP, you are not as simple as CLI/skill/the combination, and no, you are not more secure just because you are buried under 3 levels of spaghetti. There is no reason for you to exist, just like Copilot. I don't just wish, but know, you'll fade into obscurity like IE6.

[−] imron 35d ago
My biggest gripe with skills is that even clear and explicit instructions are regularly ignored - even when the skill is brief (< 100 lines).

I'll often see the agent saying it's about to do something, so I'll stop it and ask "what does the xxx skill say about doing that?" And it'll go away and think and then say "oh, the skill says I should never do that".

[−] interpol_p 35d ago
We had a contention between MCP and Skills for our product and ended up offering both. We built a CLI tool that can interface with the MCP server [1]. It seems redundant, but our app is a coding app on iOS (Codea), and the issue with offering a plain MCP server was that the agentic coding harness found it harder to do its job.

With the CLI the agent could check out the project, work on it locally with its standard file editing / patching / reading tools, then push the work back to device. Run and debug on device, edit locally, push.

With MCP the agent had to query the MCP server for every read and write and was no longer operating in its normal coding loop. It still works, though, and as a user you can choose to bypass the CLI and connect directly via MCP.

The MCP server was valuable as it gave us a consistent and deterministic language to speak. The CLI tool + Skill was valuable for agentic coding because it allowed the coding work to happen with the standard editing tools used by agents.

The CLI also gave us device discovery. So the agent can simply discover nearby devices running Codea and get to work, instead of a user having to add a specific device via its IP address to their agent.

[1] https://codea.io/cli

[−] ghm2199 35d ago
For indie developers like myself, I often use ChatGPT desktop and Claude desktop for arbitrary tasks, though my main workhorse is a customized coding harness with CC daemons on my NAS. With the apps, I missed having access to my NAS server where my dev environment is. So I wrote a filesystem MCP and hosted it with a reverse proxy on my TrueNAS with Auth0. I wanted access to it from all platforms: ChatGPT mobile and desktop, and same for CC.

For ChatGPT desktop and Claude desktop, my experience with MCPs connected to my home NAS is pretty poor. The app often times out fetching data (even though the logs show no latency serving the request), and often the existing connection gets invalidated between two chat turns and ChatGPT just moves on, answering without the file in hand.

I am not using it for writing code; it's mostly read-only access to the FS. Has anyone surmounted these problems for this access pattern and written about how to build MCPs to be reliable?

[−] neosat 35d ago
The juxtaposition of MCP vs Skills in the article is very strange. These are not competing ways to achieve something. Rather, Skills are often a way to enable an optimization on top of MCPs.

A simplified but clarifying way to think about it is that MCP exposes all the things that can be done, and Skills encode a workflow/expertise/perspective on how something should be done given all the capabilities.

So I'm not sure why the article portrays one to be conflicting with the other (e.g. "the narrative that “MCP is dead” and “Skills are the new standard” has been hammered into my brain. Everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repository.").

You can just not choose to use a skill if it's not useful. But if it's useful a skill can add to what an MCP alone can do.

[−] woeirua 35d ago
Anthropic says that Skills and MCPs are complementary, and frankly the pure Skills zealots tend to miss that in enterprise environments you’ll have chatbots or the like that don’t have access to a full CLI. It doesn’t matter if your skills tell the agent exactly what to do if they can’t execute the commands. Also, MCP is better for restricted environments because you know exactly what it can or cannot do. That’s why MCP will exist for some time still. They solve distinct problem sets.
[−] cphoover 35d ago
I think language grammars are an interesting way to define a ruleset too. Forget REST APIs or MCP servers for a second... Define a domain-specific language, and let the language model generate a valid instruction within the confines of that grammar.

Then pass the program along; your server or application can parse the instructions and work from the generated AST to do all sorts of interesting things, within the confines of your language features.

It's verifiable, since the output must conform to the defined grammar, and you provide the parser.

It is implicitly sandboxed by the powers you give to (or rather exclude from) your runtime via an interpreter/compiler.

I've tried this before with a grammar I defined for searching documents, and found it quite good at creating valid, often complex search instructions.
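A minimal sketch of the idea, assuming a made-up search DSL with quoted terms and AND/OR/NOT: the model emits a string, and a small recursive-descent parser either produces an AST or rejects the output outright, so anything outside the grammar simply can't reach the runtime.

```python
# Sketch of the grammar approach: an LLM emits a query string in a
# tiny hypothetical search DSL; the server tokenizes and parses it
# into an AST, rejecting anything outside the grammar.
import re

TOKEN = re.compile(r'\s*(AND|OR|NOT|\(|\)|"[^"]*")')

def tokenize(src):
    src = src.strip()
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise ValueError(f"invalid token at {pos}: {src[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(tokens):
    """expr := term (('AND'|'OR') term)* ; term := 'NOT' term | '(' expr ')' | string"""
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] in ("AND", "OR"):
            op = tokens[i]
            rhs, i = term(i + 1)
            node = (op, node, rhs)  # left-associative, equal precedence
        return node, i
    def term(i):
        tok = tokens[i]
        if tok == "NOT":
            node, i = term(i + 1)
            return ("NOT", node), i
        if tok == "(":
            node, i = expr(i + 1)
            assert tokens[i] == ")", "expected closing paren"
            return node, i + 1
        if tok.startswith('"'):
            return ("TERM", tok.strip('"')), i + 1
        raise ValueError(f"unexpected token {tok!r}")
    node, i = expr(0)
    if i != len(tokens):
        raise ValueError("trailing input")
    return node
```

The interpreter you then write over the AST defines the entire capability surface: if the grammar has no delete operation, the model can't express one.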

[−] fancyraccoon 35d ago
Really interesting post. The "connectors vs manuals" framing stuck with me because I think it points at something beyond the UX argument. A Skill that papers over an API loses the signal the friction was carrying. Working with a raw interface tells you something about the design.

The same thing plays out at the language layer. The pain of C++ multiple inheritance drove people toward better abstractions. If LLMs absorb that friction before it reaches anyone, the signal that produces the next Go never gets felt by the people who could act on it.

Wrote about where that leads: https://blog.covet.digital/a/the_last_language_you_can_read....

[−] lewisjoe 35d ago

    > ChatGPT can’t run CLIs. Neither can Perplexity or the standard web version of Claude. Unless you are using a full-blown compute environment (like Perplexity Computer, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival. 
Incorrect observation. Claude web does support skills upload. I guess Claude runs the code_interpreter tool and a filesystem in the background to run user-uploaded skills. ChatGPT business plans also allow uploading custom skills on the web.

I can see Skills becoming a standard soon. But the concern still holds: when you publish an MCP, you free the user from installing anything. With skills, what happens if the skill-running environment doesn't have access to the CLI binary, or if it isn't in PATH?

[−] hasyimibhar 35d ago
We use MCP at work. In my team of about 6 people, everyone has Claude access, but about half of us are non-engineers. I built an MCP over our backend and ClickHouse, and set up a Claude Project with instructions (I'm assuming this counts as a skill?). The instructions are mostly for enriching the analytics data that we have, e.g. hinting Claude to prefer certain datasets for certain questions.

This allows the non-engineers (and also the engineers) to use Claude Desktop for day-to-day operations (e.g. ban user X for fraud) and analytics (e.g. how much revenue did we make in the past 7 days? Any fraud patterns?). The MCP helps add audit, authorization, and approval layers (certain ops actions, like banning a user, require approval).
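That kind of approval gate can be sketched roughly like this (tool names and the policy set are hypothetical, not the actual setup): every call is audited, read-only tools run immediately, and destructive ones are held until a human signs off.

```python
# Hypothetical sketch of an audited, approval-gated MCP front end:
# all calls are logged; tools on the approval list are queued
# instead of executed. Tool names here are made up.
from dataclasses import dataclass, field

@dataclass
class Gateway:
    audit_log: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    # Per-tool policy: which calls need human sign-off.
    REQUIRES_APPROVAL = {"ban_user"}

    def call(self, tool, **args):
        self.audit_log.append((tool, args))       # audit everything
        if tool in self.REQUIRES_APPROVAL:
            self.pending.append((tool, args))     # hold for a human
            return {"status": "pending_approval"}
        return {"status": "ok", "result": self._dispatch(tool, args)}

    def approve_next(self):
        tool, args = self.pending.pop(0)          # human approved it
        return {"status": "ok", "result": self._dispatch(tool, args)}

    def _dispatch(self, tool, args):
        # Stand-in for the real backend / ClickHouse call.
        return f"{tool}({args})"
```

The LLM only ever sees the gateway, so the permission model lives in one place rather than in prompt instructions.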

[−] simianwords 35d ago
Yesterday I accidentally stumbled on a case where I could really appreciate MCPs.

I wanted to connect my Claude account to my Notion account. Apparently all you need to do is add the Notion MCP and log in. That's it! And I was able to interact with my Notion data from my Claude account!

Imagine how hard this would be with skills. It is practically impossible, because with skills you may need to install some local CLI, which Claude honestly should not allow.

If not a CLI, you need to interact with their API, which again can't happen because you can't authenticate easily.

MCPs fill this narrow gap, in my opinion: where you don't own the runtime and you want to connect to other tools like plugins.

[−] bharat1010 35d ago
The MCP vs skills debate feels like it's still very early days — I suspect we'll look back in a year and laugh at how much we debated this once the patterns become more obvious through real-world use.
[−] leonidasv 35d ago
This is the same as saying "I still prefer hammer over screwdriver".
[−] hereme888 35d ago
I see the real argument as being against poorly-designed MCP servers, in cases where a skill/script would be a better fit.

If all you need is "teach the model how to use an existing tool", then use a skill, or even scripts, which are great for bulk work or teaching workflows.

MCPs are good at giving agents a stable, app-owned interface to a system without making the agent rediscover the integration every session. There's no way a skill/script could handle the stuff I do via my local MCPs for managing certain apps and databases.

[−] bloppe 35d ago
Every CLI can be expressed as an API and vice versa. Thus every skill can be expressed as an MCP server and vice versa. Any argument about the technical or practical merits of one over the other is willfully ignoring the fact that you can always use exactly the same patterns in one vs. the other.

So it's really all about availability or preference. Personally, I don't think we needed a whole new standard with all its complexities and inevitable future breaking changes etc.
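The equivalence claimed above can be shown mechanically. A tiny sketch (nothing here is MCP-specific; the shapes are just indicative): any CLI invocation can be wrapped as a structured tool result, and any tool function can be exposed as a CLI that prints JSON.

```python
# Sketch of the CLI <-> tool equivalence: wrap a CLI command as a
# structured, MCP-style result, and a tool function as a CLI that
# emits JSON on stdout.
import json
import subprocess

def cli_as_tool(cmd: list[str]) -> dict:
    """Run a CLI command and return a tool-call-shaped result."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {"exit_code": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}

def tool_as_cli(tool_fn, argv: list[str]) -> int:
    """Expose a tool function as a CLI: args in, JSON on stdout."""
    print(json.dumps(tool_fn(*argv)))
    return 0
```

Whatever schema, auth, or progressive-disclosure trick one side has, the other can mirror through a shim like this; the remaining differences are about distribution and environment, not expressiveness.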

[−] socketcluster 35d ago
I prefer skills with simple curl commands. It's easy: you just create a server with HTTP endpoints, and Claude (or another LLM) can call them with the curl commands you provide in your skill files. Claude is really good with curl, and it's a well-known HTTP client, so what Claude is doing is more transparent to the user.

Also, with skills you can organize your files in a hierarchy, with the parent page providing the most general overview and each child page providing a detailed explanation of each endpoint or component with all possible parameters and errors. I also made a separate page listing all the common issues for troubleshooting. It works very well.
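A sketch of what such a parent skill page might look like (the host, endpoints, and file names below are all invented for illustration, not any real API):

```markdown
<!-- Hypothetical SKILL.md sketch of the curl-based approach;
     host and endpoints are made up. -->
# my-service API skill

Base URL: https://api.example.com
Auth: add `-H "Authorization: Bearer $TOKEN"` to every request.

## Common operations
- List records: `curl -s https://api.example.com/records`
- Create record: `curl -s -X POST https://api.example.com/records -d '{"name": "..."}'`

## Details (child pages, loaded on demand)
- [records.md](records.md): every parameter and error code for /records
- [troubleshooting.md](troubleshooting.md): common issues (401s, rate limits)
```

The parent page stays small, and the child pages only enter the context window when the model follows the link, which is the same progressive-disclosure idea skills are praised for elsewhere in this thread.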

I created some skills for my no-code platform so that Claude could access and make changes to the control panel via HTTP. My control panel was already designed to update in real-time so it's cool to watch it update as Claude creates the schema and adds dummy data in the background.

I spent a huge amount of effort on refining my HTTP API to make it as LLM-friendly as possible with flexible access control.

You can see how I built my skills marketplace from the docs page if anyone is interested: https://saasufy.com/

[−] jsw97 35d ago
From the article: "Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving the LLM raw execution power in your local environment."

I think this is underappreciated. CLI access gives agents a ton of freedom and might be more effective in many applications. But if you require really fine granularity on permissions -- e.g., do lookups in this db and nothing else -- MCP is a natural fit.

[−] localhost3000 35d ago
How I think about this:

If you're using an agent in a shell environment with unfettered internet access and code execution: CLI + Skills.

If you're using a hosted agent on a website or in an app without code execution and limited/no internet access: MCP.

We want both patterns. Folks who are aggro about MCP do ~all of their work in the former, so it seems pointless to them. Most people interact with agents in the latter.

[−] senordevnyc 35d ago
I love the idea of MCP, but it needs a progressive disclosure mechanism. A large MCP from a provider, with hundreds or even thousands of tools, can eat up a huge amount of your context window. Additionally, MCPs come in a bunch of different flavors of transport and auth mechanisms, and not all harnesses support all those options well.

I’ve gone the other way, and used MCP-CLI to define all my MCP servers and wrap them in a CLI command for agent use. This lets me easily use them both locally and in cloud agents, without worrying about the harness support for MCP or how much context window will be eaten up. I have a minimal skill for how to use MCP-CLI, with progressive disclosure in the skill for each of the tools exposed by MCP-CLI. Works great.

All that said, I do think MCP will probably be the standard going forward, it just has too much momentum. Just need to solve progressive disclosure (like skills have!) and standardize some of the auth and transport layer stuff.

[−] s-xyz 35d ago
I never understood why there is a discussion about it as one or the other… both serve different purposes and are complementary.
[−] Xenoamorphous 35d ago
I use both and don't feel they're mutually exclusive.

E.g., if I have an Elasticsearch cluster, I use a skill to describe the data, and if I ask the LLM to write code that queries Elasticsearch and to test it first, it can use a combination of skill + MCP to actually run a query.

I think this model works nicely.

[−] medbar 35d ago
I still use vanilla Claude Code without MCP or skills, am I in the minority? Not trying to be a luddite.
[−] 0xbadcafebee 35d ago
I have vibe-coded 4 different software projects recently, on multiple platforms. I added search, RAG, ticketing, notifications, voice, and more features to them in 2 minutes. All I had to do was implement an MCP client, and suddenly all that other complex functionality "just worked", both locally and remotely.

Skills would have required me to 1) add all the skill files to all those projects (and maintain them), and 2) install software tools (some of which don't have CLIs) for the skills to use. Not to mention: skills aren't deterministic! You have to iterate on a skill file for a while to get the LLM to use it reliably the way you want.

[−] simianwords 35d ago
SKILLS.md or AGENTS.md are good concepts, but they miss two crucial things that would make them much more usable. I predict that this will happen.

Each SKILLS.md will come with two hooks:

1. The first for installing the skill itself: maybe installing the CLI or doing some initial work to get it working.

2. Each skill may have dependencies on other skills; we need to install those first.

Expressing these two hooks formally in skills would let me completely replace MCPs.
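Concretely, the two hooks might look like this hypothetical frontmatter (neither `install` nor `requires` exists in any current skills spec; the names are invented):

```markdown
<!-- Hypothetical SKILL.md frontmatter; `requires` and `install`
     are proposed fields, not part of any current spec. -->
---
name: deploy-helper
requires:        # other skills to install first
  - aws-basics
install: |       # hook run once, before first use
  command -v aws || pip install awscli
---
# deploy-helper

Use `aws s3 sync` to push build artifacts to the release bucket.
```

The harness would resolve `requires` transitively and run each `install` block before the skill is first used, much like a package manager.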

My concrete prediction is that this will happen soon.

Wrote more about it here: https://simianwords.bearblog.dev/what-agent-skills-misses-no...

[−] tomaytotomato 35d ago
As others have said, I have found CLI tools much better.

This is how I am structuring stuff in Claude Code

- Ansible to set up the GitHub CLI, git, the Atlassian CLI, aws-cli, and Terraform CLI tooling

- Claude hooks for checking that these CLI tools are authenticated and configured

- Claude skills to use the CLI tooling

[−] michaelashley29 35d ago
100%. MCPs truly give the agent tools and let it make better-informed decisions, provided you have configured the right MCP tools. Skills are good for knowledge and general guidelines. They give context to the agent, and I have seen some skills that are excessively long and could eat into the agent's context window. This tool https://protomcp.io/ helps a lot with testing MCP servers before integrating them into the agent workflow. You can even see the agent call different tools in real time and view the trace.
[−] vlucas 34d ago
Agreed on the MCP sentiment, particularly for remote MCPs. They keep themselves up to date with predefined tools, schemas, descriptions of how to use them, etc., whereas skills tend to become a mess over time, as described in the article.

Plug: If you want to try chatting with your financial data via an MCP, give FINTECH_MCP a try: https://www.fintechmcp.app - it's got a preview mode too so you can see how it works without linking any real data.

[−] fjellsystems 34d ago
Experimenting with something adjacent (torget.ai); the idea is an agent/tool marketplace, so I enjoyed this discussion :)

What strikes me is that MCP vs Skills vs bespoke are all answers to ‘how does an agent use a known capability?’ How it finds one is where I’m experimenting.

Discovery and payment at the agent layer still feels like the missing primitive.

Also (food for thought) the local LLM angle keeps getting underweighted in many discussions. For someone running Gemma 4 locally, there’s no tool layer at all by default. Different problem than the cloud agent angle.

[−] chris_money202 35d ago
I think the worst thing is when someone takes a clearly defined list of steps and writes it as a skill, rather than just having the AI write it as a script. It’s like people have forgotten what scripting is.
[−] qrbcards 35d ago
The comparison to app stores is interesting but I think MCP registries solve a different problem. App stores are for humans browsing. MCP registries are for agents discovering tools at runtime based on the task at hand. The user never browses — they describe what they need and the agent finds the tool.

That is a meaningful distribution shift. Products no longer need to be marketed to end users if an agent can find and invoke them directly. Skills require the developer to install them ahead of time, which means someone already decided this tool was relevant.