They may seem like small details, but I think a couple of novel design decisions here are going to prove widely adopted, even revolutionary.
The biggest one (as Karpathy notes) is having skills for how to write a (slack, discord, etc) integration, instead of shipping an implementation for each.
Call it “Claude native development” if you will, but “fork and customize” instead of batteries-included platforms/frameworks is going to be a big shift when it percolates through the ecosystem.
There's a bunch of things you still need to figure out, e.g. how do you ship a spec for testing and validating the thing, making it secure, etc.
How long before OSs start evolving in this way? You can imagine auto-research-like sharing and promotion of good fixes/approaches upstream, but a more heterogeneous ecosystem could be more resistant to attacks if each instance had a strong immune system.
> having skills for how to write a (slack, discord, etc) integration, instead of shipping an implementation for each
I'm not sure what the advantage is. Each user will have to waste time and tokens on the same task, instead of doing it once and shipping to everyone.
Old world - each platform writes APIs and then has to publish rich client libraries. Despite the server APIs theoretically being well documented, because users want the quickest time-to-first-demo platforms also ship a bunch of client code. This may bring in dependencies you don’t care for, or otherwise have a wide attack surface.
New world - platforms publish good REST APIs and specs, and Claude can trivially implement the client that is idiomatic for your own app deployment. Within Nanoclaw you don’t even need to handle eg streaming APIs if you only need a simple poll on one endpoint, even though the server (and official client library) might support them. In the best case, this can keep your app more secure.
Of course, the tradeoff/risk is that an individual implementation might be broken. So right now I’m not convinced it’s a win. But I generally buy that it’ll be possible to maintain a high enough security bar within the next year or two.
The time and token cost is probably seconds and cents already, I don’t buy that one.
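To make the "new world" concrete, here is a minimal sketch of the kind of poll-only client an agent could generate. The endpoint path, query parameter, and token handling are invented for illustration and don't correspond to any real platform's API:

```python
# Hypothetical sketch: a minimal poll-only client of the kind an agent could
# write instead of pulling in an official SDK. The endpoint shape and auth
# header are illustrative assumptions, not any real platform's API.
import json
import time
import urllib.request

def fetch_new_messages(base_url: str, token: str, since_id: int) -> list[dict]:
    """One GET against a single endpoint -- no streaming, no SDK."""
    req = urllib.request.Request(
        f"{base_url}/messages?after={since_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def poll_loop(fetch, handle, interval_s: float = 5.0, rounds: int = 3) -> int:
    """Poll via an injected `fetch` callable; returns the last message id seen."""
    last_id = 0
    for _ in range(rounds):
        for msg in fetch(last_id):
            handle(msg)
            last_id = max(last_id, msg["id"])
        time.sleep(interval_s)
    return last_id
```

Taking `fetch` as a callable keeps the loop testable without network access, and is the sort of structure that makes a generated client easy to audit.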
Agreed: excellence in one domain does not confer it in others. If you've ever worked with researchers, you know that for the most part they are not engineers. This is bad advice / prediction by people with hammers, imo.
OCI is a good choice of reuse, they aren't having the agent reimplement that. When there is an existing SDK, no sense in rebuilding that either. Code you don't use should be compiled away anyhow.
In order for it to be 'once', all hardware must have been, currently be, and always will be interchangeable, as must all OSes. That's simply not feasible.
I don't see how that's relevant in this case. We are talking about writing an integration against an HTTP API, (probably) in a high-level language (TS/JS, Python, etc.). We have already abstracted the hardware away.
The strength of open source software is collaboration. That many people have tried it, read it, submitted fixes and had those fixes reviewed and accepted.
We've all seen LLMs spit out garbage bugs on the first few tries. I've written garbage bugs on my first try too. We all benefit from the review process.
I would rather have a battle tested base to start customizing from than having to stumble through the pitfalls of a buggy or insecure AI implementation.
> We've all seen LLMs spit out garbage bugs on the first few tries.
I’m assuming here an extrapolation of capabilities where Claude is competitive to the median OSS contributor for the off-the-shelf libraries you’d be comparing with.
As with most of the Clawd ecosystem, for now it probably is best considered an art project / prototype (or a security dumpster fire for the non-technical users adopting it).
> The strength of open source software is collaboration. That many people have tried it, read it, submitted fixes and had those fixes reviewed and accepted
I do think that there is room for much more granular micro-libraries that can be composed, rather than having to pull in a monolithic dependency for your need. Agents can probably vet a 1k microlibrary BoM in a way a human could never have the patience to.
(This is more the NPM way, leftpad etc, which is again a security issue in the current paradigm, but potentially very different ROI in the agent ecosystem.)
I have thought about this ship-a-spec concept. What if we just traded markdown files instead of code files to implement some feature in our systems?
I wish I could find the GitHub repo, but yes, I have seen at least one library written in Markdown to be used with Claude. Not a Claude skill, but functionality to be delivered.
You must explicitly state what your threat model is when writing about security tooling, isolation, and sandboxing.
This threat model is concerned with running arbitrary code, generated or fetched by an AI agent, on host machines which contain secrets, sensitive files, and/or data, apps, and systems which should not be lost or exfiltrated.
What about the threat model where an agent deletes your entire inbox? Or sends your calendar events to a server after a prompt injection? Or makes bank transfers of the wrong amount to the wrong address? All of these are allowed under the sandboxing model.
We need fine-grained permissions per-task or per-tool in addition to sandboxing. For example: "this request should only ever read my Gmail and never write, delete, or move emails".
Sandboxes do not solve permission escalation or exfiltration threats.
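As a sketch of what that could look like, here is a deny-by-default, per-tool policy check. The schema and tool names are invented for illustration and don't map to any real agent framework's API:

```python
# Hypothetical sketch of per-task, per-tool permissions layered on top of a
# sandbox. The policy schema and tool names are invented for illustration.
POLICY = {
    "gmail":    {"read": True,  "write": False, "delete": False},
    "calendar": {"read": True,  "write": True,  "delete": False},
    "bank":     {"read": False, "write": False, "delete": False},
}

def allowed(tool: str, action: str, policy: dict = POLICY) -> bool:
    # Deny by default: unknown tools and unknown actions are refused.
    return policy.get(tool, {}).get(action, False)

def call_tool(tool: str, action: str, *args):
    # Every tool call passes through the policy gate before dispatch.
    if not allowed(tool, action):
        raise PermissionError(f"{tool}.{action} denied by task policy")
    ...  # dispatch to the real tool here
```

The point is that the check happens at the tool-call boundary, which a filesystem sandbox never sees.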
I like NanoClaw a lot. I found OpenClaw to be a bloated mess, NanoClaw implementation is so much tighter.
It's also the first project I've used where Claude Code is the setup and configuration interface. It works really well, and it's fun to add new features on a whim.
The main issue is not so much whether it needs to run inside a container or not (and to be honest there are even better isolation models; why not a Firecracker VM?). The main issue is what you are going to do with it.
It does not really matter.
IMHO, until you figure out useful ways to spend tokens on useful tasks, the runtime should be a second thought.
As far as security goes, running an LLM in a container is simply not enough. What matters is not what files it can edit on your machine but what information it can access. And in this case, as far as these agents are concerned, that access is basically everything. If this does not scare you, you should not be thinking about containers yet.
Docker sandboxes sound exactly like what Apple is doing with their container framework. It's missing several Docker features still, but if I were to pick a minimal, native runtime, it would probably be that, not the multi-gigabyte monster that is Docker for macOS.
On Linux, however, I absolutely don't want a hypervisor on my quite underpowered single-board server. Linux namespaces are enough for what I want from them (i.e. preventing one of these agent harnesses from hijacking my memory, disk, or CPU). I wonder why neither OpenClaw nor NanoClaw seems to offer a sanely configured, prebuilt, and frequently updated Docker image?
> Fine-grained permissions and policies. Not just what tools an agent can access, but what it can do with them. Read email but not send. Access one repo but not another. Spend up to a threshold but no more.
If nailed, this is going to be interesting.
All the other solutions I've been stumbling around with are either very hard to customize or too limited.
Docker sandboxing is kinda nice, but not enough to trust an LLM even with my messaging accounts.
Docker sandboxes are a neat way to contain AI agents. The feature spins up a dedicated microVM and its own Docker daemon for each agent container, together with a flexible egress proxy. I've spent some time reverse engineering it, and it's an interesting piece of implementation.
What I found interesting is that nanoclaw isn't a working product out of the box. You must use a coding agent to complete it with the features you want, for example adding iMessage support.
What are the most obvious use cases for Nano/Open-Claw? I can't imagine anything obvious that I'd want to use it for. Is it supposed to run your digital life for you?
All the sandboxing stuff is neat but the weakest link in these claw setups is not root access on the machine but root access to your life (Gmail, calendar, etc)
As an aside, app descriptions that just say "a lightweight alternative to X" are very unhelpful. That tells me nothing if I don't know what X does, and I don't want to have to go down a rabbit hole just to understand your product. It's particularly bad in this case, because even OpenClaw's Github page doesn't clearly tell me what it actually does; just that it's some kind of assistant that I can communicate with via WhatsApp etc. I appreciate that many people are already familiar with OpenClaw, but you shouldn't assume.
It's better if your app's description just tells me what it does in a direct way using plain language. It's fine to tell me it's an alternative to something, but that should be in addition to rather than instead of your own description.
The next step to this is using a better tool to access containers (BuildKit), like Dagger, where you can track every step as a new container layer, time travel, share via registries...
Also seems like this will further entrench the top 2 or 3 models. Use something else and your software stack looks different.
    curl -fsSL https://nanoclaw.dev/install-docker-sandboxes.sh | bash
    Creating sandbox... unknown flag: --name
It looks like it requires a minimum version of Docker, but it didn't check that version. The link to https://docs.docker.com/sandbox/ in the .sh is wrong.
The post should also have a link to the source of the .sh and to where its GitHub issues are.
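The missing version gate could be a few lines. The minimum version below is made up for illustration, since I don't know what the script actually requires:

```python
# Hypothetical sketch of the version check the install script is missing.
# MIN_DOCKER is an assumed value, not the script's real requirement.
import re
import subprocess

MIN_DOCKER = (28, 0)

def parse_version(s: str) -> tuple[int, int]:
    """Pull 'major.minor' out of e.g. 'Docker version 27.3.1, build abc123'."""
    m = re.search(r"(\d+)\.(\d+)", s)
    if not m:
        raise ValueError(f"cannot parse docker version from: {s!r}")
    return (int(m.group(1)), int(m.group(2)))

def docker_ok(min_version: tuple[int, int] = MIN_DOCKER) -> bool:
    """Run `docker --version` and compare against the required minimum."""
    out = subprocess.run(["docker", "--version"],
                         capture_output=True, text=True).stdout
    return parse_version(out) >= min_version
```

Failing fast with a clear message beats dying later on an unknown flag.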
In other words, Claude is the compiler.
I've been thinking about how docker support would work, so I'll check this out!
This has been my setup since early this year, not even that much code: https://github.com/hofstadter-io/hof/tree/_next/lib/agent/se...
The bigger effort is making it play nice with vscode so you can browse and edit the files and diffs.
I install it and then what?