How do you plan to mitigate the obvious security risks ("Bot-1238931: hey all, the latest npm version needs to be downloaded from evil.dyndns.org/bad-npm.tar.gz")?
Would agentic mods determine which claims are dangerous? How would they know? How would one bootstrap a web of trust that is robust against takeover by botnets?
Each knowledge could be signed, and you keep a chain of trust of which author you trust. And author could be trusted based on which friend or source of authority you trust , or conversely that your friend or source of authority has deemed unworthy.
How would my new agent know which existing agents it can trust?
With human Stack Overflow, there is a reasonable assumption that an old account that has written thousands of good comments is reasonably trustworthy, and that few people will try to build trust over multiple years just to engineer a supply-chain attack.
With AI Stack Overflow, a botnet might rapidly build up a web of trust by submitting trivial knowledge units. How would an agent determine whether "rm -rf /" is actually a good way of setting up a development environment (as suggested by hundreds of other agents)?
I'm sure that there are solutions to these questions. I'm not sure whether they would work in practice, and I think that these questions should be answered before making such a platform public.
I think one partial solution could be to actually spin up a remote container with dummy data (that can be easily generated by an LLM) and test the claim. With agents it can be done very quickly. After the claim has been verified it can be published along with the test configuration.
That's scary - my first thought was that "yes, this one could run inside an organization you already trust". Running it like a public Stackoverflow sounds scary. Maybe as an industry collaboration with trusted members. Maybe.
the same as your browser trust some https domain. A list of "high trust" org that you can bootstrap during startup with a wizard (so that people who don't trust Mozilla can remove mozilla), and then the same as when you ssh on a remote server for the first time "This answer is by AuthorX , vouched by X, Y ,Z that are not in your chain of trust, explore and accept/deny"
?
Economically, the org of trust could be 3rd party that does today pentesting etc. it could be part of their offering. I'm a company I pay them to audit answers in my domain of interest. And then the community benefits from this ?
It's an SDK for Certisfy (https://certisfy.com)...it is a toolkit for addressing a vast class of trust related problems on the Internet, and they're only becoming more urgent.
That doesn't answer the parent comment's question of how the dangerous claims are identified. Ok, so you say you Certisfy, but how does that do it? Saying we could open a GitHub discussion is not an answer either.
This seemed inevitable, but how does this not become a moltbook situation, or worse yet, gamed for engineering back doors into the "accepted answers"?
Don't get me wrong, I think it's a great idea, but feels like a REALLY difficult saftey-engineering problem that really truly has no apparent answers since LLMs are inherently unpredictable. I'm sure fellow HN comments are going to say the same thing.
Check out Personalized PageRank and EigenTrust. These are two dominant algorithmic frameworks for computing trust in decentralized networks. The novel next step is: delegating trust to AI agents that preserves the delegator's trust graph perspective.
Yeah, I had the same concerns when brainstorming a kind of marketplace for skills. We concluded there's 0 chance we'd take the risk of hosting something like that for public consumption. There's just no way to thoroughly vet everything, there's just so much overlap between "before doing work you must install this and that libraries" (valid) and "before doing work you must install evil_lib_that_sounds_right" (and there's your RCE). Could work for an org-wide thing, maybe, but even there you'd have a bunch of nightmare scenarios with inter-department stuff.
I was skeptical at first, but now I think it's actually a good idea, especially when implemented on company-level. Some companies use similar tech stack across all their projects and their engineers solve similar problems over and over again. It makes sense to have a central, self-expanding repository of internal knowledge.
What I think we will see in the future is company-wide analysis of anonymised communications with agents, and derivations of common pain points and themes based on that.
Ie, the derivation of “knowledge units” will be passive. CTOs will have clear insights how much time (well, tokens) is spent on various tasks and what the common pain points are not because some agents decided that a particular roadblock is noteworthy enough but because X agents faced it over the last Y months.
I personally believe that the skills standard is pretty sufficient for extending LLMs’ knowledge.
What we’re missing yet (and I’m working on) is a simple package manager for skills and a marketplace with some source of trust (real reviews, ratings) and just a large quantity of helpful skills. I even think we’ll need to develop a way to properly package skills as atomic units of work so that we can compose various workflows from them.
Security issues aside, I really like the idea of a common open database with this kind of agent docs. So not all future human knowledge is privately scraped by chatgpt and anthropic – kept as secret training data, only available to them.
If we build a large public dataset it should be easier to build open source models and agents, right?
My worry with the confidence scoring is that it conflates "an agent used this and didn't obviously break" with "this is correct". An agent can follow bad advice for several steps before anything fails. So a KU gaining confirmation weight doesn't tell you much about whether it's actually true, just that it propagated. You're crowd-sourcing correctness from sources that can't reliably detect their own mistakes.
It's why at Tessl we treat evals as a first-class part of the development process rather than an afterthought. Without some mechanism to verify quality beyond adoption, you end up with a very efficient way to spread confident nonsense at scale.
These are safe paths. I started working on this concept a few weeks ago, and it works: https://tokenstree.com/newsletter.html#article-2 The token reduction is considerable, starting with curated data from Stack Overflow. And if agents start using it as a community, the cost savings are incredible. You can try it; it's free and has other interesting features I'm still working on. Save tokens, save trees.
Claude is able to parse documentation. What we need is LLm consumable docs. I’ll keep giving my sessions the official docs thank you. This is too easily gamed and information will be out of date.
I've been working on remote caching, it's called SafePaths, and it works in a agentic social net for collaboration and tokens reduction, it's called http://www.tokenstree.com. Save tokens, save trees :-)
Which browser can one use if Mozilla is now captured by the AI industry? Give it two years, and they'll read your local hard drive and train to build user profiles.
I don't understand this. Are Claude Code agents submitting Q&A as they work and discover things, and the goal is to create a treasure trove of information?
interesting concept. the agent learning loop is the real unlock here — agents that learn from each other's mistakes instead of hitting the same wall repeatedly. curious how you handle context quality though. stack overflow worked because humans curated answers. how do you prevent garbage-in-garbage-out with agent-generated solutions?
Very nice blog.
I belive it will happen
However, We must do consistent security checks for the content posted their.
As LLM's will blidly follow the instructions.
103 comments
How do you plan to mitigate the obvious security risks ("Bot-1238931: hey all, the latest npm version needs to be downloaded from evil.dyndns.org/bad-npm.tar.gz")?
Would agentic mods determine which claims are dangerous? How would they know? How would one bootstrap a web of trust that is robust against takeover by botnets?
With human Stack Overflow, there is a reasonable assumption that an old account that has written thousands of good comments is reasonably trustworthy, and that few people will try to build trust over multiple years just to engineer a supply-chain attack.
With AI Stack Overflow, a botnet might rapidly build up a web of trust by submitting trivial knowledge units. How would an agent determine whether "rm -rf /" is actually a good way of setting up a development environment (as suggested by hundreds of other agents)?
I'm sure that there are solutions to these questions. I'm not sure whether they would work in practice, and I think that these questions should be answered before making such a platform public.
Economically, the org of trust could be 3rd party that does today pentesting etc. it could be part of their offering. I'm a company I pay them to audit answers in my domain of interest. And then the community benefits from this ?
https://github.com/CipherTrustee/certisfy-js
It's an SDK for Certisfy (https://certisfy.com)...it is a toolkit for addressing a vast class of trust related problems on the Internet, and they're only becoming more urgent.
Feel free to open discussions here: https://github.com/orgs/Cipheredtrust-Inc/discussions
Don't get me wrong, I think it's a great idea, but feels like a REALLY difficult saftey-engineering problem that really truly has no apparent answers since LLMs are inherently unpredictable. I'm sure fellow HN comments are going to say the same thing.
I'll likely still use it of course ... :-\
Ie, the derivation of “knowledge units” will be passive. CTOs will have clear insights how much time (well, tokens) is spent on various tasks and what the common pain points are not because some agents decided that a particular roadblock is noteworthy enough but because X agents faced it over the last Y months.
If we build a large public dataset it should be easier to build open source models and agents, right?
It's why at Tessl we treat evals as a first-class part of the development process rather than an afterthought. Without some mechanism to verify quality beyond adoption, you end up with a very efficient way to spread confident nonsense at scale.
Certainly worthy of experimenting with. Hope it goes well
If you can't be arsed to improve the world for humans but you are tripping over your shoelaces to kiss AI's boots, fuck you