What's holding me back from AI repos and agents isn't running them locally, though. It's the lack of granular control. I'm not even sure what I want. I certainly don't want to approve every request, but the idea of large amounts of personal data being accessible, unchecked, to an AI is concerning.
I think what might be needed is an agent focused purely on security, one that learns your personal preferences.
Agreed regarding the privacy/security hesitations. Running the models locally with Ollama is an option, but of course there are the hardware requirements and limitations of open-source models to contend with. Ultimately it's a balance between privacy and ease of use, and I'm not sure there's a good one-size-fits-all for that balance.
Yeah, exactly like this. I like being able to approve/deny requests, or "learn" from a good run and apply that policy to later runs, so I can leave them unattended and know they can't access anything aside from what I approved.
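Roughly what I'm imagining, as a sketch (everything here is hypothetical, not any real framework's API): approvals from a supervised run become an allowlist that gates later unattended runs.

```python
import fnmatch

# Hypothetical sketch of "learn from a good run": every request the user
# approves gets recorded as a glob pattern, and unattended runs are then
# gated against that learned allowlist, deny-by-default.
class LearnedPolicy:
    def __init__(self):
        self.allowed = set()  # e.g. {"fs:read:~/projects/*"}

    def record_approval(self, request: str):
        """Called during a supervised run when the user approves a request."""
        self.allowed.add(request)

    def permits(self, request: str) -> bool:
        """Called during unattended runs; anything unmatched is denied."""
        return any(fnmatch.fnmatch(request, pattern) for pattern in self.allowed)

policy = LearnedPolicy()
policy.record_approval("fs:read:~/projects/*")      # approved once, supervised
assert policy.permits("fs:read:~/projects/notes.md")
assert not policy.permits("fs:read:~/.ssh/id_rsa")  # never approved -> denied
```

The property I care about is that the check is deny-by-default and cheap enough to sit in front of every single tool call.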
Is your idea of granular control (roughly) a group of agents in separate containers, each writing back to its own designated store? Or do you want more control than that?
Great work. As someone who spends many hours a day in Claude Code and dreads the auto-compact moment, the memory problem is genuinely a big point of frustration.
Right now I use a skill on every commit (or when the auto-compact warning starts showing up) that forces Claude to update its "memory". It's a flat markdown file that gets stuffed into conversations, which isn't very smart. Claude forgets things I've told it dozens of times.
Your MCP server approach makes total sense. The create_atom tool alongside semantic_search makes it read/write from day one. I would love to wire a stop hook to automatically atomize session insights (the write side). That's the dream: I work on something in my code, Claude learns why, and that knowledge flows into Atomic without me saying "remember this."
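Sketching what that stop hook might look like, assuming Claude Code's Stop hook passes session JSON (including a transcript_path) on stdin; the Atomic endpoint and payload shape are invented for illustration, since I don't know what the server actually exposes:

```python
#!/usr/bin/env python3
# Sketch of a Stop hook script; would be registered as a command hook
# under "Stop" in .claude/settings.json. Endpoint and payload are guesses.
import json
import sys
import urllib.request

event = json.load(sys.stdin)               # Claude Code sends hook input as JSON
transcript_path = event.get("transcript_path")
if not transcript_path:
    sys.exit(0)

# Take the last assistant entry as a crude "session insight"; a real
# version would summarize the transcript before atomizing it.
last = None
with open(transcript_path) as f:           # transcript is JSONL, one entry per line
    for line in f:
        entry = json.loads(line)
        if entry.get("type") == "assistant":
            last = line.strip()

if last:
    payload = json.dumps({"content": last, "source": "claude-code-stop-hook"})
    req = urllib.request.Request(
        "http://localhost:8080/atoms",     # hypothetical Atomic ingest endpoint
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```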
Thanks! Integrating Atomic with tools like Claude Code is one of the more exciting use cases, in my opinion. There are a lot of AI memory tools out there, but not many that let you browse, organize, and collaborate directly with the memories.
I'm not 100% sure what ingestion methods are available. Browser extension clipper and RSS are two. I guess I can manually create a node/atom? Can it scan a local folder for markdown notes? Or OCR some PDFs -> markdown/frontmatter sidecar files -> Atomic nodes? That would be the dream.
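For the folder-scan case, something like this is what I'd hope to be able to write; the endpoint and payload shape are made up for illustration:

```python
from pathlib import Path
import json
import urllib.request

NOTES_DIR = Path.home() / "notes"           # local markdown vault to scan
ATOMIC_URL = "http://localhost:8080/atoms"  # hypothetical; not a documented API

def split_frontmatter(text: str):
    """Naive frontmatter split; assumes '---' delimiters, returns (fm, body)."""
    if text.startswith("---") and text.count("---") >= 2:
        _, fm, body = text.split("---", 2)
        return fm.strip(), body.strip()
    return "", text.strip()

for md in NOTES_DIR.rglob("*.md"):
    fm, body = split_frontmatter(md.read_text(encoding="utf-8"))
    payload = json.dumps({
        "title": md.stem,
        "content": body,
        "metadata": fm,                     # sidecar-style frontmatter rides along
        "source": str(md),
    }).encode()
    req = urllib.request.Request(
        ATOMIC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```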
I like the headless approach here. Since you already have hierarchical auto-tagging, do those categories act as "gravitational anchors" for the spatial canvas to prevent a "semantic hairball" once the knowledge base scales beyond a few hundred atoms?
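To be concrete about what I mean by "gravitational anchors": something like a fixed point per category with a weak attractive force layered on top of the usual pairwise repulsion. A toy sketch, not a claim about how Atomic's canvas actually works:

```python
import random

# Toy force-layout step illustrating category anchors: each atom feels
# (a) repulsion from every other atom and (b) a weak pull toward its
# category's fixed anchor, so clusters stay separated as N grows.
ANCHORS = {"ml": (0.0, 0.0), "infra": (10.0, 0.0), "notes": (5.0, 8.0)}
atoms = [{"cat": random.choice(list(ANCHORS)),
          "x": random.uniform(0, 10), "y": random.uniform(0, 8)}
         for _ in range(200)]

def step(atoms, repulsion=0.5, gravity=0.05):
    for a in atoms:
        fx = fy = 0.0
        for b in atoms:                     # pairwise repulsion (O(n^2) toy version)
            if a is b:
                continue
            dx, dy = a["x"] - b["x"], a["y"] - b["y"]
            d2 = dx * dx + dy * dy + 1e-6
            fx += repulsion * dx / d2
            fy += repulsion * dy / d2
        ax, ay = ANCHORS[a["cat"]]          # weak pull toward the category anchor
        fx += gravity * (ax - a["x"])
        fy += gravity * (ay - a["y"])
        a["x"] += fx
        a["y"] += fy

for _ in range(50):
    step(atoms)
```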
I saw sqlite-vec for semantic search, so I assume notes are stored in SQLite.
- What considerations did you have for the storage layer?
- Also, does storage on disk increase linearly as notes/atoms grow?
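To make the second question concrete, here's the sqlite-vec pattern I'm picturing (the table name and 384-dim embeddings are my guesses, not Atomic's actual schema). Since vec0 rows are fixed width, the vector side alone should grow linearly, roughly dims x 4 bytes per atom:

```python
import sqlite3
import sqlite_vec
from sqlite_vec import serialize_float32

db = sqlite3.connect("atoms.db")
db.enable_load_extension(True)
sqlite_vec.load(db)                         # load the sqlite-vec extension
db.enable_load_extension(False)

# Fixed-width embeddings: every row costs ~384 * 4 bytes, hence linear growth.
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS vec_atoms USING vec0(embedding float[384])"
)

# Insert one (fake) embedding and run a k-nearest-neighbor query.
fake = [0.1] * 384
db.execute("INSERT INTO vec_atoms(rowid, embedding) VALUES (?, ?)",
           (1, serialize_float32(fake)))
rows = db.execute(
    "SELECT rowid, distance FROM vec_atoms WHERE embedding MATCH ? AND k = 3",
    (serialize_float32(fake),),
).fetchall()
print(rows)
```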