"why not just put markdown files in each folder?" I had those files. I also had the answer.
I had lots of .md files, custom skills for consistency checks, a test-audit skill for coverage gaps, a code-truth skill that worked backwards from code to documentation. All untethered from the actual code. Nothing connected a design decision in auth-flow.md to the function implementing it. Nothing told me when a renamed method made the docs fiction.
So I became the binding. Personally running consistency checks. Running reconciliation. Constantly worrying about drift. It didn't scale — it scaled me.
lat.md replaced that work with three things: docs link into source code and source code comments link back to docs — the connection is explicit, not hopeful. lat check enforces referential integrity automatically — the tool worries about consistency so I don't have to. And the knowledge compounds — every session the agent consults the graph and can update it. I've taken to ending conversations with "review this and update [x]" where "x" is some document that I'm working on in the thread. lat.md gets updated for free.
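Concretely, a doc section points into the code with a [[..]] link and the code points back with a comment. A minimal sketch, with hypothetical names, and assuming the // @lat: backlink from the test workflow also works in regular source:

```markdown
<!-- lat.md/auth-flow.md — hypothetical file -->
## Token refresh
Refresh tokens rotate on every use; the rotation lives in
[[src/auth/refresh.ts#rotateToken]].
```

```ts
// src/auth/refresh.ts — hypothetical file
// @lat: Token refresh   <-- backlink; lat check ties this function to the doc
export function rotateToken(oldToken: string): string {
  // placeholder body; the point is the backlink comment above
  return oldToken;
}
```

Rename the function without updating both ends and the check fails, which is exactly the drift I used to catch by hand.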
I haven't run rigorous tests. It isn't perfect. There's no planning engine, but I get by fine with various *plan.md files. But I'm rolling it out across all my repos because it replaced manual work I was already doing, more reliably than I was doing it.
Not "why do I need a tool for documentation?" but "why was I personally doing the job of a linter?"
I've been doing similar work since Claude Code updated their "slash commands" (later merged into skills): first 3-4 long content docs, gradually split into modular groups. I designed it to load docs based on what the agent is actually doing. The maintenance part is honestly not that hard for me; I created some CI jobs that diff the docs against the codebase and flag drift, which handles most of it.
The pattern works.
But I keep catching myself spending more time on how to organize context than on what the agent is actually supposed to accomplish.
Feels like the whole space has that problem right now.
Creator of lat.md here. There are two videos with me talking about lat in more detail [1] and less detail [2]. But I'm also working on a blog post exploring lat and its potential, stay tuned.

AMA :)

[1] https://x.com/mitsuhiko/status/2037649308086902989?s=20

[2] https://www.youtube.com/watch?v=gIOtYnI-8_c
I found that having smaller structured markdowns in each folder, explaining the space and the classes within, keeps Claude and Codex grounded even in a 10M+ LOC C/C++ codebase.
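For example, a per-folder file can be tiny and still do the grounding (illustrative content and file name, in whatever convention your agents read):

```markdown
# codec/
Owns encode/decode for the wire format. Entry point: Codec::encode().
FrameParser is stateful; never share one instance across threads.
Invariant: every encode() change needs a matching decode() round-trip test.
```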
It's a start, but it wouldn't solve my use cases. I developed my own skill for this, called morning-routine, that does a recursive ls -R on all the Claude markdowns. It runs in multiple stages so I don't waste context I don't need to.
Good catch. Makes me wonder if we could feed the agent a repository of known vulnerabilities and security best practices to check against and get rid of most deps. Just asking _out loud_, so to speak.
This is one of the things that GitHub Spec Kit solves for me. The specify.plan step launches code-exploration agents and builds itself the latest data model, migrations, etc. Really reduces the need to document stuff when the agent self-discovers what the codebase needs.
Give Claude sqlite/supabase MCP, GitHub CLI, Linear CLI, Chrome or launch.json and it can really autonomously solve this.
Can you offer any insights into how this compares to building an AST index or RAG over your codebase? Several projects do that, and it auto-updates on changes too. The agent does a wide sweep using AST/RAG search followed by a drill-down using an LSP. This sped up my search phase by 50%. How will this project help me?
I think it's a great idea and I'm considering building this into lat too. Code embedding models can definitely speed up grepping further, but they still wouldn't help much when you have a business-logic detail encoded across multiple complex files. With lat you'd have it documented in a paragraph of text plus a few [[..]] links into your code.
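For example, something like this, where the paragraph carries the cross-file logic and the links anchor it (names are made up, and the exact wikilink target syntax may differ):

```markdown
## Refund approval
Refunds over $500 need two approvals: the request is recorded by
[[src/billing/refunds.ts#requestRefund]], the second sign-off is
enforced in [[src/workflows/approvals.ts#ApprovalQueue]], and only
then does [[src/billing/ledger.ts#postCredit]] run.
```

No embedding search will hand you that "two approvals" rule, because it doesn't live in any single file.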
Intriguing concept, especially because I've thought of dabbling in this area.
Can this tool solve these problems with models:
- insisting on using sed and shell redirection instead of File Edit Tools?
- trying to use npx to run commands instead of npm run that's right there in package.json
- forgetting to produce the docs that were asked for in AGENTS.md
- checking and using latest package versions instead of deciding somehow that years old versions are good enough
To my mind these are context problems, where the model somehow chooses other information it has over what's in the repository and the tools it has on call.
The other side of lat.md, checking the diffs models make among other changes, is hard for me to grasp. I'd need to see it in action. Perhaps a coding session as a stream?
We've been doing this with simple mkdocs for ages. My experience is that rendering the markdown to feel like public docs is important for getting humans to review and take it seriously. Otherwise it goes stale as soon as one dev on the project doesn't care.
I definitely agree with the need for this. There's just too much to put into the agents file to keep from killing your context window right off the bat. Knowledge compression is going to be key.
I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it.
It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update that.
I suspect there's ways to shrink that context even more.
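Something like this as the entire AGENTS.md, maybe (wording is illustrative; lat check and the [[..]] links are the only pieces confirmed upthread):

```markdown
Project knowledge lives in lat.md/. Before coding, read the sections
relevant to your task and follow their [[..]] links into the source.
After coding, update those sections and run `lat check` until it passes.
```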
> I suspect there's ways to shrink that context even more.
Yeah, I'm experimenting with some ideas on that, like adding a lat agent command to act as a subagent that searches through lat and summarizes related knowledge without polluting the parent agent's context. And lat init would install hooks to double-check that Claude/Codex/OpenCode update lat.md when they finish the work.
The staleness problem mentioned here is real. For agentic systems, a markdown-based DAG of your codebase is more practical than a traditional graph because agents work within context windows. You can selectively load relevant parts without needing a complex query engine. The key is making updates low-friction -- maybe a pre-commit hook or CI job that refreshes stale nodes.
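For example, a minimal pre-commit script along these lines (hypothetical, not part of lat; it blocks the commit rather than refreshing nodes, which would be the fancier version):

```ts
// scripts/precommit-graph.ts — hypothetical hook script
// Blocks the commit when `lat check` reports broken links or code-spec drift.
import { execSync } from "node:child_process";

try {
  execSync("lat check", { stdio: "inherit" });
} catch {
  console.error("Knowledge graph is stale; fix the flagged nodes and retry.");
  process.exit(1);
}
```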
So the graph is human-maintained, agents consume it, and lat check is supposed to catch broken links and code-spec drift. How do you manage this in a multi-agent setup? Is it still a manual merge-and-fix-conflicts situation? That's where I keep seeing the biggest issues with multi-agent setups.
Curious if you've seen a difference between agents finding things through the graph structure vs. just vector search over a docs directory. The section constraints and check validation seem like the real quality wins here; wondering how much the wiki-link topology adds on top of that.
I was thinking the same. Especially now that Obsidian has a CLI to work with the vault.
The one thing I saw in the README is that lat has a format for source files to link back to the lat.md markdown, but I don't see why you couldn't just define an "// obs:wikilink" sort of format in your AGENTS.md
Because lat gives agents more tools and enforces the workflow.
Unlike Obsidian, lat allows markdown files to link into functions/structs/classes/etc. too.
This saves agents time on grepping but also allows you to build better workflows with tests.
Test cases can be described as sections in lat.md/ and marked with require-code-mention: true. Each spec then must be referenced by a // @lat: comment in test code. lat check flags any spec without a backlink, so you can review and maintain test coverage from the knowledge graph.
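Concretely, it might look like this (file names are hypothetical, and putting the flag in front matter is my guess at the layout):

```markdown
<!-- lat.md/specs/expired-tokens.md — hypothetical file -->
---
require-code-mention: true
---
## Expired tokens are rejected
An expired JWT must get a 401 and must not touch the session store.
```

```ts
// tests/auth.test.ts — hypothetical test file
import { test, expect } from "vitest";
import { verifyToken } from "../src/auth/verify"; // assumed module

// @lat: Expired tokens are rejected
// `lat check` sees this backlink and counts the spec above as covered;
// a spec with no such comment anywhere gets flagged.
test("expired token is rejected", () => {
  expect(() => verifyToken("expired.jwt.value")).toThrow();
});
```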
This is interesting. I've been using a system-wide Obsidian vault that all my agents use for stuff that's platform-specific instead of project specific (think common Android/Samsung-related or ANR fixes). So far, it hasn't been mind-blowing but it's only been a month so its knowledge from there is also limited.
lat seems like it could be useful to cross-reference company-wide projects.
I have a vitepress package in most of my repos. It is a knowledge graph that also just happens to produce neat-looking docs for humans when served over HTTP. Agents are very happy to read the raw .md.
Managing agents.md is important, especially at scale. However, I wonder how much of a measurable difference something like this makes. In theory it's cool, but can you show me that it actually performs better compared to a large agents.md, nested agents.md files, or skills?
The more general point being that we need to be methodical about the way we manage agent context. If lat.md shows a 10% broad improvement in agent perf in my repo, then I would certainly push for adoption. Until then, vibes aren't enough.
Keep going, prompt engineering is fascinating.
> "chalk": "^5.6.2",
security.md is missing, apparently.
https://github.com/1st1/lat.md/commit/da819ddc9bf8f1a44f67f0...
https://news.ycombinator.com/item?id=47543324
What's the point of markdown? There's nothing useful you can do with it other than handing it over to an LLM and getting some probabilistic response.
So give your agent a whole Obsidian vault.
I am skeptical how that helps. Agents can't just grep one big file if reading the entire file is the problem.