nit's defaults go beyond what --short does. The token savings come from stripping headers, padding, instructional text, etc. Headers and decorative text end up tokenizing poorly, so it helps quite a bit there.
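To make the idea concrete, here's a minimal sketch (not nit's actual logic, just an illustration of the kind of stripping being described): drop blank padding and the `(use "git ..."` hint lines from plain `git status` output.

```python
# Hypothetical sketch of stripping decorative/instructional lines
# from `git status` output. Not nit's real implementation.
RAW = """\
On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
\tmodified:   src/main.zig

no changes added to commit (use "git add" and/or "git commit -a")
"""

def strip_decoration(text: str) -> str:
    keep = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                          # blank padding
        if stripped.startswith('(use "git'):
            continue                          # instructional hint lines
        if stripped.startswith("no changes added"):
            continue                          # trailing instructional text
        keep.append(stripped)
    return "\n".join(keep)

print(strip_decoration(RAW))
```

The surviving lines are the ones that actually carry state (branch, modified files); the hints are pure token overhead for an agent that already knows git's CLI.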
You should expect to be gently ridiculed if you submit this as a serious project. Do not use LLMs to claim knowledge or abilities you don't actually have.
It's not impressive, and it's obvious to anyone with more than a year or two of actual programming experience.
- The claim of "71% token savings" is misleading. You've already told us that git output is 7% of the "bash commands", so your token savings is really ~5% of bash command output. Also, the "200K tokens saved" refers to the total saved across more than 3000 agent sessions. So it's really just ~60 tokens saved per session. That's a fraction of a fraction of a penny. Did you check your math here? Or just trust the model?
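The arithmetic, using the figures from the post (the per-token price below is a hypothetical round number, not a quote for any particular model):

```python
# Back-of-envelope check of the claimed savings.
total_saved = 200_000          # "200K tokens saved" (from the post)
sessions = 3_000               # "more than 3000 agent sessions"
per_session = total_saved / sessions
print(f"~{per_session:.0f} tokens saved per session")

# At a hypothetical $3 per million input tokens:
cost_per_session = per_session / 1_000_000 * 3
print(f"~${cost_per_session:.6f} saved per session")
```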
- You have not rebuilt git, you have replaced a small part of its porcelain CLI. Most of what makes git git is in the library code.
- Zig is a high-performance systems programming language that requires the programmer to manage memory manually. You've used it here to handle command line options (and yes, to call into libgit). That is overkill, and that matters, because:
- The performance difference (e.g. 8ms vs 16ms) might be real, but it still doesn't matter at all. For one, there's no way you yourself could tell the difference. The slower time is still a single frame of 60fps. For another, this is not a hot path. These git commands - however fast - are swallowed up by the time it takes the agent to do inference. There's no question that inference itself will take orders of magnitude more time than a 16ms shell command.
For commands it hasn't custom-implemented, it defaults to being a wrapper around git, and it's recommended that you alias nit as git so the agent can work the way it normally would, just faster and cheaper.
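The alias setup amounts to something like this (assuming a `nit` binary on PATH; check the project's README for its recommended wiring):

```shell
# Alias git to nit for the current shell, so any `git ...` command
# the agent issues is handled by nit first (which falls back to real
# git for anything it hasn't reimplemented).
alias git=nit

# Confirm the alias is in place:
alias git
```

To make it stick for agent sessions, the line would go in whatever shell profile the agent's sandbox sources (e.g. `~/.bashrc`).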
The 71% reduction is interesting but I'd want to see where those tokens are actually going in a typical agent session. In my experience running multi-step coding agents, the git output itself is rarely the bottleneck...
With `git status --short` or `git log --oneline`, I see output similar to your tool's.
Also, aren’t LLMs RLHF'd heavily on using tools like git, and as such have an easier time interacting with it than with custom tools?
And does 8ms really matter when you're shunting bulk crap back and forth from the cloud?