Claude Code users hitting usage limits 'way faster than expected' (bbc.com)

by steveharing1 18 comments 22 points


[−] edimuj 41d ago
I got curious where my tokens were actually going, so I wrote a quick audit tool that reads the Claude Code session logs.

Turns out the biggest sink isn't your prompts. It's the agent's own tool calls. In my case, grep alone ate 3.5M tokens across ~350 sessions. 1800+ calls, most of them dumping raw output that the agent barely used. Full file reads for one function signature. Complete test output when only failures matter.
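
The audit boils down to summing output size per tool across session logs. Here is a minimal sketch; the JSONL schema (events with `type`, `tool`, and `content` fields) is a hypothetical stand-in, not Claude Code's actual log format, so adjust the field names to whatever your logs contain:

```python
import json
from collections import Counter
from pathlib import Path

def audit_tool_tokens(log_dir: str) -> Counter:
    """Sum estimated output tokens per tool across JSONL session logs.

    Hypothetical schema: one JSON object per line, with "type",
    "tool", and "content" keys. Real logs will differ.
    """
    totals = Counter()
    for log_file in Path(log_dir).glob("*.jsonl"):
        for line in log_file.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            if event.get("type") == "tool_result":
                # Crude token estimate: ~4 characters per token.
                tokens = len(event.get("content", "")) // 4
                totals[event.get("tool", "unknown")] += tokens
    return totals
```

Sorting `totals.most_common()` shows which tools dominate your token spend.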

So I built wrappers: function signatures without bodies (~90% smaller), test output with just failures, that kind of thing. 2.3M tokens saved so far.
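
For Python files, a signature-only wrapper can be sketched with the standard `ast` module. This is an illustrative approximation of the idea, not tokenlean's actual implementation:

```python
import ast

def signatures_only(source: str) -> str:
    """Return the source with every function body replaced by `...`.

    Keeps the API surface (names, arguments, class structure) the
    model usually needs, while dropping the bulk of the file.
    Note: docstrings are stripped along with the bodies.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Replace the whole body with a bare `...` expression.
            node.body = [ast.Expr(ast.Constant(Ellipsis))]
    return ast.unparse(tree)
```

The same body-elision trick generalizes to other wrappers: run the real tool, then filter its output (e.g. keep only failing tests) before it reaches the agent's context.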

You can audit your own sessions without installing anything:

  npx tokenlean audit --all --savings
https://github.com/edimuj/tokenlean
[−] GuestFAUniverse 44d ago
For a start, they could make the answers less talkative?

I switched back to ChatGPT out of necessity: Claude stopped working after two queries because it gave overly elaborate answers (about a simple web app config).

But Claude isn't alone. It seems to be a recent (subjective) trend that Claude and ChatGPT give very lengthy answers on the free plans, with a lot of repetition of the original query.

I got used to adding "answer briefly" to keep the noise in check.

[−] mentalgear 43d ago

> Anthropic recently accidentally released part of its internal source code for Claude Code due to "human error".

I wonder who that human was, in the lead-up to this "human error" ...

[−] akmarinov 44d ago
Yeah, the whole OpenAI exodus brought in a ton of people, and Anthropic was already struggling to meet the previous usage

That’s why there are now work-hours restrictions

[−] gregoriol 44d ago
Is that really on the BBC? What a world we live in...
[−] SamuelBraude 43d ago
There are ways to reduce token usage if you use Claude correctly :)
[−] general_reveal 44d ago
[flagged]