I spent $514 on Claude Code in 30 days. Here's what I learned.
Real data from 50 sessions. The good, the bad, and the expensive.
## The setup
For the past month, I've been tracking every Claude Code session with claudestat — an open-source tool I built to monitor token usage, costs, and patterns.
This isn't a hypothetical study. These are my real numbers from real work on real projects.
## The numbers
30-day period: April 12, 2026 → May 10, 2026
| Metric | Value |
|---|---|
| Total spent | $514.86 |
| Sessions | 50 |
| Total tokens | 3,803,319 |
| Loops detected | 375 |
| Avg efficiency | 69/100 |
That's about $17/day on average ($514.86 / 30 days). For context: I'm on Max 5, so this isn't cheap.
## The most expensive session
April 26, 2026 — I let Claude run on a refactoring project for hours without checking:
- Cost: $32.94
- Loops detected: 31
- Efficiency score: 35/100
- What happened: Claude went in circles editing files, each edit triggering more edits. I wasn't watching.
That's $33 in one session, more than a month of my daily coffee budget.
Key lesson: loops are expensive. Those 31 loops of repeated tool calls account for most of that $33 down the drain.
## The project breakdown
Who consumed the most?
| Project | Spent |
|---|---|
| claudetrace (side project) | $326.44 |
| wodrival | $61.32 |
| claudestat | $60.35 |
| conductor | $39.40 |
| Other | $27.85 |
63% of my spending went to one side project. It's not my main job — it's my hobby project that's burning my quota.
This is the kind of insight you only get from tracking.
## The loop problem
375 loops detected across 50 sessions.
That's ~7.5 loops per session on average. But it's skewed:
- 12 sessions had more than 10 loops
- 9 sessions scored below 50/100 on efficiency
Most loops look like:
```
Read → Edit → Edit → Edit → Read → Edit → Edit → Edit
```
Claude tries something, it doesn't work, it tries something slightly different, and repeats. This is where most of my waste came from.
## What I learned
### 1. Loops are the silent killer
I wasn't aware of loops until I saw the data. Now I watch for:
- Same tool called 3+ times in a row
- Context creeping up while output stays flat
- "Let me try..." messages from Claude
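The first heuristic (same tool called 3+ times in a row) is easy to check mechanically. Here's a minimal sketch, assuming you can get a session's tool calls as one name per line; the sample data below is made up, and claudestat may or may not expose the calls in this shape:

```shell
# Sample tool-call sequence (made-up data, one tool name per line).
printf '%s\n' Read Edit Edit Edit Read > tools.txt
# uniq -c collapses consecutive duplicate lines and counts them;
# awk flags any run of 3 or more identical calls.
uniq -c tools.txt | awk '$1 >= 3 {print "possible loop: " $2 " x" $1}'
```

On the sample data this prints `possible loop: Edit x3`.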
### 2. One project dominates
63% on claudetrace — I would have guessed 40/60 at best. Tracking revealed the truth.
### 3. Efficiency varies wildly
- Best session: 100/100 (perfect)
- Worst session: 35/100 (that $32.94 day)
Understanding this helps me spot when I'm about to have a bad session.
### 4. Bash is my most expensive tool
Running `claudestat top` shows Bash at 35-45% of total cost across all sessions. Batch your commands.
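What "batching" means in practice: ask Claude to run related commands in a single Bash call instead of one call per command. A toy sketch (the commands here are placeholder `echo`s, not anything claudestat-specific):

```shell
# One Bash tool call instead of three. Chaining with && stops at the
# first failure, so a broken build doesn't waste tokens on lint and
# test output that would be meaningless anyway.
echo "build: ok" && echo "lint: ok" && echo "test: ok"
```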
## How to track your own data
```shell
npm install -g @statforge/claudestat
claudestat install
claudestat start
# ... use Claude Code normally ...
claudestat export json > my-data.json
```
Then analyze:
- Total spent: `claudestat export | jq '[.[].total_cost_usd] | add'`
- Top projects: `claudestat export | jq 'group_by(.project_path) | map({project: .[0].project_path, cost: map(.total_cost_usd) | add})'`
- Problem sessions: look for `efficiency < 50` or `loops > 10`
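The problem-session filter can be expressed in jq too. A sketch with made-up sample data; the `efficiency` and `loops` field names are my assumption about the export schema, not confirmed:

```shell
# Made-up sample export; real field names may differ.
cat > sessions.json <<'EOF'
[
  {"id": 1, "efficiency": 100, "loops": 2},
  {"id": 2, "efficiency": 35,  "loops": 31},
  {"id": 3, "efficiency": 72,  "loops": 12}
]
EOF
# Flag sessions with efficiency below 50 or more than 10 loops.
jq '[.[] | select(.efficiency < 50 or .loops > 10) | .id]' sessions.json
```

On the sample data this returns `[2, 3]`: the low-efficiency session and the loop-heavy one.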
## What's next
Now that I have this data:
- I set up alerts at 70%, 85%, 95% quota → no more surprises
- I use the kill switch at 95% → prevents runaway sessions
- I check top tools weekly → batch those Bash commands
- I watch for loops → stop them early
The black box is open. You have the data.
## Try it
```shell
npm install -g @statforge/claudestat
```
Questions? Drop them in the comments.
Data from my actual sessions. Not estimated. Not hypothetical.