Most AI coding tools still feel like driving without an odometer.
You can move fast. You can cover a lot of ground. But unless you check a dashboard after the fact, you never really know how many tokens you burned getting there.
That matters more than it sounds.
A lot of AI waste does not show up as one dramatic mistake. It shows up as a session that slowly gets heavier:
- old logs still sitting in context
- files pasted once and carried forever
- three retries after the answer was already good enough
- switching to a bigger model because the current chat is muddy
- asking for one more polish pass when the work is basically done
By the time you notice the bill, the moment for a useful decision has already passed.
That is why I built TokenBar as a tiny macOS menu bar app instead of another end-of-month dashboard.
I wanted the signal while I was still working.
If a simple bug fix starts turning into a huge token session, I want to see that early. Maybe the right move is to restart with a cleaner brief. Maybe it is time to trim context. Maybe I should stop asking the model to clean up a task that is already solved.
The point is not to obsess over every token.
The point is to make token usage visible enough that it changes behavior in the moment.
AI tools are getting faster, bigger, and cheaper, which is great. But bigger context windows can also hide sloppy workflows for longer. When there is no live meter, waste feels invisible until it becomes normal.
TokenBar is my attempt to add that missing odometer for AI work.
It sits in the macOS menu bar and shows live token usage while you work, so you can catch bloated sessions before they quietly become your default workflow.
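To make "bloated session" concrete: a common rule of thumb is roughly four characters per token for English text. The sketch below is not how TokenBar works (its internals are not described here); it is just a minimal, hypothetical illustration of how a live running estimate can flag a heavy session before the bill does. The threshold and message lists are made up for the example.

```python
# Back-of-envelope token estimate: ~4 characters per token for English text.
# This is a hypothetical illustration, NOT TokenBar's actual method.

def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 chars/token heuristic."""
    return max(1, round(len(text) / 4))

def session_estimate(messages: list[str]) -> int:
    """Sum the estimate over everything still sitting in context."""
    return sum(estimate_tokens(m) for m in messages)

# A session that quietly got heavy: stale logs and a pasted file
# are still riding along with the one-line request.
session = [
    "old build log " * 200,          # stale output never trimmed
    "please fix the null check",     # the actual ask
    "full file paste " * 500,        # pasted once, carried forever
]

total = session_estimate(session)
if total > 2000:                     # arbitrary threshold for the example
    print(f"~{total} tokens in context; consider restarting with a cleaner brief")
```

Even this crude meter makes the point: the one-line request is a handful of tokens, while the leftovers dominate the total. A live readout turns that imbalance from an end-of-month surprise into something you can act on mid-session.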
If you build with AI on a Mac, it is here: