Token Tracker: Token usage tracker for AI coding agent CLIs

Token Tracker is a free, open-source, local-first dashboard that automatically tracks token usage across 13 AI coding agent CLIs in one place: Claude Code, Codex CLI, Cursor IDE, Gemini CLI, OpenCode, OpenClaw, Every Code, Kiro, Hermes Agent, GitHub Copilot, Kimi Code, oh-my-pi (omp), and CodeBuddy. It focuses on minimal data collection, auditable metrics, and clear visibility into model and project usage. Token counts only — never prompts or conversation content.

Install in one command

npx tokentracker-cli

On first run, the CLI auto-detects installed AI coding tools, installs the appropriate hooks (SessionEnd hooks, TOML notify arrays, plugins, or passive log readers depending on the tool), and starts a local dashboard at http://localhost:7680. macOS users can also install the native menu bar app via brew install --cask mm7894215/tokentracker/tokentracker.

What you get after installing

Token Tracker runs entirely on your machine. After npx tokentracker-cli, a local dashboard at http://localhost:7680 shows token usage broken down by model, project, and time window, along with rate-limit and subscription status. No account is required, and your data never leaves your device unless you opt into cloud sync.

Privacy by design: token counts only, never prompts or conversation content. Local-first; cloud sync is optional and powers only the public leaderboard.

Supported AI coding agent CLIs

Token Tracker supports 13 AI coding agent CLIs: Claude Code, Codex CLI, Cursor IDE, Gemini CLI, OpenCode, OpenClaw, Every Code, Kiro, Hermes Agent, GitHub Copilot, Kimi Code, oh-my-pi (omp), and CodeBuddy.

Who should use Token Tracker

Token Tracker is built for developers, founders, and engineering teams who rely on multiple AI coding assistants and need a single place to understand token consumption. Instead of checking each tool separately, you can compare usage trends by model, project, and time window in one dashboard.

Teams use Token Tracker to answer practical questions: which model is consuming the most budget, whether a new workflow increases cost, how a Pro plan is tracking against rate limits, and how token usage shifts over days or weeks. This makes budgeting, optimization, and reporting easier without introducing heavy tracking overhead.

How it works

Each supported tool emits usage data when a session ends. Token Tracker's parser (src/lib/rollout.js) normalizes 9 distinct log formats into a unified schema: half-hour UTC buckets keyed by (source, model, hour_start). Aggregated buckets are written to queue.jsonl and served from a local HTTP API on port 7680. The React dashboard reads from those endpoints; cloud sync (optional) uploads bucketed counts to InsForge for the public leaderboard.
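The bucketing step described above can be sketched roughly as follows. This is a minimal illustration, not the actual src/lib/rollout.js: the field names (source, model, timestamp, input_tokens, output_tokens) and the key format are assumptions for the example.

```javascript
// Sketch: normalize a parsed usage event into a half-hour UTC bucket
// keyed by (source, model, hour_start). Field names are illustrative.
function bucketKey(event) {
  const ms = Date.parse(event.timestamp);
  const halfHourMs = 30 * 60 * 1000;
  // Floor the timestamp to the start of its half-hour UTC window.
  const hourStart = new Date(Math.floor(ms / halfHourMs) * halfHourMs).toISOString();
  return `${event.source}|${event.model}|${hourStart}`;
}

// Sum token counts for all events that fall into the same bucket.
function aggregate(events) {
  const buckets = new Map();
  for (const e of events) {
    const key = bucketKey(e);
    const b = buckets.get(key) ?? { input: 0, output: 0 };
    b.input += e.input_tokens;
    b.output += e.output_tokens;
    buckets.set(key, b);
  }
  return buckets;
}
```

In a scheme like this, only the key and the summed counts would ever be persisted (e.g. as one line per bucket in queue.jsonl), which is consistent with the privacy model: no prompt or completion text is part of the record.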

Frequently asked questions

How do I install Token Tracker?

Run npx tokentracker-cli. The CLI auto-detects your AI coding tools, installs hooks, and opens the dashboard.

Which AI coding CLIs does Token Tracker support?

Claude Code, Codex CLI, Cursor IDE, Gemini CLI, OpenCode, OpenClaw, Every Code, Kiro, Hermes Agent, GitHub Copilot, Kimi Code, oh-my-pi (omp), and CodeBuddy — 13 in total.

Is Token Tracker free and open source?

Yes. Source on GitHub at github.com/mm7894215/TokenTracker; npm package tokentracker-cli.

Does Token Tracker collect my prompts or conversations?

No. Only token counts, model names, timestamps, and project attribution are recorded. Prompts and completions are never read or uploaded.

What does Token Tracker track?

Input, output, cached, and cache-creation tokens per model, project, and time window. It also surfaces rate limits and subscription status for Claude Pro, ChatGPT plans, Cursor, Gemini, Kiro, and Antigravity.
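As an illustration of how those four token categories could be rolled up per model for a dashboard view, here is a small sketch (the bucket field names are hypothetical, not Token Tracker's real schema):

```javascript
// Sketch: sum input, output, cached, and cache-creation tokens per model
// across a list of aggregated buckets. Field names are illustrative.
function totalsByModel(buckets) {
  const totals = {};
  for (const b of buckets) {
    const t = (totals[b.model] ??= { input: 0, output: 0, cached: 0, cacheCreation: 0 });
    t.input += b.input;
    t.output += b.output;
    t.cached += b.cached;
    t.cacheCreation += b.cacheCreation;
  }
  return totals;
}
```

The same fold works for per-project or per-time-window views by swapping the grouping key.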

Does Token Tracker work offline?

Yes. Token Tracker is local-first. All parsing, aggregation, and dashboard rendering happen on your machine. Cloud sync is optional and powers only the public leaderboard.

Resources

Note: The usage dashboard, rate-limit panel, project attribution, skills browser, and widget previews are part of the local app at http://localhost:7680 after install — they are not standalone public URLs.