Claude Token Counter — Free Tool (2026 Pricing)
Paste any prompt and see the estimated Claude token count + cost across Haiku 4.5, Sonnet 4.5, and Opus 4.5. Includes Korean (CJK) character handling and cache-hit savings calculator. All client-side — your text never leaves the browser. For exact counts, see Anthropic's messages.count_tokens API. For full cost simulation across a workload, use the cost calculator.
Cost per call (3 models)
- Haiku 4.5 output: $0.00200
- Sonnet 4.5 output: $0.00750
- Opus 4.5 output: $0.0375
Pricing reflects 2026-04 rates. Korean and CJK characters are weighted at ~1.5 chars/token; English at ~4 chars/token. For exact counts, use the messages.count_tokens API endpoint.
How this estimate works
Anthropic's tokenizer is a BPE (byte-pair encoding) tokenizer similar to but not identical to GPT's. Without making an API call, we approximate using character-class ratios:
- ASCII / English: ~4 chars per token
- Korean / CJK: ~1.5 chars per token
- Code / numbers: ~3.5 chars per token
This estimate is typically within 5% of the actual Anthropic tokenizer count. For workloads where exact accuracy matters (billing reconciliation, contract pricing), use the official messages.count_tokens endpoint.
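As a sketch, the character-class heuristic above can be written in a few lines of client-side JavaScript. The function name, the Unicode ranges, and the punctuation class are illustrative assumptions, not this page's actual source:

```javascript
// Heuristic token estimate from character-class ratios (a sketch,
// not Anthropic's tokenizer). The ranges cover only the common CJK
// blocks; real text has edge cases this ignores.
function estimateTokens(text) {
  let ascii = 0, cjk = 0, code = 0;
  for (const ch of text) {
    const cp = ch.codePointAt(0);
    if ((cp >= 0xAC00 && cp <= 0xD7A3) ||   // Hangul syllables
        (cp >= 0x4E00 && cp <= 0x9FFF) ||   // CJK Unified Ideographs
        (cp >= 0x3040 && cp <= 0x30FF)) {   // Hiragana / Katakana
      cjk += 1;                             // Korean / CJK: ~1.5 chars per token
    } else if (/[0-9{}()[\]<>=+*\/\\;:_#-]/.test(ch)) {
      code += 1;                            // code / numbers: ~3.5 chars per token
    } else {
      ascii += 1;                           // English / other: ~4 chars per token
    }
  }
  return Math.ceil(ascii / 4 + cjk / 1.5 + code / 3.5);
}
```

Rounding up with `Math.ceil` is why very short inputs are the least accurate: a single character already rounds to one full token.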
Why the cache hit ratio matters
Prompt caching reduces input cost to 10% of normal price (90% savings) for cached portions. The 5-minute TTL cache breaks even at 1.28 reuses — see the break-even analysis.
For a typical chatbot with a 1,500-token system prompt that is reused across many user messages, set the cache hit ratio to ~80% to see a realistic production cost.
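Under the rates quoted above (cache reads at 10% of base input price, 5-minute-TTL cache writes at 125%), the break-even point and the blended input cost work out as follows. This is a sketch; the function and parameter names are illustrative:

```javascript
// Blended input cost per call with prompt caching, assuming cache
// reads bill at 0.10x the base input price and cache writes at 1.25x.
function inputCostUSD({ inputTokens, pricePerMTok, cacheHitRatio }) {
  const base = (inputTokens / 1e6) * pricePerMTok;
  return base * (cacheHitRatio * 0.10 + (1 - cacheHitRatio) * 1.25);
}

// Break-even reuse count: solve  1.25 + 0.10 * (n - 1) = n
const breakEvenReuses = (1.25 - 0.10) / (1 - 0.10); // ≈ 1.28
```

At an 80% hit ratio the blended multiplier is 0.8 × 0.10 + 0.2 × 1.25 = 0.33, i.e. roughly a third of the uncached input price.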
Frequently Asked Questions
How accurate is this estimate?
Within 5% for typical English/Korean text. Less accurate for heavy emoji, special characters, or very short text (where rounding dominates). For exact billing, use Anthropic's count_tokens API.
Why do Korean characters cost more tokens than English?
Byte-level BPE tokenizers operate on UTF-8 bytes, and a typical Korean character encodes to 3 bytes in UTF-8, so the tokenizer often produces 1-3 tokens per Korean character. English is more compact at ~0.25 tokens per character.
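You can see the byte-level difference directly with the standard TextEncoder API; a minimal sketch:

```javascript
// UTF-8 byte lengths: a byte-level tokenizer sees 3 bytes for one
// Hangul syllable vs 1 byte for an ASCII letter.
const utf8Bytes = (s) => new TextEncoder().encode(s).length;
utf8Bytes("a");   // 1 byte
utf8Bytes("한");  // 3 bytes
```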
Does this include output tokens?
Yes — the "Expected output tokens" slider lets you simulate output cost. Output tokens are billed separately, at 4-5x the input rate.
Why is my Claude bill different from this estimate?
Common reasons: (1) the tokenizer estimate carries up to ~5% error, (2) you're not accounting for tool-use overhead (~150-300 tokens per tool definition), (3) cache writes cost 25% extra on a miss, (4) a Batch API discount applied (50% off), (5) your actual prompts include more system context than the sample. Use cost monitoring for a full breakdown.
Does this work offline?
Yes. All calculation is client-side JavaScript. Your text is never sent to any server.