Claude Token Counter — Free Tool (2026 Pricing)

Paste any prompt and see the estimated Claude token count + cost across Haiku 4.5, Sonnet 4.5, and Opus 4.5. Includes Korean (CJK) character handling and a cache-hit savings calculator. All client-side — your text never leaves the browser. For exact counts, see Anthropic's messages.count_tokens API. For full cost simulation across a workload, use the cost calculator.

Estimated input tokens: 39 (sample prompt of 138 characters · 3.54 chars/token)

Cost per call (3 models)

| Model | Total (USD) | Total (KRW) | Input | Output |
|---|---|---|---|---|
| Haiku 4.5 | $0.00203 | ₩2.80 | <$0.0001 | $0.00200 |
| Sonnet 4.5 | $0.00762 | ₩10.51 | $0.00012 | $0.00750 |
| Opus 4.5 | $0.0381 | ₩52.56 | $0.00059 | $0.0375 |

Cost per 1,000 calls

| Model | USD | KRW |
|---|---|---|
| Haiku 4.5 | $2.03 | ₩2,803 |
| Sonnet 4.5 | $7.62 | ₩10,511 |
| Opus 4.5 | $38.09 | ₩52,557 |
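
As a sanity check, the figures above can be reproduced with simple arithmetic. The sketch below assumes Sonnet 4.5 rates of $3/MTok input and $15/MTok output and an expected-output default of 500 tokens; both are inferred from the table figures, not quoted from official pricing.

```javascript
// Sketch: reproduce the table's Sonnet 4.5 figures. The per-MTok rates and
// the 500-token output default are inferred from the numbers above.
const USD_PER_MTOK = { input: 3, output: 15 }; // assumed Sonnet 4.5 rates

function costPerCall(inputTokens, outputTokens) {
  return (inputTokens * USD_PER_MTOK.input + outputTokens * USD_PER_MTOK.output) / 1e6;
}

const perCall = costPerCall(39, 500);
console.log(perCall.toFixed(5));          // "0.00762" per call
console.log((perCall * 1000).toFixed(2)); // "7.62" per 1,000 calls
```
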
Note: Token counts are estimates (typically within ~5% of Anthropic's actual tokenizer). For exact counts, use Anthropic's count_tokens API endpoint. Pricing reflects 2026-04 rates. Korean and CJK characters are weighted at ~1.5 chars/token; English at ~4 chars/token.

How this estimate works

Anthropic's tokenizer is a BPE (byte-pair encoding) tokenizer, similar to but not identical to GPT's. Without making an API call, we approximate the count using character-class ratios:

- English text: ~4 characters per token
- Korean and other CJK characters: ~1.5 characters per token
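
A minimal sketch of such an estimator is below. The class boundaries and the fallback ratio for non-CJK, non-ASCII text are illustrative assumptions; this is not Anthropic's tokenizer.

```javascript
// Character-class token estimator (assumed ratios from this page,
// not Anthropic's actual tokenizer).
const CHARS_PER_TOKEN = {
  cjk: 1.5,   // Hangul, Han, Kana: ~1.5 chars/token
  ascii: 4.0, // English prose: ~4 chars/token
  other: 2.5, // fallback for everything else (assumption)
};

function estimateTokens(text) {
  let tokens = 0;
  for (const ch of text) { // iterates by code point
    const cp = ch.codePointAt(0);
    const isCJK =
      (cp >= 0xAC00 && cp <= 0xD7A3) || // Hangul syllables
      (cp >= 0x4E00 && cp <= 0x9FFF) || // CJK Unified Ideographs
      (cp >= 0x3040 && cp <= 0x30FF);   // Hiragana + Katakana
    const cls = isCJK ? 'cjk' : cp < 128 ? 'ascii' : 'other';
    tokens += 1 / CHARS_PER_TOKEN[cls];
  }
  return Math.ceil(tokens);
}

// "Hello, world!" is 13 ASCII chars: ceil(13 / 4) = 4 estimated tokens.
console.log(estimateTokens('Hello, world!'));
```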

This estimate is typically within 5% of the actual Anthropic tokenizer count. For workloads where exact accuracy matters (billing reconciliation, contract pricing), use the official messages.count_tokens endpoint.
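
For reference, an exact count can be fetched with Anthropic's official SDK. This is a sketch: it assumes `@anthropic-ai/sdk` is installed and ANTHROPIC_API_KEY is set, and the model ID is a placeholder.

```javascript
// Sketch: exact token counting via Anthropic's count_tokens endpoint.
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const result = await client.messages.countTokens({
  model: 'claude-sonnet-4-5', // placeholder model ID
  messages: [{ role: 'user', content: 'Paste your prompt here' }],
});

console.log(result.input_tokens); // exact input token count, as billed
```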

Why the cache hit ratio matters

Prompt caching cuts the input price of cached portions to 10% of the normal rate (a 90% saving), while writing the cache costs 25% extra. The 5-minute TTL cache therefore breaks even at about 1.28 calls: the 25% write premium divided by the 90% per-reuse saving is ~0.28 extra calls, so the very first reuse already pays for the write. See the break-even analysis.

For a typical chatbot with a 1,500-token system prompt reused across many user messages, set the cache hit ratio to ~80% to see a realistic production cost.
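
A minimal sketch of that savings arithmetic, assuming the 0.10x read and 1.25x write multipliers quoted above, that every miss re-writes the cache, and an illustrative input rate of $3/MTok:

```javascript
// Sketch: effective input cost per call under prompt caching.
// Multipliers from this page: cached reads bill at 0.10x, cache writes at 1.25x.
// Assumes every miss re-writes the cache (an illustrative simplification).
function effectiveInputCost(baseCostUsd, cacheHitRatio, cachedFraction = 1.0) {
  const cachedPart = baseCostUsd * cachedFraction; // portion behind the cache breakpoint
  const uncachedPart = baseCostUsd - cachedPart;   // always billed at the normal rate
  const rate = cacheHitRatio * 0.10 + (1 - cacheHitRatio) * 1.25;
  return uncachedPart + cachedPart * rate;
}

// 1,500-token system prompt at an assumed $3/MTok input rate: base ≈ $0.0045.
// At an 80% hit ratio the cached prefix bills at an effective 0.33x.
console.log(effectiveInputCost(0.0045, 0.8).toFixed(5)); // "0.00149"
```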

Frequently Asked Questions

How accurate is this estimate?

Within 5% for typical English/Korean text; less accurate for emoji-heavy input, unusual special characters, or very short prompts (where rounding dominates). For exact billing, use Anthropic's count_tokens API.

Why do Korean characters cost more tokens than English?

Byte-level BPE tokenizers operate on UTF-8 bytes, and a typical Korean character encodes to 3 bytes in UTF-8, so the tokenizer often produces 1-3 tokens per Korean character. English is more compact at roughly 0.25 tokens per character (about 4 characters per token).
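
You can see the byte-level difference directly in the browser. This is a quick illustration of UTF-8 encoding, not the tokenizer itself:

```javascript
// UTF-8 byte lengths: one Hangul syllable is 3 bytes, one ASCII letter is 1.
const bytes = (s) => new TextEncoder().encode(s).length;

console.log(bytes('한')); // 3 — a byte-level BPE sees three bytes here
console.log(bytes('a'));  // 1
```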

Does this include output tokens?

Yes — the "Expected output tokens" slider lets you simulate total per-call cost. Output tokens are billed separately, at roughly 4-5x the input rate.

Why is my Claude bill different from this estimate?

Common reasons: (1) the tokenizer estimate carries ~5% error, (2) tool-use overhead isn't counted (~150-300 tokens per tool definition), (3) cache writes cost 25% extra on a miss, (4) the Batch API discount (50% off) was applied, (5) your real prompts include more system context than the sample. Use cost monitoring for a full breakdown.

Does this work offline?

Yes. All calculation runs in client-side JavaScript; your text is never sent to any server.
