OPEN_SOURCE
YT · YOUTUBE // 2h ago · BENCHMARK RESULT
Claude Opus 4.7 tokenizer hikes prompt costs
Independent testing shows Anthropic's new tokenizer makes English and code prompts about 1.3-1.45x larger than on Claude 4.6, with real Claude Code-style sessions costing roughly 20-30% more. The upside is modest: a small but measurable improvement in strict instruction following.
// ANALYSIS
This looks less like a tokenizer tweak and more like a hidden price increase for anyone running long, cache-heavy Claude workflows.
- Real-world code-heavy inputs like `CLAUDE.md` and technical docs landed near the top of Anthropic's stated range, so developers should plan for the worst case, not the midpoint.
- Prompt caching still works, but larger prefixes mean more cache-write and cache-read tokens every turn, which shortens Max windows and pushes rate limits sooner.
- The measured upside is narrow: about +5 percentage points on a small IFEval sample, enough to matter, not enough to call it a clean trade.
- CJK content barely changed, so the cost hit is concentrated on English/code-heavy users rather than global text workloads.
- Bottom line: if your Claude usage is mostly coding, this is a material cost and throughput regression unless the extra literalness saves enough human time to justify it.
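How a tokenizer-level inflation factor propagates to per-session cost depends on how much of the prompt is served from cache. A minimal back-of-envelope sketch, using placeholder per-million-token prices and a hypothetical 40k-token cached prompt (none of these figures are Anthropic's actual pricing or the benchmark's measurements):

```python
# Sketch: per-turn cost when most of the prompt is a cached prefix.
# All prices and token counts below are illustrative assumptions.

def turn_cost(prompt_tokens, output_tokens, cache_read_frac,
              in_price, out_price, cache_read_price):
    """Cost of one turn: a fraction of the prompt is billed at the
    cheaper cache-read rate, the rest as fresh input. Prices are
    per million tokens."""
    cached = prompt_tokens * cache_read_frac
    fresh = prompt_tokens - cached
    return (fresh * in_price
            + cached * cache_read_price
            + output_tokens * out_price) / 1e6

# Hypothetical prices (placeholders, not real rates).
IN, OUT, CACHE_READ = 15.0, 75.0, 1.5

baseline = turn_cost(40_000, 2_000, 0.9, IN, OUT, CACHE_READ)

# Same turn under the reported 1.30x and 1.45x prompt inflation;
# output length is assumed roughly unchanged.
for mult in (1.30, 1.45):
    inflated = turn_cost(int(40_000 * mult), 2_000, 0.9,
                         IN, OUT, CACHE_READ)
    print(f"{mult:.2f}x prompt tokens -> "
          f"{inflated / baseline - 1:+.1%} per-turn cost")
```

With these assumed numbers the per-turn increase lands below the raw 1.3-1.45x token inflation, because output tokens and cheap cache reads dilute the hit; cache writes on longer prefixes and earlier rate-limit exhaustion push real sessions back toward the 20-30% the testing reports.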
// TAGS
claude-opus-4-7 · llm · pricing · benchmark · ai-coding · agent
DISCOVERED
2026-04-20 (2h ago)
PUBLISHED
2026-04-20 (2h ago)
RELEVANCE
8/10
AUTHOR
Theo - t3.gg