YOU ARE VIEWING ONE ITEM FROM THE AICRIER FEED

GPT-5.5 Codex Burns More Tokens

AICrier tracks AI developer news across Product Hunt, GitHub, Hacker News, YouTube, X, arXiv, and more. This page keeps the article you opened front and center while giving you a path into the live feed.

// WHAT AICRIER DOES

7+ TRACKED FEEDS
24/7 SCRAPED FEED

Short summaries, external links, screenshots, relevance scoring, tags, and featured picks for AI builders.

// 2h ago · BENCHMARK RESULT

GPT-5.5 Codex Burns More Tokens

A Reddit chart comparing Codex runs suggests GPT-5.5 used about 2.8M tokens per task versus about 2.5M for GPT-5.4 in the same setup. That does not automatically contradict OpenAI’s efficiency claim, but it does mean raw token count alone is not a clean proxy for cost or quality.
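To see why raw token count is not a clean cost proxy, consider that input and output tokens are priced very differently. A minimal sketch, using made-up placeholder prices (not real OpenAI rates): two runs with the same 2.8M total tokens can differ sharply in cost depending on the input/output split.

```python
# Hypothetical per-million-token prices, for illustration only
# (not real GPT-5.5 or GPT-5.4 pricing):
IN_PRICE, OUT_PRICE = 2.0, 12.0  # $ per 1M tokens

def run_cost(input_toks, output_toks):
    """Dollar cost of one run under the placeholder prices above."""
    return (input_toks * IN_PRICE + output_toks * OUT_PRICE) / 1e6

# Two runs, each burning the same 2.8M total tokens:
heavy_input = run_cost(2_500_000, 300_000)    # mostly prompt/context
heavy_output = run_cost(1_800_000, 1_000_000) # longer reasoning trace

print(f"input-heavy run:  ${heavy_input:.2f}")
print(f"output-heavy run: ${heavy_output:.2f}")
```

Under these placeholder prices the output-heavy run costs nearly twice as much despite identical total token counts, so a bar chart of total tokens hides the part of the bill that matters most.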

// ANALYSIS

The chart is a useful sanity check, not a verdict: OpenAI says GPT-5.5 is pricier per token than GPT-5.4, but tuned in Codex to get better results with fewer tokens for most users. If this specific workload shows the opposite, the likely explanation is task mix, tool calls, or a longer reasoning trace, not some hidden pricing trick.

  • GPT-5.5 pricing is higher than GPT-5.4 across input, cached input, and output, so it does not win on price-per-token
  • Cached tokens are discounted for both models, but the discount is proportional, so caching alone does not make GPT-5.5 relatively cheaper
  • The meaningful metric is cost per completed task or cost per successful result, not just total tokens burned
  • A chart like this can still be real if GPT-5.5 spends more tokens but avoids retries, improves output quality, or finishes harder tasks more reliably
  • Any Cursor comparison is noisy unless the benchmark controls for prompt length, tool usage, context reuse, and model routing
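The cost-per-successful-result point above can be sketched numerically. All prices, token counts, and success rates below are invented placeholders, not measured GPT-5.5/GPT-5.4 figures; the sketch only shows how a model that burns more tokens at higher per-token prices can still win once retries are amortized in.

```python
def cost_per_success(input_toks, cached_toks, output_toks,
                     in_price, cached_price, out_price,
                     success_rate):
    """Expected $ cost to get one successful result.

    Token counts are per attempt; prices are $ per 1M tokens.
    Dividing the per-attempt cost by success_rate amortizes retries:
    expected attempts per success = 1 / success_rate.
    """
    attempt_cost = (input_toks * in_price
                    + cached_toks * cached_price
                    + output_toks * out_price) / 1e6
    return attempt_cost / success_rate

# Model A: fewer tokens, cheaper per token, but fails more often.
a = cost_per_success(2_000_000, 300_000, 200_000,
                     in_price=1.25, cached_price=0.125, out_price=10.0,
                     success_rate=0.55)

# Model B: more tokens and pricier per token, but far more reliable.
b = cost_per_success(2_300_000, 300_000, 200_000,
                     in_price=1.75, cached_price=0.175, out_price=14.0,
                     success_rate=0.95)

print(f"A: ${a:.2f} per success, B: ${b:.2f} per success")
```

With these placeholder numbers, B is cheaper per completed task even though every individual attempt costs more, which is exactly the scenario where a "burns more tokens" chart misleads.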
// TAGS
gpt-5.5 · gpt-5.4 · codex · ai-coding · coding-agent · benchmark · pricing · reasoning

DISCOVERED

2h ago (2026-05-11)

PUBLISHED

5h ago (2026-05-11)

RELEVANCE

8 / 10

AUTHOR

Additional-Alps-8209