OpenAI Codex throttles context window to 258k
OPEN_SOURCE
REDDIT // NEWS // 3h ago

OpenAI's Codex application has reportedly slashed its usable context window from 1 million to roughly 258k tokens, disrupting developers who rely on large-scale codebase ingestion. The unannounced change has sparked a "rugpull" outcry among Pro and Enterprise users whose automated agents now face silent truncation.

// ANALYSIS

The unannounced throttling of Codex reveals the brutal unit economics of million-token inference and the "hallucination cliff" at extreme context lengths. Sudden server-side enforcement breaks autonomous agents and RAG pipelines designed to ingest entire repositories at once. The shift to a 258k-400k window likely prioritizes inference reliability over brute-force context capacity, as "needle in a haystack" performance degrades significantly at 1M. Pro and Enterprise users are frustrated by the lack of communication, highlighting the risks of building on proprietary, black-box developer tools. Competitors like Claude Code now have a temporary marketing advantage by maintaining stable 1M+ context support for their flagship models.
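For agents that ingest whole repositories, the practical defense is to stop trusting the advertised window and budget tokens client-side. The sketch below illustrates the idea; the 4-chars-per-token ratio, the 258k limit, and the helper names are illustrative assumptions, not values documented by OpenAI.

```python
# Defensive token budgeting for repo-ingesting agents: assume the server
# may enforce a smaller context window than advertised and pack content
# against a conservative budget instead of failing on silent truncation.

CHARS_PER_TOKEN = 4          # rough heuristic for English text and code (assumption)
ENFORCED_WINDOW = 258_000    # limit reported by users, not an official figure
SAFETY_MARGIN = 0.9          # leave headroom for the model's response

def estimate_tokens(text: str) -> int:
    """Cheap token estimate; swap in a real tokenizer for production use."""
    return len(text) // CHARS_PER_TOKEN

def fit_to_budget(files: list[tuple[str, str]], budget_tokens: int) -> list[tuple[str, str]]:
    """Greedily pack (path, content) pairs until the budget is exhausted,
    truncating the final file explicitly rather than letting the server
    cut it off silently."""
    packed: list[tuple[str, str]] = []
    used = 0
    for path, content in files:
        cost = estimate_tokens(content)
        if used + cost <= budget_tokens:
            packed.append((path, content))
            used += cost
        else:
            remaining_chars = (budget_tokens - used) * CHARS_PER_TOKEN
            if remaining_chars > 0:
                packed.append((path, content[:remaining_chars]))
            break
    return packed

# Budget against the observed limit, not the advertised 1M window.
budget = int(ENFORCED_WINDOW * SAFETY_MARGIN)
```

Packing explicitly like this turns a silent server-side cut into a decision the agent controls, which is the difference between a degraded answer and a broken pipeline.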

// TAGS
openai, openai-codex, llm, ai-coding, devtool, benchmark

DISCOVERED

3h ago

2026-04-27

PUBLISHED

5h ago

2026-04-27

RELEVANCE

8/10

AUTHOR

Odd-Environment-7193