OpenAI Codex throttles context window to 258k
OpenAI's Codex application has reportedly slashed its usable context window from 1 million to roughly 258k tokens, disrupting developers relying on large-scale codebase ingestion. The unannounced change has sparked a "rugpull" outcry among Pro and Enterprise users whose automated agents now face truncation.
The unannounced throttling reveals the brutal unit economics of million-token inference and the "hallucination cliff" at extreme context lengths. Sudden server-side enforcement breaks autonomous agents and RAG pipelines designed to ingest entire repositories in a single request. The shift to a 258k-400k window likely prioritizes inference reliability over brute-force context capacity, since "needle in a haystack" retrieval degrades significantly at the full 1M tokens. Pro and Enterprise users are frustrated by the lack of communication, which highlights the risk of building on proprietary, black-box developer tools. Competitors such as Claude Code now hold a temporary marketing advantage by maintaining stable 1M+ context support for their flagship models.
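Agents that previously streamed a whole repository into one request now have to budget tokens on the client side rather than trusting the advertised window. Below is a minimal sketch of that defensive pattern; the ~200k budget, the greedy file-packing strategy, and the optional tiktoken dependency are illustrative assumptions, not behavior of the Codex client or API itself.

```python
"""Client-side context budgeting: a sketch, assuming an effective ~258k window.

CONTEXT_BUDGET, the chunking strategy, and the file selection below are
assumptions for illustration only.
"""
from pathlib import Path

try:
    import tiktoken  # optional: exact token counts if the library is installed
    _enc = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        return len(_enc.encode(text))
except ImportError:
    def count_tokens(text: str) -> int:
        # Rough heuristic: ~4 characters per token for English prose and code.
        return len(text) // 4

# Conservative budget: stay well under the reported ~258k ceiling and leave
# headroom for the system prompt and the model's response.
CONTEXT_BUDGET = 200_000

def chunk_repository(root: str, budget: int = CONTEXT_BUDGET) -> list[str]:
    """Greedily pack source files into chunks that each fit under the budget."""
    chunks, current, current_tokens = [], [], 0
    for path in sorted(Path(root).rglob("*.py")):
        text = f"# file: {path}\n{path.read_text(errors='ignore')}\n"
        tokens = count_tokens(text)
        if tokens > budget:
            continue  # a single oversized file needs its own splitting strategy
        if current_tokens + tokens > budget:
            chunks.append("".join(current))
            current, current_tokens = [], 0
        current.append(text)
        current_tokens += tokens
    if current:
        chunks.append("".join(current))
    return chunks

if __name__ == "__main__":
    for i, chunk in enumerate(chunk_repository(".")):
        print(f"chunk {i}: ~{count_tokens(chunk)} tokens")
```

The point of the headroom is that a window enforced server-side fails silently: the oldest or largest spans get truncated rather than rejected, so checking the count before sending is the only way to know what the model actually saw.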
DISCOVERED: 3h ago (2026-04-27)
PUBLISHED: 5h ago (2026-04-27)
AUTHOR: Odd-Environment-7193