Anthropic details Claude Code regressions
Anthropic has confirmed that weeks of Claude Code quality complaints were real, tracing them to three separate product-layer changes: a lowered default reasoning effort, a stale-session caching bug that dropped prior reasoning, and a prompt tweak that over-compressed responses. The company says the API was unaffected, all fixes were shipped by April 20, 2026, and subscriber usage limits were reset on April 23, 2026.
This is a useful postmortem because it admits the uncomfortable truth: you can make a frontier coding agent feel dramatically worse without changing the base model at all. For developers, that is a reminder that harness, prompting, caching, and defaults are now part of the product's real capability surface.
- The biggest takeaway is that perceived “model nerfs” can come from UI and orchestration decisions, not model weights, which makes observability and rollback discipline much more important for AI tools.
- Anthropic’s stale-session bug is especially notable because it degraded both quality and quota efficiency at once, making Claude Code look forgetful while also burning usage limits faster.
- The prompt change is a warning shot for anyone shipping agent products: aggressive brevity constraints can save tokens but still tank coding performance if they suppress planning and tool-use context.
- Anthropic’s writeup also suggests its public eval coverage lagged real-world usage; the company is now promising broader per-model prompt evals, longer soak periods, and more staff on exact public builds.
- Community reaction appears split between frustration over how long this took to diagnose and respect for Anthropic publishing a specific, technical postmortem instead of hand-waving away the complaints.
DISCOVERED 2026-04-23
PUBLISHED 2026-04-23
AUTHOR mfiguiere