OPEN_SOURCE
REDDIT · 5h ago
// NEWS
Anthropic blames Claude Code defaults for quality drop
Anthropic’s April 23 postmortem says recent quality complaints came from three separate changes affecting Claude Code, the Claude Agent SDK, and Claude Cowork: lowering default reasoning effort, a caching bug that kept dropping prior thinking, and a verbosity-cutting system prompt tweak. Anthropic says the API and inference layer were not impacted, all three issues were fixed by April 20, and subscriber usage limits were reset.
// ANALYSIS
This is a strong reminder that “model quality” is not just weights and benchmarks; product-layer defaults can materially change what users experience.
- Lowering reasoning effort to save latency and usage made the system feel less intelligent, even though the underlying model did not change.
- The caching bug shows how a small harness mistake can create the exact symptoms users interpret as forgetfulness or regression.
- The verbosity prompt change is the most interesting one: trimming output can quietly damage coding quality when it interacts with other prompt assumptions.
- The open-weight/local-model argument gets real support here, not because hosted models are inherently bad, but because control over serving behavior matters when quality is part of your workflow.
- Anthropic’s transparency is better than most vendors, but the core risk remains: paying customers can still inherit silent quality tradeoffs they did not approve.
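The last two bullets argue for controlling serving behavior rather than inheriting defaults. As a minimal sketch of what that can look like (assuming the Anthropic Messages API's extended-thinking parameter; the model id and budget numbers here are illustrative placeholders, not values from the postmortem), a client could pin the reasoning budget in its own request so a changed product-side default cannot silently lower it:

```python
# Sketch: pin reasoning effort explicitly in the request payload instead of
# relying on a product-layer default. The "thinking" parameter shape follows
# the Messages API's extended-thinking option; model id and token budgets
# below are illustrative assumptions.

def build_request(prompt: str, thinking_budget: int = 10_000) -> dict:
    """Build a Messages API payload with an explicitly fixed thinking budget."""
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model id
        "max_tokens": 16_000,  # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Refactor this function for clarity.")
print(payload["thinking"]["budget_tokens"])
```

A request built this way keeps the reasoning budget under version control alongside the rest of the workflow, which is the kind of control the open-weight/self-hosted argument is really about.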
// TAGS
anthropic · claude-code · claude · postmortem · ai-coding · llm · hosted-models · open-weight · self-hosted
DISCOVERED
5h ago
2026-04-24
PUBLISHED
8h ago
2026-04-24
RELEVANCE
9/10
AUTHOR
spaceman_