Qwen 3.5 "thinking" issue plagues larger models
Users report that larger Qwen 3.5 models (27B and 35B) exhibit "thinking anxiety," either producing shallow 1-2 sentence reasoning traces before failing tasks or entering infinite reasoning loops. While the 9B model reasons properly, the larger variants appear sensitive to quantization and sampling parameters, requiring manual tuning to function effectively.
The Qwen 3.5 "thinking" bug highlights the fragile nature of internal reasoning traces in open-weights models, demonstrating that larger scale doesn't always guarantee smarter logic. The 27B and 35B variants often skip deep reasoning or fall into repetitive loops, prompting community workarounds such as raising the presence penalty and manually tuning other sampling parameters. These issues, which worsen under low-precision quantization, suggest a possible alignment mismatch between the small and mid-sized versions of the Qwen 3.5 family.
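The reported workaround boils down to overriding default sampling settings and watching for degenerate traces. The sketch below is illustrative only: the parameter names follow common OpenAI-compatible inference servers, the specific values are assumptions (not confirmed fixes), and `looks_like_reasoning_loop` is a hypothetical helper for spotting the repetitive-loop symptom.

```python
def build_sampling_params(presence_penalty: float = 1.5,
                          temperature: float = 0.6,
                          top_p: float = 0.95) -> dict:
    """Sampling overrides of the kind users report experimenting with.

    The values here are placeholders, not verified settings; the point is
    that the larger Qwen 3.5 variants reportedly need manual tuning rather
    than working well at server defaults.
    """
    return {
        "presence_penalty": presence_penalty,  # raised to discourage loops
        "temperature": temperature,
        "top_p": top_p,
    }


def looks_like_reasoning_loop(trace: str, window: int = 40,
                              max_repeats: int = 3) -> bool:
    """Crude heuristic for the infinite-loop failure mode.

    Flags a reasoning trace whose last `window` characters recur
    `max_repeats` or more times, which is typical of a model stuck
    repeating the same reflection over and over.
    """
    tail = trace[-window:]
    return len(tail) == window and trace.count(tail) >= max_repeats
```

A caller might pass `build_sampling_params()` into its inference request and retry with a higher presence penalty whenever `looks_like_reasoning_loop` fires on the returned trace.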
DISCOVERED: 2026-04-15
PUBLISHED: 2026-04-15
AUTHOR: Glad-Mode9459