Qwen3.6 hits infinite Yoshi reasoning loop
OPEN_SOURCE
REDDIT // 3h ago · MODEL RELEASE


The newly released Qwen3.6-35B-A3B exhibits a humorous "infinite reasoning loop" failure mode when tasked with simple ASCII art. Despite its strong agentic-coding performance, the model's recursive "thinking" mode can lock it into a resource-draining cycle of self-correction on open-ended creative requests, endlessly overthinking without ever producing a final output.

// ANALYSIS

The "Yoshi incident" highlights a critical executive function gap in reasoning-native LLMs where internal logic becomes decoupled from output generation.

  • Native "thinking" toggles can create a "hallucination of progress," as the model endlessly self-corrects without delivering a final response.
  • Sparse MoE architecture (35B total / 3B active) allows for extreme speeds (200+ tok/s), making these recursive loops particularly aggressive in token consumption.
  • While the model hits a record 73.4% on SWE-bench Verified, its reliance on rigorous self-critique fails when there is no objective "correct" solution.
  • Community users suggest raising the repetition penalty or disabling thinking mode for non-logical tasks so that generation actually terminates.
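Beyond the sampler-side fixes above, a caller can also guard against runaway loops on the client. The sketch below is a minimal, hypothetical watchdog, not part of any Qwen API; the function name `looping` and the n-gram/window thresholds are illustrative choices. It flags a token stream when the most recent n-gram keeps reappearing in the trailing window, the signature of recursive self-correction:

```python
from collections import deque

def looping(tokens, window=64, ngram=8, max_repeats=3):
    """Heuristic loop detector: True when the last `ngram` tokens have
    already appeared `max_repeats`+ times in the trailing `window`
    tokens -- a sign the model is re-emitting the same correction."""
    tail = list(tokens)[-window:]
    if len(tail) < ngram:
        return False
    probe = tuple(tail[-ngram:])
    hits = sum(
        1 for i in range(len(tail) - ngram + 1)
        if tuple(tail[i:i + ngram]) == probe
    )
    return hits >= max_repeats

# Demo: a stream stuck re-emitting the same phrase trips the detector.
stream = deque(maxlen=256)
for tok in ["Let's", "redraw", "Yoshi", "again", "."] * 10:
    stream.append(tok)
print(looping(stream))  # prints True
```

In a real streaming client one would append each decoded token to the deque and break out of the generation loop as soon as `looping()` fires, capping the token spend before the budget drains.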
// TAGS
qwen3.6-35b-a3b · qwen · llm · reasoning · open-weights · moe · ai-coding

DISCOVERED

3h ago

2026-04-17

PUBLISHED

5h ago

2026-04-17

RELEVANCE

8 / 10

AUTHOR

anzzax