Qwen3.6-35B-A3B stalls inside agent wrappers
OPEN_SOURCE
REDDIT // 6h ago · MODEL RELEASE

Qwen3.6-35B-A3B runs fine in Ollama’s CLI, but this Reddit thread reports it hanging inside OpenCode and Claude Code. The debate is whether the issue is the new model, a too-small context window, or missing agent/tool-call config.

// ANALYSIS

My read: this is more likely a wrapper mismatch than a broken model. Qwen3.6 is explicitly aimed at agentic coding, but its official docs assume specific reasoning and tool-call parsers that local agent shells may not be wired for by default.

  • Qwen’s docs show Qwen3.6 defaults to thinking mode and recommend serving with the `qwen3` reasoning parser plus the `qwen3_coder` tool-call parser for tool use, which means a generic agent client can stall even when plain chat works.
  • Context is worth checking, but it is probably not the root cause: the model natively supports 262K tokens, and Qwen says 128K+ helps preserve thinking behavior. A 4K default context window would be bad for agents, but it is not the only plausible failure mode.
  • Qwen also documents a non-thinking path via `chat_template_kwargs.enable_thinking: false`, so some agent workflows may need thinking disabled or preserved explicitly rather than left to defaults.
  • The practical takeaway is that “works in `ollama run`” does not prove agent compatibility; tool protocols, chat templates, and wrapper support matter more than raw generation speed here.
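None of this is confirmed for the thread's exact setup, but the non-thinking path in the Qwen docs maps onto the OpenAI-compatible request shape most local servers expose. A minimal sketch of pinning thinking behavior explicitly instead of trusting wrapper defaults (the model name and `chat_template_kwargs` pass-through are assumptions; vLLM, for example, accepts it via the OpenAI client's `extra_body`):

```python
# Sketch: building an OpenAI-style /v1/chat/completions body that sets
# thinking mode explicitly, rather than leaving it to server defaults.
# The payload is built as a plain dict so an agent wrapper's request can
# be inspected before anything is sent to the local server.
import json

def build_chat_payload(messages, model="qwen3.6-35b-a3b", enable_thinking=False):
    """Return an OpenAI-compatible chat request body with thinking pinned.

    `chat_template_kwargs` is forwarded to the chat template by servers
    that support it; whether a given agent shell passes it through is
    exactly the kind of wrapper mismatch the thread is debugging.
    """
    return {
        "model": model,  # placeholder tag; match whatever the server registers
        "messages": messages,
        "chat_template_kwargs": {"enable_thinking": enable_thinking},
    }

payload = build_chat_payload([{"role": "user", "content": "List the repo's tests."}])
print(json.dumps(payload, indent=2))
```

If the wrapper strips unknown fields like `chat_template_kwargs`, the model silently falls back to its default thinking behavior, which is one concrete way "works in `ollama run`" and "hangs in the agent" can coexist.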
// TAGS
qwen3.6-35b-a3b · qwen · llm · agent · ai-coding · cli · ollama · opencode · claude-code

DISCOVERED

6h ago

2026-04-18

PUBLISHED

7h ago

2026-04-18

RELEVANCE

9/10

AUTHOR

vuncentV7