OPEN_SOURCE
REDDIT · 3h ago · MODEL RELEASE
Qwen3.6-35B-A3B fails on array tool inputs
Qwen3.6-35B-A3B, a 35B parameter Mixture-of-Experts model, is facing criticism for incorrectly stringifying array inputs during tool calls. Despite specific "agentic coding" optimizations, the model struggles with schema adherence in complex workflows.
// ANALYSIS
Qwen3.6-35B-A3B is an impressive MoE model, but its tool-calling reliability is hit-or-miss for complex agentic tasks.
- The model incorrectly formats array parameters as stringified JSON (e.g., ["cmd"] becomes "[\"cmd\"]"), breaking standard tool execution.
- Even when provided with error feedback, the model often fails to correct its formatting in subsequent turns.
- Developers are forced to use "dirty" pre-parsing layers or custom Jinja templates to work around these formatting drifts.
- Despite these issues, its coding performance (SWE-bench Verified: 73.4) remains highly competitive for its size.
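The pre-parsing workaround mentioned above can be sketched as a small normalization pass over the model's tool-call arguments. This is a minimal, hypothetical example (the helper name `normalize_tool_args` and the `"command"` parameter are illustrative, not from the report): any string value that parses as a JSON array or object is re-parsed before the tool is executed.

```python
import json

def normalize_tool_args(args: dict) -> dict:
    """Repair tool-call arguments the model emitted as stringified JSON.

    If a value is a string that looks like (and parses as) a JSON array
    or object, replace it with the parsed value; otherwise keep it as-is.
    """
    fixed = {}
    for key, value in args.items():
        if isinstance(value, str):
            stripped = value.strip()
            if stripped.startswith(("[", "{")):
                try:
                    fixed[key] = json.loads(stripped)
                    continue  # parsed successfully; move to next key
                except json.JSONDecodeError:
                    pass  # not valid JSON; fall through and keep the string
        fixed[key] = value
    return fixed

# The failure mode described: the model sends {"command": "[\"ls\", \"-la\"]"}
# where the schema expects {"command": ["ls", "-la"]}.
print(normalize_tool_args({"command": '["ls", "-la"]', "cwd": "/tmp"}))
```

A layer like this sits between the model's raw output and the tool dispatcher; it deliberately leaves plain strings (such as paths) untouched, since only bracket-prefixed strings are candidates for re-parsing.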
// TAGS
qwen3.6-35b-a3b · qwen · llm · tool-calling · agent · ai-coding · open-weights · moe
DISCOVERED
2026-04-17
PUBLISHED
2026-04-17
RELEVANCE
9/10
AUTHOR
benevbright