OPEN_SOURCE
REDDIT // 7h ago // INFRASTRUCTURE
Qwen3.6 Tool Calls Fray in n8n
A Reddit user says Qwen3.6-35B-A3B works well in Roo Code but inconsistently fails to trigger tools inside an n8n workflow served through llama.cpp. The discussion points less to a bad model and more to a mismatch between the model’s tool-calling format and what n8n expects from the serving API.
// ANALYSIS
This looks like a harness problem masquerading as a model problem. Qwen’s own docs emphasize tool-use support and recommend specific server-side tool-call parsers, which is exactly where n8n integrations tend to break.
- n8n wants structured OpenAI-style `tool_calls`; raw `<tool_call>` XML or plain chat text will often be ignored.
- Qwen3.6's official model card says tool use is supported and shows an explicit `--tool-call-parser qwen3_coder` setup for serving stacks like vLLM and SGLang.
- Roo Code and OpenWebUI can hide a lot of backend mistakes because they include client-side parsing logic that n8n does not.
- If llama.cpp is emitting the wrong chat template or a non-OpenAI tool format, the fix is usually in the server config, not in the workflow.
- For local agent setups, the practical test is the raw `/v1/chat/completions` response: if it does not return structured `tool_calls`, n8n will stay flaky.
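That last check can be scripted. A minimal sketch of the triage: classify whether a `/v1/chat/completions` response carries an OpenAI-style `message.tool_calls` array (what n8n consumes) or has leaked `<tool_call>` markup into `message.content` (what a misconfigured chat template tends to produce). The sample payloads and the `get_weather` function name are illustrative, not from the thread.

```python
import json
import re

def classify_tool_response(completion: dict) -> str:
    """Classify how a chat-completions response conveys a tool call.

    Returns one of:
      "structured" - OpenAI-style message.tool_calls (what n8n expects)
      "raw_text"   - <tool_call> markup leaked into message.content
      "none"       - no tool call detected
    """
    msg = completion["choices"][0]["message"]
    if msg.get("tool_calls"):
        return "structured"
    content = msg.get("content") or ""
    if re.search(r"<tool_call>", content):
        return "raw_text"
    return "none"

# Illustrative payloads: the first is what n8n can consume; the
# second is what a wrong chat template or missing parser often emits.
structured = {"choices": [{"message": {
    "role": "assistant", "content": None,
    "tool_calls": [{"id": "call_0", "type": "function",
                    "function": {"name": "get_weather",
                                 "arguments": json.dumps({"city": "Berlin"})}}]}}]}
leaked = {"choices": [{"message": {
    "role": "assistant",
    "content": "<tool_call>{\"name\": \"get_weather\"}</tool_call>"}}]}

print(classify_tool_response(structured))  # structured
print(classify_tool_response(leaked))      # raw_text
```

Point it at a captured response from the local server (e.g. the JSON body returned by llama.cpp's OpenAI-compatible endpoint): if it never comes back `"structured"`, the fix belongs in the serving config, not in the n8n workflow.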
// TAGS
qwen3.6-35b-a3b · llm · agent · automation · inference · self-hosted · n8n
DISCOVERED
7h ago
2026-04-17
PUBLISHED
8h ago
2026-04-17
RELEVANCE
8 / 10
AUTHOR
TimWardle