OPEN_SOURCE ↗
REDDIT · 9d ago · MODEL RELEASE
Korean tool calls stall on Qwen3.5
This Reddit post describes a local LLM workflow where a Qwen3.5-35B-A3B Opus-distilled model reliably reaches tool calls in English but appears to stall after starting a response in Korean. The user is asking whether this is an inherent limitation or something that can be fixed with prompting.
// ANALYSIS
Hot take: this is probably a tooling/prompt-format problem, not “Korean is impossible.”
- Qwen3.5 is documented to default to thinking mode, and its tool-use path depends on the serving stack parsing the model’s output correctly.
- The fact that English works but Korean hangs suggests the prompt template or parser is less robust outside English, not that the model cannot reason in Korean.
- A stronger system prompt that forces a fixed tool-call format can help, but if the backend parser is brittle you may still need serving-side changes.
- If the model is run with the wrong reasoning or tool-call parser, it can emit “I will read the file now:” and then stall instead of producing the actual function call.
- Inference: the most likely fix combines tighter tool-call instructions, an explicit language constraint for the tool-call segment, and verifying that the inference backend supports Qwen3.5 tool calls correctly.
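The “narrates, then stalls” failure mode above is detectable client-side. A minimal sketch, assuming the Qwen-family convention of wrapping tool-call JSON in `<tool_call>…</tool_call>` tags (documented for Qwen2.5; assumed to carry over to Qwen3.5) — the `SYSTEM_PROMPT` wording and `read_file` tool are illustrative, not from the post:

```python
import json
import re

# Assumption: Qwen-family chat templates emit tool calls as JSON wrapped
# in <tool_call>...</tool_call> tags (true of Qwen2.5; assumed here).
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

# Illustrative system prompt pinning the tool-call segment to a fixed,
# English-JSON format regardless of the conversation language.
SYSTEM_PROMPT = (
    "Answer in the user's language, but emit every tool call exactly as "
    '<tool_call>{"name": ..., "arguments": ...}</tool_call> '
    "with English JSON keys and no surrounding prose."
)

def extract_tool_calls(response: str) -> list[dict]:
    """Return parsed tool calls; [] means the model narrated instead of calling."""
    calls = []
    for match in TOOL_CALL_RE.finditer(response):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # malformed JSON: treat like a stall, let the caller retry
    return calls

# A Korean turn that stops after announcing intent yields no parsed calls,
# so the client can retry with a stricter reminder instead of hanging:
stalled = "이제 파일을 읽겠습니다:"  # "I will read the file now:"
ok = '<tool_call>{"name": "read_file", "arguments": {"path": "a.txt"}}</tool_call>'
assert extract_tool_calls(stalled) == []
assert extract_tool_calls(ok)[0]["name"] == "read_file"
```

An empty result on a turn that should have called a tool is the signal to re-prompt (or to check the server’s tool-call parser configuration) rather than wait on a response that will never contain a call.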
// TAGS
qwen3.5-35b-a3b-claude-4.6-opus-reasoning-distilled · qwen · qwen3.5 · local-llm · tool-calling · multilingual · korean · reasoning · inference · llm-serving
DISCOVERED
9d ago
2026-04-02
PUBLISHED
10d ago
2026-04-02
RELEVANCE
6 / 10
AUTHOR
Interesting-Print366