OPEN_SOURCE
REDDIT · 2d ago · NEWS
Qwen3.5 35B degenerates into runs of slashes on llama.cpp
A Reddit user reports that Qwen3.5 35B quantized to Q5_K_M runs normally at first in opencode via llama.cpp, but after extended use it eventually degenerates into outputting only slashes until the stream ends. They’ve reproduced it across rebuilds and with reasoning disabled, and note that GPU and CPU utilization stay normal during the failure, which suggests a model/template/runtime interaction rather than a simple crash.
// ANALYSIS
Hot take: this looks less like a random hardware fault and more like a long-context or chat-template degeneration issue that only shows up after the session has accumulated enough state.
- The symptom is deterministic enough to matter: it works for a while, then falls into a repetitive token loop instead of failing outright.
- The fact that resource usage stays steady points away from an obvious OOM or process stall.
- The configuration uses very long context, aggressive penalties, and a custom llama.cpp/opencode stack, so the likely fault surface is template handling, cache state, or quant/model compatibility.
- Slash spam is a classic “collapsed generation” failure mode in local inference setups, especially when the prompt history or formatting drifts over time.
- –The post is useful signal for anyone running Qwen3.5 locally, but it is not a launch announcement or product update.
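Since collapsed generation wastes compute until the context fills, a client-side guard is a cheap mitigation. Below is a minimal sketch (a generic streaming client, not the poster's actual opencode setup; the function names and thresholds are assumptions) that aborts the stream once the tail of the output collapses into a run of one or two repeated characters, such as an endless slash spam:

```python
# Sketch of a client-side guard against collapsed generation.
# Assumption: `token_iter` yields text chunks from any streaming backend;
# this is not llama.cpp's or opencode's actual API.

def is_collapsed(text: str, window: int = 64, max_unique: int = 2) -> bool:
    """True when the last `window` chars use at most `max_unique` distinct
    characters, e.g. an unbroken run of slashes."""
    tail = text[-window:]
    return len(tail) == window and len(set(tail)) <= max_unique

def stream_with_guard(token_iter):
    """Accumulate streamed chunks, stopping early on degenerate output."""
    out = []
    for tok in token_iter:
        out.append(tok)
        if is_collapsed("".join(out)):
            break  # tail collapsed into a token loop; stop the stream
    return "".join(out)
```

A character-level window is a crude proxy for token-level loop detection, but it catches the single-character spam described in the post without needing access to the sampler's token IDs.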
// TAGS
qwen3.5 · llama.cpp · local-llm · quantization · opencode · inference-bug · llm-debugging
DISCOVERED
2d ago
2026-04-10
PUBLISHED
2d ago
2026-04-09
RELEVANCE
8/10
AUTHOR
keepthememes