Qwen3.5-9B lands Opus 4.6 GGUF reasoning boost
OPEN_SOURCE ↗
REDDIT // 20d ago · MODEL RELEASE


This is a local-first Qwen3.5-9B finetune/export pipeline that leans on nohurry/Opus-4.6-Reasoning-3000x-filtered, mixes in function-calling and assistant data, and ships clean GGUF quants with llama.cpp. The author's first GSM8K pass on Q4_K_M lands around 0.84 exact match, and the RTX 4090 throughput numbers make the release feel practical rather than purely experimental.
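The post doesn't include the author's eval harness, but the GSM8K exact-match figure is straightforward to reproduce in spirit: GSM8K reference solutions end in `#### <answer>`, and a common convention is to take the last number in the model's completion as its prediction. A minimal sketch of that scoring logic (the answer-extraction regex and function names here are illustrative assumptions, not the author's code):

```python
import re

def gold_answer(solution: str) -> str:
    """GSM8K reference solutions end with '#### <answer>'."""
    return solution.rsplit("####", 1)[-1].strip().replace(",", "")

def predicted_answer(completion: str) -> str:
    """Common convention: take the last number in the completion as the answer."""
    nums = re.findall(r"-?\d[\d,]*\.?\d*", completion)
    return nums[-1].replace(",", "") if nums else ""

def exact_match(completions, solutions) -> float:
    """Fraction of completions whose extracted answer equals the gold answer."""
    hits = sum(predicted_answer(c) == gold_answer(s)
               for c, s in zip(completions, solutions))
    return hits / len(solutions)
```

A reported ~0.84 exact match would mean roughly 84% of completions survive this kind of last-number comparison; differences in extraction rules alone can move small-model scores by a few points, which is worth keeping in mind when comparing quants.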

// ANALYSIS

This looks like a credible local-model release rather than a vanity quant drop: the recipe is focused, the early GSM8K number is strong, and the speed tradeoff data is actually useful. The real test is whether the reasoning gains hold up on messy instruction-following and structured outputs, which is where most small finetunes separate themselves.

  • The blend of Opus 4.6 reasoning data with `Salesforce/xlam-function-calling-60k` and `OpenAssistant/oasst2` is the right kind of mix if the goal is a small assistant that can reason and format outputs, not just ace math.
  • `Q4_K_M` looks like the day-to-day winner; it should be the first quant most people try, while `Q8_0` is the safer pick if you want to squeeze out a bit more fidelity.
  • The benchmark story is still early because only `Q4_K_M` has a task eval so far; `Q8_0` is speed-tested, but not yet quality-compared head to head.
  • The explicit naming (`opus46`, `mix`, `i1`) is a nice touch for reproducibility and future comparisons.
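The `Q4_K_M` vs `Q8_0` tradeoff above is mostly about bits per weight. Using ballpark bpw averages for llama.cpp quants (assumption: these are rough published averages, not exact figures for this model), a back-of-envelope file-size estimate for a 9B-parameter model looks like:

```python
# Rough GGUF file-size estimate from bits-per-weight (bpw).
# Assumption: approximate bpw averages for llama.cpp quant types;
# real files also carry metadata and unquantized tensors.
BPW = {"Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Estimate weight payload in GB: params * bits-per-weight / 8 bits-per-byte."""
    return n_params * BPW[quant] / 8 / 1e9

for q in ("Q4_K_M", "Q8_0"):
    print(f"{q}: ~{gguf_size_gb(9e9, q):.1f} GB")
```

For 9B parameters this works out to roughly 5.5 GB for `Q4_K_M` versus 9.6 GB for `Q8_0`, which is why `Q4_K_M` tends to win on throughput and VRAM headroom while `Q8_0` keeps more fidelity per weight.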
// TAGS
qwen35-9b-opus46-mix-i1-gguf · llm · fine-tuning · reasoning · inference · open-weights · benchmark · self-hosted

DISCOVERED

20d ago

2026-03-23

PUBLISHED

20d ago

2026-03-23

RELEVANCE

8/10

AUTHOR

RiverRatt