OPEN_SOURCE
REDDIT // 19d ago · BENCHMARK RESULT
OpenCode, Qwen3.5 397B impress on DGX Sparks
A LocalLLaMA user ran a "vibe benchmark" of OpenCode with Qwen3.5-397B-A17B-int4-AutoRound on two DGX Sparks linked by InfiniBand and found it genuinely usable. For server management and Rust app work, it felt close enough to Cursor and Claude Code to justify daily use, and the user skipped the slower 27B option.
// ANALYSIS
Local coding agents are crossing the threshold where throughput and workflow fit matter as much as model quality. The telling signal is not that the setup works, but that a daily Cursor and Claude Code user is willing to keep using it.
- OpenCode's model-agnostic terminal flow makes it a natural place to test local stacks without changing habits.
- Qwen3.5-397B-A17B plus INT4 AutoRound shows quantization is making huge open-weight models practical on real hardware, not just paper benchmarks.
- Skipping the 27B because TPS was lower underlines a core agentic-coding tradeoff: faster token flow can beat a smaller model.
- This is anecdotal, not a controlled eval, but it matches broader chatter that Qwen3.5 is unusually strong for local coding.
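The throughput tradeoff in the bullets above can be sketched with back-of-envelope arithmetic: an agentic session is many generate-inspect-retry turns, so wall-clock time per turn is dominated by tokens per second. All numbers below are illustrative assumptions, not measurements from the post.

```python
def turn_seconds(output_tokens: int, tps: float) -> float:
    """Seconds to generate one agent turn at a given tokens/sec."""
    return output_tokens / tps

# Hypothetical figures for illustration only: a large MoE with few
# active parameters can decode faster than a smaller dense model.
big_moe_tps = 30.0    # assumed: large MoE (17B active) on two Sparks
dense_27b_tps = 12.0  # assumed: slower dense 27B in the same setup

tokens_per_turn = 600  # assumed typical agent response length

print(f"large MoE: {turn_seconds(tokens_per_turn, big_moe_tps):.0f}s/turn")
print(f"dense 27B: {turn_seconds(tokens_per_turn, dense_27b_tps):.0f}s/turn")
```

Under these assumptions the larger model finishes each turn in well under half the time, which is exactly the kind of gap that decides whether a local agent feels usable day to day.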
// TAGS
opencode · ai-coding · agent · cli · self-hosted · open-weights · gpu · benchmark
DISCOVERED
2026-03-24
PUBLISHED
2026-03-23
RELEVANCE
8/10
AUTHOR
einthecorgi2