OPEN_SOURCE
REDDIT // 3h ago · MODEL RELEASE
Qwen3.6-35B-A3B drops amid secrecy concerns
Alibaba's sparse MoE model, Qwen3.6-35B-A3B, delivers agentic coding performance that rivals much larger dense models while activating just 3B parameters per token. The weights remain open, but the community is increasingly wary of a shift toward proprietary models by major AI labs.
// ANALYSIS
The 35B-A3B release solidifies Qwen's position as the leader in "efficient frontier" models, built specifically for autonomous agent loops.
- Sparse MoE architecture (35B total parameters, 3B active per token) enables high-performance inference on consumer-grade dual-GPU setups (see the routing sketch after this list).
- Native "thinking preservation" tackles "agent amnesia" by carrying reasoning traces across turns, keeping multi-step, repository-level tasks stable (sketched below).
- Performance on SWE-bench Verified (73.4%) rivals much larger dense models, evidence that sparse routing scales efficiently.
- Growing community concern points to a strategic shift at major AI labs, potentially prioritizing proprietary APIs over future open-weight releases.
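Why the 35B-total/3B-active split matters: a learned router sends each token to only a few experts, so memory must hold every expert but compute touches a small slice per token. As rough arithmetic, 35B weights at 4-bit quantization occupy about 17.5 GB, which is what puts dual consumer GPUs in range. The sketch below is illustrative PyTorch with made-up dimensions and expert counts, not Qwen's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy top-k sparse MoE layer: every expert lives in memory,
    but each token only runs through `top_k` of them."""
    def __init__(self, d_model=512, n_experts=64, top_k=2, d_ff=2048):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # learned gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)                       # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 512)
print(SparseMoELayer()(x).shape)  # torch.Size([10, 512])
```

With `n_experts=64` and `top_k=2`, only ~3% of expert parameters do work per token; Qwen's 3B active out of 35B total is the same idea at a different ratio.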
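On "thinking preservation": one common way to get stable multi-step reasoning is to keep the model's reasoning blocks in the conversation history instead of stripping them between turns, so later steps can build on earlier plans. The loop below is a hypothetical sketch; `call_model`, the message schema, and the `DONE` sentinel are assumptions for illustration, not Qwen's actual API.

```python
def call_model(messages):
    """Stand-in for a real inference call returning (thinking, answer).
    A real implementation would hit the model's chat endpoint here."""
    return "plan: inspect failing test, patch module, re-run", "DONE"

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        thinking, answer = call_model(messages)
        # Keep the reasoning trace in context so later turns can build on it;
        # discarding it between turns is what produces "agent amnesia".
        messages.append({"role": "assistant",
                         "thinking": thinking,
                         "content": answer})
        if "DONE" in answer:
            break
        messages.append({"role": "user", "content": "Continue with the next step."})
    return messages

history = run_agent("Fix the failing test in the target repository")
print(len(history), "messages retained, including reasoning traces")
```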
// TAGS
qwen3.6-35b-a3b · qwen · llm · moe · ai-coding · agent · open-weights
DISCOVERED
3h ago (2026-04-17)
PUBLISHED
4h ago (2026-04-17)
RELEVANCE
10/10
AUTHOR
Porespellar