OPEN_SOURCE ↗
HN · HACKER_NEWS // 3h ago · MODEL RELEASE
Qwen3.6-35B-A3B Tops Claude Opus 4.7
Alibaba’s first open-weight Qwen3.6 release is a 35B MoE model with 3B active parameters, built for stronger agentic coding, longer-context work, and local deployment. Simon Willison’s tongue-in-cheek pelican benchmark says it can also outdraw Claude Opus 4.7 on some SVG image prompts.
// ANALYSIS
The real story is not “better at pelicans”; it’s that a laptop-runnable open model is now credible enough to challenge frontier proprietary models in creative structured output. That’s a strong signal that sparse MoE + better post-training is making local models genuinely useful, not just cheap.
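The “laptop-runnable” claim is mostly about memory footprint and per-token compute. A rough back-of-envelope sketch is below; it assumes 4-bit weight quantization and ignores KV cache and runtime overhead, so treat the numbers as order-of-magnitude only.

```python
# Back-of-envelope for why a 35B-total / 3B-active MoE can plausibly run locally.
# Assumption: ~0.5 bytes per parameter (4-bit quantization); real builds vary.
total_params = 35e9
active_params = 3e9
bytes_per_param_q4 = 0.5

weight_memory_gb = total_params * bytes_per_param_q4 / 1e9   # ~17.5 GB of quantized weights
active_share = active_params / total_params                  # ~9% of params touched per token

print(f"~{weight_memory_gb:.1f} GB of quantized weights, "
      f"~{active_share:.0%} of parameters active per token")
```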
- The model card says Qwen3.6-35B-A3B is Apache 2.0, with 35B total params, 3B activated, and native support for multimodal input.
- Alibaba is positioning the release around agentic coding and “thinking preservation,” which matters more for day-to-day dev workflows than benchmark vanity.
- The benchmark table shows competitive results across coding, tool use, reasoning, and vision-language tasks, with especially strong agentic coding scores.
- Willison’s comparison is deliberately absurd, but it’s a useful reminder that model quality is now multi-dimensional: a model can be “worse” overall and still be better at a specific task on consumer hardware.
- For developers, the practical takeaway is simpler: this is another strong open model to try locally before defaulting to a closed frontier API (see the sketch below).
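A minimal way to kick the tires is the standard Hugging Face transformers flow. This is a sketch, not the release’s documented quickstart: the Hub ID `Qwen/Qwen3.6-35B-A3B` is assumed from the model name, and the prompt simply mirrors Willison’s pelican benchmark.

```python
# Sketch: load the model with Hugging Face transformers and run a pelican-style
# SVG prompt. The Hub ID below is assumed from the release name and may differ;
# check the actual model card before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.6-35B-A3B"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread weights across available GPU/CPU memory
)

messages = [
    {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

On a laptop without a large GPU, a quantized build served through llama.cpp or Ollama is the more realistic route; the transformers call above is just the lowest-friction check that the weights and chat template behave as advertised.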
// TAGS
qwen3.6-35b-a3b · llm · open-source · multimodal · reasoning · agent · ai-coding
DISCOVERED
3h ago
2026-04-16
PUBLISHED
5h ago
2026-04-16
RELEVANCE
9/10
AUTHOR
simonw