OPEN_SOURCE ↗
REDDIT // 5h ago · MODEL RELEASE
Qwen3.6 one-shots flashy Tetris clone
A LocalLLaMA user shared a CodePen Tetris game generated in one prompt by Qwen3.6-35B-A3B running locally through llama.cpp with Unsloth's GGUF quant. The demo is anecdotal, but it lands squarely in the model's advertised strength: agentic frontend coding from a compact sparse MoE.
// ANALYSIS
This is not a benchmark, but it is the kind of artifact developers actually care about: can the model turn a loose prompt into a working, polished web app without a repair loop?
- Qwen3.6-35B-A3B is a 35B sparse MoE model with roughly 3B active parameters, making local coding demos unusually practical for its capability class
- The generation ran to 9,113 tokens over about 11 minutes at 13.62 tokens/sec, so the quality came with real latency
- Unsloth's GGUF path matters because it lowers the hardware barrier for local experimentation, especially with llama.cpp-style serving
- A flashy Tetris clone is toy-scale, but frontend coherence, game state, animation, and particle effects are a useful stress test for one-shot code generation
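The latency figures quoted in the thread are internally consistent; a quick sanity check, using only the numbers from the post (not re-measured):

```python
# Sanity-check the reported generation stats from the Reddit post:
# 9,113 tokens at 13.62 tokens/sec should land near the quoted ~11 minutes.
total_tokens = 9113
tokens_per_sec = 13.62

seconds = total_tokens / tokens_per_sec
minutes = seconds / 60

print(f"{seconds:.0f} s = {minutes:.1f} min")  # 669 s = 11.2 min
```

This is the single-request decode time; on a local llama.cpp setup, prompt processing adds a few more seconds on top.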
// TAGS
qwen3-6-35b-a3b · qwen · llm · ai-coding · open-weights · inference
DISCOVERED
5h ago
2026-04-22
PUBLISHED
5h ago
2026-04-22
RELEVANCE
8 / 10
AUTHOR
deadman87