OPEN_SOURCE
REDDIT // 21d ago // BENCHMARK RESULT
Nemotron Cascade 2 posts strong code gains
The Reddit post spotlights NVIDIA’s Nemotron-Cascade-2-30B-A3B, an open 30B MoE model with 3B active parameters. The author’s local IQ4_XS quant run claims 97.6% on HumanEval and 88% on ClassEval, which makes it look unusually strong for its size.
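The headline figures are pass@1-style scores: the fraction of benchmark problems whose single generated completion passes that problem's unit tests. A minimal sketch of how such a score is computed (the problems and completions below are hypothetical stand-ins, not the author's actual harness):

```python
# Minimal pass@1 scorer: a problem "passes" if the generated
# completion executes without error against the benchmark's asserts.

def passes(problem_tests: str, completion: str) -> bool:
    """Run the completion, then the benchmark's tests, in one namespace."""
    env: dict = {}
    try:
        exec(completion, env)      # candidate solution from the model
        exec(problem_tests, env)   # assert-based unit tests
        return True
    except Exception:
        return False

def pass_at_1(results: list[bool]) -> float:
    """pass@1 with one sample per problem reduces to the plain pass rate."""
    return 100.0 * sum(results) / len(results)

# Toy example: two problems, one correct and one buggy completion.
good = passes("assert add(2, 3) == 5", "def add(a, b):\n    return a + b")
bad = passes("assert mul(2, 3) == 6", "def mul(a, b):\n    return a - b")
print(pass_at_1([good, bad]))  # 50.0
```

Real harnesses sandbox the `exec` step and sample under fixed decoding settings; quantization (here IQ4_XS) can shift these scores, which is why a community-run number warrants replication.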
// ANALYSIS
This is the kind of release that can get buried under bigger model chatter, but it matters a lot for local inference: a compact open MoE with surprisingly sharp code results is exactly what people running on consumer hardware want to see.
- NVIDIA’s official docs place the model on top of Nemotron-3-Nano’s hybrid Mamba-Transformer MoE lineage, so the “not just another Qwen fork” angle appears to hold.
- The reported HumanEval and ClassEval numbers are impressive, especially coming from a quantized local run rather than a curated vendor benchmark.
- The official model card also frames Cascade 2 as thinking/instruct-capable, with strong results across coding, math, tool use, and agentic evals, so this is broader than a code-only toy.
- The big caveat is that this is still one community datapoint; it’s promising signal, not final proof, but it’s enough to justify a wider bake-off.
// TAGS
nemotron-cascade-2-30b-a3b · llm · benchmark · reasoning · ai-coding · open-weights
DISCOVERED
2026-03-21
PUBLISHED
2026-03-21
RELEVANCE
9/10
AUTHOR
ilintar