OPEN_SOURCE
REDDIT · 2h ago · NEWS

Reddit eyes larger Gemma 4, Qwen3.6

This Reddit thread asks whether Google and Alibaba will ship larger MoE follow-ups to Gemma 4 and Qwen3.6. Officially, Gemma 4 tops out at a 26B MoE and a 31B dense variant, while Qwen3.6 already offers 35B-A3B and Qwen3.6-Plus, so anything bigger remains speculation.

// ANALYSIS

The demand is real: people want a “flagship-but-runnable” open model that feels closer to the big closed models without blowing up local inference costs. But the current releases suggest both labs are still balancing capability against deployability, not simply scaling for bragging rights.

  • Gemma 4 is already a four-size family with a 26B MoE and 31B dense variant; Google has not officially announced a larger open MoE checkpoint yet.
  • Qwen3.6 has moved fast on both open weights and API variants, but the jump to a much larger MoE like a 122B-class model remains community speculation, not a confirmed roadmap.
  • The thread highlights a clear local-LLM preference: if active parameters stay modest, a larger MoE can be attractive; if not, the hardware cost quickly becomes impractical (see the sizing sketch after this list).
  • For developers, the interesting question is less “will they scale?” and more “will they keep the activation footprint small enough to stay self-hostable?”
  • Treat this as sentiment and roadmap-watching, not a release announcement.
// TAGS
gemma-4 · qwen3.6 · llm · open-weights · inference · self-hosted

DISCOVERED
2h ago (2026-04-30)

PUBLISHED
5h ago (2026-04-30)

RELEVANCE
8/10

AUTHOR
Non-Technical