OPEN_SOURCE
REDDIT // 3h ago · MODEL RELEASE
Mistral Medium Rumored at 128B
Redditors are speculating that Mistral’s next Medium-class model is coming soon, with one clue pointing to a 128B-parameter system. If that number holds, it likely means a denser (i.e., less sparse) MoE sibling to Mistral Small 4 rather than a tiny local model.
// ANALYSIS
Hot take: this is less about the raw parameter count and more about Mistral trying to reclaim the “practical frontier model” slot with something that feels big enough for serious work but is still deployable.
- Mistral Small 4 is officially 119B parameters with 6.5B active, so a 128B Medium would probably be another MoE-style design, not a straightforward dense jump.
- The Reddit thread cites a vLLM PR, which makes this feel like a tooling breadcrumb rather than an official launch announcement (see the loading sketch after this list).
- If Mistral ships a Medium successor that is meaningfully better on coding and reasoning, that would strengthen its position against bigger but pricier closed models.
- For local and self-hosted users, the key question is active parameters and memory footprint, not total parameter count; the back-of-envelope estimate below shows why both figures matter.
- The naming cadence suggests Mistral is continuing a rapid family refresh across Small, Medium, and the surrounding tooling ecosystem.
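If the vLLM PR is what it looks like, engine support would land before any announcement, which is exactly how self-hosters would first touch the model. A minimal sketch of what that would look like, assuming a hypothetical Hugging Face repo id (no Medium checkpoint has actually been published):

```python
# Minimal vLLM sketch. The model id below is invented for illustration;
# swap in the real repo name if/when a Medium release actually ships.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Medium-Instruct",  # hypothetical repo id
    tensor_parallel_size=4,  # a ~128B-class model won't fit on one GPU at fp16
)

params = SamplingParams(temperature=0.7, max_tokens=256)
out = llm.generate(["Summarize mixture-of-experts routing in two sentences."], params)
print(out[0].outputs[0].text)
```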
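On the last point: for an MoE, every expert’s weights must be resident, so memory scales with the total parameter count, while per-token compute scales with the active count. A rough estimate, assuming the thread’s 128B total and reusing Small 4’s reported 6.5B active figure (Medium’s actual active count is unknown):

```python
def weight_gib(params_billions: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights (ignores KV cache and activations)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

TOTAL_B = 128.0   # rumored total parameters
ACTIVE_B = 6.5    # assumption: same active count as Mistral Small 4

for label, bpp in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label:9}: ~{weight_gib(TOTAL_B, bpp):6.1f} GiB weights, "
          f"~{ACTIVE_B}B active params per token")
```

Even at int4 that is roughly 60 GiB of weights, which is why the active count, not the headline 128B, determines how fast it feels once you can fit it at all.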
// TAGS
mistral · llm · open-weights · inference · reasoning
DISCOVERED
3h ago
2026-04-28
PUBLISHED
5h ago
2026-04-28
RELEVANCE
9/10
AUTHOR
Few_Painter_5588