OPEN_SOURCE · REDDIT · MODEL RELEASE

Mistral Small 4 drops 119B MoE model

Mistral AI launches Mistral Small 4 (119B-2603), a unified Mixture-of-Experts model integrating instruction, reasoning, and coding capabilities. The Apache 2.0 release features a 256k context window and a configurable "Reasoning Effort" mode for deep problem-solving.
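
// EXAMPLE: REASONING EFFORT (SKETCH)

A minimal sketch, assuming the model is served locally behind an OpenAI-compatible endpoint (e.g. vLLM or llama.cpp server). The base URL, the model id "mistral-small-4", and the reasoning_effort field name are illustrative assumptions, not confirmed details of the release.

from openai import OpenAI

# Point the standard OpenAI client at a local, OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-small-4",  # assumed model id
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    # Hypothetical server-side knob trading inference time for reasoning depth.
    extra_body={"reasoning_effort": "high"},
)
print(response.choices[0].message.content)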

// ANALYSIS

Mistral Small 4 is a major architectural pivot that brings 100B+ tier reasoning to the Small family via a sparse Mixture-of-Experts design. It uses 128 experts with 4 active per token, keeping activated parameters at just 6.5B and preserving high inference speed despite the 119B total size. The release unifies the Instruct, Reasoning (Magistral), and Devstral (coding) models into a single all-rounder optimized for local deployment. A new reasoning_effort parameter lets users trade inference time for depth, signaling a shift toward test-time compute for local models. Apache 2.0 licensing positions it as a royalty-free alternative to proprietary competitors such as GPT-4o mini and Claude 3.5 Haiku, delivering 100B-class performance on consumer-accessible hardware like high-memory Macs.
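
// EXAMPLE: TOP-K MOE ROUTING (SKETCH)

A rough sketch of the top-k routing pattern described above (4 of 128 experts active per token), showing why activated parameters stay a small fraction of the total. Layer widths and parameter counts below are toy values for illustration, not the actual Mistral Small 4 architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy MoE feed-forward block: route each token to top_k of n_experts."""
    def __init__(self, d_model=256, d_ff=1024, n_experts=128, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e       # tokens whose slot-th choice is expert e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

moe = TopKMoE()
total = sum(p.numel() for p in moe.parameters())
# Only the router plus top_k expert MLPs run per token.
active = (sum(p.numel() for p in moe.experts[0].parameters()) * moe.top_k
          + sum(p.numel() for p in moe.router.parameters()))
print(f"total params: {total:,}  active per token: {active:,}")
print(moe(torch.randn(8, 256)).shape)          # torch.Size([8, 256])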

// TAGS
mistral-small-4 · llm · moe · open-weights · ai-coding · reasoning · multimodal

DISCOVERED
2026-03-16

PUBLISHED
2026-03-16

RELEVANCE
10/10

AUTHOR
seamonn