Mistral Small 4 trips over agents
OPEN_SOURCE ↗
REDDIT · 23d ago · MODEL RELEASE

Mistral Small 4 is Mistral’s new open model with 256k context, native multimodality, and configurable reasoning. This Reddit thread argues that, despite the release’s ambition, its chat/template behavior still breaks common coding-agent loops like Aider, pi, and OpenCode.

// ANALYSIS

The model looks strong on paper, but agentic coding lives or dies on wrapper compatibility more than benchmark slides. If a model can’t survive tool round-trips cleanly, it won’t feel smart in daily use, no matter how good the weights are.
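To make "surviving tool round-trips" concrete, here is a minimal sketch of the message shapes a model must handle in one agent loop, using the OpenAI-style chat format that most local agent CLIs speak. The `tool_round_trip_ok` helper and the transcript are illustrative assumptions, not code from any of the tools named in the thread; a model that emits malformed `tool_calls` or rejects the follow-up `tool` message breaks exactly this kind of loop.

```python
def tool_round_trip_ok(messages):
    """Check that every assistant tool call is answered by a matching
    tool message before the next assistant turn begins."""
    pending = set()  # tool_call ids still awaiting a result
    for msg in messages:
        role = msg["role"]
        if role == "assistant":
            if pending:
                return False  # new assistant turn with unanswered calls
            for call in msg.get("tool_calls") or []:
                pending.add(call["id"])
        elif role == "tool":
            if msg.get("tool_call_id") not in pending:
                return False  # tool result without a matching call
            pending.discard(msg["tool_call_id"])
    return not pending  # all calls answered


# A well-formed round-trip: user -> assistant tool call -> tool result -> answer.
transcript = [
    {"role": "user", "content": "List files in src/"},
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "run_shell",
                                  "arguments": '{"cmd": "ls src"}'}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "main.py\nutil.py"},
    {"role": "assistant", "content": "src/ contains main.py and util.py."},
]
print(tool_round_trip_ok(transcript))  # True for a well-formed loop
```

A chat template that rejects the `tool` role mid-conversation, or reorders turns, fails this invariant even if the underlying model scores well on coding benchmarks.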

  • Mistral positions Small 4 as a 119B MoE model with 6B active parameters, Apache 2.0 licensing, and support for coding, reasoning, and multimodal workflows.
  • The user’s report is less about raw capability than about formatting failures, consecutive-message constraints, and tool-output parsing breaking common local agent stacks.
  • A commenter suggests LiteLLM as a translation layer between the agent CLI and the backend, which hints that the serving/template stack, not the weights, may be the real bottleneck.
  • For local benchmarkers, this is a reminder to score “agent compatibility” separately from coding accuracy, because a brittle chat template can sink an otherwise capable model.
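The LiteLLM suggestion above amounts to putting a message-normalization layer in front of the model. One transform such a layer commonly applies is merging consecutive same-role messages, since some Mistral chat templates reject back-to-back turns from the same role. The sketch below is an illustrative helper under that assumption, not LiteLLM's actual implementation:

```python
def merge_consecutive(messages):
    """Merge adjacent messages that share a role, so templates that
    require strictly alternating roles accept the transcript."""
    merged = []
    for msg in messages:
        same_role = merged and merged[-1]["role"] == msg["role"]
        has_calls = "tool_calls" in msg or (merged and "tool_calls" in merged[-1])
        if same_role and not has_calls:
            # Fold this message's text into the previous turn.
            merged[-1]["content"] = (
                (merged[-1]["content"] or "") + "\n\n" + (msg["content"] or "")
            )
        else:
            merged.append(dict(msg))
    return merged


# Agent CLIs often emit back-to-back user turns (prompt, then pasted output).
history = [
    {"role": "user", "content": "Fix the failing test."},
    {"role": "user", "content": "Here is the traceback: ..."},
    {"role": "assistant", "content": "Looking at it now."},
]
print([m["role"] for m in merge_consecutive(history)])  # ['user', 'assistant']
```

Scoring this kind of transport-level robustness separately from coding accuracy, as the last bullet suggests, makes it obvious when a proxy layer rather than a different model is the fix.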
// TAGS
mistral-small-4 · llm · ai-coding · agent · reasoning · multimodal

DISCOVERED

2026-03-20 (23d ago)

PUBLISHED

2026-03-19 (23d ago)

RELEVANCE

9 / 10

AUTHOR

Real_Ebb_7417