OPEN_SOURCE
REDDIT // 7h ago // MODEL RELEASE

MiniMax drops M2.7 open weights with self-improving MoE

The highly anticipated open weights for MiniMax M2.7 have been released, delivering a 230-billion-parameter sparse MoE model with a 200K-token context window. Trained with a recursive self-improvement loop, the model is heavily optimized for complex agentic workflows and local inference.

// ANALYSIS

MiniMax M2.7 is a massive win for the local LLM community, proving that top-tier agentic performance isn't locked behind closed APIs.

  • The MoE architecture activates only 10B of the 230B parameters per token, making the model surprisingly viable for high-end local setups and cost-effective deployment (see the routing sketch below)
  • Its recursive self-improvement approach to synthetic data generation has paid off, with a reported score of 56.22% on SWE-Pro (one plausible shape of that loop is sketched below)
  • Out-of-the-box support for vLLM and SGLang means developers can drop it straight into multi-agent pipelines (see the serving snippet below)
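
To make the active-parameter point concrete, here is a minimal sketch of top-k sparse-MoE routing. The dimensions are toy values chosen to run anywhere, not MiniMax's published architecture; the point is that each token's forward pass only touches the weights of the few experts the router selects, which is why a 230B-total model can cost roughly 10B parameters per token.

```python
import numpy as np

D_MODEL = 64       # toy hidden size (illustrative, not MiniMax's)
D_FF = 256         # toy expert FFN size
N_EXPERTS = 8      # toy expert count
TOP_K = 2          # experts activated per token

rng = np.random.default_rng(0)
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02
w_up = rng.standard_normal((N_EXPERTS, D_MODEL, D_FF)) * 0.02
w_down = rng.standard_normal((N_EXPERTS, D_FF, D_MODEL)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts; only those weights are used."""
    logits = x @ router_w                              # (tokens, N_EXPERTS)
    picked = np.argsort(logits, axis=-1)[:, -TOP_K:]   # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = picked[t]
        gate = np.exp(logits[t, sel] - logits[t, sel].max())
        gate /= gate.sum()                             # softmax over chosen experts only
        for g, e in zip(gate, sel):
            h = np.maximum(x[t] @ w_up[e], 0.0)        # ReLU expert FFN (illustrative)
            out[t] += g * (h @ w_down[e])
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_layer(tokens).shape)   # (4, 64): each token touched TOP_K of N_EXPERTS experts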
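MiniMax has not published the exact recipe behind the "recursive self-improvement" loop. A common reading of the phrase is a rejection-sampling-style cycle: the current checkpoint generates candidate solutions, an automatic verifier filters them, and the survivors become fine-tuning data for the next round. The sketch below shows that generic pattern; generate(), verify(), and finetune() are hypothetical stand-ins, not MiniMax APIs.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    completion: str

def generate(model: str, prompt: str, n: int) -> list[Sample]:
    # Placeholder: sample n candidate completions from the current checkpoint.
    return [Sample(prompt, f"candidate-{i}") for i in range(n)]

def verify(sample: Sample) -> bool:
    # Placeholder: run tests / check answers; only verifiable tasks qualify.
    return hash(sample.completion) % 2 == 0

def finetune(model: str, data: list[Sample]) -> str:
    # Placeholder: train on the filtered set, return the improved checkpoint.
    return f"{model}+round{len(data)}"

model = "m2.7-base"   # hypothetical checkpoint name
prompts = ["fix the failing test in repo X", "prove lemma Y"]
for round_idx in range(3):   # "recursive": each round trains on its own verified outputs
    kept = [s for p in prompts for s in generate(model, p, n=8) if verify(s)]
    model = finetune(model, kept)
    print(f"round {round_idx}: kept {len(kept)} verified samples -> {model}")
```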
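For the vLLM path, loading should look like any other Hugging Face-hosted model. The repo id below is an assumption based on MiniMax's naming for earlier releases, so check the actual release page before running. Note that even with only 10B active parameters, all 230B expert weights must be resident in GPU memory, so multi-GPU tensor parallelism is still required.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M2.7",   # assumed repo id; verify on the release page
    tensor_parallel_size=8,           # shard the 230B weights across GPUs
    trust_remote_code=True,
    max_model_len=200_000,            # the advertised 200K context
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Plan a multi-step fix for this failing CI job:"], params)
print(outputs[0].outputs[0].text)
```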
// TAGS
minimax-m2-7 · llm · open-weights · agent · reasoning · inference

DISCOVERED

7h ago

2026-04-12

PUBLISHED

10h ago

2026-04-12

RELEVANCE

9/10

AUTHOR

samthepotatoeman