Cursor confirms Kimi K2.5 foundation
Cursor's "Composer 2" model, a frontier-level coding upgrade with an agentic 200K-token context window, was revealed to be a fine-tuned version of Moonshot AI's Kimi K2.5. Cursor has since confirmed the collaboration, highlighting its custom reinforcement learning for context self-summarization.
Cursor's initial lack of transparency sparked controversy, but the technical reality shows a sophisticated blend of open-source foundations and proprietary training. Seventy-five percent of Composer 2's total compute was spent on custom reinforcement learning on top of Kimi K2.5 rather than on the base model itself. This effort includes a novel self-summarization technique that compresses context memory to 1,000 tokens when approaching the window limit, reducing compaction errors by 50%. By serving Kimi K2.5 via Fireworks AI, Cursor achieves frontier-level performance at a 90% cost reduction compared to proprietary models like Claude Opus. The move highlights a strategic shift for AI startups toward optimizing specialized inference for developer workflows.
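The self-summarization idea can be sketched as a context manager that, when token usage nears the 200K window, replaces older history with a compressed ~1,000-token summary. This is an illustrative guess at the mechanism, not Cursor's implementation: the class and function names, the 90% compaction threshold, and the 4-characters-per-token estimate are all assumptions, and the `summarize` placeholder stands in for what the article describes as an RL-trained model call.

```python
# Hedged sketch of context self-summarization as described in the article.
# All names and thresholds here are assumptions for illustration.

CONTEXT_LIMIT_TOKENS = 200_000   # Composer 2's stated context size
SUMMARY_BUDGET_TOKENS = 1_000    # compression target from the article
COMPACT_THRESHOLD = 0.9          # assumed: compact at 90% of the limit


def estimate_tokens(text: str) -> int:
    """Rough estimate (~4 chars/token); a real system would use a tokenizer."""
    return max(1, len(text) // 4)


def summarize(messages: list[str], budget_tokens: int) -> str:
    """Placeholder summarizer. In the described system this would be a model
    call trained via RL to preserve task-relevant state; here we truncate."""
    return " | ".join(messages)[: budget_tokens * 4]


class ContextManager:
    def __init__(self) -> None:
        self.messages: list[str] = []

    def add(self, message: str) -> None:
        self.messages.append(message)
        if self._used_tokens() > COMPACT_THRESHOLD * CONTEXT_LIMIT_TOKENS:
            self._compact()

    def _used_tokens(self) -> int:
        return sum(estimate_tokens(m) for m in self.messages)

    def _compact(self) -> None:
        # Replace everything but the latest message with a ~1,000-token summary.
        recent = self.messages[-1:]
        summary = summarize(self.messages[:-1], SUMMARY_BUDGET_TOKENS)
        self.messages = [f"[summary] {summary}"] + recent


# Usage: simulate a long agentic session; usage stays under the window.
mgr = ContextManager()
for i in range(800):
    mgr.add(f"tool call {i}: " + "x" * 1_000)  # ~250 tokens per message
assert mgr._used_tokens() < CONTEXT_LIMIT_TOKENS
```

The design choice worth noting is that compaction is triggered proactively, before the limit is hit, which is what allows the agent to keep working through arbitrarily long sessions instead of failing on overflow.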
DISCOVERED: 2026-03-23
PUBLISHED: 2026-03-23
AUTHOR: Wes Roth