Ramp's Fast Ask beats Claude Opus in Sheets
Ramp used Prime Intellect Lab to post-train Fast Ask, a specialist retrieval subagent for financial spreadsheet search. Prime Intellect says it outperformed Claude Opus 4.6 by 4 points on exact-match accuracy while keeping Haiku-class latency and lower cost.
Hot take: this is a strong proof that narrow, workflow-specific RL can beat frontier generalists on high-value enterprise tasks without needing a bigger base model.
- The win is not “general intelligence”; it is targeted retrieval quality on one ugly but important spreadsheet workflow.
- The training setup matters: custom environment, tool use, and constrained workbook navigation are what made the improvement measurable.
- The reported gains are commercially meaningful because the model is faster and cheaper, not just more accurate.
- The strongest signal here is product strategy: build a specialist subagent for a repeated bottleneck instead of routing everything through a general-purpose model.
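To make the "targeted retrieval quality" point concrete, here is a minimal sketch of what an exact-match reward for a spreadsheet-retrieval RL environment could look like. All names, the cell-reference format, and the normalization rules are assumptions for illustration, not Ramp's or Prime Intellect's actual training code.

```python
# Hypothetical exact-match reward for a spreadsheet-retrieval RL environment.
# Names and scoring rules are illustrative assumptions, not the actual setup.
from dataclasses import dataclass


def normalize_value(value: str) -> str:
    """Canonicalize a cell value so '$1,200.00' and '1200' compare equal."""
    v = value.strip().lower().lstrip("$").replace(",", "")
    try:
        return f"{float(v):g}"  # numeric values compare by magnitude
    except ValueError:
        return v  # non-numeric values compare as trimmed lowercase text


@dataclass
class RetrievalTask:
    question: str
    gold_cell: str   # e.g. "Q3!B17" (sheet!cell, an assumed convention)
    gold_value: str


def exact_match_reward(task: RetrievalTask,
                       predicted_cell: str,
                       predicted_value: str) -> float:
    """Full reward only when both the cited cell and its value match;
    partial credit when the value is right but the citation is wrong."""
    value_ok = normalize_value(predicted_value) == normalize_value(task.gold_value)
    cell_ok = predicted_cell.strip().upper() == task.gold_cell.strip().upper()
    if value_ok and cell_ok:
        return 1.0
    if value_ok:
        return 0.5
    return 0.0
```

A reward like this is what makes the improvement measurable: the environment can score every rollout automatically against gold answers, which is exactly the property a narrow, high-volume workflow has and open-ended chat does not.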
DISCOVERED: 2026-05-07
PUBLISHED: 2026-05-07
AUTHOR: PrimeIntellect