OPEN_SOURCE
REDDIT // MODEL RELEASE
Sarvam open-sources Sarvam-105B, an India-built reasoning model
Sarvam AI has open-sourced Sarvam-105B on Hugging Face under Apache 2.0 alongside a smaller 30B sibling, positioning it as a frontier-class MoE reasoning model trained entirely in India. The company says the 105B model is strongest on reasoning, coding, agentic tasks, and Indian-language performance, with support notes for Transformers, SGLang, and vLLM.
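For orientation, here is a minimal sketch of the Transformers inference path the release notes describe. The repo id `sarvamai/sarvam-105b` and the chat-template usage are assumptions rather than details confirmed from the model card; check the Hugging Face listing before running. The release also documents SGLang and vLLM serving paths not shown here.

```python
# Minimal sketch of loading the model via Hugging Face Transformers.
# The repo id below is a hypothetical placeholder, not a confirmed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarvamai/sarvam-105b"  # assumption; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard the MoE weights across available GPUs
)

messages = [{"role": "user", "content": "Summarize chain-of-thought prompting."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note the hardware implication: although only ~10.3B parameters are active per token, all 105B weights must be resident in memory, so a multi-GPU or large-memory host is still required.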
// ANALYSIS
This is more interesting than a routine model-card drop: Sarvam is pitching a full-stack sovereign AI story, not just another checkpoint with borrowed branding.
- Sarvam-105B uses a 105B-parameter MoE design with 10.3B active parameters, which matters because it targets frontier-style capability without frontier-style inference cost on every token (see the back-of-envelope sketch after this list).
- The company's published benchmarks put it in the same conversation as GLM-4.5-Air, GPT-OSS-120B, and Qwen3-Next-80B-A3B-Thinking, especially on math, coding, and agentic evaluations like BrowseComp and Tau2.
- The strongest differentiator is Indian-language coverage: Sarvam claims state-of-the-art results across the 22 scheduled Indian languages, which gives the release a clearer market angle than most open model launches.
- The release also signals deployment intent, not just research theater: Sarvam documents local and server inference paths and says the model already powers its Indus assistant for complex reasoning workflows.
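On the first point, here is a back-of-envelope sketch of why the active-parameter count dominates per-token serving cost, using the common ~2 × N_params FLOPs-per-decoded-token approximation. The numbers are illustrative arithmetic, not measured throughput.

```python
# Rough per-token decode cost: the MoE vs. a hypothetical dense model of the
# same total size, via the standard ~2 * N_params FLOPs/token approximation.
total_params = 105e9    # all experts (must still fit in memory)
active_params = 10.3e9  # parameters actually routed per token

active_fraction = active_params / total_params  # ~9.8%
moe_flops = 2 * active_params                   # ~2.1e10 FLOPs/token
dense_flops = 2 * total_params                  # ~2.1e11 FLOPs/token

print(f"active fraction:        {active_fraction:.1%}")
print(f"compute vs. 105B dense: {moe_flops / dense_flops:.1%}")
```

The trade-off is that compute per token lands near a 10B dense model while the 105B memory footprint still sets the hardware floor, which is the usual MoE bargain.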
// TAGS
sarvam-105b · llm · reasoning · ai-coding · agent · open-weights
DISCOVERED
2026-03-06
PUBLISHED
2026-03-06
RELEVANCE
9/10
AUTHOR
Relevant-Audience441