OPEN_SOURCE ↗
REDDIT // MODEL RELEASE · 35d ago
Sarvam open-sources 30B, 105B reasoning models
Sarvam AI has open-sourced two reasoning models trained in India, a 30B model aimed at efficient deployment and a 105B model aimed at stronger reasoning and agentic workflows. The release includes benchmark claims against frontier open models, downloadable weights, API access, and production usage in Sarvam’s own Samvaad and Indus products.
// ANALYSIS
This is less a random Reddit benchmark thread than a meaningful new open-model release, and the real story is Sarvam trying to prove that India can ship a sovereign, full-stack AI offering with competitive reasoning, coding, and multilingual performance.
- Sarvam 105B is positioned against models like Qwen3-Next-80B, GPT-OSS-120B, GLM-4.5-Air, and DeepSeek R1 rather than a direct apples-to-apples Qwen 3.5 matchup
- Sarvam 30B looks especially notable for practical deployment, with the company claiming strong coding and agentic results plus higher inference throughput than Qwen baselines on some hardware
- The differentiator is not just raw benchmarks but Indian-language performance, tokenizer efficiency across many Indic scripts, and deployment optimization from H100s down to laptops
- Open weights on Hugging Face and AI Kosh make this more useful to developers than a closed API-only release, especially for teams experimenting with local inference, vLLM, or SGLang
- Benchmark claims are strong but still company-reported, so the next important signal will be independent comparisons against Qwen 3.5, DeepSeek, and other popular open reasoning models
// TAGS
sarvam-30b-105b · llm · reasoning · open-source · benchmark · agent · api
DISCOVERED
35d ago
2026-03-08
PUBLISHED
35d ago
2026-03-08
RELEVANCE
9/10
AUTHOR
DockyardTechlabs