GigaChat 3.1 releases Ultra, Lightning open weights
OPEN_SOURCE
REDDIT · 18d ago · MODEL RELEASE


Sber released GigaChat-3.1-Ultra and GigaChat-3.1-Lightning as MIT-licensed open weights on Hugging Face, saying the models were pretrained from scratch rather than fine-tuned from DeepSeek. Ultra is a 702B-parameter MoE aimed at heavyweight deployments, while Lightning is a 10B-parameter MoE with 1.8B active parameters per token (A1.8B), built for local inference, with 256k context, multilingual coverage, and tool calling.
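A quick sketch of what the "10B A1.8B" MoE shorthand implies for per-token compute, using the common rule of thumb that a forward pass costs roughly 2 FLOPs per active parameter per token (a back-of-envelope estimate, not a figure from the release):

```python
# Lightning's MoE shorthand: 10B total parameters, ~1.8B active per token.
total_params = 10e9    # total parameters (from the release notes)
active_params = 1.8e9  # parameters active per forward pass

active_fraction = active_params / total_params
# Rule of thumb: forward-pass FLOPs per token ~ 2 x active params,
# so per-token compute tracks the 1.8B figure, not the full 10B.
flops_per_token = 2 * active_params

print(f"active fraction: {active_fraction:.0%}")  # 18%
print(f"~FLOPs/token: {flops_per_token:.1e}")     # 3.6e+09
```

This is why a 10B MoE can be a credible local-inference target: per-token compute is closer to a ~2B dense model, while the full 10B still has to fit in memory.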

// ANALYSIS

The big-model flex is real, but Lightning is the one with a believable developer story. If Sber's numbers survive independent testing, this is one of the stronger regional open-weights releases to land this year.

  • Ultra is mostly a strategic statement: its scale matters for frontier credibility, but few teams will ever casually deploy a 702B MoE.
  • MIT licensing, Hugging Face distribution, and GGUF availability make the release unusually easy to try, self-host, and fine-tune.
  • Sber's own tables look strongest on Russian/general reasoning and arena preference, while coding is good but not a runaway win.
  • Native FP8, DPO, multi-token prediction (MTP), and a reported BFCL v3 tool-calling score of 0.76 make Lightning the more interesting model for tool-using agents and offline assistants.
  • The release widens the open-weights race beyond the usual U.S./China center of gravity and gives CIS-focused teams a native option with real size and speed.
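Given the reported BFCL v3 tool-calling score, the obvious way to exercise Lightning locally is through an OpenAI-compatible endpoint (llama.cpp's `llama-server` and vLLM both expose one). A minimal sketch of building a tool-calling request payload; the endpoint URL, model id, and `get_weather` tool are all hypothetical placeholders, not part of the release:

```python
import json

# Hypothetical local endpoint; point this at wherever you serve the model
# (e.g. llama.cpp's llama-server or vLLM with an OpenAI-compatible API).
API_URL = "http://localhost:8000/v1/chat/completions"

def build_tool_call_request(user_message: str) -> dict:
    """Build an OpenAI-style chat-completion payload declaring one tool."""
    return {
        "model": "GigaChat-3.1-Lightning",  # assumed model id
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Get current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"},
                        },
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_call_request("What's the weather in Kazan?")
print(json.dumps(payload, indent=2))
```

If the BFCL numbers hold up, the model should respond to this shape of request with a structured `tool_calls` entry rather than free-text; POSTing the payload is left out here since it depends on your serving setup.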
// TAGS
gigachat-3-1 · llm · open-weights · benchmark · inference · agent

DISCOVERED

2026-03-24

PUBLISHED

2026-03-24

RELEVANCE

9/10

AUTHOR

netikas