Vera 1.6 pushes agentic context to 1M
OPEN_SOURCE
REDDIT // 34d ago · MODEL RELEASE

Cortex Research has introduced Vera 1.6, a multimodal model built for agentic workloads with a 1M-token context window, native image and video support, and reinforcement-learning alignment for tool use. The company reports 92.0% on HMMT, 85.9% on GPQA Diamond, and 72.4% on SWE-bench Verified while using a hybrid Gated DeltaNet and sparse MoE architecture to keep long-context inference practical.
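The reported hybrid stack can be pictured as a repeating layer schedule. A minimal sketch, assuming a regular interleaving: the 3:1 linear-to-quadratic ratio comes from the article, but the exact pattern (and the `layer_schedule` helper) is hypothetical.

```python
# Hypothetical sketch of a 3:1 hybrid layer schedule: three
# linear-attention (Gated DeltaNet-style) blocks for every one
# quadratic (full softmax) attention block. Only the 3:1 ratio
# is reported; the interleaving below is an assumption.

def layer_schedule(num_layers: int, ratio: int = 3) -> list[str]:
    """Return a per-layer attention type for the hybrid stack."""
    schedule = []
    for i in range(num_layers):
        # Every (ratio + 1)-th layer uses quadratic attention;
        # the rest use linear attention for long-context efficiency.
        if (i + 1) % (ratio + 1) == 0:
            schedule.append("quadratic")
        else:
            schedule.append("linear")
    return schedule

print(layer_schedule(8))
# → ['linear', 'linear', 'linear', 'quadratic',
#    'linear', 'linear', 'linear', 'quadratic']
```

Linear-attention blocks keep per-token cost roughly constant as context grows, which is what makes a 1M-token window tractable; the occasional quadratic block preserves full-context mixing.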

// ANALYSIS

The interesting part is not just the million-token headline but that Cortex is optimizing the whole stack around autonomous tool use and long-horizon execution, precisely where many general-purpose models still wobble.

  • Cortex is making a compute-efficiency bet with a 3:1 mix of linear and quadratic attention plus 256 experts with only 9 active per token, aiming to make million-token agent loops deployable instead of just demoable.
  • The benchmark spread is strong in math, multilingual tasks, document understanding, and video, but the 41.6% Terminal-Bench 2 result shows terminal-heavy agents are still the weakest link.
  • A 150B-token synthetic training corpus, agent-specific RL, and a broader platform push around integrations and agents suggest Cortex is targeting enterprise agent systems rather than pure model leaderboard hype.
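The compute-efficiency bet in the first bullet can be sketched numerically. Only the 256-expert / 9-active figures come from the article; the router design, gating math, and all function names below are assumptions.

```python
import numpy as np

# Hypothetical sketch of sparse MoE routing as reported for Vera 1.6:
# 256 experts, 9 active per token. The softmax gating and top-k
# selection here are a generic illustration, not Cortex's design.

NUM_EXPERTS = 256
TOP_K = 9

def route(token_logits: np.ndarray, k: int = TOP_K):
    """Pick the top-k experts for one token and renormalize their gates."""
    top = np.argsort(token_logits)[-k:][::-1]      # k best experts, descending
    gates = np.exp(token_logits[top] - token_logits[top].max())
    gates /= gates.sum()                           # softmax over selected experts
    return top, gates

rng = np.random.default_rng(0)
experts, gates = route(rng.normal(size=NUM_EXPERTS))
print(len(experts), round(float(gates.sum()), 6))  # → 9 1.0
```

Because each token only flows through 9 of 256 experts (about 3.5%), active parameters per token stay a small fraction of the total, which is the lever that makes million-token agent loops affordable rather than merely possible.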
// TAGS
vera-1-6 · llm · agent · multimodal · reasoning · benchmark

DISCOVERED

2026-03-09 (34d ago)

PUBLISHED

2026-03-09 (34d ago)

RELEVANCE

9/10

AUTHOR

Beneficial_Air_191