OPEN_SOURCE
REDDIT // MODEL RELEASE
DeepSeek V4 lands with 1M context
DeepSeek has previewed V4 Pro and V4 Flash, a new MoE model family with 1M-token context windows and claims of stronger coding, reasoning, and agentic performance. The release keeps DeepSeek in the frontier-model race, but the most meaningful numbers still come from DeepSeek’s own benchmarks and need outside validation.
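A back-of-envelope sketch of why 1M-token context is hard without the kind of KV-cache cost reductions DeepSeek claims. The layer count, KV-head count, and head dimension below are illustrative placeholders, not DeepSeek's actual architecture:

```python
def kv_cache_bytes(seq_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Naive KV-cache size: keys and values (factor of 2) stored
    per layer, per attention head, per token, in fp16/bf16."""
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Hypothetical config: 60 layers, 8 KV heads, head_dim 128, fp16.
gb = kv_cache_bytes(1_000_000, 60, 8, 128) / 1e9
print(f"~{gb:.0f} GB of KV cache for a single 1M-token sequence")
```

Even with aggressive grouped-query attention, a naive cache at this scale runs to hundreds of gigabytes per sequence, which is why architectural KV-cache compression is the headline efficiency claim to watch.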
// ANALYSIS
This is a real step up in packaging and efficiency, not just another benchmark tweet.
- V4-Pro and V4-Flash both support 1M-token context, with Pro at 1.6T parameters and 49B activated, and Flash at 284B/13B.
- DeepSeek says the architecture cuts long-context compute and KV-cache costs versus V3.2, which is the part that matters for practical deployment.
- The API docs already map `deepseek-chat` and `deepseek-reasoner` to V4-Flash behavior, so the new models are already entering the product surface.
- The release is still a preview, so independent evals matter more than the company’s top-line benchmark claims.
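The parameter figures quoted above imply heavy MoE sparsity. A minimal sketch of the arithmetic, using the source's approximate totals:

```python
def activated_fraction(total_b: float, active_b: float) -> float:
    """Share of parameters active per token in a MoE forward pass."""
    return active_b / total_b

# Figures from the release: Pro is 1.6T total / 49B active,
# Flash is 284B total / 13B active.
pro = activated_fraction(1600, 49)
flash = activated_fraction(284, 13)
print(f"V4-Pro activates {pro:.1%} of its weights per token")    # ~3.1%
print(f"V4-Flash activates {flash:.1%} of its weights per token")  # ~4.6%
```

Both variants run only a few percent of their weights per token, which is how a 1.6T-parameter model can have per-token compute closer to a dense ~50B model.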
// TAGS
deepseek-v4 · llm · reasoning · agent · open-source · api
DISCOVERED
2026-04-24
PUBLISHED
2026-04-24
RELEVANCE
10/10
AUTHOR
Alternative-Duty-532