DeepSeek V4 drops with 1M context, aggressive pricing
OPEN_SOURCE ↗
REDDIT // 5h ago · MODEL RELEASE

DeepSeek's new MoE flagship introduces "Engram" memory for 1-million-token context and industry-low pricing. The release targets coding dominance with SWE-bench scores rivaling the top Western frontier models.

// ANALYSIS

DeepSeek V4 is a price-to-performance wrecking ball that makes long-context inference essentially free for high-volume developers.

  • The 1M-token context window uses the new Engram memory architecture, which offloads static knowledge so inference stays fast and accurate at long context lengths.
  • Pricing is aggressively low: V4-Flash costs just $0.14/1M input tokens, undercutting competitors by orders of magnitude.
  • With 1 trillion total parameters and 32B active per token, it maintains efficiency without sacrificing multimodal capability.
  • Targeted coding benchmarks show V4 reaching 85% on SWE-bench, positioning it as a primary threat to Claude and GPT-5 for software engineering.
  • A 90% discount on cached tokens ($0.028/1M) incentivizes massive RAG and many-shot prompting workflows.
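The pricing bullets above can be turned into a quick back-of-the-envelope calculation. The sketch below uses only the two rates quoted in this post ($0.14/1M uncached input, $0.028/1M cache-hit input); the request shape (a reused 1M-token context plus a small fresh query) is an illustrative assumption, not a documented API behavior.

```python
# Illustrative cost math from the figures quoted in this post.
# Rates are dollars per input token; the workload shape is hypothetical.

INPUT_RATE = 0.14 / 1_000_000    # $/token, V4-Flash uncached input (from post)
CACHED_RATE = 0.028 / 1_000_000  # $/token, cache-hit input (from post)

def request_cost(cached_tokens: int, fresh_tokens: int) -> float:
    """Input-token cost of one request, in dollars."""
    return cached_tokens * CACHED_RATE + fresh_tokens * INPUT_RATE

# A many-shot / RAG workload: a 1M-token shared context (cached after the
# first call) plus a 2,000-token fresh query on each request.
first_call = request_cost(0, 1_000_000 + 2_000)   # warm-up pays full price
repeat_call = request_cost(1_000_000, 2_000)      # cache covers the prefix
print(f"first call:  ${first_call:.4f}")
print(f"repeat call: ${repeat_call:.4f}")
```

Under these assumptions, every repeat call against the cached 1M-token context costs about $0.028 instead of $0.14, which is why the discount favors massive-context RAG and many-shot workflows.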

// TAGS

deepseek-v4 · deepseek · llm · model-release · coding · inference · open-weights · open-source · benchmark

DISCOVERED

5h ago

2026-04-24

PUBLISHED

6h ago

2026-04-24

RELEVANCE

10 / 10

AUTHOR

Which-Jello9157