DeepSeek V4 Pro Draws Token Backlash
OPEN_SOURCE
REDDIT · 5h ago · BENCHMARK RESULT

A Reddit thread argues that DeepSeek-V4-Pro is less token-efficient than V3.2, with even its non-thinking mode using more output tokens to reach similar results. That criticism sits awkwardly next to DeepSeek’s own launch claim that V4 cuts long-context compute and KV-cache cost relative to V3.2.

// ANALYSIS

The complaint is plausible at the user-experience layer even if DeepSeek’s architecture is more efficient under the hood: output verbosity and task latency still dominate how “smart” a model feels.

  • DeepSeek’s V4 release says the model is optimized for 1M-context efficiency and lower FLOPs and KV-cache cost than V3.2, so the official story is about backend efficiency, not necessarily shorter answers.
  • If V4-Pro needs more visible tokens to solve the same task, developers pay twice: more latency and more billable output, even when raw inference is cheaper.
  • The thread’s 10x token-usage comparison to GPT-5-class models is directionally interesting but not a clean benchmark without identical prompts, stop conditions, and task sets; see the harness sketch after this list.
  • This looks less like a settled regression than a reminder that “intelligence density” is now a product metric, not just a research metric.
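A clean version of that comparison looks roughly like the minimal Python sketch below: the same task set, the same decoding settings, and the same stop condition sent to two OpenAI-compatible endpoints, recording output tokens, latency, and billable cost per model. The base URLs, model identifiers, API key, and per-million-token price here are placeholders, not DeepSeek's published values.

import time
from openai import OpenAI

# Identical task set for every model under test.
TASKS = [
    "Summarize the trade-offs between KV-cache size and context length.",
    "Write a function that reverses a linked list, then explain it briefly.",
]

# label -> (base_url, model id, output $ per 1M tokens); all values hypothetical.
ENDPOINTS = {
    "deepseek-v3.2": ("https://api.example.com/v1", "deepseek-v3.2", 1.10),
    "deepseek-v4-pro": ("https://api.example.com/v1", "deepseek-v4-pro", 1.10),
}

def measure(label: str, base_url: str, model: str, price_per_mtok: float) -> None:
    client = OpenAI(base_url=base_url, api_key="YOUR_KEY")  # placeholder key
    out_tokens, latency = 0, 0.0
    for prompt in TASKS:
        t0 = time.perf_counter()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,    # pin decoding so only the model varies
            max_tokens=2048,  # identical stop condition for every model
        )
        latency += time.perf_counter() - t0
        out_tokens += resp.usage.completion_tokens  # visible, billable output
    cost = out_tokens / 1_000_000 * price_per_mtok
    print(f"{label}: {out_tokens} output tokens, {latency:.1f}s total, ${cost:.4f} billable")

for label, (url, model, price) in ENDPOINTS.items():
    measure(label, url, model, price)

Pinning temperature, max_tokens, and the task set is what turns "V4 feels wordier" into a measurable output-tokens-per-task number; without that, a 10x claim stays an anecdote.
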
// TAGS
deepseek · deepseek-v4-pro · llm · reasoning · benchmark · inference

DISCOVERED
5h ago · 2026-04-25

PUBLISHED
9h ago · 2026-04-25

RELEVANCE
9/10

AUTHOR
Mindless_Pain1860