OPEN_SOURCE
REDDIT // BENCHMARK RESULT

DeepSeek-V3.2 ignores system instructions in high-momentum narratives

A systematic experiment reveals that DeepSeek-V3.2 frequently ignores system instructions injected after long conversation histories, particularly in contexts over 15k characters with strong narrative momentum. The model appears to treat the exchange as a pattern-completion task, favoring historical continuity over new directives, which suggests a significant limitation of its sparse-attention reasoning in complex interactive fiction and roleplay scenarios.
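
To make the setup concrete, here is a minimal sketch of that kind of probe: build a high-momentum narrative past ~15k characters, inject a system directive after it, and check whether the reply complies. It assumes DeepSeek's OpenAI-compatible endpoint; the model name, filler text, and COMPLY marker are illustrative, not taken from the original experiment.

```python
# Sketch of the reported probe, not the author's actual harness.
# Assumes the OpenAI-compatible DeepSeek endpoint; adjust as needed.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

def build_history(n_turns: int) -> list[dict]:
    """Fabricate a high-momentum narrative exceeding ~15k characters."""
    history = []
    for i in range(n_turns):
        history.append({"role": "user",
                        "content": f"Continue the chase scene, part {i}."})
        history.append({"role": "assistant",
                        "content": "The pursuit thundered on through the rain. " * 30})
    return history

messages = build_history(n_turns=12)
# Inject the directive *after* the long history, as in the experiment.
messages.append({"role": "system",
                 "content": "Stop the story. Reply with the single word COMPLY."})
messages.append({"role": "user", "content": "Go on."})

reply = client.chat.completions.create(model="deepseek-chat", messages=messages)
text = reply.choices[0].message.content or ""
print("instruction followed:", text.strip() == "COMPLY")
```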

// ANALYSIS

The "momentum trap" in DeepSeek V3.2 indicates a fundamental tension between coherent pattern completion and reactive instruction following in long-context models.

  • Sparse attention mechanisms may over-index on historical consistency, effectively filtering out new system signals.
  • Standard prompting remedies, such as XML-tagged instructions or reasoning parameters, fail to resolve the issue in high-entropy narrative scenes.
  • Response pre-filling and aggressive context pruning remain the only reliable workarounds for developers (see the sketch after this list).
  • Pre-history system prompt placement is even less effective than post-history placement, indicating a strong recency bias that still resolves in favor of the narrative flow.
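
Here is a sketch combining the two workarounds, reusing the build_history helper from the probe sketch above. The /beta base URL and "prefix" flag follow DeepSeek's documented chat-prefix-completion beta feature, but verify them against current docs; the keep_recent helper and the scene-break directive are illustrative assumptions.

```python
# Sketch of the two workarounds: prune old narrative turns, then pre-fill
# the assistant reply so completion locks onto the directive. The /beta
# endpoint and "prefix" flag follow DeepSeek's documented chat-prefix
# feature; treat them as assumptions and check current docs.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com/beta", api_key="YOUR_KEY")

def keep_recent(history: list[dict], max_turns: int = 6) -> list[dict]:
    """Aggressive context pruning: drop all but the last few turns."""
    return history[-max_turns:]

long_history = build_history(n_turns=12)  # helper from the probe sketch above

messages = keep_recent(long_history) + [
    {"role": "system", "content": "Break scene and summarize in one sentence."},
    {"role": "user", "content": "Go on."},
    # Response pre-filling: seed the reply with the instructed format.
    {"role": "assistant", "content": "[SCENE BREAK] Summary:", "prefix": True},
]

reply = client.chat.completions.create(model="deepseek-chat", messages=messages)
print("[SCENE BREAK] Summary:" + (reply.choices[0].message.content or ""))
```
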
// TAGS
llm · deepseek-v3-2 · prompt-engineering · reasoning · long-context

DISCOVERED

2026-04-14

PUBLISHED

2026-04-13

RELEVANCE

8/10

AUTHOR

yofache