Anthropic accuses DeepSeek of Claude distillation
OPEN_SOURCE
YT · YOUTUBE // 36d ago // SECURITY INCIDENT

Anthropic says DeepSeek used more than 150,000 Claude exchanges across fraudulent accounts to distill reasoning and rubric-based grading behavior into its own systems. The video turns that claim into a hands-on demo, using a smaller open model to show why output-level capability copying is technically plausible even without access to the weights.
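To make the mechanics concrete, here is a minimal sketch of what output-level distillation amounts to in practice: captured prompt/teacher-response pairs become ordinary supervised fine-tuning data for a smaller open model. The model id, the teacher_exchanges.jsonl path, and the hyperparameters are illustrative assumptions, not details from Anthropic's claim or the video.

```python
# Minimal sketch of output-level distillation: fine-tune a small open model
# on (prompt, teacher_response) pairs. The model id, the jsonl path, and all
# hyperparameters here are illustrative assumptions, not from the source.
import json
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForCausalLM

STUDENT_ID = "sshleifer/tiny-gpt2"  # tiny stand-in for "a smaller open model"
tok = AutoTokenizer.from_pretrained(STUDENT_ID)
tok.pad_token = tok.eos_token
student = AutoModelForCausalLM.from_pretrained(STUDENT_ID)

# Hypothetical corpus: one JSON object per line, {"prompt": ..., "response": ...},
# where "response" is the teacher model's output captured via its API.
with open("teacher_exchanges.jsonl") as f:
    pairs = [json.loads(line) for line in f]

def collate(batch):
    # Train on prompt + teacher response with the standard next-token loss,
    # ignoring padding positions in the labels.
    texts = [ex["prompt"] + "\n" + ex["response"] for ex in batch]
    enc = tok(texts, return_tensors="pt", padding=True,
              truncation=True, max_length=512)
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100
    enc["labels"] = labels
    return enc

opt = torch.optim.AdamW(student.parameters(), lr=5e-5)
student.train()
for batch in DataLoader(pairs, batch_size=4, shuffle=True, collate_fn=collate):
    loss = student(**batch).loss  # next-token loss on the teacher's text
    loss.backward()
    opt.step()
    opt.zero_grad()
```

Nothing in this loop needs the teacher's weights; at sufficient scale, the harvested responses themselves are the training signal, which is why the demo's point holds even for a modest student model.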

// ANALYSIS

This is bigger than one lab calling out another — it frames frontier-model APIs as attack surfaces, not just developer products.

  • Anthropic’s claim matters because it says DeepSeek targeted the exact premium behaviors developers pay for: reasoning, tool use, and coding quality
  • The demo is useful because it makes “distillation attack” concrete: repeated high-quality outputs can become training data fast enough to narrow capability gaps
  • If this framing sticks, labs will tighten identity checks, behavioral monitoring, rate controls, and protections around reasoning traces (a toy sketch of that kind of monitoring follows this list)
  • The video also ties distillation directly to export-control and national-security debates, which raises the odds of policy spillover into ordinary API access rules
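On the mitigation side, the behavioral monitoring the analysis anticipates can be illustrated with a simple per-account heuristic. The signals below (request volume, prompt-template repetition, share of requests asking for reasoning traces) and all thresholds are assumptions made for this sketch, not a description of Anthropic's or any lab's actual abuse detection.

```python
# Toy sketch of API-side behavioral monitoring for bulk-extraction patterns.
# The signals and thresholds are illustrative assumptions, not a description
# of any lab's actual abuse-detection system.
import time
from collections import defaultdict, deque
from dataclasses import dataclass, field

@dataclass
class Window:
    # Each event: (timestamp, prompt_template_hash, asked_for_reasoning)
    events: deque = field(default_factory=deque)

class ExtractionMonitor:
    def __init__(self, window_s=3600, max_requests=500,
                 template_ratio=0.8, reasoning_ratio=0.9):
        self.window_s = window_s
        self.max_requests = max_requests
        self.template_ratio = template_ratio
        self.reasoning_ratio = reasoning_ratio
        self.accounts = defaultdict(Window)

    def record(self, account_id, template_hash, asked_for_reasoning, now=None):
        now = time.time() if now is None else now
        w = self.accounts[account_id]
        w.events.append((now, template_hash, asked_for_reasoning))
        # Drop events that have fallen out of the sliding window.
        while w.events and now - w.events[0][0] > self.window_s:
            w.events.popleft()
        return self._flags(w)

    def _flags(self, w):
        n = len(w.events)
        if n < 50:  # too little data in the window to judge
            return []
        templates = [t for _, t, _ in w.events]
        top_share = max(templates.count(t) for t in set(templates)) / n
        reasoning_share = sum(1 for *_, r in w.events if r) / n
        flags = []
        if n > self.max_requests:
            flags.append("volume")             # sustained hammering of the API
        if top_share > self.template_ratio:
            flags.append("templated_prompts")  # same scaffold, varied content
        if reasoning_share > self.reasoning_ratio:
            flags.append("reasoning_harvest")  # nearly every call wants traces
        return flags
```

A production system would fold in many more signals, such as identity and payment checks and cross-account similarity analysis, than this toy sliding window tracks.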
// TAGS
deepseek · llm · reasoning · ai-coding · safety

DISCOVERED
2026-03-06 (36d ago)

PUBLISHED
2026-03-06 (36d ago)

RELEVANCE
8 / 10

AUTHOR
Bijan Bowen