BACK_TO_FEEDAICRIER_2
ICML desk-rejects 2% papers over LLM reviews
OPEN_SOURCE ↗
HN · HACKER_NEWS// 23d agoPOLICY REGULATION

ICML desk-rejects 2% papers over LLM reviews

ICML says it desk-rejected 497 papers, about 2% of submissions, after detecting that 506 reciprocal reviewers violated the conference’s agreed LLM-use policy. The committee used hidden PDF watermarking to catch LLM-assisted reviews and says 795 reviews were flagged and removed.

// ANALYSIS

This is a rare case where conference policy enforcement looks both justified and technologically awkward: the line was explicit, the violations were real, and the punishment is severe enough to change behavior. At the same time, the detection method is a blunt instrument, which means the story is as much about reviewer accountability as it is about the limits of policing AI use.

  • ICML made LLM usage a consent-based policy choice, so violating the no-LLM track is closer to breaking an agreement than making a gray-area workflow decision.
  • Hidden watermark prompts are clever, but they mostly catch careless copy-paste usage; anyone trying to evade detection would likely slip past them.
  • Desk-rejecting authors’ papers for reviewer misconduct is harsh, but it creates pressure for reciprocal reviewers to treat their review obligations seriously.
  • This feels like an early template for conference integrity tooling: more auditing, more disclosure, and less tolerance for “everyone is doing it” behavior around LLMs.
  • For AI researchers, the practical takeaway is simple: peer review is becoming another place where AI usage has to be explicitly governed, not assumed.
// TAGS
icmlllmresearchregulationethicssafetyprompt-engineering

DISCOVERED

23d ago

2026-03-19

PUBLISHED

24d ago

2026-03-19

RELEVANCE

8/ 10

AUTHOR

sergdigon