ICML cracks down on LLM reviews
A Reddit thread reports that ICML rejected papers linked to reviewers who used LLMs despite having signed up for a no-LLM review track. The real argument is about enforcement: whether a hard penalty is justified when the detection method may not be perfect.
If ICML has strong, targeted evidence, the crackdown is a defensible integrity move; if it relies on vague AI detection, it risks punishing the wrong people and eroding trust. ICML’s published 2026 review policy already treats reviewer AI use as a serious integrity issue, but it also warns that automated flags are not the same as proven violations. A strict penalty is a strong deterrent against reviewers outsourcing judgment to chatbots, but due process matters if noisy detection can spill reputational damage onto innocent coauthors. The bigger signal is that major ML venues are moving from “please don’t” to active enforcement.
DISCOVERED: 2026-03-18
PUBLISHED: 2026-03-18
AUTHOR: S4M22