ML Reviewers Push Back on Rebuttal Experiments
REDDIT · 14d ago · NEWS


A r/MachineLearning poster argues that rebuttal culture has swung too far toward demanding extra experiments, even when the paper already supports its main claims. The replies mostly back the complaint, though a few commenters defend broader exploratory checks as part of pushing the field forward.

// ANALYSIS

This is less about going soft on rigor than about avoiding review theater: once rebuttal becomes a gotcha hunt, it rewards reviewer imagination over scientific judgment. Major venue policies already draw the line at clarifications and small experiments, not substantial new revisions. The extra-what-if habit hits hardest when rebuttal windows are short and compute is limited, because rushed results can muddy an otherwise clean story and favor better-resourced labs. Reviewers should distinguish rating-changing evidence from curiosity questions and say which bucket each request belongs in. The fair counterpoint is that some edge-case checks do uncover real limits, so the right norm is calibration, not zero extra experiments.

// TAGS
peer-review · research · ethics · testing

DISCOVERED

14d ago

2026-03-28

PUBLISHED

15d ago

2026-03-27

RELEVANCE

5/10

AUTHOR

AffectionateLife5693