OPEN_SOURCE ↗
REDDIT // 3h ago · NEWS
ICML 2026 reviewers score batches differently
This Reddit discussion asks why reviewer scores at ICML 2026 seem to vary so much across batches, with some people reporting mostly low scores and others seeing much higher averages. The thread raises the possibility of domain-specific effects, reviewer severity differences, and whether the conference normalizes or calibrates scores across batches.
// ANALYSIS
This looks less like a mystery and more like a classic reviewer-calibration problem, where local batch effects can swamp any global expectation of fairness.
- Different topic areas can attract reviewers with very different norms for what counts as a strong paper.
- Reviewer harshness is often uneven across batches, especially when reviewers are assigned in clusters or subfields.
- Raw scores are usually noisy and not directly comparable across batches without calibration.
- If ICML uses any normalization, it is likely limited and cannot fully erase batch-to-batch variance.
- The main signal is probably relative ranking within a reviewer group, not the absolute score number.
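The calibration idea behind these points can be made concrete with a per-reviewer z-score: rescale each reviewer's scores against their own mean and spread, so a harsh 4 and a lenient 8 become comparable. This is a minimal illustrative sketch with made-up reviewer names and scores, not a description of how ICML actually normalizes anything.

```python
from statistics import mean, pstdev

# Hypothetical scores: each reviewer's raw scores for the papers they saw.
scores = {
    "harsh_reviewer":   [2, 3, 2, 4],
    "lenient_reviewer": [6, 7, 8, 7],
}

def zscore_calibrate(per_reviewer):
    """Per-reviewer z-score: subtract that reviewer's mean score and
    divide by their (population) standard deviation."""
    calibrated = {}
    for reviewer, vals in per_reviewer.items():
        mu, sigma = mean(vals), pstdev(vals)
        # If a reviewer gave identical scores, there is no spread to rescale.
        calibrated[reviewer] = [
            (v - mu) / sigma if sigma else 0.0 for v in vals
        ]
    return calibrated

cal = zscore_calibrate(scores)
```

After calibration, the harsh reviewer's 4 and the lenient reviewer's 8 both sit at the top of their own distribution, which is the "relative ranking within a reviewer group" signal the thread points to.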
// TAGS
icml · peer-review · llm · academic-conferences · reviewer-calibration
DISCOVERED
3h ago
2026-04-18
PUBLISHED
3h ago
2026-04-18
RELEVANCE
5/10
AUTHOR
Specialist-Manager67