OPEN_SOURCE
REDDIT // 12d ago · NEWS
TurboQuant paper sparks RaBitQ dispute
Google Research's TurboQuant paper is drawing pushback from RaBitQ authors, who say it underplays the method's lineage and sets up the baseline comparison unfairly. Their public clarification notes that the same concerns were raised privately in May 2025 and again on March 26, 2026, just as the paper heads toward ICLR 2026.
// ANALYSIS
This looks less like ordinary conference chatter and more like a real attribution-and-reproducibility fight. TurboQuant may still be a strong compression result, but the public conversation has shifted to whether the paper's framing gives a fair picture of novelty and speedups.
- Google's official write-up positions TurboQuant as training-free KV-cache compression plus vector-search acceleration, with no fine-tuning required and strong benchmark results.
- RaBitQ authors say the paper describes their method too narrowly, downplays the Johnson-Lindenstrauss/random-rotation connection (see the sketch after this list), and moves key context into the appendix.
- They also allege the RaBitQ baseline was run under weaker conditions than TurboQuant, which makes the performance comparison hard to take at face value.
- If those complaints hold up, the damage is bigger than one paper: it chips away at trust in the benchmark narrative around fast LLM compression.
- For developers, the practical move is to wait for code, exact setup details, and the final conference version before treating the headline gains as settled.
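For context on the random-rotation point above, here is a minimal NumPy sketch of the Johnson-Lindenstrauss-style idea the dispute refers to. It is an illustration of the general technique only, not code from either the TurboQuant or RaBitQ papers; the dimension, correlated-query construction, and scale factor are assumptions for the demo. The idea: rotate a vector with a random orthogonal matrix, keep one sign bit per coordinate, and recover cosine-similarity information from the 1-bit code up to a correction factor.

```python
import numpy as np

# Illustrative sketch of random-rotation 1-bit quantization
# (not the TurboQuant or RaBitQ implementation).

rng = np.random.default_rng(0)
d = 256

# Random orthogonal rotation from the QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def encode(x: np.ndarray) -> np.ndarray:
    """1-bit code: the sign of each coordinate after the random rotation."""
    return np.sign(Q @ x)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.standard_normal(d)
q = rng.standard_normal(d) + 0.5 * x   # query correlated with x (illustrative)
code = encode(x)

print("exact cosine(x, q):", cosine(x, q))
# In high dimensions the sign code's correlation with the rotated query is
# roughly sqrt(2/pi) times the true cosine, so dividing gives an estimate.
print("1-bit estimate    :", cosine(code, Q @ q) / np.sqrt(2 / np.pi))
```

Production methods layer unbiased estimators, stored per-vector norms, and multi-bit codes on top of this skeleton, which is exactly where the attribution and baseline-setup questions above become material.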
// TAGS
turboquant · research · llm · inference · vector-db · benchmark
DISCOVERED
12d ago
2026-03-30
PUBLISHED
12d ago
2026-03-30
RELEVANCE
8 / 10
AUTHOR
Disastrous_Room_927