Tanaka links multi-agent consensus to sampling noise
YOUTUBE · 11d ago · RESEARCH PAPER

Hidenori Tanaka’s paper argues that some multi-agent LLM consensus is driven less by stable reasoning than by sampling noise that compounds across agents. It introduces Quantized Simplex Gossip to model memetic drift and derives scaling laws for when consensus becomes a lottery versus when weak biases dominate.

// ANALYSIS

Hot take: this is a strong caution against treating multi-agent agreement as evidence of truth or deliberation; in some regimes, consensus is just randomized path dependence with better branding.

  • The core claim is that group alignment can emerge from mutual in-context learning even when no agent had a real prior preference.
  • Quantized Simplex Gossip gives the paper a clean mechanistic story: sampled outputs feed back as evidence, so early randomness can lock in.
  • The scaling-law angle is the practical payoff: it predicts when larger groups or higher communication bandwidth amplify drift, and when even a weak shared bias starts to dominate the noise.
  • The paper’s framing is useful for evaluating agent swarms, debate systems, and self-consistency setups where “agreement” may be an artifact.
  • Because it is a research paper, the impact is mainly conceptual and methodological rather than a shipping product feature.
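The lock-in dynamic the bullets describe can be sketched in a few lines. This is a toy Pólya-urn-style simulation under our own assumptions, not the paper's Quantized Simplex Gossip model: agents with uniform priors broadcast sampled outputs, every sample is fed back as evidence, and early random draws compound into a confident group consensus.

```python
import random

def gossip_lockin(n_agents=10, n_options=3, rounds=200, seed=0):
    """Toy sketch (NOT the paper's model): each agent keeps counts over
    discrete options, a random speaker samples an utterance from its own
    counts, and all listeners add that sample as evidence. Rich-get-richer
    feedback means early randomness can lock in a group-wide choice."""
    rng = random.Random(seed)
    # Uniform prior: no agent starts with a real preference.
    counts = [[1] * n_options for _ in range(n_agents)]
    for _ in range(rounds):
        speaker = rng.randrange(n_agents)
        utterance = rng.choices(range(n_options),
                                weights=counts[speaker])[0]
        for a in range(n_agents):
            if a != speaker:
                counts[a][utterance] += 1  # sampled output becomes "evidence"
    # Each agent's favored option after mixing.
    return [max(range(n_options), key=c.__getitem__) for c in counts]
```

Running this across different seeds illustrates the "consensus as lottery" point: groups converge confidently, but which option they converge on is path-dependent randomness, not deliberation.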
// TAGS
llm · multi-agent · collective-intelligence · memetic-drift · consensus · scaling-laws · arxiv · ai-research

DISCOVERED

2026-03-31 (11d ago)

PUBLISHED

2026-03-31 (11d ago)

RELEVANCE

9/10

AUTHOR

Discover AI