Min-P Gains, Top-K/Top-P Stay Default
OPEN_SOURCE · REDDIT · 3h ago · RESEARCH PAPER

The thread asks why Min-P hasn’t displaced Top-K and Top-P in modern model configs, even though it is increasingly discussed as the cleaner sampler. The short answer: Min-P is promising, but it remains an opt-in rather than a universal default, so many model authors keep the older knobs for compatibility and predictability.

// ANALYSIS

Min-P looks like the more modern sampler, but it is not a clean replacement for Top-K/Top-P; it is an extra control that behaves well in some regimes and poorly in others. The ecosystem is still in coexistence mode, not a standards-migration phase.

  • Min-P scales its cutoff by the model’s confidence, which can make generation feel more coherent at higher temperatures and less prone to junk tails.
  • It is still newer than Top-K/Top-P, so many model cards and runtimes default to the settings users already understand and every backend can support.
  • Hugging Face now documents `min_p`, but it documents `top_k` and `top_p` as standard generation controls too, which is a sign that Min-P has been added to the toolbox rather than crowned as the replacement.
  • Model authors often optimize for lowest-common-denominator reproducibility, so shipped configs tend to stay conservative even when newer samplers exist.
  • The practical rule is still model-specific tuning: start with the model’s recommended config, then test Min-P if you care about creative output or high-temperature sampling.
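The confidence-scaled cutoff in the first bullet can be sketched in a few lines. This is a minimal illustration, not any runtime's actual implementation; the `min_p = 0.1` value and the example distributions are arbitrary choices to show the contrast between a confident and a flat next-token distribution.

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float = 0.1) -> np.ndarray:
    """Zero out tokens whose probability falls below min_p * p_max.

    Because the cutoff scales with the model's top probability, a
    confident distribution prunes its tail aggressively while a flat
    one keeps most candidates.
    """
    probs = np.exp(logits - logits.max())   # stable softmax
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()     # confidence-scaled threshold
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()        # renormalize survivors

# One dominant logit: the threshold is high, so the tail is cut.
confident = np.array([8.0, 1.0, 0.5, 0.1])
# Near-uniform logits: the threshold is low, so almost everything survives.
flat = np.array([1.0, 0.9, 0.8, 0.7])

print(np.count_nonzero(min_p_filter(confident)))  # 1 token kept
print(np.count_nonzero(min_p_filter(flat)))       # 4 tokens kept
```

This is the contrast with Top-K (fixed candidate count) and Top-P (fixed cumulative mass): both apply the same cut regardless of how peaked the distribution is, which is part of why Min-P is argued to behave better at high temperatures.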
// TAGS
llm · research · inference · min-p-sampling

DISCOVERED

3h ago

2026-04-27

PUBLISHED

6h ago

2026-04-27

RELEVANCE

6/10

AUTHOR

bgravato