Muon Hits 1D Parameter Snag
REDDIT · 12d ago · TUTORIAL

PyTorch’s Muon optimizer is not a drop-in AdamW replacement: it only accepts 2D tensors, so a plain model.parameters() fine-tune will fail on 1D biases and norm weights. The usual fix is to split parameter groups, sending matrix weights to Muon and keeping 1D tensors, embeddings, and other special cases on AdamW or a similar optimizer.
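The split described above can be sketched as follows. The model is a stand-in, and `torch.optim.Muon` is assumed to exist (it ships in recent PyTorch releases); the construction is guarded so the sketch degrades to AdamW-only on older builds.

```python
import torch
import torch.nn as nn

# Stand-in model mixing 2D weights, 1D biases/norms, and an embedding.
model = nn.Sequential(
    nn.Embedding(100, 64),
    nn.Linear(64, 64),
    nn.LayerNorm(64),
    nn.Linear(64, 10),
)

muon_params, adamw_params = [], []
for module in model.modules():
    for p in module.parameters(recurse=False):
        if not p.requires_grad:
            continue
        # Route only 2D hidden-layer weight matrices to Muon.
        # Embeddings are also 2D, so filter by module type, not ndim alone.
        if p.ndim == 2 and not isinstance(module, nn.Embedding):
            muon_params.append(p)
        else:
            adamw_params.append(p)

# Guarded construction: fall back to AdamW for everything if this
# PyTorch build does not yet ship torch.optim.Muon.
if hasattr(torch.optim, "Muon"):
    optimizers = [
        torch.optim.Muon(muon_params, lr=0.02),
        torch.optim.AdamW(adamw_params, lr=3e-4),
    ]
else:
    optimizers = [torch.optim.AdamW(muon_params + adamw_params, lr=3e-4)]
```

Filtering per-module (rather than over `model.parameters()`) is what lets the embedding weight land in the AdamW group even though it is 2D.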

// ANALYSIS

Muon is a specialized matrix optimizer, not a universal optimizer swap; the error is the library enforcing that boundary.

  • PyTorch’s Muon docs say it is for 2D hidden-layer parameters only; other parameters such as bias and embedding should use a standard optimizer like AdamW.
  • Passing `model.parameters()` blindly includes 1D tensors (e.g. a bias of shape `torch.Size([512])`), so the optimizer rejects them before training starts.
  • In practice, people build named parameter groups or filter by module/type, but they usually avoid relying only on `p.ndim == 2` because embeddings are also 2D and often should not go through Muon.
  • The `adjust_lr_fn="match_rms_adamw"` option is meant to preserve AdamW-like tuning behavior, but it only helps after the parameter split is correct.
  • The VRAM win comes from using Muon selectively on the large weight matrices, not from replacing every trainable tensor in the model.
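Once the groups are split, both optimizers have to be stepped and zeroed every iteration. A minimal sketch: AdamW stands in for both groups so it runs on any PyTorch build, but in a real run the matrix-group optimizer would be `torch.optim.Muon(matrix_params, ..., adjust_lr_fn="match_rms_adamw")` per the docs cited above.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))

# Split by dimensionality (safe here: no embeddings in this model).
matrix_params = [p for p in model.parameters() if p.ndim == 2]
other_params = [p for p in model.parameters() if p.ndim != 2]

# Stand-in: replace opt_matrix with torch.optim.Muon where available.
opt_matrix = torch.optim.AdamW(matrix_params, lr=3e-4)
opt_other = torch.optim.AdamW(other_params, lr=3e-4)

x = torch.randn(4, 8)
loss = model(x).pow(2).mean()
loss.backward()

# Both optimizers must be stepped (and zeroed) each iteration.
for opt in (opt_matrix, opt_other):
    opt.step()
    opt.zero_grad(set_to_none=True)
```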
// TAGS
muon · fine-tuning · llm · mlops · open-source

DISCOVERED

12d ago

2026-03-31

PUBLISHED

12d ago

2026-03-30

RELEVANCE

8/10

AUTHOR

Ok_Warning2146