SCAO drops standalone LoRA optimizer
OPEN_SOURCE
REDDIT // 5h ago · OPEN-SOURCE RELEASE

SCAO is a new open-source PyTorch optimizer pitched as a one-line AdamW replacement for local PEFT/LoRA fine-tuning. The developer claims second-order curvature preconditioning improves early convergence while INT8 optimizer state and fallback paths keep VRAM low enough for sub-8GB local setups.
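The repo doesn't document its quantization scheme in the post, so the following is a generic sketch, not SCAO's actual code: blockwise absmax INT8 quantization, the common approach for 8-bit optimizer state, shows how moment tensors can be stored in roughly a quarter of their FP32 footprint at the cost of a small rounding error.

```python
# Sketch of blockwise absmax INT8 quantization for optimizer state.
# Generic illustration only -- SCAO's exact scheme is an assumption here.

def quantize_int8(values, block_size=64):
    """Store a float list as int8 codes plus one float scale per block."""
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        scale = max(abs(v) for v in block) or 1.0  # avoid div-by-zero
        codes = [round(v / scale * 127) for v in block]  # map to [-127, 127]
        blocks.append((scale, codes))
    return blocks

def dequantize_int8(blocks):
    """Reconstruct approximate floats from (scale, codes) blocks."""
    out = []
    for scale, codes in blocks:
        out.extend(c / 127 * scale for c in codes)
    return out

# Example: a fake Adam-style second-moment vector of 256 entries.
state = [0.001 * i for i in range(256)]
packed = quantize_int8(state)
restored = dequantize_int8(packed)

max_err = max(abs(a - b) for a, b in zip(state, restored))
bytes_fp32 = 4 * len(state)               # 4 bytes per FP32 value
bytes_int8 = len(state) + 4 * len(packed)  # 1 byte per code + 1 scale per block
```

The memory ratio is what makes sub-8GB fine-tuning plausible: AdamW keeps two FP32 moment tensors per parameter, so quantizing both to INT8 recovers most of that overhead.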

// ANALYSIS

SCAO is interesting, but still firmly in “prove it outside the author’s bench” territory.

  • The pitch targets a real pain point: AdamW's moment estimates need warm-up, which can waste early fine-tuning steps, and local users are constrained by both GPU memory and wall-clock time.
  • The standalone `scao.py` path is the right adoption move after the Hugging Face rejection: PEFT users can test it without waiting for core-library approval.
  • The developer's reported results are promising: a 36.7% VRAM reduction, roughly 627 tokens/sec on a TinyStories-1M full fine-tune, and a 25.8% GPT-2 perplexity improvement over AdamW.
  • The weak signal is community validation: the Reddit thread is tiny and the repo is very new, so developers should treat claims as experimental until independent LoRA benchmarks appear.
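The early-convergence claim rests on curvature preconditioning. The post doesn't detail SCAO's update rule, but a toy comparison on an ill-conditioned quadratic shows why dividing each gradient entry by a diagonal curvature estimate speeds up the first steps, which is the general idea behind second-order preconditioners:

```python
# Toy comparison: plain gradient descent vs. a diagonal
# curvature-preconditioned step on f(x) = 0.5 * sum(h[i] * x[i]**2),
# where h holds the Hessian diagonal.
# Illustrative only -- NOT SCAO's actual update rule.

h = [100.0, 1.0]   # very different curvature per coordinate
x_gd = [1.0, 1.0]  # plain gradient-descent iterate
x_pre = [1.0, 1.0] # preconditioned iterate
lr = 0.009         # plain GD needs lr < 2/max(h) to stay stable

def loss(x):
    return 0.5 * sum(hi * xi * xi for hi, xi in zip(h, x))

for _ in range(5):
    # Plain GD: one learning rate must serve both curvatures, so the
    # low-curvature coordinate barely moves.
    grad = [hi * xi for hi, xi in zip(h, x_gd)]
    x_gd = [xi - lr * g for xi, g in zip(x_gd, grad)]

    # Preconditioned: divide each gradient entry by its curvature,
    # so every coordinate converges at the same rate.
    grad = [hi * xi for hi, xi in zip(h, x_pre)]
    x_pre = [xi - g / hi for xi, g, hi in zip(x_pre, grad, h)]

loss_gd, loss_pre = loss(x_gd), loss(x_pre)
```

After five steps the preconditioned iterate has essentially converged while plain GD is still far from the minimum on the flat coordinate; the catch, and the thing independent benchmarks would need to probe, is the cost and accuracy of estimating that curvature for real LoRA training.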
// TAGS
scao · llm · fine-tuning · gpu · open-source · self-hosted · benchmark

DISCOVERED

5h ago

2026-04-22

PUBLISHED

6h ago

2026-04-21

RELEVANCE

7/10

AUTHOR

Jazzlike_Occasion_31