DeepBlueDynamics Autoresearch Adds Weber Optimizer, SDR Entropy
OPEN_SOURCE ↗
REDDIT · 24d ago · OPEN-SOURCE RELEASE


DeepBlueDynamics’ fork of karpathy/autoresearch adds a physics-inspired Weber-style optimizer, hardware entropy seeding from RTL-SDR noise, and a multi-provider agent harness for autonomous training runs. The repo also claims multi-GPU support and reports that its best tuned setup improved validation bits per byte (bpb) from 0.9979 to 0.9697, roughly a 2.8% relative reduction.

// ANALYSIS

This is a clever research fork, but the real story is probably the stack around the optimizer, not the optimizer alone. The Weber-style update is the kind of idea that could be genuinely interesting, but it still needs clean ablations before anyone should treat it as more than an experimental twist.

  • The Weber update is novel enough to be worth testing, especially since it modifies step size based on both momentum and acceleration.
  • The SDR entropy seeding is fun and defensible engineering, but it is unlikely to move model quality unless the prior seed was unusually unlucky.
  • The reported gains likely bundle together many changes at once: depth, batch size, RoPE base, init scale, and weight decay.
  • The agent harness is arguably the most practically useful part because it turns the repo into a repeatable experiment loop, not just a one-off training script.
  • For anyone evaluating the optimizer itself, the key question is whether Weber still helps after controlling for the rest of the hyperparameter sweep.
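The fork does not document the exact Weber update rule, but the idea named above (step size modulated by both momentum and acceleration) can be sketched. A minimal, hypothetical version follows, where "acceleration" is approximated as the change in gradient and the modulation echoes the Weber fraction ΔI/I; `weber_step` and all its parameters are illustrative, not the repo's actual API:

```python
import numpy as np

def weber_step(params, grad, state, lr=0.01, beta=0.9, eps=1e-8):
    """One hypothetical Weber-style update (a sketch, not the fork's rule).

    The step is scaled by the ratio of acceleration (change in gradient)
    to momentum magnitude: a large relative change shrinks the step,
    loosely mirroring the Weber fraction delta-I / I.
    """
    v = state.get("v", np.zeros_like(grad))          # momentum (velocity)
    prev_grad = state.get("g", np.zeros_like(grad))  # last gradient seen
    accel = grad - prev_grad                         # acceleration proxy
    v = beta * v + (1 - beta) * grad                 # standard EMA momentum
    # Weber-like modulation: larger relative gradient change -> smaller step
    weber_fraction = np.linalg.norm(accel) / (np.linalg.norm(v) + eps)
    scale = 1.0 / (1.0 + weber_fraction)
    params = params - lr * scale * v
    state["v"], state["g"] = v, grad
    return params, state
```

On a simple quadratic loss this behaves like damped SGD-with-momentum: early steps (where the gradient is changing fast) are shrunk, later steps approach the plain momentum update.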
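The SDR entropy seeding mentioned above only requires a source of raw noise bytes; in the fork that source is RTL-SDR baseband samples. A sketch of the hash-to-seed step, with `os.urandom` standing in for the dongle read (which would come from a library such as pyrtlsdr) so it runs without hardware:

```python
import hashlib
import os

def seed_from_noise(raw_samples: bytes) -> int:
    """Condense raw noise bytes into a 64-bit integer seed.

    raw_samples would be IQ bytes read from an RTL-SDR dongle in the
    fork's setup; hashing whitens the samples so DC bias or spurs in
    the radio noise do not skew the seed.
    """
    digest = hashlib.sha256(raw_samples).digest()
    return int.from_bytes(digest[:8], "little")

# os.urandom stands in for a hardware read in this sketch.
seed = seed_from_noise(os.urandom(4096))
```

As the analysis notes, this is defensible engineering but should not be expected to change model quality: any well-mixed 64-bit seed draws from the same space.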
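The ablation the last bullet asks for amounts to generating matched config pairs: identical hyperparameters, differing only in the optimizer. A sketch of such a sweep generator (the grid keys mirror the factors named above and are assumptions, not the fork's actual config schema):

```python
import itertools

def matched_pairs(grid, optimizers=("adamw", "weber")):
    """Yield tuples of configs that differ only in the optimizer field.

    Pairing every point in the hyperparameter grid with both optimizers
    lets any bpb gap be attributed to the optimizer rather than to the
    surrounding sweep (depth, batch size, RoPE base, init, decay).
    """
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        yield tuple(dict(cfg, optimizer=opt) for opt in optimizers)

# Hypothetical grid over the factors mentioned in the analysis.
grid = {"depth": [8, 12], "batch_size": [16, 32], "weight_decay": [0.0, 0.1]}
```

Running each pair under the same seed and data order, then comparing validation bpb within pairs, is the controlled comparison the headline numbers currently lack.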
// TAGS
autoresearch · agent · automation · open-source · mlops · gpu · research

DISCOVERED

24d ago

2026-03-19

PUBLISHED

24d ago

2026-03-18

RELEVANCE

8 / 10

AUTHOR

kordlessss