OPEN_SOURCE
REDDIT · 28d ago · OPEN-SOURCE RELEASE
Autoresearch CPU fork runs on any hardware
A community fork of Andrej Karpathy's viral autoresearch project removes the H100/Flash Attention 3 requirement, enabling the autonomous ML experimentation agent to run on CPU, Apple Silicon, or any NVIDIA GPU. The fork adds Ollama-powered background research, demo chat scripts, and parameter scaling guidance for lower-resource hardware.
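The "Ollama-powered background research" piece can be sketched against Ollama's documented `/api/generate` HTTP endpoint. This is a minimal illustration, not the fork's actual code: the function names, prompt, and polling pattern are assumptions; only the endpoint, payload fields, and the `qwen2.5:0.5b` model tag come from Ollama's public API and model registry.

```python
# Hedged sketch: query a local Ollama server (default port 11434) with a small
# Qwen 2.5 0.5B model, as a background-research agent might during idle time.
# build_request/ask are illustrative names, not from the fork itself.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen2.5:0.5b") -> urllib.request.Request:
    # stream=False asks Ollama for a single JSON object instead of a token stream
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    # Requires a running Ollama server with the model pulled (`ollama pull qwen2.5:0.5b`)
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (only with a local Ollama instance running):
# print(ask("Summarize the last experiment log in two sentences."))
```

Because the model is tiny (0.5B parameters), requests like this are cheap enough to run on CPU in the background without starving the main training loop.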
// ANALYSIS
Karpathy's autoresearch went viral for good reason — autonomous overnight ML research is a compelling idea — but locking it to H100s meant 99% of developers couldn't touch it. This fork fixes that.
- The original hit 34.9k stars but required Flash Attention 3 on an H100; this fork swaps that out for standard PyTorch SDPA, opening it to consumer hardware
- The added "Folding Mode," running a local Qwen 2.5 0.5B model via Ollama, is a clever low-resource twist: background research agents working during idle time
- The reported val_bpb improvement (2.29 → 2.23) mirrors gains users saw on the original, suggesting the CPU path isn't just a demo; it can produce meaningful results overnight
- Parallel forks for macOS MLX, Windows/RTX, and CPU show a coordinated community effort to democratize access; Karpathy himself signaled openness to linking them
- The ~5-minute training loop on any hardware is the key unlock: developers can now iterate on autonomous ML experiments without cloud GPU budgets
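The SDPA swap described above is worth making concrete. PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention` dispatches to the best available backend (Flash, memory-efficient, or plain math) for whatever device it runs on, which is what makes the CPU/MPS/consumer-GPU path possible. The shapes and wrapper below are illustrative, not the fork's actual code:

```python
# Hedged sketch: a drop-in attention call using PyTorch SDPA instead of
# H100-only Flash Attention 3 kernels. Runs on CPU, Apple Silicon (MPS),
# or any CUDA GPU; PyTorch picks the fastest supported backend itself.
import torch
import torch.nn.functional as F

def attention(q, k, v, causal=True):
    # q, k, v: (batch, heads, seq_len, head_dim)
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)

# toy sizes: batch=2, heads=4, seq_len=8, head_dim=16
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)
out = attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 8, 16]) — same shape as the query
```

The trade-off is raw throughput on datacenter GPUs, but for a ~5-minute training loop on local hardware, backend portability matters more than peak kernel speed.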
// TAGS
autoresearch · open-source · agent · llm · fine-tuning · devtool · mlops
DISCOVERED
2026-03-15
PUBLISHED
2026-03-15
RELEVANCE
7/10
AUTHOR
M4s4