autoresearch-webgpu runs Claude agent training loop in browser
OPEN_SOURCE

REDDIT · 28d ago · OPEN-SOURCE RELEASE


autoresearch-webgpu brings Karpathy's autonomous AI research loop to the browser — Claude generates TypeScript training code, WebGPU executes it locally, and the results feed back to the agent for the next iteration. No Python, no cloud, and no dedicated GPU hardware — just a modern desktop browser.

// ANALYSIS

Stripping the autoresearch concept down to a zero-install browser demo is the right move — it turns an intimidating GPU workflow into something anyone can poke at in an afternoon.

  • The core loop mirrors Karpathy's original: LLM writes training code → runs experiment → reads loss → proposes next hypothesis → repeat; the WebGPU port just removes every infrastructure prerequisite
  • Relies on Eric Zhang's `jax-js` for browser-native GPU-accelerated tensor math — the real technical enabler here
  • Claude is the agent generating and iterating on `train.ts`; the project is a concrete example of LLM-driven code synthesis in a tight feedback loop
  • Part of a broader mid-March 2026 wave of autoresearch forks (distributed swarm variants, Triton kernel optimizers, etc.) — the WebGPU fork stands out for accessibility, not raw capability
  • Early stage: ~10 GitHub stars, solo maintainer, crashes on mobile — more proof-of-concept than production tool
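The hypothesize → train → observe loop described above can be sketched in plain TypeScript. This is a hedged illustration, not the project's actual code: `proposeNext` is a stub standing in for the Claude call that rewrites `train.ts`, and `runExperiment` is a toy loss function standing in for WebGPU training — both names and the simple learning-rate search are hypothetical.

```typescript
// Sketch of an autoresearch-style feedback loop (hypothetical stubs).

interface Hypothesis {
  learningRate: number;
  note: string;
}

interface Result {
  loss: number;
}

// Stub "agent": backs off the learning rate when loss regressed,
// pushes further when it improved. A real run would ask Claude here.
function proposeNext(prev: Hypothesis, result: Result, bestLoss: number): Hypothesis {
  if (result.loss > bestLoss) {
    return { learningRate: prev.learningRate / 2, note: "loss regressed; back off" };
  }
  return { learningRate: prev.learningRate * 1.5, note: "loss improved; push further" };
}

// Stub "experiment": a toy loss curve with its minimum near lr = 0.1.
// In the real project this step is the generated train.ts running on WebGPU.
function runExperiment(h: Hypothesis): Result {
  return { loss: Math.abs(Math.log(h.learningRate / 0.1)) + 0.05 };
}

function researchLoop(iterations: number): { best: Hypothesis; bestLoss: number } {
  let current: Hypothesis = { learningRate: 0.01, note: "initial guess" };
  let best = current;
  let bestLoss = Infinity;

  for (let i = 0; i < iterations; i++) {
    const result = runExperiment(current); // run training, read the final loss
    if (result.loss < bestLoss) {
      bestLoss = result.loss;
      best = current;
    }
    current = proposeNext(current, result, bestLoss); // agent proposes next hypothesis
  }
  return { best, bestLoss };
}
```

The interesting design point is that the agent only ever sees scalar feedback (the loss), so the loop works even when the "experiment" is an opaque black box — which is exactly what makes the browser port viable.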
// TAGS
autoresearch-webgpu · llm · agent · open-source · inference · devtool

DISCOVERED

2026-03-15 (28d ago)

PUBLISHED

2026-03-14 (28d ago)

RELEVANCE

7 / 10

AUTHOR

lucasgelfond