Paper Lantern lifts coding-agent tuning 3.2%
OPEN_SOURCE · REDDIT · BENCHMARK RESULT · 14d ago

A controlled autoresearch run found that Claude Code performed better when it could query Paper Lantern's 2M-paper MCP server during TinyStories hyperparameter search. The paper-aware agent considered 520 papers, tried 25 paper-sourced techniques, and finished the 2-hour comparison at 0.4475, versus 0.4624 without paper access (lower is better; a 3.2% improvement).

// ANALYSIS

This feels less like a prompt hack and more like search-space expansion: Paper Lantern didn't just give the agent more context; it gave it access to newer and more specialized optimization moves. The interesting part is that the gain came from better decisions under compute pressure, not from brute-force trial and error.

  • The paper-backed run surfaced post-cutoff ideas like AdaGC, WSD cooldown, REX, and the sqrt batch-scaling rule, then applied them correctly on the first attempt.
  • The no-paper run stayed inside the standard ML playbook and diverged when batch size was cut without the matching learning-rate adjustment.
  • Not every paper idea worked; DyT and SeeDNorm were rejected as architecture mismatches, which is a good sign that the system is synthesizing rather than blindly copying.
  • The evidence is still early: one run per condition, a tiny 7M-parameter model, and some of the gain may come from extra reasoning time spent on each technique.
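The WSD (warmup-stable-decay) cooldown mentioned in the first bullet can be sketched as a simple schedule function. This is a minimal illustration of the general WSD shape, not the run's actual schedule; the function name, warmup fraction, and cooldown fraction are illustrative assumptions.

```python
def wsd_lr(step: int, total_steps: int, peak_lr: float,
           warmup_frac: float = 0.1, cooldown_frac: float = 0.2) -> float:
    """Warmup-Stable-Decay schedule (illustrative):
    linear warmup to peak_lr, a flat plateau for most of training,
    then a linear cooldown to zero over the final steps."""
    warmup_end = int(total_steps * warmup_frac)
    cooldown_start = int(total_steps * (1 - cooldown_frac))
    if step < warmup_end:
        return peak_lr * step / max(warmup_end, 1)       # warmup ramp
    if step < cooldown_start:
        return peak_lr                                   # stable plateau
    return peak_lr * (total_steps - step) / max(total_steps - cooldown_start, 1)
```

The appeal of WSD over cosine decay is that the plateau checkpoint is reusable: training can be extended from it and only the short cooldown rerun.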
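The divergence in the no-paper run came from exactly the coupling the sqrt batch-scaling rule addresses: when batch size changes by a factor k, scale the learning rate by sqrt(k). A minimal sketch (function name and example values are illustrative, not from the run):

```python
import math

def scale_lr_sqrt(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Sqrt batch-scaling rule: scale the learning rate by the
    square root of the batch-size ratio (commonly recommended
    for adaptive optimizers such as Adam)."""
    return base_lr * math.sqrt(new_batch / base_batch)

# Halving the batch calls for dividing the LR by sqrt(2) ~ 1.41;
# keeping the old LR after the cut leaves it effectively too high.
lr = scale_lr_sqrt(3e-4, base_batch=64, new_batch=32)
```

This is the kind of one-line adjustment the paper-backed run applied alongside the batch change, and the no-paper run skipped.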
// TAGS
paper-lantern, mcp, agent, search, llm, research, benchmark

DISCOVERED

2026-03-28 (14d ago)

PUBLISHED

2026-03-27 (15d ago)

RELEVANCE

9/10

AUTHOR

kalpitdixit