ShinkaEvolve tops AlphaEvolve on sample efficiency
OPEN_SOURCE · REDDIT · RESEARCH PAPER

Sakana AI's open-source ShinkaEvolve framework uses LLMs as evolutionary mutation operators to automatically discover and improve scientific programs, reproducing AlphaEvolve's circle-packing result with orders of magnitude fewer evaluations. Accepted at ICLR 2026 and installable via PyPI, it adds a bandit-based LLM ensemble that dynamically picks the best model mid-run.
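The core idea above — an evolutionary loop where an LLM plays the role of the mutation operator, and only fitness-improving edits are kept — can be sketched minimally as follows. This is an illustrative toy, not ShinkaEvolve's actual API: `llm_mutate` is a stand-in stub (the real system prompts an LLM to edit program source), and the vector-fitting objective is purely for demonstration.

```python
import random

def llm_mutate(program):
    # Stand-in for an LLM call that proposes an edited program.
    # Here: randomly perturb one coefficient of a toy "program".
    child = list(program)
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.5, 0.5)
    return child

def fitness(program):
    # Toy objective: negative squared distance to a target vector
    # (higher is better, maximum is 0.0 at the target).
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(program, target))

def evolve(generations=200, seed=0):
    random.seed(seed)
    archive = [[0.0, 0.0, 0.0]]             # archive of programs found so far
    for _ in range(generations):
        parent = max(archive, key=fitness)  # select the current elite
        child = llm_mutate(parent)          # "LLM" proposes a mutation
        if fitness(child) > fitness(parent):
            archive.append(child)           # keep only strict improvements
    return max(archive, key=fitness)

best = evolve()
```

The sample-efficiency claim in the summary amounts to this loop needing far fewer `fitness` evaluations than AlphaEvolve to reach a comparable elite — here roughly one evaluation per generation.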

// ANALYSIS

ShinkaEvolve is the most credible open-source answer to AlphaEvolve yet — it not only matches the results but makes them accessible to researchers without Google-scale budgets.

  • Where AlphaEvolve needs a human to hand it the right problem, ShinkaEvolve co-evolves problems alongside solutions — a qualitatively different approach borrowed from POET and MAP-Elites
  • The bandit-based LLM ensemble (GPT-5, Sonnet 4.5, Gemini) solving the credit-assignment problem mid-run is a practically useful contribution beyond the evolutionary theory
  • Concrete wins are impressive: state-of-the-art circle packing in ~150 evaluations, a 2nd-place-equivalent result on AtCoder, a novel MoE loss function that beats DeepSeek's approach, and a role in winning the 2025 ICFP Programming Contest
  • The honest caveat from author Robert Lange — "nothing interesting happens" when LLMs run fully autonomously — keeps expectations grounded: this is a co-pilot for researchers, not autonomous science
  • Apache 2.0 license and PyPI availability lower the barrier considerably compared to AlphaEvolve, which remains closed
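The bandit-based ensemble in the second bullet is essentially a multi-armed bandit where each arm is an LLM and the reward is whether that model's proposed mutation improved fitness. A rough sketch using the classic UCB1 rule (an assumption — the source doesn't specify ShinkaEvolve's exact bandit algorithm, and the model names and reward signal here are illustrative):

```python
import math
import random

class UCB1ModelSelector:
    """Pick which LLM to query next based on past mutation success.

    Hypothetical sketch: arm = model, reward = 1.0 if the model's
    proposed mutation improved fitness, else 0.0.
    """
    def __init__(self, models):
        self.models = models
        self.counts = {m: 0 for m in models}    # times each model was queried
        self.rewards = {m: 0.0 for m in models} # cumulative reward per model
        self.total = 0

    def select(self):
        # Cold start: query each model once before scoring.
        for m in self.models:
            if self.counts[m] == 0:
                return m
        # UCB1: empirical mean plus an exploration bonus that shrinks
        # as a model accumulates queries.
        return max(
            self.models,
            key=lambda m: self.rewards[m] / self.counts[m]
            + math.sqrt(2 * math.log(self.total) / self.counts[m]),
        )

    def update(self, model, reward):
        self.counts[model] += 1
        self.rewards[model] += reward
        self.total += 1

# Simulated run: per-model success rates are made up for illustration.
random.seed(1)
sel = UCB1ModelSelector(["gpt-5", "sonnet-4.5", "gemini"])
true_rate = {"gpt-5": 0.2, "sonnet-4.5": 0.8, "gemini": 0.2}
for _ in range(500):
    m = sel.select()
    sel.update(m, 1.0 if random.random() < true_rate[m] else 0.0)
```

This is the credit-assignment problem the bullet refers to: mid-run, the selector concentrates queries on whichever model's edits have actually been paying off, without abandoning exploration of the others.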
// TAGS
shinka-evolve · llm · open-source · agent · research · benchmark

DISCOVERED

2026-03-15 (28d ago)

PUBLISHED

2026-03-14 (29d ago)

RELEVANCE

8/10

AUTHOR

44th--Hokage