Trace2Skill distills traces into agent skills
OPEN_SOURCE ↗
YT · YOUTUBE // 13d ago · RESEARCH PAPER


Alibaba's Qwen team introduces Trace2Skill, a framework that mines execution traces to deepen existing agent skills or synthesize new ones from scratch. The paper reports that it beats Anthropic's official xlsx skills on spreadsheet, VisionQA, and math tasks, and that the distilled skills transfer across model scales without retraining.

// ANALYSIS

This feels less like prompt tinkering and more like skill compilation: the system extracts the stable lesson from traces, merges it once, and ships a portable artifact instead of a brittle patchwork.
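The "extract the stable lesson, merge it once" idea can be pictured as support-based distillation over traces. A minimal sketch, with entirely hypothetical step names and a toy support threshold (the paper's actual extraction and merge logic is not specified here):

```python
# Toy sketch of "skill compilation": keep only the steps that recur across
# most execution traces, discarding one-off noise, so the result is a single
# portable artifact rather than a patch applied after every run.
from collections import Counter

def distill_skill(traces: list[list[str]], min_support: float = 0.8) -> list[str]:
    """Keep steps that appear in at least min_support of traces, in common order."""
    counts = Counter(step for trace in traces for step in set(trace))
    threshold = min_support * len(traces)
    # Preserve the ordering of the first trace for the stable steps.
    return [s for s in traces[0] if counts[s] >= threshold]

traces = [
    ["open_sheet", "locate_header", "sum_column", "write_cell"],
    ["open_sheet", "locate_header", "retry_login", "sum_column", "write_cell"],
    ["open_sheet", "locate_header", "sum_column", "write_cell"],
]
skill = distill_skill(traces)
# One-off steps like "retry_login" fall below the support threshold.
```

The point of the sketch is the shape of the operation: the skill is computed once over the whole trace set, not accumulated edit by edit.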

  • The parallel sub-agent design is the key move; it avoids the order dependence and overfitting that come from editing skills sequentially after each trajectory.
  • Beating Anthropic's official xlsx skills is a meaningful signal, because those are already opinionated, domain-specific baselines rather than easy strawmen.
  • The cross-model result is the real headline: a skill distilled from Qwen3.5-35B helping Qwen3.5-122B by up to +57.65 on WikiTableQuestions suggests skills can behave like reusable software assets.
  • The downside is that the method still depends on good trace coverage and careful merge logic, so it will reward teams with lots of representative agent runs more than teams with sparse telemetry.
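The order-dependence point in the first bullet can be made concrete: if each sub-agent derives its proposal from one trace in isolation and a single commutative merge combines them, the result cannot depend on trace order. The functions below are toy stand-ins, not the paper's method:

```python
# Contrast sequential skill editing (order-dependent) with parallel
# propose-then-merge (order-independent). Proposal and merge are toy stand-ins.
def propose(trace: set[str]) -> set[str]:
    """Each sub-agent derives candidate steps from one trace in isolation."""
    return {step for step in trace if not step.startswith("noise_")}

def merge(proposals: list[set[str]]) -> frozenset[str]:
    """One commutative merge: the union is the same whatever the trace order."""
    out: set[str] = set()
    for p in proposals:
        out |= p
    return frozenset(out)

traces = [{"a", "noise_x"}, {"b"}, {"c", "noise_y"}]
forward = merge([propose(t) for t in traces])
backward = merge([propose(t) for t in reversed(traces)])
assert forward == backward  # merging is independent of trajectory order
```

A sequential editor, by contrast, threads each trace through the skill produced by the previous one, so a quirky early trajectory can bias every later edit.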
// TAGS
trace2skill · llm · agent · automation · research · benchmark

DISCOVERED

2026-03-29 (13d ago)

PUBLISHED

2026-03-29 (13d ago)

RELEVANCE

9/10

AUTHOR

Discover AI