SAGE proposes self-evolving graph memory
SAGE introduces a writer-reader loop for long-term agent memory: a memory writer builds structured graph memory from interactions, while a Graph Foundation Model reader retrieves evidence and feeds retrieval feedback back into the graph structure. The paper reports stronger evidence recovery and transfer on multi-hop QA, Natural Questions, LongMemEval, and HaluMem.
This is a credible push past static RAG: memory is no longer just an index, it's something the system can improve through use. The big question is whether the gains come from the self-evolution loop itself or from extra iterations and compute.
- The writer-reader feedback loop is the interesting part, because it turns retrieval errors into structural updates instead of one-off misses.
- Graph memory should help most on partial cues and multi-hop evidence chains, where flat vector retrieval tends to drop key bridges.
- The reported Natural Questions transfer numbers are strong, but the paper still needs sharper ablations to isolate what actually drives the improvement.
- If this holds up, agent memory could become a trainable subsystem rather than a fixed retrieval layer.
- The framing fits the current move toward long-horizon agents that remember, revise, and organize evidence over time.
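The writer-reader loop described above can be sketched in miniature. This is not SAGE's implementation; all names (`GraphMemory`, `multi_hop_retrieve`, `evolve_on_failure`) and the bridge-edge update rule are illustrative assumptions about how a retrieval failure might trigger a structural write-back:

```python
from collections import defaultdict

class GraphMemory:
    """Minimal graph memory: entities as nodes, relations as labeled edges.
    Stands in for the memory-writer side of the loop (hypothetical API)."""
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def write(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def neighbors(self, node):
        return self.edges.get(node, [])

def multi_hop_retrieve(memory, start, target, max_hops=3):
    """Reader side: breadth-first search for an evidence chain from
    start to target, returning the node path or None on failure."""
    frontier = [(start, [start])]
    for _ in range(max_hops):
        next_frontier = []
        for node, path in frontier:
            for _rel, obj in memory.neighbors(node):
                if obj == target:
                    return path + [obj]
                next_frontier.append((obj, path + [obj]))
        frontier = next_frontier
    return None  # a miss becomes the feedback signal

def evolve_on_failure(memory, start, target, bridge):
    """Feedback step: a failed retrieval triggers a structural update.
    Here the 'self-evolution' is simply adding a hypothesized bridging
    node, so the same query succeeds on the next read."""
    if multi_hop_retrieve(memory, start, target) is None:
        memory.write(start, "related_to", bridge)
        memory.write(bridge, "related_to", target)
```

The point of the sketch is the control flow, not the graph algorithm: the reader's miss is consumed by the writer as an edit to memory structure, which is what distinguishes this loop from static RAG, where a miss leaves the index unchanged.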
Discovered: 2026-05-16 · Published: 2026-05-16 · Author: Discover AI