Karpathy's LLM Wiki challenges standard RAG
Developers are shifting from stateless RAG to a "system-level loop" approach inspired by Andrej Karpathy's LLM Wiki concept. This method compiles raw sources into a structured, persistent, and self-improving markdown library that AI agents continuously refine and query.
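The compile step described above can be sketched as a small function that folds raw source chunks into a single persistent markdown note. This is a minimal illustration, not the API of any named tool; the file layout, `compile_note` name, and stubbed summarization (a real system would call an LLM here) are all assumptions.

```python
from pathlib import Path

def compile_note(topic: str, raw_chunks: list[str], wiki_dir: Path) -> Path:
    """Fold raw source chunks into one persistent markdown note.

    Hypothetical sketch: in a real system the summary line would come
    from an LLM call; here it is stubbed with simple truncation.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    note = wiki_dir / f"{topic}.md"
    lines = [f"# {topic}", ""]
    for chunk in raw_chunks:
        # Stub "summarization": keep only the first sentence of each chunk.
        summary = chunk.split(". ")[0].strip()
        lines.append(f"- {summary}")
    note.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return note

path = compile_note(
    "llm-wiki",
    ["RAG retrieves raw chunks per query. It is stateless.",
     "A wiki compiles sources into persistent notes. Agents refine them."],
    Path("wiki"),
)
```

Because the notes are plain markdown on disk, later agent runs can reread, append to, and rewrite them, which is what makes the library persistent rather than per-query.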
The LLM Wiki represents a shift from "search-as-memory" to "documentation-as-memory" where persistent markdown storage avoids vector database lock-in and provides a human-readable audit trail. Its "linting" mechanism allows agents to proactively identify contradictions and stale claims, creating a self-healing knowledge base. Early implementations like llm-wiki-compiler and CacheZero report up to 70x reduction in token usage by querying condensed summaries rather than raw chunks, mirroring human research workflows that build value over time.
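One way the "linting" idea could look in practice is a pass that scans each claim in a note for a freshness stamp and flags anything past a threshold. The `(as of YYYY-MM-DD)` convention, the `lint_note` function, and the 180-day cutoff are all hypothetical assumptions for illustration; a real agent would also cross-check claims against each other for contradictions.

```python
import re
from datetime import date

STALE_AFTER_DAYS = 180  # hypothetical freshness threshold

def lint_note(markdown: str, today: date) -> list[str]:
    """Flag bullet claims whose '(as of YYYY-MM-DD)' stamp is stale."""
    warnings = []
    for line in markdown.splitlines():
        m = re.search(r"\(as of (\d{4})-(\d{2})-(\d{2})\)", line)
        if m:
            stamped = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
            if (today - stamped).days > STALE_AFTER_DAYS:
                warnings.append(f"stale claim: {line.strip()}")
    return warnings

note = (
    "# cache\n"
    "- hit rate is 92% (as of 2025-01-01)\n"
    "- design is stable (as of 2026-04-01)\n"
)
warnings = lint_note(note, date(2026, 4, 10))
# Only the 2025-01-01 claim exceeds the 180-day threshold.
```

Surfacing stale or contradictory claims as lint warnings is what lets agents repair the notes over time, which is the "self-healing" behavior the summary describes.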
DISCOVERED 2026-04-10
PUBLISHED 2026-04-10
AUTHOR knlgeth