Minerva turns memory into graph search
REDDIT // 19d ago · OPEN-SOURCE RELEASE


Minerva is an experimental, privacy-first local assistant that stores facts in a dynamically built knowledge graph, with each node and edge embedded for retrieval. It exposes two tools, retrieve and manage_memory, and is designed to run locally on a Qwen3 8B GGUF model via llama.cpp, alongside a few supporting models.
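The two-tool split can be pictured as OpenAI-style function schemas of the kind llama.cpp-hosted models commonly consume. Only the tool names (`retrieve`, `manage_memory`) come from the project; every parameter name and description below is an illustrative assumption, not Minerva's actual schema.

```python
# Hypothetical tool schemas for the two memory tools. Tool names are from
# the project announcement; all parameters are assumptions for illustration.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "retrieve",
            "description": "Search the knowledge graph for facts relevant to a query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Natural-language search query.",
                    },
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "manage_memory",
            "description": "Store, update, or delete a fact in the knowledge graph.",
            "parameters": {
                "type": "object",
                "properties": {
                    "action": {"type": "string", "enum": ["add", "update", "delete"]},
                    "fact": {
                        "type": "string",
                        "description": "Subject-predicate-object statement.",
                    },
                },
                "required": ["action", "fact"],
            },
        },
    },
]


def tool_names(tools):
    """Return the declared tool names, e.g. for a startup sanity check."""
    return [t["function"]["name"] for t in tools]
```

Keeping reads and writes as separate tools lets the model decide per turn whether it is recalling or recording, instead of overloading one "memory" call.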

// ANALYSIS

This is closer to a real memory subsystem than yet another long-chat summary hack. By splitting reads and writes, Minerva gives the model a place to store facts and a separate path to fetch them when context matters.

  • Embedding every node and edge is a smart way to keep the graph searchable, but it only pays off if entity resolution stays clean.
  • The write path is the interesting part: the model emits `FactEdge` triplets directly and a background orchestrator resolves them, which is cleaner than post-hoc extraction.
  • Graph expansion after semantic retrieval is the right hybrid for personal memory: vector recall finds candidates, graph hops recover structure, and together they avoid flat-RAG amnesia.
  • The local stack is the biggest practical win here; running on llama.cpp and a small model set makes private memory plausible without sending your life story to a cloud API.
  • The repo is still experimental and says it is not production-ready, so the hard part now is evaluation, latency, and whether memory writes stay accurate over long sessions. [Reddit announcement](https://www.reddit.com/r/LocalLLaMA/comments/1s1t36i/local_assistant_with_tool_based_memory_knowledge/), [GitHub repo](https://github.com/RinMar/Minerva)
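The write path described above, where the model emits `FactEdge` triplets and a background orchestrator resolves them into the graph, can be sketched as follows. `FactEdge` is the name used in the post; the fields and the naive exact-match resolution rule are illustrative assumptions (the real orchestrator presumably does fuzzier entity resolution).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FactEdge:
    """A (subject, predicate, object) triplet as the model might emit it.

    Field names are assumptions; only the class name comes from the post.
    """
    subject: str
    predicate: str
    obj: str


def resolve(incoming: FactEdge, graph: set) -> set:
    """Toy background-orchestrator step: merge a new triplet into the graph.

    Replaces any existing edge with the same subject and predicate, i.e.
    naive entity resolution by exact string match.
    """
    kept = {
        e for e in graph
        if not (e.subject == incoming.subject and e.predicate == incoming.predicate)
    }
    kept.add(incoming)
    return kept
```

Even this toy version shows why clean entity resolution matters: if "Alice" and "alice" are not unified, updates silently fork into duplicate edges.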
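The hybrid read path, vector recall for candidates followed by graph hops for structure, reduces to a short sketch. Everything here is a generic illustration of the pattern under stated assumptions (in-memory dicts for embeddings, an undirected edge set), not Minerva's implementation.

```python
import math
from collections import deque


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def hybrid_retrieve(query_vec, node_vecs, edges, top_k=2, hops=1):
    """Vector recall finds seed nodes; graph hops expand to neighbors.

    node_vecs: {node_id: embedding}; edges: set of (src, dst), undirected.
    Returns the set of node ids reachable within `hops` of the seeds.
    """
    # 1. Semantic recall: rank nodes by similarity to the query embedding.
    seeds = sorted(
        node_vecs, key=lambda n: cosine(query_vec, node_vecs[n]), reverse=True
    )[:top_k]

    # 2. Graph expansion: BFS out `hops` steps to recover connected structure.
    neighbors = {}
    for s, d in edges:
        neighbors.setdefault(s, set()).add(d)
        neighbors.setdefault(d, set()).add(s)

    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```

The expansion step is what flat RAG lacks: a fact that never matches the query vector can still surface because it is one hop from a fact that does.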
// TAGS
minerva · agent · embedding · rag · llm · open-source · self-hosted

DISCOVERED

2026-03-23

PUBLISHED

2026-03-23

RELEVANCE

8/10

AUTHOR

Better_Carrot7158