Memoria Modular Memory Framework (Sarin et al., 2025)

URL: https://arxiv.org/abs/2512.12686

The paper proposes Memoria, a modular memory framework that augments LLM-based conversational systems with persistent, interpretable, and context-rich memory. The framework integrates dynamic session-level summarization with a weighted knowledge-graph-based user-modelling engine. The key contribution is the modular composition: separable summarization, user-modelling, retrieval, and update components.
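The separable components described above can be sketched as independent interfaces wired into one pipeline. This is a minimal illustration of the composition pattern, not Memoria's actual API; all class and method names here are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Protocol


class Summarizer(Protocol):
    """Session-level summarization component (interface only)."""
    def summarize(self, turns: list[str]) -> str: ...


class UserModel(Protocol):
    """User-modelling component: absorbs summaries, answers queries."""
    def update(self, summary: str) -> None: ...
    def query(self, topic: str) -> list[str]: ...


# Trivial stand-in implementations so the sketch runs end to end.
class LastTurnSummarizer:
    def summarize(self, turns: list[str]) -> str:
        return turns[-1] if turns else ""


@dataclass
class KeywordUserModel:
    facts: list[str] = field(default_factory=list)

    def update(self, summary: str) -> None:
        self.facts.append(summary)

    def query(self, topic: str) -> list[str]:
        return [f for f in self.facts if topic in f]


@dataclass
class MemoryPipeline:
    """Composes the separable components; each is swappable per use case."""
    summarizer: Summarizer
    user_model: UserModel

    def ingest_session(self, turns: list[str]) -> None:
        # Update step: fold a session summary into the user model.
        self.user_model.update(self.summarizer.summarize(turns))

    def retrieve(self, topic: str) -> list[str]:
        # Retrieval step: delegate to the user-modelling component.
        return self.user_model.query(topic)
```

The point of the sketch is that each component sits behind its own interface, so any one of them can be tuned or replaced without touching the others.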

Adopted

Memoria is one node in the MemGPT-to-ClawVM agent-memory lineage; cited in this graph's recent-supporting-evidence section as evidence of the user-space substrate-rebuild trajectory's continued evolution. The weighted knowledge-graph user-modelling engine is structurally similar to the typed-edge graph this DeepContext convention uses; both reassemble substrate-shaped relational structure on top of substrates that do not natively carry it.

Not adopted (yet)

Memoria's modularity is a strength from the inadequate-substrate position's perspective: by separating concerns, the framework can be tuned per use case. The substrate-layer position counters that this modularity is unnecessary if the substrate carries the relational structure as a primitive; the components Memoria modularizes are then unified in the substrate's persistent state graph.

Sources

Relations