Theory Documentation

Deep technical analysis and theoretical foundations of Memoir's core components.

These documents provide comprehensive explanations of the algorithms, design decisions, and architectural patterns used throughout the Memoir system.

Component Theory

Overview

The theory documentation explores three fundamental aspects of Memoir:

Classifier Theory

Detailed analysis of the two classifier approaches (SemanticClassifier in classifier/semantic.py and IntelligentClassifier in classifier/intelligent.py), including their algorithms, performance characteristics, and use cases.

  • SemanticClassifier: High-performance, cache-optimized classification with pattern matching fallbacks
  • IntelligentClassifier: Advanced multi-stage classification with memory-worthiness detection and event extraction

Search Theory

In-depth exploration of the three retrieval pipelines: two in-engine (single-stage and tiered drill-down in IntelligentSearchEngine) and one caller-driven (LLM-free primitives consumed by an outer agent).

  • IntelligentSearchEngine (mode="single"): one-shot LLM path selection (215-570ms)
  • IntelligentSearchEngine (mode="tiered"): staged drill-down (L1 → optional L2 → key pick) that mirrors the caller-driven flow but runs inside the engine (~1-2s; better scaling with store size)
  • Caller-driven tiered retrieval: LLM-free summarize / get primitives, picking driven by an outer agent (~100-300ms, zero memoir-side tokens)
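The caller-driven pipeline above can be sketched as a pair of LLM-free primitives over a path-addressed store, with an outer agent doing the picking. Everything here (`MemoryStore`, `summarize`, `get`, the tree layout) is an illustrative assumption, not Memoir's real interface.

```python
# Hedged sketch of the caller-driven tiered flow: memoir-side code exposes
# cheap summarize/get primitives (no LLM, zero memoir-side tokens); the
# drill-down decisions live in the calling agent.
class MemoryStore:
    def __init__(self, tree):
        self._tree = tree  # nested dict: path segment -> subtree or leaf

    def summarize(self, path=()):
        """L1 primitive: list child keys under a path prefix."""
        node = self._resolve(path)
        return sorted(node) if isinstance(node, dict) else []

    def get(self, path):
        """Leaf primitive: fetch the stored memory at a full path."""
        return self._resolve(path)

    def _resolve(self, path):
        node = self._tree
        for key in path:
            node = node[key]
        return node

store = MemoryStore({
    "profile": {"identity": "Name: Ada"},
    "timeline": {"2024-06-01": "Moved to Paris"},
})

# The outer agent drives the loop: summarize -> pick a branch -> get.
top = store.summarize()                       # ['profile', 'timeline']
value = store.get(("timeline", "2024-06-01"))  # 'Moved to Paris'
```

Because each step is a dictionary traversal rather than an LLM call, the ~100-300ms budget is dominated by the agent's own deliberation, not by the store.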

Memento Theory

Comprehensive examination of the memento pattern implementation for ProfileMemento, TimelineMemento, and LocationMemento, explaining dimensional memory organization.

  • ProfileMemento: Identity and biographical information with replacement semantics
  • TimelineMemento: Chronological event organization by date
  • LocationMemento: Spatial memory management with geographic normalization
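The contrast between replacement semantics (profile) and chronological accumulation (timeline) can be made concrete with a minimal sketch. These two classes are simplified stand-ins; the real ProfileMemento and TimelineMemento implementations may differ.

```python
# Illustrative sketch of two dimensional memento semantics, assuming the
# behaviors described above; not Memoir's actual classes.
class ProfileMemento:
    """Identity facts use replacement semantics: newer values overwrite."""
    def __init__(self):
        self.fields = {}

    def record(self, key, value):
        self.fields[key] = value  # replace, never accumulate

class TimelineMemento:
    """Events are keyed by date and accumulate chronologically."""
    def __init__(self):
        self.events = {}

    def record(self, date, event):
        self.events.setdefault(date, []).append(event)

profile = ProfileMemento()
profile.record("city", "Berlin")
profile.record("city", "Paris")   # replaces Berlin: only the latest survives

timeline = TimelineMemento()
timeline.record("2024-06-01", "Moved to Paris")  # appended under its date
```

The design choice follows the cognitive dimensions: an identity fact has one current value, while a timeline is meaningful only if every event is retained.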

Research Foundations

Pointers to the academic and industry research that grounds Memoir's design — hierarchical text classification, agent memory architectures, episodic/semantic memory frameworks, and content-addressed storage. Use this page when situating Memoir against MemGPT, A-MEM, CoALA, Generative Agents, and related lines of work.

Key Insights

  • Performance vs Accuracy Trade-offs: Each component offers multiple implementations optimized for different use cases
  • Hierarchical Organization: Leveraging semantic paths for O(log n) operations instead of O(n) vector searches
  • Dimensional Separation: Organizing memories along natural human cognitive dimensions (identity, time, space)
  • Git-like Versioning: Bringing version control concepts to AI memory management

Performance Benchmarks

Component               Fast Implementation     Intelligent Implementation
Classifier              1-5ms (cached)          200-1000ms (LLM)
Search (single)         N/A                     215-570ms (1 LLM call)
Search (tiered)         N/A                     ~1-2s (2-3 LLM calls)
Search (caller-driven)  ~100-300ms (no LLM)     N/A
Storage                 20-30ms                 20-30ms

Architecture Benefits

  1. 10-50x faster search than traditional vector approaches
  2. Transparent, interpretable ranking mechanisms
  3. Flexible trade-offs between speed and understanding
  4. Hierarchical exploration of memory spaces