Episodic Memory

Long-term storage for experiences, conversations, and events. Organized chronologically with rich metadata and hybrid retrieval. Entries are promoted from Working Memory during sleep consolidation.

Schema

Episodic Memory uses an auto-incrementing rowid with a unique text id:

| Column | Type | Description |
| --- | --- | --- |
| rowid | INTEGER | Auto-increment primary key |
| id | TEXT | UUIDv4 unique identifier |
| content | TEXT | Full experience record |
| source | TEXT | Origin (user, tool, observation) |
| timestamp | TEXT | Application-level timestamp |
| session_id | TEXT | Session grouping key (default: 'default') |
| importance | REAL | 0.0–1.0 importance score (default: 0.5) |
| metadata_json | TEXT | Arbitrary JSON metadata blob |
| summary_of | TEXT | IDs of the original Working Memory entries this summarizes |
| created_at | TIMESTAMP | Row creation time |
| recall_count | INTEGER | Number of times recalled (default: 0) |
| last_recalled | TIMESTAMP | Last recall time (default: NULL) |
| valid_until | TIMESTAMP | TTL expiry time |
| superseded_by | TEXT | ID of the newer version |
| scope | TEXT | Visibility scope (default: 'global') |
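The schema above can be sketched as SQLite DDL. This is an illustrative reconstruction from the column table, not the project's actual migration; column names and defaults follow the table, while constraints (NOT NULL, UNIQUE) are assumptions:

```python
import sqlite3

# Hypothetical DDL reconstructed from the schema table above.
# rowid is SQLite's implicit auto-increment primary key.
DDL = """
CREATE TABLE IF NOT EXISTS episodic_memory (
  id            TEXT UNIQUE NOT NULL,              -- UUIDv4
  content       TEXT NOT NULL,                     -- full experience record
  source        TEXT,                              -- user, tool, observation
  timestamp     TEXT,                              -- application-level timestamp
  session_id    TEXT DEFAULT 'default',
  importance    REAL DEFAULT 0.5,                  -- 0.0-1.0
  metadata_json TEXT,
  summary_of    TEXT,                              -- consolidated Working Memory ids
  created_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  recall_count  INTEGER DEFAULT 0,
  last_recalled TIMESTAMP DEFAULT NULL,
  valid_until   TIMESTAMP,                         -- TTL expiry
  superseded_by TEXT,
  scope         TEXT DEFAULT 'global'
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO episodic_memory (id, content, source) VALUES (?, ?, ?)",
    ("a-uuid", "example entry", "user"),
)
row = conn.execute(
    "SELECT rowid, session_id, importance FROM episodic_memory"
).fetchone()
```

Defaults apply on insert, so a minimal row still carries a session_id of 'default' and an importance of 0.5.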

Indexes

  • idx_episodic_session on session_id
  • idx_episodic_timestamp on timestamp
  • idx_episodic_source on source

FTS5

Full-text search is provided by the fts_episodes virtual table, whose content is synced from episodic_memory and kept up to date by INSERT/UPDATE/DELETE triggers.
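A minimal sketch of how such trigger-based syncing works, using a plain FTS5 table keyed by rowid. The actual fts_episodes may be configured differently (for example as an external-content FTS5 table), so treat this as illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE episodic_memory (id TEXT, content TEXT);

-- Illustrative FTS5 mirror of the content column.
CREATE VIRTUAL TABLE fts_episodes USING fts5(content);

-- Triggers keep the FTS index in lockstep with the base table.
CREATE TRIGGER episodic_ai AFTER INSERT ON episodic_memory BEGIN
  INSERT INTO fts_episodes (rowid, content) VALUES (new.rowid, new.content);
END;
CREATE TRIGGER episodic_ad AFTER DELETE ON episodic_memory BEGIN
  DELETE FROM fts_episodes WHERE rowid = old.rowid;
END;
CREATE TRIGGER episodic_au AFTER UPDATE ON episodic_memory BEGIN
  UPDATE fts_episodes SET content = new.content WHERE rowid = old.rowid;
END;
""")

conn.execute(
    "INSERT INTO episodic_memory VALUES ('id-1', 'user approved the mockup')"
)
hits = conn.execute(
    "SELECT rowid FROM fts_episodes WHERE fts_episodes MATCH 'mockup'"
).fetchall()
```

Because the triggers fire inside the same transaction as the base-table write, the FTS index can never drift out of sync.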

Vector embeddings are stored in vec_episodes (sqlite-vec, int8[384]) and are generated at recall time.
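Storing int8[384] implies the float embedding is quantized to signed bytes before insertion. A minimal sketch of symmetric int8 quantization, assuming max-magnitude scaling; sqlite-vec's own quantization scheme may differ:

```python
def quantize_int8(vec, scale=None):
    """Map floats into [-127, 127], scaled by the vector's max magnitude.

    Illustrative only: the scale would need to be stored (or fixed
    globally) so that distances remain comparable across vectors.
    """
    if scale is None:
        scale = max(abs(x) for x in vec) or 1.0
    return [max(-127, min(127, round(x / scale * 127))) for x in vec]

q = quantize_int8([0.5, -1.0, 0.25, 0.0])
```

This trades a little precision for a 4x storage reduction versus float32, which is why int8 is a common choice for on-disk vector indexes.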

Hybrid Scoring

Retrieval combines text and vector signals:

```
score = (vector_similarity * 0.5)
      + (fts_rank * 0.3)
      + (temporal_proximity * 0.2)
```

| Signal | Weight | Description |
| --- | --- | --- |
| Vector similarity | 0.5 | Semantic closeness to the query embedding |
| FTS5 rank | 0.3 | Full-text match quality via BM25 |
| Temporal proximity | 0.2 | Recency of the memory |
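The weighted sum can be sketched directly. This assumes each signal has already been normalized to [0, 1] (the normalization step is not specified here):

```python
# Weights from the hybrid scoring table above.
WEIGHTS = {"vector": 0.5, "fts": 0.3, "temporal": 0.2}

def hybrid_score(vector_similarity, fts_rank, temporal_proximity):
    """Combine normalized text, vector, and recency signals into one score."""
    return (vector_similarity * WEIGHTS["vector"]
            + fts_rank * WEIGHTS["fts"]
            + temporal_proximity * WEIGHTS["temporal"])

# A semantically close, moderately matching, very recent memory:
score = hybrid_score(0.8, 0.5, 1.0)
```

Because the weights sum to 1.0, the combined score stays in [0, 1] and results from different queries remain comparable.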

Consolidation Pipeline

Episodic entries are created during sleep consolidation, not directly by the agent:


```mermaid
flowchart TD
  WM[Working Memory] -->|age > TTL/2| C{Consolidation}
  C -->|group by source| G[Group entries]
  G -->|summarize| S[LLM or AAAK summary]
  S -->|promote| EM[Episodic Memory]
  C -->|evict originals| DEL[Evict from Working]
```

The summary_of column in the episodic record tracks which Working Memory entries were consolidated to produce it.
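A hypothetical sketch of the group-summarize-promote step; function and field names here are illustrative, not the library's API:

```python
from collections import defaultdict

def consolidate(entries, summarize):
    """Group expired Working Memory entries by source and promote
    one summarized episodic record per group, recording the original
    entry ids in summary_of (comma-separated, an assumed encoding)."""
    groups = defaultdict(list)
    for entry in entries:
        groups[entry["source"]].append(entry)

    episodic = []
    for source, group in groups.items():
        episodic.append({
            "content": summarize([e["content"] for e in group]),
            "source": source,
            "summary_of": ",".join(e["id"] for e in group),
        })
    return episodic

# Stand-in summarizer; the real pipeline uses an LLM or AAAK summary.
records = consolidate(
    [{"id": "w1", "source": "user", "content": "a"},
     {"id": "w2", "source": "user", "content": "b"}],
    summarize=lambda texts: " / ".join(texts),
)
```

The summary_of field is what lets a later audit trace an episodic record back to the Working Memory entries that produced it.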

Session Grouping

Episodic entries are grouped by session_id for chronological recall:

```python
# Store with session (goes to Working Memory first)
mem.remember(
    content="User approved the design mockup.",
    session_id="design-review-2026-04-25",
)

# Recall with session filter
results = mem.recall("design mockup", session_id="design-review-2026-04-25")
```

Performance

| Metric | Value |
| --- | --- |
| Median retrieval | 85 ms |
| Storage per entry | ~2 KB (compressed) |
| Queryable fields | content, source, session_id, timestamp, date range |

Best Practice

Use descriptive source values and meaningful metadata_json fields for episodic memories. These are indexed and used in retrieval scoring. Good sources: "user-feedback", "tool-result", "agent-decision".