Python SDK

Complete reference for the Mnemosyne Python API.

Mnemosyne Class

The main interface for interacting with Mnemosyne's memory system.

Constructor

```python
Mnemosyne(
  session_id: str = "default",
  db_path: Path | None = None,
  bank: str | None = None,
  author_id: str | None = None,
  author_type: str | None = None,
  channel_id: str | None = None,
) -> Mnemosyne
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| session_id | str | "default" | Session identifier for scoping working memories |
| db_path | Path \| None | None | Path to the SQLite database file. Defaults to ~/.hermes/mnemosyne/data/mnemosyne.db |
| bank | str \| None | None | Named memory bank. Each bank gets its own SQLite file under data_dir/banks/<name>/ |
| author_id | str \| None | None | Author identifier for multi-agent identity (v2.1) |
| author_type | str \| None | None | Author category: "human", "agent", or "system" (v2.1) |
| channel_id | str \| None | None | Channel for cross-session shared memory. Defaults to session_id (v2.1) |

```python
from mnemosyne import Mnemosyne

# Default session
mem = Mnemosyne()

# Named session
mem = Mnemosyne(session_id="agent-42")

# Custom database path
mem = Mnemosyne(db_path="./my-memories.db", session_id="agent-42")

# Named memory bank (isolated SQLite)
mem = Mnemosyne(bank="work")
```

remember

Store a memory in Working Memory. Returns the generated memory ID.

```python
remember(
  content: str,
  source: str = "conversation",
  importance: float = 0.5,
  trust_tier: str | None = None,
  metadata: dict | None = None,
  valid_until: str | None = None,
  scope: str = "session",
  extract_entities: bool = False,
  extract: bool = False,
) -> str
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| content | str | | The text content to store |
| source | str | "conversation" | Origin of the memory (e.g. "user", "system", "inferred") |
| importance | float | 0.5 | Importance score from 0.0 to 1.0, affects retrieval ranking |
| trust_tier | str \| None | None | Trust tier label for memory provenance tracking |
| metadata | dict \| None | None | Optional arbitrary metadata dictionary |
| valid_until | str \| None | None | Optional expiry timestamp (ISO 8601) |
| scope | str | "session" | Scope for the memory: "session" or "global" |
| extract_entities | bool | False | If True, extract named entities (mentions, hashtags, proper nouns) via regex + Levenshtein matching and store as TripleStore triples |
| extract | bool | False | If True, extract 2–5 factual statements via LLM and store as triple facts |

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Basic storage
mem.remember("User prefers dark mode.")

# With entity extraction — captures "PostgreSQL" as a triple
mem.remember("We decided to use PostgreSQL.", extract_entities=True)

# With fact extraction — LLM pulls structured facts
mem.remember(
  "The project deadline moved to June 15th. Alice is the lead.",
  extract=True,
  importance=0.8,
)
```

recall

Retrieve relevant memories using hybrid vector + full-text search with configurable scoring weights and temporal decay.

```python
recall(
  query: str,
  top_k: int = 5,
  *,
  from_date: str | None = None,
  to_date: str | None = None,
  source: str | None = None,
  topic: str | None = None,
  author_id: str | None = None,
  author_type: str | None = None,
  channel_id: str | None = None,
  veracity: str | None = None,
  memory_type: str | None = None,
  temporal_weight: float = 0.0,
  query_time: str | None = None,
  temporal_halflife: float | None = None,
  vec_weight: float | None = None,
  fts_weight: float | None = None,
  importance_weight: float | None = None,
) -> list[dict]
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | str | | Natural language query to match against stored memories |
| top_k | int | 5 | Maximum number of results to return |
| from_date | str \| None | None | ISO 8601 timestamp; only return memories on or after this date |
| to_date | str \| None | None | ISO 8601 timestamp; only return memories on or before this date |
| source | str \| None | None | Filter by source label (e.g. "conversation", "user") |
| topic | str \| None | None | Filter by topic keyword |
| author_id | str \| None | None | Filter by specific author (v2.1) |
| author_type | str \| None | None | Filter by author category: "human", "agent", "system" (v2.1) |
| channel_id | str \| None | None | Filter by channel for cross-session recall (v2.1) |
| veracity | str \| None | None | Filter by confidence level: "stated", "inferred", "tool", "imported", "unknown" (v2.3) |
| memory_type | str \| None | None | Filter by memory type: "FACT", "PREFERENCE", "DECISION", etc. (v2.3) |
| temporal_weight | float | 0.0 | How much recency decay influences the final score (0.0–1.0). Default 0.0 (disabled) |
| query_time | str \| None | None | ISO 8601 timestamp to use as "now" for temporal decay calculations |
| temporal_halflife | float \| None | None | Half-life in hours for the exponential recency decay. Default from MNEMOSYNE_TEMPORAL_HALFLIFE_HOURS (24h) |
| vec_weight | float \| None | None | Weight for vector similarity score. Default from MNEMOSYNE_VEC_WEIGHT (0.5) |
| fts_weight | float \| None | None | Weight for FTS5 text relevance score. Default from MNEMOSYNE_FTS_WEIGHT (0.3) |
| importance_weight | float \| None | None | Weight for the memory's importance rating. Default from MNEMOSYNE_IMPORTANCE_WEIGHT (0.2) |
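
The scoring internals are not exposed, but as a rough, self-contained sketch of how these defaults could combine (assumptions: the three weights blend linearly, and temporal_weight mixes in an exponential half-life decay — the actual formula may differ):

```python
def blended_score(vec_sim: float, fts_rank: float, importance: float,
                  age_hours: float,
                  vec_w: float = 0.5, fts_w: float = 0.3, imp_w: float = 0.2,
                  temporal_weight: float = 0.0,
                  halflife_hours: float = 24.0) -> float:
    """Illustrative only: linear blend of the three relevance signals,
    mixed with an exponential recency decay by temporal_weight."""
    base = vec_w * vec_sim + fts_w * fts_rank + imp_w * importance
    decay = 0.5 ** (age_hours / halflife_hours)  # halves every halflife_hours
    return (1 - temporal_weight) * base + temporal_weight * decay

# With temporal_weight=0.0 (the default), recency has no effect:
print(blended_score(0.8, 0.6, 0.5, age_hours=48))  # ≈ 0.68

# With temporal_weight=1.0, only recency matters; a day-old memory scores 0.5:
print(blended_score(0.8, 0.6, 0.5, age_hours=24, temporal_weight=1.0))
```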

Each result dict includes keys like id, content, source, importance, timestamp, score, and tier (indicating working vs. episodic).
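
A few common recall patterns, using the documented parameters (values illustrative):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Basic hybrid search
results = mem.recall("database decisions", top_k=3)

# Date- and source-filtered
results = mem.recall(
    "deployment issues",
    from_date="2025-01-01T00:00:00",
    source="conversation",
)

# Favor recent memories: 30% of the score comes from recency decay
results = mem.recall("current priorities", temporal_weight=0.3)

for r in results:
    print(r["score"], r["tier"], r["content"])
```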

get_context

Retrieve recent memories for context building, useful for injecting into prompts.

```python
get_context(
  limit: int = 10,
) -> list[dict]
```

Returns the most recent memories across working memory, ordered by recency. Only returns valid, non-superseded entries.
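
A common use is turning recent memories into a prompt preamble. A minimal sketch (assuming, as elsewhere in the SDK, that each entry carries a "content" key):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne(session_id="agent-42")

# Inject the five most recent memories into a prompt
context = mem.get_context(limit=5)
preamble = "\n".join(f"- {m['content']}" for m in context)
prompt = f"Known context:\n{preamble}\n\nUser question: ..."
```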

get_stats

Return statistics about the memory store.

```python
get_stats(
  author_id: str | None = None,
  author_type: str | None = None,
  channel_id: str | None = None,
) -> dict
```

Returns a dictionary with keys such as total_memories, total_sessions, sources, last_memory, database, mode, banks, and beam (with working_memory, episodic_memory, and triples sub-stats). Supports optional identity filters.
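
For example:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

stats = mem.get_stats()
print(stats["total_memories"], stats["total_sessions"])

# Narrow the stats to agent-authored memories (v2.1 identity filter)
agent_stats = mem.get_stats(author_type="agent")
```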

forget

Delete a specific memory by ID.

```python
forget(memory_id: str) -> bool
```

Returns True if the memory was found and deleted, False otherwise.

update

Update an existing memory's content or importance.

```python
update(
  memory_id: str,
  content: str | None = None,
  importance: float | None = None,
) -> bool
```

Only the fields you provide will be changed. Returns True on success.
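
Together with forget, this covers the basic memory lifecycle (illustrative):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()
mid = mem.remember("Deadline is May 1st.")

# Raise importance without touching the content
mem.update(mid, importance=0.9)

# Delete it; a second call returns False because it is already gone
assert mem.forget(mid)
assert not mem.forget(mid)
```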

invalidate

Mark a memory as superseded by another.

```python
invalidate(
  memory_id: str,
  replacement_id: str | None = None,
) -> bool
```

Sets the superseded_by field on the target memory, signalling that it is no longer the canonical version. Pass a replacement_id to link to the new memory.
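
A typical correction flow (illustrative):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()
old_id = mem.remember("The office is in Berlin.")
new_id = mem.remember("The office moved to Munich.")

# Keep the old memory for history, but point reads at the new one
mem.invalidate(old_id, replacement_id=new_id)
```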

sleep

Run the sleep/consolidation cycle — groups working memories by source, summarizes them, promotes to episodic memory, and evicts originals.

```python
sleep(dry_run: bool = False) -> dict
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| dry_run | bool | False | If True, reports what would happen without making changes |

Returns a summary dict of consolidation actions performed.
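
Preview first, then commit:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Dry run: report what consolidation would do, change nothing
print(mem.sleep(dry_run=True))

# Real run: summarize, promote to episodic memory, evict originals
report = mem.sleep()
```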

sleep_all_sessions

Run consolidation and degradation across all sessions (not just the current one).

```python
sleep_all_sessions(dry_run: bool = False) -> dict
```

Returns a dict with per-session consolidation and degradation summaries. (v2.3)

degrade_episodic

Run the tiered degradation cycle that compresses old episodic memories.

```python
degrade_episodic(dry_run: bool = False) -> dict
```

Returns a dict with tier1_to_tier2 and tier2_to_tier3 counts. Called automatically during sleep(). Configure via MNEMOSYNE_TIER2_DAYS, MNEMOSYNE_TIER3_DAYS, and tier weight env vars. (v2.3)

get_contaminated

Surface non-stated memories for audit and review.

```python
get_contaminated(limit: int = 50, min_importance: float = 0.0) -> list[dict]
```

Returns episodic memories where veracity is NOT "stated": everything inferred, tool-generated, imported, or unknown. Sorted by importance descending. Use for periodic cleanup of AI-inferred memories. (v2.3)
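
A periodic audit loop might look like this (illustrative, assuming the usual id/content keys; the follow-up action is up to you):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Review the most important non-stated memories first
for m in mem.get_contaminated(limit=20, min_importance=0.5):
    print(m["id"], m["content"])
    # ...then mem.forget(m["id"]) or mem.invalidate(m["id"]) as appropriate
```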

enable_streaming

Enable event streaming for this memory instance. Wires the stream into BeamMemory so all write operations emit events.

```python
enable_streaming() -> Mnemosyne
```

Call once after construction to activate the streaming subsystem. Returns self for chaining.

compress

Compress memory content using dictionary, RLE, or semantic strategies.

```python
compress(
  content: str,
  method: str = "auto",
) -> tuple[str, CompressionStats]
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| content | str | | The memory content to compress |
| method | str | "auto" | Compression method: "dict", "rle", "semantic", or "auto" (tries dict first, falls back to RLE) |

Returns a tuple of (compressed_content, CompressionStats). CompressionStats includes original_size, compressed_size, ratio, method, and savings_percent.

decompress

Decompress content that was compressed with a given method.

```python
decompress(
  content: str,
  method: str = "dict",
) -> str
```
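
A sketch of a compress/decompress round trip (assuming the non-semantic methods are lossless and that CompressionStats.method reports the method actually used):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()
text = "User prefers dark mode. User prefers dark mode in every app."

compressed, stats = mem.compress(text)  # "auto": dict first, RLE fallback
print(f"{stats.method}: {stats.original_size} -> {stats.compressed_size} "
      f"({stats.savings_percent:.0f}% saved)")

# Round-trip with the method compress() actually chose
restored = mem.decompress(compressed, method=stats.method)
```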

compress_memories

Compress a batch of memories.

```python
compress_memories(
  memories: list,
  method: str = "auto",
) -> tuple[list, CompressionStats]
```

Returns a tuple of (compressed_memories, aggregate_stats). Each memory in the list is a dict with at least a "content" key. Compressed memories get _compressed and _compression_method metadata keys.

detect_patterns

Detect patterns across all working + episodic memories (or a provided list).

```python
detect_patterns(memories: list | None = None) -> list[DetectedPattern]
```

If memories is None, uses all valid working + episodic memories. Returns a list of DetectedPattern objects, each with pattern_type ("temporal", "content", "sequence"), description, confidence, samples, and metadata.

summarize_patterns

Generate a human-readable summary of detected patterns.

```python
summarize_patterns(memories: list | None = None) -> dict
```

Returns a dict with total_memories, patterns_found, temporal_patterns, content_patterns, sequence_patterns, and top_pattern.

get_all_memories

Return all working + episodic rows for analysis.

```python
get_all_memories() -> list[dict]
```

Returns all valid, non-superseded, non-expired memories across working and episodic tiers, scoped to the current session and global memories. Useful as input to detect_patterns() and summarize_patterns().

sync_to

Compute a delta for a peer — returns changes since the last sync checkpoint.

```python
sync_to(
  peer_id: str,
  table: str = "working_memory",
) -> dict
```

Returns {peer_id, table, delta, count}. The delta is a list of memory dicts that have changed since the last sync.

sync_from

Apply a delta received from a peer.

```python
sync_from(
  peer_id: str,
  delta: list,
  table: str = "working_memory",
) -> dict
```

Returns {peer_id, table, stats, checkpoint} with import statistics and an updated sync checkpoint.

V2 Properties

The Mnemosyne instance exposes several v2 subsystems as lazy-initialized properties:

| Property | Type | Description |
| --- | --- | --- |
| stream | MemoryStream | Thread-safe event stream with push (on/on_any callbacks) and pull (listen iterator) patterns for real-time memory notifications |
| compressor | MemoryCompressor | Dictionary-based, RLE, and semantic compression strategies for reducing memory footprint |
| patterns | PatternDetector | Detects temporal (hour/weekday), content (keyword frequency, co-occurrence), and sequence patterns across memories |
| delta_sync | DeltaSync | Incremental synchronization between Mnemosyne instances with checkpointed resume support |
| plugins | PluginManager | Discovers and manages plugins from ~/.hermes/mnemosyne/plugins/. Built-in: LoggingPlugin, MetricsPlugin, FilterPlugin |

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Access the stream for real-time notifications
# (EventType is part of the streaming subsystem; import it from wherever
# your install exposes it)
mem.stream.on(EventType.MEMORY_ADDED, lambda event: print(f"New memory: {event}"))

# Listen to the stream as an iterator (blocks, yielding events as they occur)
for event in mem.stream.listen():
    print(f"Event: {event.event_type.name} - {event.memory_id}")

# Detect patterns across stored memories
patterns = mem.detect_patterns()

# Or get a summary
summary = mem.summarize_patterns()

# Incremental sync to a peer
delta = mem.sync_to(peer_id="remote-1")

# Apply a sync from a peer (received_delta comes from the peer's sync_to)
result = mem.sync_from(peer_id="remote-1", delta=received_delta)
```

MemoryStream API

The stream property exposes a MemoryStream with these methods:

| Method | Signature | Description |
| --- | --- | --- |
| on | on(event_type: EventType, callback: Callable) | Register a callback for a specific event type |
| on_any | on_any(callback: Callable) | Register a callback for all event types |
| off | off(event_type: EventType, callback: Callable) | Remove a specific callback |
| off_any | off_any(callback: Callable) | Remove an any-event callback |
| emit | emit(event: MemoryEvent) | Emit an event to all callbacks and iterators |
| listen | listen(event_types=None) -> Iterator[MemoryEvent] | Return an iterator that yields events as they occur |
| get_buffer | get_buffer(event_types=None, since=None) -> list[MemoryEvent] | Get buffered events, optionally filtered |
| clear_buffer | clear_buffer() | Clear the event buffer |

DeltaSync API

The delta_sync property exposes a DeltaSync with these methods:

| Method | Signature | Description |
| --- | --- | --- |
| get_checkpoint | get_checkpoint(peer_id) -> Optional[SyncCheckpoint] | Get the last sync checkpoint for a peer |
| set_checkpoint | set_checkpoint(peer_id, checkpoint) | Set the sync checkpoint for a peer |
| compute_delta | compute_delta(peer_id, table='working_memory') -> list[dict] | Compute changes since last sync |
| apply_delta | apply_delta(peer_id, delta, table='working_memory') -> dict | Apply a received delta |
| sync_to | sync_to(peer_id, table='working_memory') -> dict | Compute and checkpoint a delta (convenience) |
| sync_from | sync_from(peer_id, delta, table='working_memory') -> dict | Apply and checkpoint a delta (convenience) |

PatternDetector API

The patterns property exposes a PatternDetector with these methods:

| Method | Signature | Description |
| --- | --- | --- |
| detect_temporal | detect_temporal(memories) -> list[DetectedPattern] | Detect hour-of-day and day-of-week patterns |
| detect_content | detect_content(memories) -> list[DetectedPattern] | Detect frequent keywords and co-occurrence |
| detect_sequence | detect_sequence(memories) -> list[DetectedPattern] | Detect source-based sequence patterns |
| detect_all | detect_all(memories) -> list[DetectedPattern] | Run all detectors, sorted by confidence |
| summarize_patterns | summarize_patterns(memories) -> dict | Human-readable summary of all patterns |

Scratchpad

A lightweight temporary scratchpad for transient notes that don't belong in the memory hierarchy.

scratchpad_write

```python
scratchpad_write(content: str) -> str
```

Appends a note to the scratchpad. Returns the note ID.

scratchpad_read

```python
scratchpad_read() -> list[dict]
```

Returns all scratchpad entries for the current session.

scratchpad_clear

```python
scratchpad_clear() -> None
```

Clears all scratchpad entries for the current session.
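
For example:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

note_id = mem.scratchpad_write("TODO: re-check the June deadline")

for note in mem.scratchpad_read():
    print(note)

mem.scratchpad_clear()  # wipe this session's transient notes
```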

Maintenance & Export

consolidation_log

View recent consolidation activity.

```python
consolidation_log(limit: int = 10) -> list[dict]
```

Returns the most recent consolidation log entries, describing which memories were grouped, summarized, and promoted.

export_to_file

Export the entire memory database to a JSON file.

```python
export_to_file(output_path: str) -> dict
```

Returns a dict with export metadata (status, path, working_memory_count, episodic_memory_count, scratchpad_count, legacy_memories_count, triples_count).

import_from_file

Import memories from a previously exported JSON file.

```python
import_from_file(
  input_path: str,
  force: bool = False,
) -> dict
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| input_path | str | | Path to the export file |
| force | bool | False | If True, overwrite existing entries on conflict |

Returns a dict with import statistics (beam, legacy, triples).
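
A backup/restore round trip (the file path and bank name are illustrative):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

info = mem.export_to_file("./backup.json")
print(info["status"], info["working_memory_count"], info["triples_count"])

# Restore into an isolated bank, overwriting on conflict
restore = Mnemosyne(bank="restore-test")
print(restore.import_from_file("./backup.json", force=True))
```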

TripleStore Class

A separate class for managing structured subject-predicate-object triples (temporal knowledge graph).

```python
from mnemosyne import TripleStore

ts = TripleStore()
```

add

```python
add(
  subject: str,
  predicate: str,
  object: str,
  valid_from: str | None = None,
  source: str = "inferred",
  confidence: float = 1.0,
) -> int
```

Adds a triple and returns its ID. Auto-invalidates previous triples with the same (subject, predicate) pair.

query

```python
query(
  subject: str | None = None,
  predicate: str | None = None,
  object: str | None = None,
  as_of: str | None = None,
) -> list[dict]
```

Query triples by any combination of subject, predicate, or object. Pass as_of for temporal point-in-time queries.
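
Putting add and query together (subjects and timestamps illustrative):

```python
from mnemosyne import TripleStore

ts = TripleStore()
ts.add("alice", "role", "lead", source="stated", confidence=0.9)
ts.add("alice", "role", "architect")  # auto-invalidates the previous role

# Current value for (alice, role)
print(ts.query(subject="alice", predicate="role"))

# Point-in-time query: the role as of a given instant
print(ts.query(subject="alice", predicate="role", as_of="2025-06-01T00:00:00"))
```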

export_all / import_all

```python
export_all() -> list[dict]
import_all(triples: list[dict], force: bool = False) -> dict
```

Bulk export and import of triples.

Module-Level Convenience Functions

Mnemosyne provides module-level functions that operate on a default instance:

```python
from mnemosyne import remember, recall, get_context, get_stats
from mnemosyne import forget, update, sleep, sleep_all_sessions
from mnemosyne import scratchpad_write, scratchpad_read, scratchpad_clear

# These are equivalent to using Mnemosyne() with session_id="default"
remember("Some important fact")
results = recall("important fact")
```

Hermes Plugin Tools

When using Mnemosyne as a Hermes plugin, the following tools are exposed:

| Tool | Description |
| --- | --- |
| mnemosyne_remember | Store a new memory with optional entity/fact extraction |
| mnemosyne_recall | Hybrid search with configurable weights and filters |
| mnemosyne_stats | Get memory system statistics |
| mnemosyne_triple_add | Add a structured subject-predicate-object triple |
| mnemosyne_triple_query | Query the temporal knowledge graph |
| mnemosyne_sleep | Run the consolidation sleep cycle |
| mnemosyne_scratchpad_write | Write a note to the agent scratchpad |
| mnemosyne_scratchpad_read | Read all scratchpad entries |
| mnemosyne_scratchpad_clear | Clear the scratchpad |
| mnemosyne_invalidate | Mark a memory as expired or superseded |
| mnemosyne_export | Export all memories to a JSON file |
| mnemosyne_update | Update an existing memory's content or importance |
| mnemosyne_forget | Delete a specific memory by ID |
| mnemosyne_import | Import memories from a file or external provider |
| mnemosyne_diagnose | Run PII-safe system diagnostics |

Type Hints

The Python SDK includes full type hints. Use a type checker (mypy, pyright) for static analysis and IDE autocomplete.