# Python SDK
Complete reference for the Mnemosyne Python API.
## Mnemosyne Class
The main interface for interacting with Mnemosyne's memory system.
### Constructor

```python
Mnemosyne(
    session_id: str = "default",
    db_path: Path | None = None,
    bank: str | None = None,
    author_id: str | None = None,
    author_type: str | None = None,
    channel_id: str | None = None,
) -> Mnemosyne
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| session_id | str | "default" | Session identifier for scoping working memories |
| db_path | Path \| None | None | Path to the SQLite database file. Defaults to ~/.hermes/mnemosyne/data/mnemosyne.db |
| bank | str \| None | None | Named memory bank. Each bank gets its own SQLite file under data_dir/banks/<name>/ |
| author_id | str \| None | None | Author identifier for multi-agent identity (v2.1) |
| author_type | str \| None | None | Author category: "human", "agent", or "system" (v2.1) |
| channel_id | str \| None | None | Channel for cross-session shared memory. Defaults to session_id (v2.1) |
```python
from mnemosyne import Mnemosyne

# Default session
mem = Mnemosyne()

# Named session
mem = Mnemosyne(session_id="agent-42")

# Custom database path
mem = Mnemosyne(db_path="./my-memories.db", session_id="agent-42")

# Named memory bank (isolated SQLite)
mem = Mnemosyne(bank="work")
```
### remember
Store a memory in Working Memory. Returns the generated memory ID.
```python
remember(
    content: str,
    source: str = "conversation",
    importance: float = 0.5,
    trust_tier: str | None = None,
    metadata: dict | None = None,
    valid_until: str | None = None,
    scope: str = "session",
    extract_entities: bool = False,
    extract: bool = False,
) -> str
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| content | str | — | The text content to store |
| source | str | "conversation" | Origin of the memory (e.g. "user", "system", "inferred") |
| importance | float | 0.5 | Importance score from 0.0 to 1.0; affects retrieval ranking |
| trust_tier | str \| None | None | Trust tier label for memory provenance tracking |
| metadata | dict \| None | None | Optional arbitrary metadata dictionary |
| valid_until | str \| None | None | Optional expiry timestamp (ISO 8601) |
| scope | str | "session" | Scope for the memory: "session" or "global" |
| extract_entities | bool | False | If True, extract named entities (mentions, hashtags, proper nouns) via regex + Levenshtein matching and store as TripleStore triples |
| extract | bool | False | If True, extract 2–5 factual statements via LLM and store as triple facts |
```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Basic storage
mem.remember("User prefers dark mode.")

# With entity extraction: captures "PostgreSQL" as a triple
mem.remember("We decided to use PostgreSQL.", extract_entities=True)

# With fact extraction: LLM pulls structured facts
mem.remember(
    "The project deadline moved to June 15th. Alice is the lead.",
    extract=True,
    importance=0.8,
)
```
### recall
Retrieve relevant memories using hybrid vector + full-text search with configurable scoring weights and temporal decay.
```python
recall(
    query: str,
    top_k: int = 5,
    *,
    from_date: str | None = None,
    to_date: str | None = None,
    source: str | None = None,
    topic: str | None = None,
    author_id: str | None = None,
    author_type: str | None = None,
    channel_id: str | None = None,
    veracity: str | None = None,
    memory_type: str | None = None,
    temporal_weight: float = 0.0,
    query_time: str | None = None,
    temporal_halflife: float | None = None,
    vec_weight: float | None = None,
    fts_weight: float | None = None,
    importance_weight: float | None = None,
) -> list[dict]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | str | — | Natural language query to match against stored memories |
| top_k | int | 5 | Maximum number of results to return |
| from_date | str \| None | None | ISO 8601 timestamp — only return memories on or after this date |
| to_date | str \| None | None | ISO 8601 timestamp — only return memories on or before this date |
| source | str \| None | None | Filter by source label (e.g. "conversation", "user") |
| topic | str \| None | None | Filter by topic keyword |
| author_id | str \| None | None | Filter by specific author (v2.1) |
| author_type | str \| None | None | Filter by author category: "human", "agent", "system" (v2.1) |
| channel_id | str \| None | None | Filter by channel for cross-session recall (v2.1) |
| veracity | str \| None | None | Filter by confidence level: "stated", "inferred", "tool", "imported", "unknown" (v2.3) |
| memory_type | str \| None | None | Filter by memory type: "FACT", "PREFERENCE", "DECISION", etc. (v2.3) |
| temporal_weight | float | 0.0 | How much recency decay influences the final score (0.0–1.0). Default 0.0 (disabled) |
| query_time | str \| None | None | ISO 8601 timestamp to use as "now" for temporal decay calculations |
| temporal_halflife | float \| None | None | Half-life in hours for the exponential recency decay. Default from MNEMOSYNE_TEMPORAL_HALFLIFE_HOURS (24h) |
| vec_weight | float \| None | None | Weight for vector similarity score. Default from MNEMOSYNE_VEC_WEIGHT (0.5) |
| fts_weight | float \| None | None | Weight for FTS5 text relevance score. Default from MNEMOSYNE_FTS_WEIGHT (0.3) |
| importance_weight | float \| None | None | Weight for the memory's importance rating. Default from MNEMOSYNE_IMPORTANCE_WEIGHT (0.2) |
Each result dict includes keys like id, content, source, importance, timestamp, score, and tier (indicating working vs. episodic).
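For example, a filtered recall that blends recency decay into the score (the weight values here are illustrative, not recommendations):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Hybrid search restricted to one source, with temporal decay enabled
results = mem.recall(
    "database decision",
    top_k=3,
    source="conversation",
    temporal_weight=0.3,      # weight recency at 0.3 in the final score
    temporal_halflife=48.0,   # 48-hour half-life instead of the 24h default
)
for r in results:
    print(f"[{r['tier']}] {r['score']:.3f} {r['content']}")
```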
### get_context
Retrieve recent memories for context building, useful for injecting into prompts.
```python
get_context(
    limit: int = 10,
) -> list[dict]
```
Returns the most recent working memories, newest first. Only valid, non-superseded entries are returned.
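A minimal prompt-injection sketch, assuming each entry carries a "content" key as recall results do:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne(session_id="agent-42")

# Turn the most recent memories into a prompt preamble
context = mem.get_context(limit=5)
preamble = "Known context:\n" + "\n".join(f"- {m['content']}" for m in context)
```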
### get_stats
Return statistics about the memory store.
```python
get_stats(
    author_id: str | None = None,
    author_type: str | None = None,
    channel_id: str | None = None,
) -> dict
```
Returns a dictionary with keys such as total_memories, total_sessions, sources, last_memory, database, mode, banks, and beam (with working_memory, episodic_memory, and triples sub-stats). Supports optional identity filters.
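For instance (key names as listed above):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

stats = mem.get_stats()
print(stats["total_memories"], stats["total_sessions"])

# Scope the numbers to memories written by humans only
human_stats = mem.get_stats(author_type="human")
```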
### forget
Delete a specific memory by ID.

```python
forget(memory_id: str) -> bool
```
Returns True if the memory was found and deleted, False otherwise.
### update
Update an existing memory's content or importance.

```python
update(
    memory_id: str,
    content: str | None = None,
    importance: float | None = None,
) -> bool
```
Only the fields you provide will be changed. Returns True on success.
### invalidate
Mark a memory as superseded by another.

```python
invalidate(
    memory_id: str,
    replacement_id: str | None = None,
) -> bool
```
Sets the superseded_by field on the target memory, signalling that it is no longer the canonical version. Pass a replacement_id to link to the new memory.
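The three methods compose into a simple correction workflow, sketched here:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

old_id = mem.remember("The office is in Berlin.")
new_id = mem.remember("The office moved to Munich.", importance=0.7)

mem.update(new_id, importance=0.9)             # raise importance, keep content
mem.invalidate(old_id, replacement_id=new_id)  # supersede but keep history
# ...or delete the stale memory outright:
# mem.forget(old_id)
```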
### sleep
Run the sleep/consolidation cycle — groups working memories by source, summarizes them, promotes them to episodic memory, and evicts the originals.

```python
sleep(dry_run: bool = False) -> dict
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| dry_run | bool | False | If True, reports what would happen without making changes |
Returns a summary dict of consolidation actions performed.
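A dry run is a safe way to preview the cycle before committing, for example:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Report what consolidation would do, without writing anything
plan = mem.sleep(dry_run=True)
print(plan)

# Run the cycle for real
summary = mem.sleep()
```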
### sleep_all_sessions
Run consolidation and degradation across all sessions (not just the current one).

```python
sleep_all_sessions(dry_run: bool = False) -> dict
```
Returns a dict with per-session consolidation and degradation summaries. (v2.3)
### degrade_episodic
Run the tiered degradation cycle that compresses old episodic memories.

```python
degrade_episodic(dry_run: bool = False) -> dict
```
Returns a dict with tier1_to_tier2 and tier2_to_tier3 counts. Called automatically during sleep(). Configure via MNEMOSYNE_TIER2_DAYS, MNEMOSYNE_TIER3_DAYS, and tier weight env vars. (v2.3)
### get_contaminated
Surface non-stated memories for audit and review.

```python
get_contaminated(limit: int = 50, min_importance: float = 0.0) -> list[dict]
```
Returns episodic memories where veracity is NOT "stated": everything inferred, tool-generated, imported, or unknown. Sorted by importance descending. Use for periodic cleanup of AI-inferred memories. (v2.3)
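A periodic audit might look like the following sketch (it assumes each result dict exposes id, veracity, and content keys, like other recall results):

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

# Review the most important memories the user never explicitly stated
for m in mem.get_contaminated(limit=20, min_importance=0.5):
    print(m["id"], m["veracity"], m["content"])
    # forget or invalidate anything that looks wrong, e.g.:
    # mem.forget(m["id"])
```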
### enable_streaming
Enable event streaming for this memory instance. Wires the stream into BeamMemory so all write operations emit events.

```python
enable_streaming() -> Mnemosyne
```
Call once after construction to activate the streaming subsystem. Returns self for chaining.
### compress
Compress memory content using dictionary, RLE, or semantic strategies.

```python
compress(
    content: str,
    method: str = "auto",
) -> tuple[str, CompressionStats]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| content | str | — | The memory content to compress |
| method | str | "auto" | Compression method: "dict", "rle", "semantic", or "auto" (tries dict first, falls back to RLE) |
Returns a tuple of (compressed_content, CompressionStats). CompressionStats includes original_size, compressed_size, ratio, method, and savings_percent.
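A round-trip sketch that uses the returned stats to pick the matching decompression method:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

compressed, stats = mem.compress("the quick brown fox " * 50, method="auto")
print(stats.method, f"{stats.savings_percent:.1f}% saved")

# Decompress with whichever method "auto" actually chose
original = mem.decompress(compressed, method=stats.method)
```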
### decompress
Decompress content that was compressed with a given method.

```python
decompress(
    content: str,
    method: str = "dict",
) -> str
```
### compress_memories
Compress a batch of memories.

```python
compress_memories(
    memories: list,
    method: str = "auto",
) -> tuple[list, CompressionStats]
```
Returns a tuple of (compressed_memories, aggregate_stats). Each memory in the list is a dict with at least a "content" key. Compressed memories get _compressed and _compression_method metadata keys.
### detect_patterns
Detect patterns across all working + episodic memories (or a provided list).

```python
detect_patterns(memories: list | None = None) -> list[DetectedPattern]
```
If memories is None, uses all valid working + episodic memories. Returns a list of DetectedPattern objects, each with pattern_type ("temporal", "content", "sequence"), description, confidence, samples, and metadata.
### summarize_patterns
Generate a human-readable summary of detected patterns.

```python
summarize_patterns(memories: list | None = None) -> dict
```
Returns a dict with total_memories, patterns_found, temporal_patterns, content_patterns, sequence_patterns, and top_pattern.
### get_all_memories
Return all working + episodic rows for analysis.

```python
get_all_memories() -> list[dict]
```
Returns all valid, non-superseded, non-expired memories across working and episodic tiers, scoped to the current session and global memories. Useful as input to detect_patterns() and summarize_patterns().
### sync_to
Compute a delta for a peer — returns changes since the last sync checkpoint.

```python
sync_to(
    peer_id: str,
    table: str = "working_memory",
) -> dict
```
Returns {peer_id, table, delta, count}. The delta is a list of memory dicts that have changed since the last sync.
### sync_from
Apply a delta received from a peer.

```python
sync_from(
    peer_id: str,
    delta: list,
    table: str = "working_memory",
) -> dict
```
Returns {peer_id, table, stats, checkpoint} with import statistics and an updated sync checkpoint.
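A minimal one-way sync between two local banks; in a real deployment the delta payload would travel over your own transport (HTTP, a message queue, etc.):

```python
from mnemosyne import Mnemosyne

local = Mnemosyne(bank="laptop")
remote = Mnemosyne(bank="server")

local.remember("note taken while offline")

# Compute the delta on the sender, apply it on the receiver
payload = local.sync_to(peer_id="server")   # {peer_id, table, delta, count}
result = remote.sync_from(peer_id="laptop", delta=payload["delta"])
print(result["stats"])
```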
## V2 Properties
The Mnemosyne instance exposes several v2 subsystems as lazy-initialized properties:
| Property | Type | Description |
|---|---|---|
| stream | MemoryStream | Thread-safe event stream with push (on/on_any callbacks) and pull (listen iterator) patterns for real-time memory notifications |
| compressor | MemoryCompressor | Dictionary-based, RLE, and semantic compression strategies for reducing memory footprint |
| patterns | PatternDetector | Detects temporal (hour/weekday), content (keyword frequency, co-occurrence), and sequence patterns across memories |
| delta_sync | DeltaSync | Incremental synchronization between Mnemosyne instances with checkpointed resume support |
| plugins | PluginManager | Discovers and manages plugins from ~/.hermes/mnemosyne/plugins/. Built-in: LoggingPlugin, MetricsPlugin, FilterPlugin |
```python
from mnemosyne import Mnemosyne, EventType  # EventType's import path is assumed

mem = Mnemosyne().enable_streaming()  # wire write operations into the stream

# Access the stream for real-time notifications
mem.stream.on(EventType.MEMORY_ADDED, lambda event: print(f"New memory: {event}"))

# Listen to the stream as an iterator (blocks while waiting for events)
for event in mem.stream.listen():
    print(f"Event: {event.event_type.name} - {event.memory_id}")

# Detect patterns across stored memories
patterns = mem.detect_patterns()

# Or get a summary
summary = mem.summarize_patterns()

# Incremental sync to a peer
delta = mem.sync_to(peer_id="remote-1")

# Apply a sync from a peer (received_delta is a placeholder for your transport)
result = mem.sync_from(peer_id="remote-1", delta=received_delta)
```
### MemoryStream API
The stream property exposes a MemoryStream with these methods:
| Method | Signature | Description |
|---|---|---|
| on | on(event_type: EventType, callback: Callable) | Register a callback for a specific event type |
| on_any | on_any(callback: Callable) | Register a callback for all event types |
| off | off(event_type: EventType, callback: Callable) | Remove a specific callback |
| off_any | off_any(callback: Callable) | Remove an any-event callback |
| emit | emit(event: MemoryEvent) | Emit an event to all callbacks and iterators |
| listen | listen(event_types=None) -> Iterator[MemoryEvent] | Return an iterator that yields events as they occur |
| get_buffer | get_buffer(event_types=None, since=None) -> list[MemoryEvent] | Get buffered events, optionally filtered |
| clear_buffer | clear_buffer() | Clear the event buffer |
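For instance, a push-style subscription that later detaches its callback and inspects the buffer (assuming EventType is importable from the top-level package):

```python
from mnemosyne import Mnemosyne, EventType  # EventType location is assumed

mem = Mnemosyne().enable_streaming()

def on_added(event):
    print("added:", event.memory_id)

mem.stream.on(EventType.MEMORY_ADDED, on_added)
mem.remember("streamed fact")

# Inspect what was buffered, then detach the callback
for event in mem.stream.get_buffer():
    print(event)
mem.stream.off(EventType.MEMORY_ADDED, on_added)
```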
### DeltaSync API
The delta_sync property exposes a DeltaSync with these methods:
| Method | Signature | Description |
|---|---|---|
| get_checkpoint | get_checkpoint(peer_id) -> Optional[SyncCheckpoint] | Get the last sync checkpoint for a peer |
| set_checkpoint | set_checkpoint(peer_id, checkpoint) | Set the sync checkpoint for a peer |
| compute_delta | compute_delta(peer_id, table='working_memory') -> list[dict] | Compute changes since last sync |
| apply_delta | apply_delta(peer_id, delta, table='working_memory') -> dict | Apply a received delta |
| sync_to | sync_to(peer_id, table='working_memory') -> dict | Compute and checkpoint a delta (convenience) |
| sync_from | sync_from(peer_id, delta, table='working_memory') -> dict | Apply and checkpoint a delta (convenience) |
### PatternDetector API
The patterns property exposes a PatternDetector with these methods:
| Method | Signature | Description |
|---|---|---|
| detect_temporal | detect_temporal(memories) -> list[DetectedPattern] | Detect hour-of-day and day-of-week patterns |
| detect_content | detect_content(memories) -> list[DetectedPattern] | Detect frequent keywords and co-occurrence |
| detect_sequence | detect_sequence(memories) -> list[DetectedPattern] | Detect source-based sequence patterns |
| detect_all | detect_all(memories) -> list[DetectedPattern] | Run all detectors, sorted by confidence |
| summarize_patterns | summarize_patterns(memories) -> dict | Human-readable summary of all patterns |
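The detectors can also be driven directly, e.g. on the output of get_all_memories():

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()

mems = mem.get_all_memories()

# Run only the temporal detector instead of detect_all()
for p in mem.patterns.detect_temporal(mems):
    print(p.pattern_type, f"{p.confidence:.2f}", p.description)
```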
## Scratchpad
A lightweight temporary scratchpad for transient notes that don't belong in the memory hierarchy.
### scratchpad_write

```python
scratchpad_write(content: str) -> str
```
Appends a note to the scratchpad. Returns the note ID.
### scratchpad_read

```python
scratchpad_read() -> list[dict]
```
Returns all scratchpad entries for the current session.
### scratchpad_clear

```python
scratchpad_clear() -> None
```
Clears all scratchpad entries for the current session.
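A typical scratchpad lifecycle, sketched:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne(session_id="agent-42")

note_id = mem.scratchpad_write("TODO: re-run the migration script")
for note in mem.scratchpad_read():
    print(note)

# Wipe transient notes once the task is done
mem.scratchpad_clear()
```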
## Maintenance & Export
### consolidation_log
View recent consolidation activity.

```python
consolidation_log(limit: int = 10) -> list[dict]
```
Returns the most recent consolidation log entries, describing which memories were grouped, summarized, and promoted.
### export_to_file
Export the entire memory database to a JSON file.

```python
export_to_file(output_path: str) -> dict
```
Returns a dict with export metadata (status, path, working_memory_count, episodic_memory_count, scratchpad_count, legacy_memories_count, triples_count).
### import_from_file
Import memories from a previously exported JSON file.

```python
import_from_file(
    input_path: str,
    force: bool = False,
) -> dict
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| input_path | str | — | Path to the export file |
| force | bool | False | If True, overwrite existing entries on conflict |
Returns a dict with import statistics (beam, legacy, triples).
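A backup-and-restore round trip, restoring into a separate bank so nothing is overwritten by accident:

```python
from mnemosyne import Mnemosyne

mem = Mnemosyne()
info = mem.export_to_file("./mnemosyne-backup.json")
print(info["status"], info["working_memory_count"])

# Restore into an isolated bank; force=True overwrites on conflict
fresh = Mnemosyne(bank="restored")
stats = fresh.import_from_file("./mnemosyne-backup.json", force=True)
```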
## TripleStore Class
A separate class for managing structured subject-predicate-object triples (a temporal knowledge graph).

```python
from mnemosyne import TripleStore

ts = TripleStore()
```
### add

```python
add(
    subject: str,
    predicate: str,
    object: str,
    valid_from: str | None = None,
    source: str = "inferred",
    confidence: float = 1.0,
) -> int
```
Adds a triple and returns its ID. Auto-invalidates previous triples with the same (subject, predicate) pair.
### query

```python
query(
    subject: str | None = None,
    predicate: str | None = None,
    object: str | None = None,
    as_of: str | None = None,
) -> list[dict]
```
Query triples by any combination of subject, predicate, or object. Pass as_of for temporal point-in-time queries.
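A sketch of the auto-invalidation and point-in-time behavior (timestamps follow the ISO 8601 convention used elsewhere in the API; the exact as_of semantics are assumed):

```python
from mnemosyne import TripleStore

ts = TripleStore()

# The second add auto-invalidates the first (same subject + predicate)
ts.add("project", "deadline", "2025-05-01", valid_from="2025-01-10T00:00:00Z")
ts.add("project", "deadline", "2025-06-15", valid_from="2025-04-20T00:00:00Z")

print(ts.query(subject="project", predicate="deadline"))   # current value
print(ts.query(subject="project", predicate="deadline",
               as_of="2025-03-01T00:00:00Z"))               # value back then
```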
### export_all / import_all

```python
export_all() -> list[dict]
import_all(triples: list[dict], force: bool = False) -> dict
```
Bulk export and import of triples.
## Module-Level Convenience Functions
Mnemosyne provides module-level functions that operate on a default instance:
```python
from mnemosyne import remember, recall, get_context, get_stats
from mnemosyne import forget, update, sleep, sleep_all_sessions
from mnemosyne import scratchpad_write, scratchpad_read, scratchpad_clear

# These are equivalent to using Mnemosyne() with session_id="default"
remember("Some important fact")
results = recall("important fact")
```
## Hermes Plugin Tools
When using Mnemosyne as a Hermes plugin, the following tools are exposed:
| Tool | Description |
|---|---|
| mnemosyne_remember | Store a new memory with optional entity/fact extraction |
| mnemosyne_recall | Hybrid search with configurable weights and filters |
| mnemosyne_stats | Get memory system statistics |
| mnemosyne_triple_add | Add a structured subject-predicate-object triple |
| mnemosyne_triple_query | Query the temporal knowledge graph |
| mnemosyne_sleep | Run the consolidation sleep cycle |
| mnemosyne_scratchpad_write | Write a note to the agent scratchpad |
| mnemosyne_scratchpad_read | Read all scratchpad entries |
| mnemosyne_scratchpad_clear | Clear the scratchpad |
| mnemosyne_invalidate | Mark a memory as expired or superseded |
| mnemosyne_export | Export all memories to a JSON file |
| mnemosyne_update | Update an existing memory's content or importance |
| mnemosyne_forget | Delete a specific memory by ID |
| mnemosyne_import | Import memories from a file or external provider |
| mnemosyne_diagnose | Run PII-safe system diagnostics |
The Python SDK includes full type hints. Use a type checker (mypy, pyright) for static analysis and IDE autocomplete.