
Persistent Memory for AI Agents

Because embeddings, graph connections, and raw data all live in the same database, ApertureDB is a natural fit for agent memory. An agent can store observations as embeddings, retrieve the most relevant context at query time, and associate memories with graph entities like users, sessions, or documents.

import numpy as np
from aperturedb.CommonLibrary import create_connector
from aperturedb.Descriptors import Descriptors

client = create_connector()
descriptors = Descriptors(client)

# Store a memory
memory_vector = embed("User asked about invoice #4521")  # your embedding function
q = [{
    "AddDescriptor": {
        "set": "agent_memory",
        "properties": {
            "session_id": "abc123",
            "timestamp": "2025-03-25T10:00:00Z",
            "type": "observation"
        }
    }
}]
client.query(q, [memory_vector.astype("float32").tobytes()])

# Retrieve relevant memories scoped to a session
current_context = embed("What did we discuss about invoices?")
descriptors.find_similar(
    set="agent_memory",
    vector=current_context,
    k_neighbors=5,
    distances=True,
    constraints={"session_id": ["==", "abc123"]}
)
results = descriptors.response
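The snippet above scopes memories with a session_id property; the introduction also mentions associating memories with graph entities. One way to sketch that is a single ApertureDB transaction that upserts a session entity and connects the new descriptor to it. This is a hypothetical sketch, not code from the ApertureDB docs: the Session entity class, the memory_of connection class, and the build_memory_transaction helper are illustrative names, while AddEntity, AddDescriptor, if_not_found, connect, and _ref are standard parts of ApertureDB's JSON query language.

```python
import numpy as np


def build_memory_transaction(session_id: str, vector: np.ndarray):
    """Build one ApertureDB transaction that upserts a Session entity
    and stores a memory descriptor connected to it.
    (Session / memory_of are illustrative schema names.)"""
    q = [
        {"AddEntity": {
            "_ref": 1,                                    # referenced by the next command
            "class": "Session",                           # hypothetical entity class
            "if_not_found": {"id": ["==", session_id]},   # only add if it doesn't exist yet
            "properties": {"id": session_id},
        }},
        {"AddDescriptor": {
            "set": "agent_memory",
            "properties": {"type": "observation"},
            "connect": {"ref": 1, "class": "memory_of"},  # graph edge memory -> session
        }},
    ]
    blobs = [vector.astype("float32").tobytes()]          # embedding travels as a blob
    return q, blobs


# With a live connection:
# client.query(*build_memory_transaction("abc123", memory_vector))
```

Because the entity and the descriptor land in one transaction, a later FindEntity on the session can walk the memory_of connections to retrieve everything observed in that session, not just vector-similar items.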

The MCP Server workflow exposes ApertureDB as a tool for MCP-compatible agents (such as Claude or Cursor) via the Model Context Protocol, so an agent can query its memories without hand-written retrieval code.
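MCP-compatible clients are typically pointed at a server through a JSON configuration entry. The shape below follows the common mcpServers convention used by MCP clients; the command and package name are hypothetical placeholders, so check the ApertureDB MCP Server documentation for the actual invocation.

```json
{
  "mcpServers": {
    "aperturedb": {
      "command": "uvx",
      "args": ["aperturedb-mcp"]
    }
  }
}
```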

Real-World Example

Ayesha Imran built a two-part series engineering an AI agent that navigates large-scale event data using ApertureDB as its memory layer.


What's Next