GDPR applies to AI agent memory — most developers ignore this
When developers think about GDPR compliance, they typically focus on their database, their analytics tools, and their marketing emails. They don't think about their AI agent's memory layer — but they should. Every fact an AI agent stores about a user is, by definition, personal data under GDPR if it relates to an identified or identifiable natural person.
The GDPR definition of personal data (Article 4) is intentionally broad: "any information relating to an identified or identifiable natural person." An agent memory that stores "user prefers Python over JavaScript" is personal data if it's associated with an identifiable user ID. An agent that stores "user works at Acme Corp, Paris, team of 10" is storing personal data. Vector embeddings of personal statements are personal data — they are a transformed representation of personal information, not anonymized.
Vector embeddings are personal data. A common misconception is that because embeddings are not human-readable, they're anonymized. The EDPB (European Data Protection Board) has clarified that pseudonymization — including vector representations — does not exempt data from GDPR obligations when the original data can be reconstructed or the subject identified through other means. Treat embeddings as personal data.
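A toy sketch of why embeddings are pseudonymous rather than anonymous: anyone holding the same embedding model can link a stored vector back to its source text by nearest-neighbour search. The hash-based toy_embed below is a deterministic stand-in for a real embedding model (it is not an actual embedder); the linkability argument is what matters.

```python
# Toy illustration: a stored embedding can be matched back to its source
# text by anyone with the same embedder and a set of candidate statements.
import hashlib
import math

def toy_embed(text: str, dim: int = 16) -> list[float]:
    """Deterministic stand-in for a real embedding model (illustration only)."""
    digest = hashlib.sha256(text.lower().encode()).digest()
    return [b / 255 for b in digest[:dim]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A "stored" embedding of a personal statement
stored = toy_embed("Alice works at Acme Corp in Paris")

# A party with candidate statements re-identifies the source text
candidates = [
    "Bob prefers PostgreSQL",
    "Alice works at Acme Corp in Paris",
    "User speaks French",
]
best = max(candidates, key=lambda c: cosine(toy_embed(c), stored))
print(best)  # the personal statement is re-identified
```

Real embedding models are not deterministic hashes, but the same linkage attack works via similarity search, which is why the EDPB position treats such representations as pseudonymized, not anonymized.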
The practical implication: a third-party memory API is a data processor under GDPR Article 4(8) when it stores personal data on your behalf. You are the controller for your own users' data (or a processor for your customers, which makes the memory API a sub-processor). Either way, this triggers specific obligations — documented processing, restricted data flows, right to erasure, and a Data Processing Agreement with the memory API provider.
What counts as personal data in agent memory
In practice, almost everything an AI agent usefully stores about a user will qualify as personal data. Here are the categories with examples:
Direct identifiers
- Name, email, phone number mentioned in conversation
- Company name combined with role ("CTO at Acme Corp")
- Location ("Paris office", "EU-based team")
Behavioral and preference data
- Communication preferences ("prefers concise answers", "speaks French")
- Technical preferences ("uses Python", "prefers PostgreSQL")
- Work patterns ("works on weekends", "responds late evening")
Business context linked to an individual
- Company information ("50-person team", "migrating to Kubernetes")
- Project context ("building a CRM integration", "Q3 deadline")
- Decision history ("chose vendor X", "rejected approach Y")
Special category data (extra protection required)
If your agent stores health-related information, political opinions, religious beliefs, or trade union membership — even incidentally mentioned in conversation — this is special category data under Article 9, requiring explicit consent and stricter processing conditions.
Safe harbor only applies at true anonymization. If memories are stored without any user identifier, with no possibility of re-identification, they may fall outside GDPR scope. In practice, this is nearly impossible for an agent memory layer — the memory is valuable precisely because it's associated with a specific user. Do not rely on anonymization as a compliance strategy for memory.
Right to erasure: Article 17 and what it means for memory APIs
GDPR Article 17 grants users the right to request deletion of all personal data you hold about them, subject to limited exceptions. The Article 17(3) exceptions are narrow (compliance with a legal obligation, establishment or defense of legal claims, archiving in the public interest) and rarely apply to B2B SaaS agent memory.
For practical purposes: when a user requests erasure, you must delete all their memories from your memory layer. This must be complete — not just "soft deleted" with a flag, but genuinely removed from storage and any backups within your retention schedule.
The compliance requirements for erasure:
- Within one month: GDPR requires action on an erasure request within one month of receipt (Article 12(3)), extendable by two further months for complex cases
- Cascade to processors: You must notify downstream processors (including your memory API provider) of erasure requests and confirm they've been actioned
- Confirmation to the user: You must confirm to the user that erasure has been completed
- Audit trail: Keep a record of the request, your actions, and the confirmation — not the deleted data itself
- Backups: Data deleted from live systems must also be removed from backups within your normal backup rotation schedule
The right to erasure is frequently tested. EU data protection authorities regularly audit companies' erasure procedures. A memory layer that doesn't support atomic per-user deletion is a compliance liability. Verify that your memory API supports complete agent deletion before you ship to production.
Data residency: why US-hosted memory APIs are a legal risk for EU companies
GDPR Chapter V restricts transfers of personal data to countries outside the EEA (European Economic Area) unless one of the following conditions is met:
- Adequacy decision: The European Commission has determined that the destination country offers adequate protection. The EU-US Data Privacy Framework (DPF) provides this for certified US companies, but its predecessors, Safe Harbor and Privacy Shield, were both struck down by the CJEU (Schrems I in 2015, Schrems II in 2020), and the DPF remains legally fragile.
- Standard Contractual Clauses (SCCs): A specific contract between EU data exporter and US data importer that meets Commission-approved terms. This is the most common mechanism for US API providers, but it requires a Transfer Impact Assessment (TIA) and adds legal overhead.
- Explicit consent: The data subject has explicitly consented to the transfer. This is difficult to obtain reliably at scale and cannot be the sole compliance mechanism for systematic transfers.
For a B2B SaaS company with EU end users, using a US-hosted memory API (Mem0, US-based vector databases, US-region OpenAI) means:
- You are transferring EU personal data to the US on every remember() call
- You need a DPA with the API provider that includes Module 2 SCCs
- You need to conduct a Transfer Impact Assessment documenting the legal risks
- Your privacy policy must disclose the US transfer and its legal basis
- If the DPF is invalidated (as Safe Harbor and Privacy Shield were), you immediately become non-compliant until alternative mechanisms are in place
EU-hosted memory eliminates this entire category of risk. Data stays within the EEA; no Chapter V transfer mechanism is needed; your legal basis for processing doesn't depend on a fragile diplomatic framework.
Kronvex's approach: EU-only storage, per-agent deletion, TTL
Kronvex was built with GDPR compliance as a first-class requirement, not an afterthought. The design decisions:
- EU-only hosting: All data is stored on Supabase Frankfurt (Germany, EEA). No data is ever transferred to US infrastructure. Your memories never leave the EU.
- Per-agent deletion: A single API call deletes an agent and all associated memories, including vector embeddings. Deletion is hard (not soft). Suitable for Article 17 erasure responses.
- TTL support: Set expiry on individual memories. Useful for implementing data minimization (Article 5(1)(e)) — don't retain data longer than necessary for the purpose it was collected.
- No training on your data: Kronvex does not use your agents' memories to train models. Your data is not shared between customers. Multi-tenant isolation is enforced at the API key level.
- DPA available: A Data Processing Agreement is available for customers who require one (Starter and above plans). Contact hello@kronvex.io.
Data Processing Agreements (DPA) — who needs one and when
Under GDPR Article 28, if you engage a data processor (any third party that processes personal data on your behalf), you must have a written contract — a Data Processing Agreement — that covers specific mandatory terms: purpose and duration of processing, nature of the processing, type of personal data and categories of data subjects, and the controller's obligations and rights.
You need a DPA with Kronvex (or any memory API) if:
- Your users are natural persons (not just legal entities)
- You are established in the EEA, or your users are in the EEA
- The memories stored relate to identified or identifiable individuals
For B2B SaaS companies serving EU business customers whose employees use your AI agent: you almost certainly need a DPA. Your customers' employees are natural persons; their preferences and work context stored in agent memory are personal data.
DPA vs. standard terms: Many API providers include DPA terms in their standard ToS. Review the actual terms rather than assuming coverage. A valid Article 28 DPA must include specific provisions: sub-processor lists, audit rights, deletion obligations, security measures, and international transfer mechanisms if applicable.
Practical checklist for GDPR-compliant AI agent memory
- Confirm your memory API provider hosts data in the EEA (or has adequate SCCs if US-hosted)
- Sign a Data Processing Agreement with your memory API provider
- Update your privacy policy to disclose AI agent memory as a processing activity
- Document your lawful basis for processing agent memories (legitimate interest, contract performance, or consent)
- Implement a per-user erasure endpoint in your product that triggers deletion of Kronvex agent + all memories
- Implement a user data export endpoint that includes exported agent memories (right of access, Article 15)
- Set TTLs on temporary or speculative memories (data minimization, Article 5(1)(e))
- Add your memory API provider to your sub-processor list (required in DPAs with your customers)
- Document what categories of personal data may be stored in agent memory in your Records of Processing Activities (ROPA)
- Ensure your memory extraction logic doesn't store special category data (health, religion, politics) without explicit Article 9 basis
- Test your erasure flow: create a test agent, add memories, delete the agent, confirm memories are gone
- Set a maximum retention policy: no memories older than [your defined period] should be kept without active renewal
Code: implementing erasure, export, and retention via the Kronvex API
from kronvex import Kronvex
from datetime import datetime
import logging
kv = Kronvex(api_key="kv-your-key")
logger = logging.getLogger(__name__)
def handle_erasure_request(
user_id: str,
request_id: str,
requested_by: str
) -> dict:
"""
Handle a GDPR Article 17 erasure request.
Returns a compliance record suitable for audit logging.
"""
started_at = datetime.utcnow().isoformat()
agent = kv.agent(user_id)
# 1. Enumerate memories before deletion (for audit record)
try:
memories = agent.list_memories()
memory_count = len(memories)
except Exception as e:
logger.warning(f"Could not enumerate memories for {user_id}: {e}")
memory_count = -1 # Unknown count
# 2. Delete agent and all associated memories
try:
agent.delete()
deletion_status = "SUCCESS"
error_detail = None
except Exception as e:
deletion_status = "FAILED"
error_detail = str(e)
logger.error(f"Erasure failed for user {user_id}: {e}")
raise
completed_at = datetime.utcnow().isoformat()
# 3. Build audit record (do NOT store this in a system that retains PII)
audit_record = {
"request_id": request_id,
"user_id": user_id, # Pseudonymize this in your audit log if possible
"requested_by": requested_by,
"started_at": started_at,
"completed_at": completed_at,
"memories_deleted": memory_count,
"status": deletion_status,
"processor": "kronvex",
"error": error_detail,
"gdpr_article": "17",
"compliance_note": "Agent and all associated memories deleted from EU-hosted storage"
}
logger.info(f"Erasure request {request_id} completed: {deletion_status}")
return audit_record
# Example usage
audit = handle_erasure_request(
user_id="user_42",
request_id="erasure-req-2026-001",
requested_by="user_42@email.com"
)
print(audit)
import json
from datetime import datetime
def export_user_data(user_id: str) -> dict:
"""
Export all memories for a user (GDPR Article 15 right of access).
Returns a structured export suitable for delivery to the data subject.
"""
agent = kv.agent(user_id)
try:
memories = agent.list_memories()
memory_data = [
{
"id": m.id,
"content": m.content,
"created_at": m.created_at,
"expires_at": m.expires_at,
"access_count": m.access_count
}
for m in memories
]
    except Exception as e:
        memory_data = []
        # Log the error, but do not fail the export request
        logger.warning(f"Memory export failed for {user_id}: {e}")
export = {
"export_generated_at": datetime.utcnow().isoformat(),
"data_controller": "Your Company Name",
"data_processor": "Kronvex (kronvex.io)",
"processing_location": "EU (Frankfurt, Germany)",
"user_id": user_id,
"memory_count": len(memory_data),
"memories": memory_data,
"rights_notice": (
"You have the right to request correction (Art. 16) or "
"deletion (Art. 17) of this data. Contact privacy@yourcompany.com"
)
}
return export
# Deliver as JSON to the user
export = export_user_data("user_42")
print(json.dumps(export, indent=2, default=str))
def store_memory_with_minimization_policy(
agent,
fact: str,
category: str = "general"
) -> None:
"""
Store a memory with TTL based on data minimization policy.
Only retain data for as long as necessary (GDPR Art. 5(1)(e)).
"""
TTL_POLICY = {
"permanent": None, # Core preferences, identity
"long_term": 365, # Technical stack, business context
"medium_term": 90, # Active projects, goals
"short_term": 30, # Tactical context, current evaluations
"ephemeral": 7, # Temporary states
"general": 180, # Default: 6 months
}
ttl_days = TTL_POLICY.get(category, TTL_POLICY["general"])
agent.remember(fact, ttl_days=ttl_days)
# Usage
agent = kv.agent("user_42")
store_memory_with_minimization_policy(
agent, "User speaks French", "permanent"
)
store_memory_with_minimization_policy(
agent, "User evaluating vendor X for Q3 decision", "short_term"
)
store_memory_with_minimization_policy(
agent, "User currently comparing PostgreSQL vs MySQL", "ephemeral"
)