Comparison · Agent Memory API
MemGPT vs Kronvex
Managed API vs self-hosted framework
MemGPT (now Letta) is a powerful self-hosted framework — but it comes with a Postgres instance to manage, a server to keep running, and DevOps overhead. Kronvex is a fully managed memory API from €29/mo, EU-hosted, GDPR-native, with no infrastructure to operate. Here's an honest comparison.
TL;DR
Side-by-side in 30 seconds
| Feature / Question | MemGPT / Letta | Kronvex |
|---|---|---|
| Deployment model | ⚠ Self-hosted — Postgres + running server required | ✓ Fully managed — zero infra to run |
| GDPR compliance | ⚠ Your responsibility depending on deployment | ✓ Native — erasure, TTL, export built in |
| Pricing | Free software + infra cost ($30–100+/mo self-hosted) | from €29/mo, fully managed |
| Free tier | ⚠ Open source, but infra cost applies | ✓ Demo key — 1 agent, 100 memories, no expiry |
| Recall mechanism | LLM-driven context window management | ✓ pgvector cosine + confidence scoring (no LLM) |
| inject-context endpoint | ✗ Manual prompt assembly required | ✓ One call returns formatted system prompt |
| Python & Node.js SDK | ✓ Python SDK (letta-client) | ✓ pip install kronvex · npm install kronvex |
| Time to first memory | Hours (Postgres + server setup + config) | Under 5 min (POST /remember) |
| EU data residency | ⚠ Depends on where you deploy | ✓ Always EU — Supabase Frankfurt |
Data accurate as of March 2026. Sources: letta.ai, kronvex.io/docs.
Honest comparison
Which one to use?
You need full control over the memory architecture
- You want to self-host everything — data never leaves your infrastructure
- You need LLM-driven context window management and persona persistence
- You have DevOps capacity to operate Postgres and a long-running server
- You are doing research or need deep customisation of the memory architecture
You want memory that just works without any infra
- You want to ship agent memory without provisioning or maintaining infrastructure
- EU data residency and GDPR compliance are non-negotiable
- You want predictable flat-rate pricing instead of variable cloud infra bills
- You need inject-context — one call returns a ready system prompt block
- You want to be live in under 5 minutes with a free demo key
- Fast, deterministic recall (<40ms) matters more than LLM-driven summarisation
Code comparison
The same task, two approaches
LETTA (MemGPT) — Self-hosted, infra required

```python
# pip install letta-client
# Requires: Postgres running + letta server running
from letta_client import Letta

# Connect to your self-hosted Letta server
client = Letta(base_url="http://localhost:8283")

# Create an agent (sets up memory blocks)
agent = client.agents.create(
    name="user-123",
    memory_blocks=[
        {"label": "human", "value": "user info here"},
        {"label": "persona", "value": "assistant persona"}
    ],
    model="openai/gpt-4o-mini"
)

# Store memory (LLM processes and stores internally)
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[{"role": "user", "content": "I prefer dark mode"}]
)
# Context window managed by the LLM internally
# No inject-context endpoint — build prompt yourself
```
KRONVEX — Managed API, EU-hosted, no infra

```python
# pip install kronvex
from kronvex import Kronvex

# No server to run. No Postgres to manage.
client = Kronvex(api_key="kv-...")

# Store memory — pgvector embedding, no LLM
await client.agent("user-123").remember(
    "user prefers dark mode"
)

# Recall with confidence scoring
memories = await client.agent("user-123").recall(
    "display preferences"
)

# inject-context: one call → formatted system prompt
context = await client.agent("user-123").inject_context(
    "How should I respond?"
)
# → "User prefers dark mode (confidence: 0.94)"

# EU-hosted. GDPR-native. No infra to manage.
```
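To show where the inject-context result slots in, here is a minimal sketch of assembling a chat request from it. The context string is the example value from the snippet above; the OpenAI-style message shape is an assumption, so adapt it to whatever LLM provider you keep using.

```python
# Sketch: turn an inject-context result into a chat request payload.
# The context string is the example value from the snippet above; the
# OpenAI-style message shape is an assumption — adapt to your provider.

def build_messages(context: str, user_input: str) -> list[dict]:
    """Prepend the Kronvex-provided context as a system prompt."""
    return [
        {"role": "system", "content": f"Relevant memories:\n{context}"},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(
    "User prefers dark mode (confidence: 0.94)",
    "How should I respond?",
)
# messages[0] carries the memory context; messages[1] is the live turn
print(messages[0]["role"])  # → system
```

Because inject-context returns a ready prompt block, this is the only glue code needed between Kronvex and your existing agent loop.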
Why Kronvex instead of MemGPT
Three reasons teams choose Kronvex
Zero infrastructure
Running MemGPT/Letta yourself means provisioning Postgres, keeping a server alive, handling upgrades, and paying cloud infra bills. Kronvex is a fully managed API — you POST memories and GET them back. No servers, no containers, no on-call for your memory layer.
No Postgres. No server ops.
EU data residency, guaranteed
With Letta self-hosted, GDPR compliance is entirely your responsibility — it depends on where you deploy. Kronvex stores all data in Supabase Frankfurt. GDPR compliance — right to erasure, per-agent memory TTL, full data export — is built into every plan, not a configuration task for your team.
Frankfurt · GDPR · right to erasure
Deterministic, fast recall
MemGPT uses an LLM to manage context windows — this means recall latency depends on your LLM provider and introduces non-deterministic behaviour. Kronvex recall is pure pgvector cosine similarity with a deterministic confidence formula. Under 40ms p99, no LLM in the read path, no surprise token costs at retrieval time.
<40ms recall · no LLM in read path
Questions
Frequently asked
Can Kronvex replace MemGPT/Letta?
Kronvex covers the core memory workflow with three endpoints: remember, recall, and inject-context. You keep your LLM provider and your existing agent code; Kronvex handles memory persistence. Kronvex does not replicate MemGPT's LLM-driven context window manager, but it covers the vast majority of persistent memory needs without that overhead.
How does recall work without an LLM?
Recall is pure pgvector cosine similarity plus a deterministic score: confidence = similarity × 0.6 + recency × 0.2 + frequency × 0.2. No LLM in the recall path means fast retrieval (<40ms p99), predictable costs, and no non-deterministic behaviour at retrieval time.
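For intuition, the published weights can be reproduced locally. The function below is an illustration of the stated formula, not Kronvex's actual implementation; how similarity, recency, and frequency are normalised server-side is not specified here, so the inputs are assumed to already lie in [0, 1].

```python
# Illustrative reimplementation of the published confidence formula.
# Assumes similarity, recency, and frequency are already normalised
# to [0, 1]; Kronvex's server-side normalisation is not documented here.

def confidence(similarity: float, recency: float, frequency: float) -> float:
    return similarity * 0.6 + recency * 0.2 + frequency * 0.2

# A very similar, recent, frequently-recalled memory scores near 1.0
print(round(confidence(0.95, 0.9, 0.8), 2))  # → 0.91
```

Because similarity carries 60% of the weight, a strong semantic match dominates; recency and frequency act as tie-breakers between equally similar memories.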
Is Kronvex GDPR-compliant?
Yes. The GDPR primitives — DELETE /memories for the right to erasure, per-agent memory TTL, and full data export — are first-class API features on every plan. With Letta self-hosted, EU residency and GDPR compliance depend entirely on your own deployment choices.
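To make the TTL semantics concrete, here is a local sketch of what per-agent expiry means. In Kronvex the expiry happens server-side; the memory dict shape and field names below are assumptions made for the sake of the example.

```python
# Local illustration of per-agent memory TTL semantics.
# In Kronvex this filtering happens server-side; the memory dict shape
# and field names here are assumptions for the sake of the example.
from datetime import datetime, timedelta, timezone

def live_memories(memories: list[dict], ttl: timedelta, now: datetime) -> list[dict]:
    """Keep only memories younger than the agent's TTL."""
    return [m for m in memories if now - m["created_at"] < ttl]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
memories = [
    {"text": "prefers dark mode", "created_at": now - timedelta(days=10)},
    {"text": "old shipping address", "created_at": now - timedelta(days=400)},
]
# With a 90-day TTL, only the recent memory survives
print([m["text"] for m in live_memories(memories, timedelta(days=90), now)])
# → ['prefers dark mode']
```

Server-side TTL means expired memories never reach recall results, which keeps data-minimisation automatic rather than a cleanup job you have to schedule.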
Ship memory without the infra headache
EU-hosted. No credit card. Start with a free demo key and your first memory in under 5 minutes — no Postgres, no server to run.
Get your demo key →
Demo key · 1 agent · 100 memories · No expiry