
Comparison · Agent Memory API

MemGPT vs Kronvex
Managed API vs self-hosted framework

MemGPT (now Letta) is a powerful self-hosted framework — but it comes with a Postgres instance to manage, a server to keep running, and DevOps overhead. Kronvex is a fully managed memory API from €29/mo, EU-hosted, GDPR-native, with no infrastructure to operate. Here's an honest comparison.

EU-hosted · Frankfurt · GDPR-native · No infra to manage — fully managed API · Demo key — 100 memories free

TL;DR

Side-by-side in 30 seconds

| Feature / Question | MemGPT / Letta | Kronvex |
| --- | --- | --- |
| Deployment model | Self-hosted — Postgres + running server required | Fully managed — zero infra to run |
| GDPR compliance | Your responsibility, depending on deployment | Native — erasure, TTL, export built in |
| Pricing | Free software + infra cost ($30–100+/mo self-hosted) | From €29/mo, fully managed |
| Free tier | Open source, but infra cost applies | Demo key — 1 agent, 100 memories, no expiry |
| Recall mechanism | LLM-driven context window management | pgvector cosine + confidence scoring (no LLM) |
| inject-context endpoint | Manual prompt assembly required | One call returns a formatted system prompt |
| Python & Node.js SDKs | Python SDK (letta-client) | pip install kronvex · npm install kronvex |
| Time to first memory | Hours (Postgres + server setup + config) | Under 5 min (POST /remember) |
| EU data residency | Depends on where you deploy | Always EU — Supabase Frankfurt |

Data accurate as of March 2026. Sources: letta.ai, kronvex.io/docs.

Honest comparison

Which one to use?

Consider MemGPT / Letta when…

You need full control over the memory architecture

  • You want to self-host everything — data never leaves your infrastructure
  • You need LLM-driven context window management and persona persistence
  • You have DevOps capacity to operate Postgres and a long-running server
  • You are doing research or need deep customisation of the memory architecture
Use Kronvex when…

You want memory that just works without any infra

  • You want to ship agent memory without provisioning or maintaining infrastructure
  • EU data residency and GDPR compliance are non-negotiable
  • You want predictable flat-rate pricing instead of variable cloud infra bills
  • You need inject-context — one call returns a ready system prompt block
  • You want to be live in under 5 minutes with a free demo key
  • Fast, deterministic recall (<40ms) matters more than LLM-driven summarisation

Code comparison

The same task, two approaches

letta_memory.py
# LETTA (MemGPT) — Self-hosted, infra required
# pip install letta-client
# Requires: Postgres running + letta server running
from letta_client import Letta

# Connect to your self-hosted Letta server
client = Letta(base_url="http://localhost:8283")

# Create an agent (sets up memory blocks)
agent = client.agents.create(
    name="user-123",
    memory_blocks=[
        {"label": "human", "value": "user info here"},
        {"label": "persona", "value": "assistant persona"}
    ],
    model="openai/gpt-4o-mini"
)

# Store memory (LLM processes and stores internally)
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[{"role": "user", "content": "I prefer dark mode"}]
)
# Context window managed by the LLM internally
# No inject-context endpoint — build prompt yourself
kronvex_memory.py
# KRONVEX — Managed API, EU-hosted, no infra
# pip install kronvex
from kronvex import Kronvex

# No server to run. No Postgres to manage.
client = Kronvex(api_key="kv-...")

# Store memory — pgvector embedding, no LLM
await client.agent("user-123").remember(
    "user prefers dark mode"
)

# Recall with confidence scoring
memories = await client.agent("user-123").recall(
    "display preferences"
)

# inject-context: one call → formatted system prompt
context = await client.agent("user-123").inject_context(
    "How should I respond?"
)
# → "User prefers dark mode (confidence: 0.94)"
# EU-hosted. GDPR-native. No infra to manage.

Why Kronvex instead of MemGPT

Three reasons teams choose Kronvex


Zero infrastructure

Running MemGPT/Letta yourself means provisioning Postgres, keeping a server alive, handling upgrades, and paying cloud infra bills. Kronvex is a fully managed API — you POST memories and GET them back. No servers, no containers, no on-call for your memory layer.

No Postgres. No server ops.

EU data residency, guaranteed

With Letta self-hosted, GDPR compliance is entirely your responsibility — it depends on where you deploy. Kronvex stores all data in Supabase Frankfurt. GDPR compliance — right to erasure, per-agent memory TTL, full data export — is built into every plan, not a configuration task for your team.

Frankfurt · GDPR · right to erasure

Deterministic, fast recall

MemGPT uses an LLM to manage context windows — this means recall latency depends on your LLM provider and introduces non-deterministic behaviour. Kronvex recall is pure pgvector cosine similarity with a deterministic confidence formula. Under 40ms p99, no LLM in the read path, no surprise token costs at retrieval time.

<40ms recall · no LLM in read path
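
The deterministic scoring described above can be sketched in a few lines. This follows the confidence formula stated in the Kronvex docs (similarity × 0.6 + recency × 0.2 + frequency × 0.2); the function name and the assumption that all three inputs are pre-normalised to [0, 1] are illustrative, not part of the API.

```python
def confidence(similarity: float, recency: float, frequency: float) -> float:
    # Documented weighting: cosine similarity dominates, recency and
    # access frequency act as tie-breakers. Inputs assumed in [0, 1].
    return similarity * 0.6 + recency * 0.2 + frequency * 0.2

# A near-exact match stored recently scores high:
print(round(confidence(similarity=0.97, recency=0.95, frequency=0.80), 2))  # → 0.93
```

Because the score is a fixed arithmetic blend, the same query against the same memories always returns the same ranking — no LLM call, no sampling, no provider latency.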

Questions

Frequently asked

What is MemGPT / Letta?

MemGPT is an open-source research framework from UC Berkeley for building LLM agents with persistent memory, originally presented at NeurIPS 2023. In 2024 it was rebranded as Letta. It works by managing an agent's context window as virtual memory — using an LLM to decide what to keep in context, what to archive, and what to retrieve. It is self-hosted and requires a Postgres database and a running server.

Can Kronvex replace MemGPT / Letta?

For the core use case — storing and retrieving agent memories across sessions — yes. Kronvex replaces the Letta server with three endpoints: remember, recall, and inject-context. You keep your LLM provider and your existing agent code; Kronvex handles memory persistence. Kronvex does not replicate MemGPT's LLM-driven context window manager, but it covers the vast majority of persistent-memory needs without that overhead.

Do I need to run any infrastructure for Kronvex?

No. Kronvex is a fully managed API hosted on EU infrastructure. You authenticate with your API key on every request. There is no Postgres to provision, no Docker containers to orchestrate, no server to keep running or upgrade. Infrastructure, backups, and scaling are all handled by Kronvex. Your team ships features, not memory infrastructure.

How does Kronvex recall differ from MemGPT's?

MemGPT uses an LLM to manage what stays in the active context window and what gets archived — retrieval triggers LLM calls. Kronvex uses pgvector cosine similarity with a deterministic confidence formula: confidence = similarity × 0.6 + recency × 0.2 + frequency × 0.2. No LLM in the recall path means fast (<40ms p99), predictable costs, and no non-deterministic behaviour at retrieval time.

Isn't Letta free? What does self-hosting actually cost?

Letta is free software, but running it yourself requires a Postgres instance, a VPS or cloud server to keep alive, monitoring, and ongoing maintenance. A basic self-hosted setup on a cloud provider typically runs $30–100+/mo in infrastructure alone, plus engineering time for setup and maintenance. Kronvex Builder is €29/mo — fully managed, EU-hosted, with no infrastructure overhead and a free demo tier to get started immediately.

Is my data always stored in the EU?

Yes — always. All data is stored in Supabase Frankfurt (eu-central-1, Germany). The API runs on Railway EU. No data ever leaves the EU. GDPR features — right to erasure via DELETE /memories, per-agent memory TTL, and full data export — are first-class API features on every plan. With Letta self-hosted, EU residency and GDPR compliance depend entirely on your own deployment choices.
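
The right-to-erasure call mentioned above (DELETE /memories) can be illustrated as a plain HTTP request. This is a hedged sketch only: the api.kronvex.io host and the agent_id query parameter are assumptions for illustration, not taken from the Kronvex docs.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hedged sketch of a GDPR erasure request against the documented
# DELETE /memories endpoint. Host and query parameter name are
# assumptions; consult kronvex.io/docs for the real shape.
BASE_URL = "https://api.kronvex.io"  # assumed host
query = urlencode({"agent_id": "user-123"})

req = Request(
    f"{BASE_URL}/memories?{query}",
    method="DELETE",
    headers={"Authorization": "Bearer kv-demo"},  # your API key
)

print(req.get_method(), req.full_url)
# → DELETE https://api.kronvex.io/memories?agent_id=user-123
```

The point of exposing erasure as a single authenticated DELETE is that a data-subject request maps to one API call, rather than to a manual Postgres cleanup in a self-hosted deployment.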

Ship memory without the infra headache

EU-hosted. No credit card. Start with a free demo key and your first memory in under 5 minutes — no Postgres, no server to run.

Get your demo key →

Demo key · 1 agent · 100 memories · No expiry
