1. The problem: AI coding tools forget everything

Claude Code, Cursor, Windsurf, GitHub Copilot — all start fresh every session. Your coding preferences, project decisions, recurring patterns — gone. Every day you start from zero.

You explain your project stack again. You re-specify that you use PostgreSQL, not MongoDB. You remind your assistant that you prefer functional React components. You re-establish context that took weeks to build.

The cognitive overhead is real. A senior developer using an AI coding assistant spends an estimated 10–20% of each session re-establishing context that already existed in a previous session. At scale — multiple developers, multiple projects, multiple tools — this is a significant productivity drain.

This is solvable in 2026 with Model Context Protocol (MCP) and a persistent memory API.

TL;DR: You will build a local MCP server in ~50 lines of Python that wraps the Kronvex API. Once configured, Claude Code and Cursor can call remember, recall, and inject_context tools natively — no plugins, no middleware.

2. What is MCP (Model Context Protocol)?

MCP is an open standard introduced by Anthropic that lets language models communicate with external tools in a standardised way. Claude Code, Cursor, and other tools support MCP natively — no custom plugins or browser extensions required.

An MCP server exposes a list of tools that the model can call at runtime, just like a function call. Each tool has a name, a description (which the model uses to decide when to invoke it), and a JSON Schema for its input parameters.

For persistent memory, the three essential tools are:

- remember — store a fact about the project, the user's preferences, or a decision
- recall — search stored memories semantically and return the most relevant matches
- inject_context — return a formatted context block of relevant memories for the current task

The MCP server runs locally as a subprocess — it has no open port, no server to maintain. It communicates with the AI tool via stdio. The Kronvex API is the only network call it makes, storing and retrieving memories from a PostgreSQL + pgvector database hosted in the EU.
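To make the stdio exchange concrete, here is roughly what a tool invocation looks like on the wire. MCP messages are JSON-RPC 2.0; the fields below follow the MCP specification, but treat this as an illustrative sketch rather than a byte-exact trace:

```python
import json

# A JSON-RPC 2.0 "tools/call" request, roughly as the AI tool writes it
# to the MCP server's stdin over the stdio transport.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recall",
        "arguments": {"query": "database setup"},
    },
}

# The server decodes the message, dispatches to the matching tool handler,
# and writes a JSON-RPC response with the tool's text content to stdout.
parsed = json.loads(json.dumps(request))
print(parsed["method"], "->", parsed["params"]["name"])
```

No sockets are involved: the AI tool owns the subprocess and pipes these messages through stdin/stdout, which is why there is no port to secure or daemon to supervise.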

Why not just use a CLAUDE.md file? Static context files are read every session but they do not grow, search semantically, or update automatically. Kronvex memories are dynamic — the model decides what to store, and retrieval is ranked by semantic relevance + recency + frequency of access.

3. Setting up Kronvex as an MCP memory server

1 Get a Kronvex API key

Sign up at kronvex.io/auth — the free demo tier gives you 100 memories, no credit card required. Your key starts with kv-.

Bash
export KRONVEX_API_KEY="kv-your-api-key"
export KRONVEX_AGENT_ID="claude-code-agent"  # any identifier for this agent
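Before wiring anything into your editor, it can help to sanity-check these variables from Python. The check_env helper below is not part of Kronvex — it is a small illustrative guard that mirrors the defaults used later in this guide:

```python
import os

def check_env(environ=os.environ) -> str:
    """Return the agent ID if the Kronvex environment variables look sane."""
    api_key = environ.get("KRONVEX_API_KEY", "")
    # Kronvex keys start with "kv-" per the signup step above.
    if not api_key.startswith("kv-"):
        raise RuntimeError("KRONVEX_API_KEY missing or malformed (expected a 'kv-' prefix)")
    # Fall back to the same default the MCP server uses.
    return environ.get("KRONVEX_AGENT_ID", "mcp-agent")
```

Calling check_env() early gives you a clear error instead of a silent KeyError buried in the MCP server's stderr.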

2 Create the MCP server (Python)

Create a file called mcp_memory_server.py anywhere on your machine. This file is the entire MCP server — it wraps the Kronvex REST API and exposes three tools to the model.

Python — mcp_memory_server.py
import os, httpx
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types

KRONVEX_BASE = "https://api.kronvex.io"
API_KEY = os.environ["KRONVEX_API_KEY"]
AGENT_ID = os.environ.get("KRONVEX_AGENT_ID", "mcp-agent")

app = Server("kronvex-memory")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="remember",
            description="Store a memory about the project, user preferences, or decisions",
            inputSchema={
                "type": "object",
                "properties": {"content": {"type": "string", "description": "What to remember"}},
                "required": ["content"]
            }
        ),
        types.Tool(
            name="recall",
            description="Search memories semantically — find relevant stored information",
            inputSchema={
                "type": "object",
                "properties": {"query": {"type": "string", "description": "What to search for"}},
                "required": ["query"]
            }
        ),
        types.Tool(
            name="inject_context",
            description="Get a formatted context block with relevant memories for the current task",
            inputSchema={
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    headers = {"X-API-Key": API_KEY, "Content-Type": "application/json"}
    async with httpx.AsyncClient() as client:
        if name == "remember":
            await client.post(
                f"{KRONVEX_BASE}/api/v1/agents/{AGENT_ID}/remember",
                headers=headers,
                json={"content": arguments["content"]}
            )
            return [types.TextContent(type="text", text=f"Remembered: {arguments['content'][:60]}...")]

        elif name == "recall":
            r = await client.post(
                f"{KRONVEX_BASE}/api/v1/agents/{AGENT_ID}/recall",
                headers=headers,
                json={"query": arguments["query"], "top_k": 5}
            )
            memories = r.json().get("memories", [])
            result = "\n".join(
                [f"- [{m['confidence']:.2f}] {m['content']}" for m in memories]
            )
            return [types.TextContent(type="text", text=result or "No relevant memories found.")]

        elif name == "inject_context":
            r = await client.post(
                f"{KRONVEX_BASE}/api/v1/agents/{AGENT_ID}/inject-context",
                headers=headers,
                json={"query": arguments["query"]}
            )
            return [types.TextContent(type="text", text=r.json().get("context", ""))]

    # Fail loudly instead of silently returning None for an unrecognised tool.
    raise ValueError(f"Unknown tool: {name}")

async def main() -> None:
    # stdio_server() yields (read, write) streams; app.run drives the protocol over them.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

3 Install dependencies

Bash
pip install mcp httpx

The mcp package is the official Python SDK for the Model Context Protocol. httpx is the async HTTP client used to call the Kronvex API.

4 Configure Claude Code

Add the following to .mcp.json at the root of your project (project-scoped memory), or register the server for all projects with claude mcp add --scope user.

Download mcp_server.py from https://kronvex.io/mcp_server.py or get it from your dashboard.

JSON — .mcp.json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["/path/to/mcp_memory_server.py"],
      "env": {
        "KRONVEX_API_KEY": "kv-your-api-key",
        "KRONVEX_AGENT_ID": "claude-code-agent"
      }
    }
  }
}

Restart Claude Code after saving. You will see memory listed as an available MCP server in the session output. Claude can now call the three tools automatically when it judges them relevant.

5 Configure Cursor

Create or edit .cursor/mcp.json at the root of your project:


JSON — .cursor/mcp.json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["/path/to/mcp_memory_server.py"],
      "env": {
        "KRONVEX_API_KEY": "kv-your-api-key",
        "KRONVEX_AGENT_ID": "cursor-agent"
      }
    }
  }
}

Cursor will pick up this configuration automatically on the next session. The MCP server starts as a subprocess when Cursor launches — there is nothing to run manually.

Windsurf: Windsurf also supports MCP via its ~/.codeium/windsurf/mcp_config.json file. The configuration format is identical to Cursor's.

4. Usage examples

Storing project preferences

At the start of a project, tell your AI tool what to remember. It will call the remember tool in the background:

Conversation
User: "Remember that this project uses PostgreSQL 16 with pgvector,
       FastAPI + SQLAlchemy async, and we never use ORMs for complex queries.
       Also remember that all API responses follow the envelope pattern:
       { data: ..., meta: ... }"

Claude: [calls remember tool]
→ Stored: "project uses PostgreSQL 16 with pgvector, FastAPI + SQLAlchemy async..."
→ Stored: "all API responses follow envelope pattern: { data: ..., meta: ... }"

Recalling context automatically

In the next session — or any future session — the model retrieves relevant memories before writing code:

Conversation
User: "Add a memory search endpoint"

Claude: [calls recall("database setup") + inject_context("API endpoint patterns")]
→ Retrieves: PostgreSQL + pgvector setup, existing endpoint patterns, response envelope
→ Writes correct endpoint without being re-briefed, using the right DB driver
   and the right response format

Confidence scoring

The recall tool returns memories sorted by a composite confidence score:

Formula
confidence = similarity × 0.6 + recency × 0.2 + frequency × 0.2

Recency uses a sigmoid centred at 30 days. Frequency is log-scaled. This means frequently-accessed and recently-updated memories naturally bubble to the top, while stale or rarely-used ones fade without being deleted.

| Score range | Meaning | Typical use |
| --- | --- | --- |
| 0.85 – 1.00 | High-confidence match | Core architecture decisions, stack choices |
| 0.60 – 0.84 | Relevant context | Coding conventions, naming patterns |
| 0.40 – 0.59 | Loosely related | Background domain knowledge |
| < 0.40 | Low relevance | Typically filtered out by top_k |
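The scoring can be sketched in a few lines. The sigmoid midpoint (30 days) and the log scaling follow the description above, but the slope and saturation constants below are illustrative assumptions — Kronvex's production normalisation is not published here:

```python
import math

def confidence(similarity: float, age_days: float, access_count: int) -> float:
    """Composite score: similarity * 0.6 + recency * 0.2 + frequency * 0.2."""
    # Recency: sigmoid centred at 30 days — near 1.0 when fresh, near 0.0 when stale.
    # The 7-day slope is an assumption for illustration.
    recency = 1.0 / (1.0 + math.exp((age_days - 30.0) / 7.0))
    # Frequency: log-scaled, capped at 1.0 — the 50-access saturation point is an assumption.
    frequency = min(1.0, math.log1p(access_count) / math.log1p(50))
    return similarity * 0.6 + recency * 0.2 + frequency * 0.2

# A fresh, frequently used memory outranks an equally similar stale one.
fresh = confidence(similarity=0.8, age_days=2, access_count=20)
stale = confidence(similarity=0.8, age_days=90, access_count=1)
print(fresh > stale)  # True
```

Because similarity carries 60% of the weight, a highly relevant but old memory still surfaces — recency and frequency only break ties and demote stale noise.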

5. Advanced: per-project agents

By default, all sessions share the same agent ID — which means all memories are shared across projects. For multi-project setups, use the project directory name as the agent ID to get fully isolated memory namespaces:

Python
# Use project name as agent ID — memories are fully isolated per project
AGENT_ID = os.environ.get(
    "KRONVEX_AGENT_ID",
    os.path.basename(os.getcwd())  # e.g., "my-saas-project", "api-v2"
)

With this pattern, you can have separate memory stores for every project without any additional configuration — just set KRONVEX_AGENT_ID in the MCP server environment, or let it default to the project directory name.

On Pro and higher plans, you get multiple agents — which maps directly to multiple projects. The Starter plan (1 agent) is sufficient for solo developers working on a single project.

Team setups: Each developer can use their own API key and agent ID. Since agent IDs are scoped to an API key, there is no risk of memory collision between team members — even if they use the same agent ID string.
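Conceptually, this isolation behaves as if each memory store were keyed by the (API key, agent ID) pair, so two developers reusing the same agent ID string still write to different stores. A hypothetical sketch of that model (the names here are illustrative, not Kronvex internals):

```python
from collections import defaultdict

# Hypothetical model of per-key, per-agent namespacing.
stores: dict[tuple[str, str], list[str]] = defaultdict(list)

def remember(api_key: str, agent_id: str, content: str) -> None:
    # The namespace is the (api_key, agent_id) pair, never agent_id alone.
    stores[(api_key, agent_id)].append(content)

remember("kv-alice", "cursor-agent", "prefers functional React components")
remember("kv-bob", "cursor-agent", "uses class components")

# Same agent ID string, two distinct stores — no collision.
print(len(stores))  # 2
```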

6. What memories are worth storing?

Not everything should be stored as a persistent memory. The rule of thumb: store anything you would have to repeat across sessions, not things that belong in a conversation.

Store these:

- Stack and version choices (e.g. PostgreSQL 16 + pgvector, FastAPI + SQLAlchemy async)
- Architectural decisions and the reasoning behind them
- Coding conventions and patterns (functional React components, the response envelope)
- Deployment and infrastructure quirks you find yourself re-explaining

Do not store these:

- Secrets, API keys, or credentials of any kind
- One-off task details that will be irrelevant next session
- Large code snippets — those belong in the repository, not in memory

Tip: At the end of a productive session, ask your AI tool: "Summarise the key architectural decisions we made today and store them." This is one of the highest-ROI uses of persistent memory — capturing decisions that would otherwise be lost when the context window closes.

7. GDPR considerations

If you are building AI agents for clients — not just personal developer tooling — the memory layer becomes subject to data protection regulations.

Kronvex is designed with this in mind:

- All data lives in a PostgreSQL + pgvector database hosted in the EU
- Memories are isolated per agent, and agent IDs are scoped to your API key, so there is no cross-tenant leakage

For a deeper dive on GDPR compliance for AI agents, read our article: GDPR Compliance for AI Agents: What You Need to Know in 2026.

In short: For personal developer tooling (storing your own coding preferences), there are no GDPR concerns. For production agents processing end-user data, Kronvex's EU hosting and per-agent isolation give you a solid compliance foundation.

Add memory to your AI tools in 15 minutes

Free tier includes 100 memories. No credit card required.

Get your free API key →