Tutorial · Python · LangChain · March 18, 2026 · 9 min read

LangChain Persistent Memory:
Cross-session context that actually works

LangChain's built-in memory is session-scoped. The moment a user closes the tab, their conversation history vanishes. Here's how to attach persistent, semantic memory to any LangChain chain — using Kronvex as a drop-in memory store.

In this article
  1. Why LangChain memory isn't enough
  2. Setup — pip install in 60 seconds
  3. Building a custom BaseMemory class
  4. Wiring it into a ConversationChain
  5. Session scoping — multiple users, no collisions
  6. Production checklist

Why LangChain memory isn't enough

LangChain ships with ConversationBufferMemory, ConversationSummaryMemory, and a handful of others. They all share the same fundamental limitation: they live in RAM. When the process restarts, the memory is gone.
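To make the limitation concrete, here's a toy buffer memory (plain Python, no LangChain required) that mimics what ConversationBufferMemory does under the hood: history lives in an instance attribute, so a fresh instance — the equivalent of a process restart — starts empty.

```python
class ToyBufferMemory:
    """Mimics ConversationBufferMemory: history lives only in this object."""

    def __init__(self):
        self.turns = []  # held in RAM; gone when the process exits

    def save_context(self, user_msg: str, ai_msg: str) -> None:
        self.turns.append((user_msg, ai_msg))

    def load_history(self) -> str:
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)


memory = ToyBufferMemory()
memory.save_context("I prefer concise answers.", "Noted!")
print(len(memory.turns))  # → 1

# "Process restart": a new instance knows nothing about the old one.
memory_after_restart = ToyBufferMemory()
print(len(memory_after_restart.turns))  # → 0
```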

For most chatbots this is fine. But if you're building:

...then you need a memory store that persists across sessions, processes, and deployments. That's what Kronvex provides.

What makes Kronvex different from a vector store? A vector DB like Pinecone stores documents for RAG retrieval. Kronvex stores agent memories — typed (episodic/semantic/procedural), scored by recency and access frequency, with session scoping and TTL decay. It's a memory layer, not a retrieval layer.

Setup — pip install in 60 seconds

Install
pip install "kronvex[langchain]" langchain langchain-openai

Get your API key from the Kronvex dashboard — free plan available, no credit card needed. Create an agent and note its ID.

Quick test
from kronvex import Kronvex

client = Kronvex("kv-your-api-key")
agent  = client.agent("your-agent-id")

# Store a memory
agent.remember("User prefers formal tone", memory_type="semantic")

# Recall — semantic search
result = agent.recall(query="user preferences", top_k=3)
print(result.memories[0].content)
# → "User prefers formal tone"

Building a custom BaseMemory class

LangChain's memory interface requires two methods: load_memory_variables() (called before the LLM) and save_context() (called after). We implement both using Kronvex:

SDK — built-in (no extra file needed)
from kronvex import Kronvex
from langchain_core.memory import BaseMemory  # lives in langchain_core as of LangChain 0.2
from typing import Dict, Any, List
from pydantic import Field, PrivateAttr

class KronvexMemory(BaseMemory):
    """Persistent cross-session memory powered by Kronvex."""

    api_key: str = Field(default="")
    agent_id: str = Field(default="")
    memory_key: str = Field(default="history")
    session_id: str | None = Field(default=None)
    top_k: int = Field(default=5)
    # BaseMemory is a pydantic model, so underscore attributes must be
    # declared as private — otherwise assigning them in __init__ fails.
    _client: Any = PrivateAttr(default=None)
    _agent: Any = PrivateAttr(default=None)

    def __init__(self, api_key: str, agent_id: str, **kwargs):
        super().__init__(api_key=api_key, agent_id=agent_id, **kwargs)
        self._client = Kronvex(api_key)
        self._agent  = self._client.agent(agent_id)

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Called before each LLM invocation — inject relevant context."""
        query = inputs.get("input", "")
        ctx = self._agent.inject_context(
            query=query,
            top_k=self.top_k,
            session_id=self.session_id,
        )
        return {self.memory_key: ctx.context}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
        """Called after each LLM response — store the exchange."""
        user_msg = inputs.get("input", "")
        ai_msg   = outputs.get("output", "")
        kwargs = dict(session_id=self.session_id) if self.session_id else {}
        if user_msg:
            self._agent.remember(
                content=user_msg, memory_type="episodic", **kwargs
            )
        if ai_msg:
            self._agent.remember(
                content=ai_msg, memory_type="episodic", **kwargs
            )

    def clear(self) -> None:
        pass  # Manage memories via Kronvex dashboard

Wiring it into a ConversationChain

main.py
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from kronvex.integrations.langchain import KronvexMemory

memory = KronvexMemory(
    api_key="kv-your-api-key",
    agent_id="your-agent-id",
    top_k=5,
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

# Session 1 — user tells us their preference
chain.predict(input="I prefer concise answers, no bullet points.")

# ... process restarts, new chain instance ...
memory2 = KronvexMemory(api_key=..., agent_id=...)
chain2  = ConversationChain(llm=ChatOpenAI(...), memory=memory2)

# Session 2 — agent still knows the user's preference
response = chain2.predict(input="Summarize our project status.")
# LLM receives: "Context: User prefers concise answers, no bullet points."
# → Response is automatically formatted correctly ✓

Session scoping — multiple users, no collisions

In a multi-user app, pass a session_id to scope memories per user. This prevents User A's context from leaking into User B's responses:

Per-user session scoping
import os

def get_chain_for_user(user_id: str):
    memory = KronvexMemory(
        api_key=os.environ["KRONVEX_API_KEY"],
        agent_id=os.environ["KRONVEX_AGENT_ID"],
        session_id=user_id,  # ← scoped to this user
        top_k=6,
    )
    return ConversationChain(llm=ChatOpenAI(...), memory=memory)

# FastAPI endpoint example
@app.post("/chat/{user_id}")
async def chat(user_id: str, msg: str):
    chain = get_chain_for_user(user_id)
    return {"response": await chain.apredict(input=msg)}

Pro tip: Use memory_type="semantic" for long-term preferences (tone, format, context) and memory_type="episodic" for conversation turns. Kronvex retrieves both but weights them differently in the confidence score.
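To see why scoping matters, here's a toy in-memory store partitioned by session_id — purely illustrative, standing in for the partitioning Kronvex performs server-side:

```python
from collections import defaultdict


class ToySessionStore:
    """Illustrative only: memories partitioned by session_id, so one
    user's entries never surface in another user's recall."""

    def __init__(self):
        self._memories = defaultdict(list)  # session_id -> list of memories

    def remember(self, session_id: str, content: str) -> None:
        self._memories[session_id].append(content)

    def recall(self, session_id: str) -> list[str]:
        # A real recall would rank by semantic similarity; this returns all.
        return list(self._memories[session_id])


store = ToySessionStore()
store.remember("user-a", "Prefers formal tone")
store.remember("user-b", "Prefers emoji-heavy replies")

print(store.recall("user-a"))  # → ['Prefers formal tone']
print(store.recall("user-b"))  # → ['Prefers emoji-heavy replies']
```

Without the session_id partition, both entries would land in one shared pool and User B's recall could surface User A's preferences — exactly the leak the scoping prevents.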

Production checklist

LangChain version note: This tutorial uses LangChain ≥ 0.2 with the new langchain-openai package. If you're on an older version, ChatOpenAI is imported from langchain.chat_models instead.
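If you need one codebase to span both eras, a version-tolerant import like the sketch below works; the final fallback keeps the snippet importable even where LangChain is absent entirely:

```python
# Try the LangChain >= 0.2 location first, then the pre-0.2 location.
try:
    from langchain_openai import ChatOpenAI            # LangChain >= 0.2
except ImportError:
    try:
        from langchain.chat_models import ChatOpenAI   # LangChain < 0.2
    except ImportError:
        ChatOpenAI = None  # LangChain not installed in this environment

print(ChatOpenAI is None or callable(ChatOpenAI))  # → True
```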