CrewAI Persistent Memory:
Agents that remember across runs
CrewAI crews forget everything when a run finishes. The next time they start, every agent is back to zero context. Here's how to give your crews persistent, semantic memory — using Kronvex as a drop-in memory backend.
Why CrewAI's built-in memory isn't enough
CrewAI ships with a built-in Memory system using ShortTermMemory, LongTermMemory, and EntityMemory. These are helpful during a single run — but they all reset when the process ends.
For production AI agents this is a fundamental problem: a sales agent that can't remember last week's call, a support agent that asks the same user the same questions, a research agent that re-reads documents it has already analyzed.
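The difference is easy to demonstrate without any framework at all. In this minimal sketch (plain Python, no CrewAI or Kronvex), an in-process dict vanishes when the run ends, while anything written to durable storage survives into the next "run":

```python
import json
import tempfile
from pathlib import Path

# In-process "memory": gone as soon as the process (or crew run) ends.
run1_memory = {"acme": "evaluating Enterprise plan"}
del run1_memory  # simulates the process exiting

# Durable memory: written to disk in run 1, readable in run 2.
store = Path(tempfile.gettempdir()) / "crew_memory.json"
store.write_text(json.dumps({"acme": "evaluating Enterprise plan"}))

# ...later, in a fresh process...
recovered = json.loads(store.read_text())
print(recovered["acme"])  # → "evaluating Enterprise plan"
```

A JSON file already beats forgetting everything; a service like Kronvex adds semantic search, memory types, and multi-agent scoping on top of the same idea.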
Kronvex solves this with a persistent memory store that agents can read and write via simple tool calls — surviving process restarts, deployments, and crew configuration changes.
Setup in 60 seconds
```shell
pip install "kronvex[crewai]" crewai crewai-tools
```
Get your API key from the Kronvex dashboard. Create one agent — it'll hold all memories for your crew by default (or one agent per crew member for isolation).
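The tool code in this guide reads the key and agent ID from the environment. A typical shell setup looks like this (placeholder values; the variable names match the `os.environ` lookups used later):

```shell
# Placeholder values: replace with your own from the Kronvex dashboard.
export KRONVEX_API_KEY="kv-your-api-key"
export KRONVEX_AGENT_ID="your-agent-id"
```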
```python
from kronvex import Kronvex

client = Kronvex("kv-your-api-key")
mem = client.agent("your-agent-id")

mem.remember("Acme Corp is evaluating our Enterprise plan", memory_type="episodic")

result = mem.recall(query="Acme deal status")
print(result.memories[0].content)
# → "Acme Corp is evaluating our Enterprise plan"
```
Building Recall and Remember tools
In CrewAI, agents interact with the world through tools. We create three: one for recalling memories, one for storing them, and one for injecting a ready-made context block. The @tool decorator is all you need.
```python
import os

from crewai.tools import tool
from kronvex import Kronvex

_client = Kronvex(os.environ["KRONVEX_API_KEY"])
_mem = _client.agent(os.environ["KRONVEX_AGENT_ID"])


@tool("Recall from long-term memory")
def recall_memory(query: str) -> str:
    """Search long-term memory for information relevant to the query.
    Use this before starting any task to check what's already known."""
    result = _mem.recall(query=query, top_k=6)
    if not result.memories:
        return "No relevant memories found."
    return "\n---\n".join(
        f"[{m.memory_type}] {m.content}" for m in result.memories
    )


@tool("Store to long-term memory")
def store_memory(content: str, memory_type: str = "episodic") -> str:
    """Store important information to long-term memory.
    memory_type: 'episodic' (events), 'semantic' (facts), 'procedural' (how-to)"""
    _mem.remember(content=content, memory_type=memory_type)
    return f"Stored to {memory_type} memory: {content[:80]}..."


@tool("Inject context from memory")
def get_context(topic: str) -> str:
    """Get a formatted context block from memory for a given topic.
    Returns a ready-to-use context string for the current task."""
    ctx = _mem.inject_context(query=topic, top_k=5)
    return ctx.context or "No context available for this topic."
```
Wiring memory into a full crew
```python
from crewai import Agent, Task, Crew, Process

from memory_tools import recall_memory, store_memory, get_context

# Sales Research Agent: always checks memory before researching
researcher = Agent(
    role="Sales Researcher",
    goal="Research prospects and store findings for future reference",
    backstory="Expert at gathering intel on companies and storing it for the team.",
    tools=[recall_memory, store_memory],
    verbose=True,
)

# Sales Writer: retrieves context before drafting
writer = Agent(
    role="Sales Copywriter",
    goal="Write personalized outreach using accumulated memory about the prospect",
    backstory="Crafts hyper-personalized messages based on everything we know.",
    tools=[recall_memory, get_context],
    verbose=True,
)

research_task = Task(
    description="""Research {company}: check memory first for existing intel,
    then find new information and store key findings.""",
    expected_output="Research summary with all findings stored to memory",
    agent=researcher,
)

write_task = Task(
    description="""Write a personalized outreach email for {contact} at {company}.
    Use get_context to load everything we know about them first.""",
    expected_output="Personalized outreach email",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
)

# Run 1: stores findings to Kronvex
crew.kickoff(inputs={"company": "Acme Corp", "contact": "John Smith"})

# Run 2, next week: automatically recalls everything stored in Run 1
crew.kickoff(inputs={"company": "Acme Corp", "contact": "Sarah Jones"})
```
Shared memory across multiple agents
For large crews where different specialists should share a common knowledge base, use the same Kronvex agent ID across all tools. For isolated per-agent memory, create one Kronvex agent per crew role:
```python
import os

from crewai.tools import tool
from kronvex import Kronvex

client = Kronvex(os.environ["KRONVEX_API_KEY"])

# Each crew role has its own memory space
researcher_mem = client.agent(os.environ["RESEARCHER_AGENT_ID"])
writer_mem = client.agent(os.environ["WRITER_AGENT_ID"])

# Shared team memory pool
team_mem = client.agent(os.environ["TEAM_AGENT_ID"])


# Tools scoped to specific memory spaces
@tool("Store to team knowledge base")
def store_team_knowledge(content: str) -> str:
    """Store verified facts to the shared team knowledge base."""
    team_mem.remember(content=content, memory_type="semantic")
    return "Stored to team knowledge base."
```
Choose the memory type deliberately: episodic for events ("Met with Acme CEO on March 5"), semantic for stable facts ("Acme has 500 employees, uses Salesforce"), and procedural for recurring patterns ("Acme always needs 3 sign-offs for >$50k deals").
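When it's ambiguous which type a finding belongs to, a small heuristic router keeps the crew's writes consistent. A sketch (the keyword rules and the `choose_memory_type` helper are illustrative, not part of Kronvex; real crews might let the LLM pick the type instead):

```python
def choose_memory_type(content: str) -> str:
    """Heuristically route a finding to a Kronvex memory type.

    Illustrative only: defaults to 'semantic' when no rule matches.
    """
    text = content.lower()
    # Patterns and processes the team should repeat → procedural
    if any(kw in text for kw in ("always", "process", "sign-off", "workflow")):
        return "procedural"
    # Dated events and interactions → episodic
    if any(kw in text for kw in ("met", "call", "meeting", "demo")):
        return "episodic"
    # Everything else is treated as a stable fact → semantic
    return "semantic"

print(choose_memory_type("Met with Acme CEO on March 5"))             # → episodic
print(choose_memory_type("Acme has 500 employees, uses Salesforce"))  # → semantic
print(choose_memory_type("Acme always needs 3 sign-offs for >$50k"))  # → procedural
```

The helper can be called inside `store_memory` before writing, so agents don't have to reason about the taxonomy on every call.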
Production checklist
- Store API key + agent IDs in environment variables — never in code
- Give agents clear instructions: "Always call recall_memory before starting any research task"
- Use `session_id=run_id` to scope memories per crew run if needed
- Add `ttl_days=90` for ephemeral intel that should expire
- Monitor memory growth in the Kronvex Memories page
- EU-hosted — all data stays in Europe (Frankfurt), GDPR compliant by default
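To make the session-scoping and TTL items concrete, here's a hedged sketch: `session_id` and `ttl_days` are the parameters named in the checklist, while the `run_id` format and the commented-out `mem` handle are assumptions carried over from the setup section:

```python
import uuid

# One id per crew run, so memories can be filtered or purged per run.
run_id = f"run-{uuid.uuid4().hex[:8]}"

memory_kwargs = {
    "content": "Acme asked for our SOC 2 report during the demo",
    "memory_type": "episodic",
    "session_id": run_id,  # scope this memory to the current crew run
    "ttl_days": 90,        # ephemeral intel: expires after 90 days
}

# Assuming `mem` is a Kronvex agent handle as in the setup section:
# mem.remember(**memory_kwargs)
print(memory_kwargs["session_id"])
```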