GUIDE March 20, 2026 · 7 min read

Multi-Agent Memory Sharing: Patterns for Sharing Context in a Crew

When multiple agents collaborate on a task — researcher, writer, reviewer, orchestrator — they need to share context. But shared memory without isolation creates noise and conflicts. Here are the patterns that work.

The multi-agent memory problem

Single-agent memory is straightforward: one agent, one memory namespace, one recall context. Multi-agent systems break this assumption. Consider a sales crew with three specialized agents:

  1. A prospecting agent that researches accounts and surfaces buying signals
  2. An outreach agent that drafts and sequences emails
  3. A CRM agent that logs calls and keeps records up to date

These agents need to share customer context — what one learns should inform the others. But the prospecting agent doesn't need the CRM agent's call notes in its context, and vice versa. Naive memory sharing floods every agent's context with irrelevant information from sibling agents.

The core question: when should memory be isolated to a single agent, and when should it be shared across the crew?

Pattern 1: Shared knowledge base + isolated working memory

The cleanest architecture uses two memory tiers:

import kronvex

kv = kronvex.Kronvex("kv-your-api-key")

# Shared knowledge base — all agents can read
kb = kv.agent("sales-crew-knowledge-base")

# Role-specific agents
prospecting = kv.agent("sales-prospecting-agent")
outreach = kv.agent("sales-outreach-agent")
crm = kv.agent("sales-crm-agent")

# Prospecting agent discovers something important
kb.remember(
    "ACME Corp expanded to 3 new markets in Q1 2026 — hiring aggressively in Paris and Berlin",
    memory_type="semantic",
    metadata={"account": "acme-corp", "source": "prospecting"}
)

# Outreach agent prepares email — pulls from shared KB
ctx = kb.inject_context(
    query="ACME Corp expansion context for outreach",
    metadata_filter={"account": "acme-corp"},
    top_k=5
)

# Also pulls from its own outreach-specific memory
outreach_ctx = outreach.inject_context(
    query="email style preferences for ACME",
    metadata_filter={"account": "acme-corp"},
    top_k=3
)

When to write to shared vs. private: a fact belongs in the shared tier once it has been verified and is useful to more than one agent. Working hypotheses, intermediate results, and role-specific heuristics stay in the agent's private memory; promote them to the shared knowledge base only after validation.
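That promotion bar can be made explicit in application code. A minimal sketch, assuming each working-memory item carries hypothetical "verified" and "relevant_roles" fields in its metadata:

```python
def should_promote(memory: dict) -> bool:
    """Promote to the shared KB only when a fact is verified
    and relevant to more than one agent role."""
    verified = bool(memory.get("verified"))
    multi_agent = len(memory.get("relevant_roles", [])) > 1
    return verified and multi_agent

# A working hypothesis stays private; a validated, multi-role fact is promoted.
draft = {"verified": False, "relevant_roles": ["prospecting"]}
fact = {"verified": True, "relevant_roles": ["prospecting", "outreach"]}
```

An agent (or the orchestrator) runs this gate before copying a private memory into the shared knowledge base.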

Pattern 2: Memory tagging with visibility scope

If you prefer a single memory namespace per customer entity, use metadata tags to control visibility at recall time:

# Store with visibility metadata
kb.remember(
    "Prospect is evaluating three vendors including us",
    memory_type="semantic",
    metadata={
        "account": "acme-corp",
        "visibility": "all",        # all agents can read
        "source_agent": "prospecting"
    }
)

kb.remember(
    "Email open rate: 2/5. Best day: Tuesday mornings",
    memory_type="procedural",
    metadata={
        "account": "acme-corp",
        "visibility": "outreach",   # outreach-only
        "source_agent": "outreach"
    }
)

# Outreach agent recalls only what it can see
outreach_memories = kb.recall(
    query="email engagement patterns for ACME",
    metadata_filter={
        "account": "acme-corp",
        "visibility": "outreach"
    }
)

# Orchestrator recalls everything
all_memories = kb.recall(
    query="full account context for ACME",
    metadata_filter={"account": "acme-corp"},
    top_k=15
)
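One subtlety with exact-match metadata filters: filtering on visibility "outreach" alone would miss memories tagged "all". A client-side sketch (field names taken from the examples above) that keeps both scopes:

```python
def visible_to(memories: list[dict], role: str) -> list[dict]:
    """Keep memories whose visibility is 'all' or matches the agent's role."""
    return [m for m in memories if m.get("visibility") in ("all", role)]

mems = [
    {"text": "evaluating three vendors", "visibility": "all"},
    {"text": "open rate 2/5", "visibility": "outreach"},
    {"text": "pipeline stage notes", "visibility": "crm"},
]
```

Alternatively, issue one recall per scope ("all" and the agent's role) and merge the results if you prefer to keep the filtering server-side.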

Pattern 3: Handoff memory

When an orchestrator delegates a task to a sub-agent, the handoff should include a memory snapshot. Rather than letting the sub-agent recall cold, the orchestrator prepares a focused context block:

async def delegate_with_context(
    orchestrator_agent_id: str,
    sub_agent_id: str,
    task: str,
    customer_id: str
) -> str:
    kv = kronvex.AsyncKronvex("kv-your-api-key")

    async with kv:
        # Orchestrator retrieves broad context
        orch = kv.agent(orchestrator_agent_id)
        ctx = await orch.inject_context(
            query=task,
            metadata_filter={"customer_id": customer_id},
            top_k=8
        )

        # Write a handoff summary to the sub-agent's namespace
        sub = kv.agent(sub_agent_id)
        await sub.remember(
            f"Task handoff context:\n{ctx.context}",
            memory_type="episodic",
            metadata={
                "customer_id": customer_id,
                "handoff_from": orchestrator_agent_id,
                "task": task
            },
            ttl=3600  # Expire after 1 hour
        )

        return ctx.context
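Handoff snapshots should stay small so they don't dominate the sub-agent's context. A rough sketch that truncates at a paragraph boundary before the handoff is written (the character budget is an assumption, not a Kronvex parameter):

```python
def trim_handoff(context: str, max_chars: int = 4000) -> str:
    """Truncate a handoff block at the last newline within the budget."""
    if len(context) <= max_chars:
        return context
    cut = context.rfind("\n", 0, max_chars)
    return context[:cut] if cut > 0 else context[:max_chars]
```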

Pattern 4: Cross-agent memory queries (advanced)

Kronvex agents are independent namespaces. For true cross-agent search, you can query multiple agents and merge the results at the application level:

import asyncio

async def multi_agent_recall(
    agent_ids: list[str],
    query: str,
    top_k_per_agent: int = 3
) -> list[tuple]:
    """Recall from multiple agents and merge by confidence."""
    kv = kronvex.AsyncKronvex("kv-your-api-key")
    all_results = []

    async with kv:

        async def recall_one(agent_id):
            agent = kv.agent(agent_id)
            result = await agent.recall(query=query, top_k=top_k_per_agent)
            return [(m, agent_id) for m in result.results]

        tasks = [recall_one(aid) for aid in agent_ids]
        results = await asyncio.gather(*tasks)

        for agent_results in results:
            all_results.extend(agent_results)

    # Sort by confidence across all agents
    all_results.sort(key=lambda x: x[0].confidence, reverse=True)
    return all_results[:top_k_per_agent * 2]  # Return top merged results
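One caveat: confidence scores from independent namespaces are not guaranteed to be calibrated against one another. If that bites, a rank-based merge (round-robin across each agent's ranked list) avoids comparing raw scores; a sketch:

```python
from itertools import zip_longest

def interleave_by_rank(per_agent: list[list], limit: int) -> list:
    """Round-robin across agents' ranked result lists, best-ranked first."""
    merged = []
    for tier in zip_longest(*per_agent):
        merged.extend(item for item in tier if item is not None)
    return merged[:limit]
```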

Designing for scale: the two decisions

Before building your multi-agent memory architecture, answer two questions:

  1. What's the primary isolation unit? Customer (all agents share one namespace per customer)? Role (each agent type has its own namespace)? Both (tiered with shared + private)? The right answer depends on how often agents need each other's memories.
  2. Who writes to shared memory? Any agent, or only a designated "memory curator" (orchestrator or dedicated knowledge agent)? Open writes cause conflicts; curated writes stay clean but add latency.
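Decision 2's curated-writes option can be enforced with a thin gate in application code before anything reaches the shared knowledge base. A sketch (the role names are assumptions):

```python
CURATOR_ROLES = {"orchestrator", "knowledge-agent"}

def gated_write(role: str, fact: str, shared_store: list) -> bool:
    """Append to the shared store only when the caller is a curator role."""
    if role not in CURATOR_ROLES:
        return False
    shared_store.append(fact)
    return True
```

In a real crew the `shared_store` append would be a `remember` call on the shared knowledge base agent; the gate itself is the same either way.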

For most B2B use cases, the two-tier approach — shared knowledge base agent + private role agents — is the right starting point. It's simple, debuggable, and scales naturally as the crew grows.
