Tutorial · Dify · March 22, 2026 · 8 min read

Dify AI Memory:
Adding Persistent State to Dify Workflows

Dify is one of the most popular no-code/low-code AI workflow builders. But its built-in memory resets every session. Here's how to wire Kronvex into your Dify workflows to add true cross-session, per-user persistent memory.

In this article
  1. Dify's memory limitations
  2. Using Kronvex with the Dify HTTP node
  3. Step-by-step: remember endpoint integration
  4. Step-by-step: recall endpoint integration
  5. Example workflow: customer support bot
  6. Production tips

Dify's memory limitations

Dify comes with a "Conversation Memory" feature. It stores the message history of a given conversation thread. When you start a new conversation, that history is gone.

This is sufficient for simple chatbots where each session is independent. But most B2B use cases require more: remembering a returning user across conversations, persisting facts extracted from past sessions (preferences, account details), and carrying task state from one thread to the next.

Dify also doesn't support semantic search across memory. Even if you store data, you can only retrieve it with exact variable lookups — not "find memories most relevant to this query."

The gap: Dify is excellent for orchestrating LLM workflows. It's not designed to be a memory store. Kronvex fills that gap — it's a dedicated memory layer that any HTTP-capable workflow can call.

Using Kronvex with the Dify HTTP node

Dify's HTTP Request node lets you call any REST API from within a workflow. This is the integration point. You'll use two Kronvex endpoints: remember, which stores a memory, and recall, which retrieves the memories most relevant to a query.

Your Kronvex API key goes in the X-API-Key header. You can store it as a Dify environment variable (Settings → Environment Variables) and reference it as {{env.KRONVEX_API_KEY}}.

Prerequisites: You need a Kronvex account (free tier available), an agent ID from your dashboard, and a Dify instance (cloud or self-hosted). Dify ≥ 0.6 supports HTTP nodes in workflows.
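Before wiring anything into Dify, it can help to sanity-check the endpoint from a short script. A minimal sketch, assuming Node 18+ (built-in fetch); the header and payload shapes follow this article, and the commented request is illustrative only:

```javascript
// Build the headers Kronvex expects — the same ones the Dify HTTP node
// sends via {{env.KRONVEX_API_KEY}}.
function kronvexHeaders(apiKey) {
  return {
    'X-API-Key': apiKey,
    'Content-Type': 'application/json',
  };
}

// Example (not executed here): POST a test memory from outside Dify.
// await fetch('https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember', {
//   method: 'POST',
//   headers: kronvexHeaders(process.env.KRONVEX_API_KEY),
//   body: JSON.stringify({ content: 'hello', memory_type: 'episodic' }),
// });
```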

Step-by-step: remember endpoint integration

Add an HTTP Request node at the end of your workflow (after the LLM response is generated). This stores the conversation turn as a memory.

Step 1 — Configure the HTTP node

Set the method to POST and the URL to the remember endpoint:

HTTP Node — URL
https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember
Headers
{
  "X-API-Key": "{{env.KRONVEX_API_KEY}}",
  "Content-Type": "application/json"
}
Request body (JSON)
{
  "content": "{{user_message}}",
  "memory_type": "episodic",
  "session_id": "{{conversation.id}}"
}

Use {{conversation.id}} as the session_id to scope memories per conversation thread. For cross-conversation user memory, use a stable user identifier like {{sys.user_id}}.
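That scoping decision is easy to get wrong in a growing workflow, so it can be captured in one place. A hypothetical helper (the function and scope names are illustrative, not part of Dify or Kronvex):

```javascript
// Pick the session_id for a remember/recall call.
// 'conversation' keeps memories thread-local ({{conversation.id}});
// 'user' makes them follow the user across threads ({{sys.user_id}}).
function sessionIdFor(scope, { conversationId, userId }) {
  if (scope === 'conversation') return conversationId;
  if (scope === 'user') return userId;
  throw new Error(`unknown scope: ${scope}`);
}
```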

Memory types in Dify workflows: Use "episodic" for conversation turns (what was said). Use "semantic" for facts extracted from the conversation (user preferences, account details). Use "procedural" for completed steps (onboarding progress, task completion).
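Validating the type before the HTTP call avoids silently storing a typo. A sketch assuming the allowed set is exactly the three types named above:

```javascript
// The three memory_type values used in this article (assumed allowed set).
const MEMORY_TYPES = new Set(['episodic', 'semantic', 'procedural']);

// Build the remember request body, rejecting unknown types up front.
function rememberPayload(content, memoryType, sessionId) {
  if (!MEMORY_TYPES.has(memoryType)) {
    throw new Error(`invalid memory_type: ${memoryType}`);
  }
  return { content, memory_type: memoryType, session_id: sessionId };
}
```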

Step-by-step: recall endpoint integration

Add an HTTP Request node at the beginning of your workflow, before the LLM node. Configure it as a POST to the recall endpoint, with the same headers as the remember node. This fetches the most relevant memories to inject as context.

HTTP Node — Recall URL
https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/recall
Request body (JSON)
{
  "query": "{{sys.query}}",
  "top_k": 5,
  "session_id": "{{sys.user_id}}"
}
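The recall body can be built defensively too. A sketch that clamps top_k so a bad workflow variable can't request an enormous result set (the cap of 20 is an assumption, not a documented Kronvex limit):

```javascript
// Build the recall request body; clamp top_k to a sane range.
function recallPayload(query, sessionId, topK = 5) {
  const k = Math.max(1, Math.min(20, Math.trunc(topK)));
  return { query, top_k: k, session_id: sessionId };
}
```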

The response will contain a memories array. Extract the content with a Code node (JavaScript or Python) to build a context string:

Code node — build context string (JavaScript)
function main({ recall_response }) {
  // recall_response: output of the Recall HTTP node
  const memories = recall_response.memories || [];
  const context = memories
    .map(m => m.content)
    .join('\n');

  return {
    memory_context: context || 'No previous context available.'
  };
}
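In production, long histories can return near-duplicate memories and blow out the prompt. A hedged variant of the Code-node logic that deduplicates and caps the context at a character budget (the budget and function name are illustrative):

```javascript
// Join memory contents, skipping duplicates and stopping at a character cap
// so the assembled context stays within a predictable prompt budget.
function buildContext(memories, maxChars = 2000) {
  const seen = new Set();
  const lines = [];
  let total = 0;
  for (const m of memories || []) {
    const text = (m.content || '').trim();
    if (!text || seen.has(text)) continue;
    if (total + text.length > maxChars) break;
    seen.add(text);
    lines.push(text);
    total += text.length;
  }
  return lines.join('\n') || 'No previous context available.';
}
```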

Then pass {{memory_context}} into your LLM node's system prompt:

LLM System Prompt template
You are a helpful support assistant. Use the following context about
this user to personalize your response:

---
{{memory_context}}
---

Answer the user's question based on this context and your knowledge.
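If you prefer to assemble the prompt in a Code node instead of relying on template substitution, the same template can be rendered in a few lines (the function name is hypothetical; the wording matches the template above):

```javascript
// Render the system prompt the way Dify substitutes {{memory_context}}.
function renderSystemPrompt(memoryContext) {
  return [
    'You are a helpful support assistant. Use the following context about',
    'this user to personalize your response:',
    '',
    '---',
    memoryContext,
    '---',
    '',
    "Answer the user's question based on this context and your knowledge.",
  ].join('\n');
}
```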

Example workflow: customer support bot with memory

Here is the complete node sequence for a production-grade customer support workflow:

1. Start node

Receives user_message, user_id, and conversation_id as inputs.

2. HTTP node — Kronvex Recall

Calls /recall with query={{user_message}}, session_id={{user_id}}, top_k=5. Returns relevant memory objects.

3. Code node — Format context

Joins memory content strings into a single memory_context variable.

4. LLM node — Generate response

System prompt includes {{memory_context}}. User message is {{user_message}}. Output: ai_response.

5. HTTP node (parallel) — Kronvex Remember ×2

Two parallel HTTP nodes: one stores user_message, one stores ai_response. Both as memory_type="episodic".

6. Answer node

Returns ai_response to the user.
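The six-node sequence above can be sketched as plain functions with the HTTP and LLM nodes stubbed out. Everything here is illustrative scaffolding — `http` stands in for the two Dify HTTP Request nodes and `llm` for the LLM node:

```javascript
// End-to-end sketch: recall → format context → generate → remember ×2 → answer.
async function handleTurn({ userMessage, userId }, http, llm) {
  // 2. Recall relevant memories for this user
  const { memories } = await http('recall', {
    query: userMessage, top_k: 5, session_id: userId,
  });
  // 3. Format the context string
  const context = (memories || []).map(m => m.content).join('\n')
    || 'No previous context available.';
  // 4. Generate the reply with the context in the system prompt
  const aiResponse = await llm(context, userMessage);
  // 5. Store both sides of the turn as episodic memories, in parallel
  await Promise.all([
    http('remember', { content: userMessage, memory_type: 'episodic', session_id: userId }),
    http('remember', { content: aiResponse, memory_type: 'episodic', session_id: userId }),
  ]);
  // 6. Answer
  return aiResponse;
}
```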

Production tips: session IDs, TTL, memory types
