Flowise Memory:
Persistent state for your chatflows
Flowise's built-in buffer memory is session-only — the moment you restart a chatflow, every user preference, every conversation turn, every decision is gone. Here's how to wire Kronvex into Flowise using the HTTP Request node so your chatflows remember users across sessions, restarts, and deployments.
Flowise memory limitations
Flowise ships with Buffer Memory, Conversation Summary Memory, and Zep Memory as built-in options. They all share a critical constraint: they are either in-memory (lost on restart) or tightly coupled to a specific external service with its own schema.
The core issues you will hit in production:
- No persistence across restarts: Buffer Memory lives in the Flowise process. Redeploy your stack and the state vanishes.
- No semantic retrieval: Buffer Memory returns the last N messages, not the most relevant ones. For long conversations, this is noise, not signal.
- No cross-chatflow sharing: A user who switches from your support chatflow to your sales chatflow starts from zero.
- No TTL or decay: Old, irrelevant memories from months ago carry the same weight as last week's conversation.
Using the HTTP Request node in Flowise
Flowise's HTTP Request node lets you call any REST API from within a chatflow. It supports custom headers, JSON bodies, and can pass output to downstream nodes. This is the bridge between Flowise and Kronvex.
You will need two things before wiring:
- A Kronvex API key — get one free at kronvex.io/dashboard
- An agent ID — created in the dashboard, represents one "persona" with its own memory store
Never paste your API key (kv-your-key) directly into the node body. Use Flowise's credential manager to inject it via {{$credentials.kronvex_key}}.
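Before wiring anything into Flowise, it can help to smoke-test the remember endpoint from a script. The sketch below builds the request pieces using the endpoint path from this guide; the helper name `build_remember_request` is ours, and the key is read from an environment variable rather than hardcoded, mirroring the credential-manager advice above.

```python
import json
import os

KRONVEX_BASE = "https://api.kronvex.io/api/v1"

def build_remember_request(agent_id: str, content: str, session_id: str) -> tuple:
    """Assemble the URL, headers, and JSON body for a remember call.

    The API key comes from the environment so it never lands in
    source control or a pasted node body.
    """
    url = f"{KRONVEX_BASE}/agents/{agent_id}/remember"
    headers = {
        "X-API-Key": os.environ.get("KRONVEX_API_KEY", ""),  # never inline the key
        "Content-Type": "application/json",
    }
    body = {
        "content": content,
        "memory_type": "episodic",
        "session_id": session_id,
    }
    return url, headers, json.dumps(body)
```

Send it with `requests.post(url, headers=headers, data=body)` once you have a real key in `KRONVEX_API_KEY`.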
Step-by-step: wire the remember endpoint
The remember call should fire after your LLM produces a response, so you can store both what the user said and how the agent replied.
Configure the HTTP Request node with these settings:
Method: POST
URL: https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember
Headers:
  X-API-Key: {{$credentials.kronvex_key}}
  Content-Type: application/json
Body (JSON):
{
  "content": "{{$input.message}}",
  "memory_type": "episodic",
  "session_id": "{{$session.id}}"
}
Set memory_type to "semantic" when storing long-term facts about the user (preferences, company name, role). Use "episodic" for conversation turns. Both are retrieved by the recall endpoint but weighted differently in the confidence score.
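If you route the choice of memory_type through a Code or custom-function step, a simple heuristic can decide which turns carry durable facts. This is an illustrative sketch, not part of the Kronvex API: the function name and trigger phrases are ours, and you would tune them for your domain.

```python
def pick_memory_type(content: str) -> str:
    """Route durable user facts to semantic memory, everything else
    to episodic. The marker phrases below are illustrative only."""
    fact_markers = ("my name is", "i work at", "i prefer", "my role is")
    lowered = content.lower()
    if any(marker in lowered for marker in fact_markers):
        return "semantic"
    return "episodic"
```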
Step-by-step: wire recall and inject-context
The recall step should fire before the LLM node — you fetch relevant memories and inject them into the system prompt. Use the inject-context endpoint which returns a pre-formatted context string ready to prepend.
Method: POST
URL: https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/inject-context
Headers:
  X-API-Key: {{$credentials.kronvex_key}}
  Content-Type: application/json
Body (JSON):
{
  "query": "{{$input.message}}",
  "top_k": 5,
  "session_id": "{{$session.id}}"
}
Response fields (use in the Prompt Template node):
  response.context  → pre-formatted memory block
  response.memories → array of individual memory objects
Append the context field to your system prompt: You are a helpful assistant. Here is what you know about this user:\n\n{{context}}\n\nAnswer their question below. The inject-context endpoint formats the memories as clean bullet points, ready to embed.
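The same prompt assembly, written out as a small helper for testing outside Flowise. The template text matches the one above; the function name is ours.

```python
def build_system_prompt(context: str) -> str:
    """Prepend the inject-context memory block to the system prompt,
    using the template from this guide."""
    return (
        "You are a helpful assistant. "
        "Here is what you know about this user:\n\n"
        f"{context}\n\n"
        "Answer their question below."
    )
```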
Complete chatflow JSON snippet
Export-ready Flowise chatflow JSON. Import this via Flowise's "Load Chatflow" button, then replace YOUR_AGENT_ID and add your API key to the credential store.
{
"nodes": [
{
"id": "recall-node",
"type": "httpRequest",
"data": {
"method": "POST",
"url": "https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/inject-context",
"headers": {
"X-API-Key": "{{$credentials.kronvex_key}}",
"Content-Type": "application/json"
},
"body": "{\"query\": \"{{$input.message}}\", \"top_k\": 5, \"session_id\": \"{{$session.id}}\"}"
}
},
{
"id": "prompt-node",
"type": "promptTemplate",
"data": {
"systemMessage": "You are a helpful AI assistant.\n\nRelevant memory about this user:\n{{recall-node.context}}\n\nRespond helpfully based on this context."
}
},
{
"id": "remember-node",
"type": "httpRequest",
"data": {
"method": "POST",
"url": "https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember",
"headers": {
"X-API-Key": "{{$credentials.kronvex_key}}",
"Content-Type": "application/json"
},
"body": "{\"content\": \"{{$input.message}}\", \"memory_type\": \"episodic\", \"session_id\": \"{{$session.id}}\"}"
}
}
],
"edges": [
{ "source": "input", "target": "recall-node" },
{ "source": "recall-node", "target": "prompt-node" },
{ "source": "prompt-node", "target": "llm-node" },
{ "source": "llm-node", "target": "remember-node" },
{ "source": "llm-node", "target": "output" }
]
}
Production tips
Session IDs per user
Use a stable identifier as the session_id — your user's email, database ID, or hashed cookie. This scopes all memories to that user and prevents cross-user contamination. In Flowise, you can inject this from the chat widget's custom metadata field.
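One way to turn any raw identifier into a stable, non-reversible session_id is to hash it after normalizing. A minimal sketch (the helper name and 16-character truncation are our choices, not a Kronvex requirement):

```python
import hashlib

def stable_session_id(user_key: str) -> str:
    """Derive a stable, non-reversible session_id from any user
    identifier: email, database ID, or cookie value. Normalizing
    first means 'Ada@x.com' and 'ada@x.com ' map to the same ID."""
    normalized = user_key.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```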
Memory types strategy
- episodic — conversation turns, what was discussed, decisions made
- semantic — facts about the user: their role, company, preferences, recurring topics
- procedural — workflows the user has been through, steps completed
Store both types: episodic after each turn, semantic when your LLM extracts a key fact. You can add a second remember node after your LLM that fires a tool call to extract and store semantic facts.
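The two-node pattern above amounts to producing several remember payloads per turn: one episodic record of the exchange, plus one semantic record per extracted fact. A sketch of the payload shapes, assuming your LLM extraction step hands you a list of fact strings (the helper name is ours; the field names come from the remember body in this guide):

```python
def turn_payloads(user_msg: str, agent_reply: str,
                  extracted_facts: list, session_id: str) -> list:
    """One episodic payload for the turn itself, plus one semantic
    payload per fact the LLM extracted from it."""
    payloads = [{
        "content": f"User: {user_msg}\nAgent: {agent_reply}",
        "memory_type": "episodic",
        "session_id": session_id,
    }]
    for fact in extracted_facts:
        payloads.append({
            "content": fact,
            "memory_type": "semantic",
            "session_id": session_id,
        })
    return payloads
```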
TTL and cleanup
Pass "ttl_days": 30 in the remember body to auto-expire episodic memories. Keep semantic memories without TTL — they represent long-term user identity. Monitor memory growth per agent in the Kronvex dashboard.
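The TTL policy above can be centralized in one place when you build remember bodies programmatically: episodic memories get a 30-day expiry, semantic memories none. A sketch under those assumptions (the helper name is ours; ttl_days and the other field names come from this guide):

```python
def remember_body(content: str, memory_type: str, session_id: str) -> dict:
    """Attach a 30-day TTL to episodic memories only; semantic
    memories persist as long-term user identity."""
    body = {
        "content": content,
        "memory_type": memory_type,
        "session_id": session_id,
    }
    if memory_type == "episodic":
        body["ttl_days"] = 30  # auto-expire conversation turns
    return body
```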