Tutorial · No-Code · Flowise · March 22, 2026 · 8 min read

Flowise Memory:
Persistent state for your chatflows

Flowise's built-in buffer memory is session-only — the moment you restart a chatflow, every user preference, every conversation turn, every decision is gone. Here's how to wire Kronvex into Flowise using the HTTP Request node so your chatflows remember users across sessions, restarts, and deployments.

In this article
  1. Flowise memory limitations
  2. Using the HTTP Request node
  3. Wiring the remember endpoint
  4. Wiring recall and inject-context
  5. Complete chatflow JSON
  6. Production tips

Flowise memory limitations

Flowise ships with Buffer Memory, Conversation Summary Memory, and Zep Memory as built-in options. They all share a critical constraint: they are either in-memory (lost on restart) or tightly coupled to a specific external service with its own schema.

The core issues you will hit in production:

- State is lost on restart: Buffer Memory lives in process memory, so redeploying or restarting the chatflow wipes every user preference, conversation turn, and decision.
- No cross-session recall: memory is keyed to a single session, so a returning user starts from zero even when the process never restarted.
- Vendor coupling: external options like Zep tie your chatflow to that service's own schema and retrieval behaviour.

What Kronvex adds: a REST API that stores memories as vector embeddings in PostgreSQL. When your chatflow calls recall, it gets the most semantically relevant memories, scored by similarity, recency, and access frequency. They persist across any number of Flowise restarts and deployments.
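To make the scoring idea concrete, here is a hypothetical sketch of how a recall ranking over those three signals could look. The actual Kronvex weights and decay curves are internal to the service; the numbers and the `scoreMemory` helper below are illustrative assumptions only.

```javascript
// Hypothetical recall ranking over the three signals the article names:
// similarity, recency, and access frequency. Weights are illustrative.
function scoreMemory(memory, now = Date.now()) {
  const ageDays = (now - memory.createdAt) / 86_400_000;
  const recency = Math.exp(-ageDays / 30);                 // decays over ~a month
  const frequency = Math.min(memory.accessCount / 10, 1);  // capped at 1
  return 0.6 * memory.similarity + 0.25 * recency + 0.15 * frequency;
}

// A fresh, highly similar memory outranks a stale, weakly similar one:
const ranked = [
  { id: "a", similarity: 0.9, createdAt: Date.now(), accessCount: 1 },
  { id: "b", similarity: 0.4, createdAt: Date.now(), accessCount: 2 },
].sort((m1, m2) => scoreMemory(m2) - scoreMemory(m1));
console.log(ranked[0].id); // "a"
```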

Using the HTTP Request node in Flowise

Flowise's HTTP Request node lets you call any REST API from within a chatflow. It supports custom headers, JSON bodies, and can pass output to downstream nodes. This is the bridge between Flowise and Kronvex.

You will need two things before wiring:

- A Kronvex API key (the kv-... value from your dashboard).
- Your agent ID, which appears in every endpoint path below as YOUR_AGENT_ID.

Store your API key as a Flowise credential; never hardcode kv-your-key directly in the node body. Use Flowise's credential manager to inject it via {{$credentials.kronvex_key}}.

Step-by-step: wire the remember endpoint

The remember call should fire after your LLM produces a response — you want to store what the user said and what the agent replied. Here is the chatflow structure for the remember step:

┌─────────────────────────────────────────────────────────────┐
│ FLOWISE CHATFLOW — REMEMBER BRANCH                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  [Chat Input]                                               │
│       │                                                     │
│       ▼                                                     │
│  [ChatOpenAI / Claude node] ◄── system prompt               │
│       │                                                     │
│       ├────────────────────────────┐                        │
│       ▼                            ▼                        │
│  [Chat Output]      [HTTP Request — remember]               │
│                     POST https://api.kronvex.io             │
│                       /api/v1/agents/{id}/remember          │
│                     Header: X-API-Key: kv-...               │
│                     Body: { "content": "{{input}}",         │
│                             "memory_type": "episodic",      │
│                             "session_id": "{{sid}}" }       │
└─────────────────────────────────────────────────────────────┘

Configure the HTTP Request node with these settings:

HTTP Request node — remember
// Method
POST

// URL
https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember

// Headers
"X-API-Key": "kv-your-api-key"
"Content-Type": "application/json"

// Body (JSON)
{
  "content": "{{$input.message}}",
  "memory_type": "episodic",
  "session_id": "{{$session.id}}"
}

Set memory_type to "semantic" when storing long-term facts about the user (preferences, company name, role). Use "episodic" for conversation turns. Both are retrieved by the recall endpoint but weighted differently in the confidence score.
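For testing outside Flowise, the same remember call can be issued from a plain Node script. The endpoint, headers, and body fields below mirror the node config above; the `buildRememberRequest` helper name and the example values are mine, not part of the Kronvex API.

```javascript
// Sketch: the remember request built in plain Node, mirroring the
// HTTP Request node config above. Helper name and values are illustrative.
function buildRememberRequest(agentId, apiKey, { content, memoryType = "episodic", sessionId }) {
  return {
    url: `https://api.kronvex.io/api/v1/agents/${agentId}/remember`,
    options: {
      method: "POST",
      headers: { "X-API-Key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ content, memory_type: memoryType, session_id: sessionId }),
    },
  };
}

// Usage (the network call is commented out so the sketch stays side-effect free):
const req = buildRememberRequest("agent-123", "kv-your-api-key", {
  content: "User prefers metric units",
  memoryType: "semantic",   // a long-term fact, per the guidance above
  sessionId: "user-42",
});
// const res = await fetch(req.url, req.options);
```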

Step-by-step: wire recall and inject-context

The recall step should fire before the LLM node: you fetch relevant memories and inject them into the system prompt. Use the inject-context endpoint, which returns a pre-formatted context string ready to prepend.

┌─────────────────────────────────────────────────────────────┐
│ FLOWISE CHATFLOW — RECALL BRANCH                            │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  [Chat Input]                                               │
│       │                                                     │
│       ▼                                                     │
│  [HTTP Request — inject-context]                            │
│    POST /api/v1/agents/{id}/inject-context                  │
│    Body: { "query": "{{$input.message}}",                   │
│            "top_k": 5, "session_id": "{{sid}}" }            │
│       │                                                     │
│       ▼                                                     │
│  [Prompt Template]                                          │
│    System: "{{$http.context}}\n\nUser: {{$input.message}}"  │
│       │                                                     │
│       ▼                                                     │
│  [ChatOpenAI / Claude node]                                 │
│       │                                                     │
│       ▼                                                     │
│  [Chat Output] + [HTTP Request — remember]                  │
└─────────────────────────────────────────────────────────────┘
HTTP Request node — inject-context
// Method
POST

// URL
https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/inject-context

// Headers
"X-API-Key": "kv-your-api-key"
"Content-Type": "application/json"

// Body (JSON)
{
  "query": "{{$input.message}}",
  "top_k": 5,
  "session_id": "{{$session.id}}"
}

// Response — use in Prompt Template node:
// response.context → pre-formatted memory block
// response.memories → array of individual memory objects
Tip: combine with a Prompt Template node. Map the HTTP response's context field into your system prompt:

You are a helpful assistant. Here is what you know about this user:\n\n{{context}}\n\nAnswer their question below.

The inject-context endpoint formats the memories as clean bullet points ready to embed.
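The same prompt assembly can be sketched in plain JavaScript. The response shape (context as a pre-formatted string, memories as an array) follows the node notes above; the `buildSystemPrompt` helper and the fake response are illustrative assumptions.

```javascript
// Sketch: assembling the system prompt from an inject-context response,
// as the Prompt Template node would. Helper name is illustrative.
function buildSystemPrompt(response, userMessage) {
  return (
    "You are a helpful assistant. Here is what you know about this user:\n\n" +
    response.context +
    "\n\nAnswer their question below.\nUser: " +
    userMessage
  );
}

// Fake response in the documented shape, for a standalone run:
const fakeResponse = { context: "- Prefers metric units\n- Works at Acme", memories: [] };
const prompt = buildSystemPrompt(fakeResponse, "What's 70°F in Celsius?");
```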

Complete chatflow JSON snippet

Below is an export-ready Flowise chatflow excerpt. Import it via Flowise's "Load Chatflow" button, then replace YOUR_AGENT_ID and add your API key to the credential store.

flowise-kronvex-memory.json (excerpt)
{
  "nodes": [
    {
      "id": "recall-node",
      "type": "httpRequest",
      "data": {
        "method": "POST",
        "url": "https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/inject-context",
        "headers": {
          "X-API-Key": "{{$credentials.kronvex_key}}",
          "Content-Type": "application/json"
        },
        "body": "{\"query\": \"{{$input.message}}\", \"top_k\": 5, \"session_id\": \"{{$session.id}}\"}"
      }
    },
    {
      "id": "prompt-node",
      "type": "promptTemplate",
      "data": {
        "systemMessage": "You are a helpful AI assistant.\n\nRelevant memory about this user:\n{{recall-node.context}}\n\nRespond helpfully based on this context."
      }
    },
    {
      "id": "remember-node",
      "type": "httpRequest",
      "data": {
        "method": "POST",
        "url": "https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/remember",
        "headers": {
          "X-API-Key": "{{$credentials.kronvex_key}}",
          "Content-Type": "application/json"
        },
        "body": "{\"content\": \"{{$input.message}}\", \"memory_type\": \"episodic\", \"session_id\": \"{{$session.id}}\"}"
      }
    }
  ],
  "edges": [
    { "source": "input", "target": "recall-node" },
    { "source": "recall-node", "target": "prompt-node" },
    { "source": "prompt-node", "target": "llm-node" },
    { "source": "llm-node", "target": "remember-node" },
    { "source": "llm-node", "target": "output" }
  ]
}

Production tips

Session IDs per user

Use a stable identifier as the session_id — your user's email, database ID, or hashed cookie. This scopes all memories to that user and prevents cross-user contamination. In Flowise, you can inject this from the chat widget's custom metadata field.

Memory types strategy

Store both types: episodic after each turn, semantic when your LLM extracts a key fact. You can add a second remember node after your LLM that fires a tool call to extract and store semantic facts.

TTL and cleanup

Pass "ttl_days": 30 in the remember body to auto-expire episodic memories. Keep semantic memories without TTL — they represent long-term user identity. Monitor memory growth per agent in the Kronvex dashboard.
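The TTL rule above can be encoded once in a small payload builder so episodic turns always expire and semantic facts never do. The ttl_days field and the 30-day value come from the tip above; the `memoryPayload` helper is an illustrative wrapper, not part of the API.

```javascript
// Sketch: attach ttl_days only to episodic memories, per the TTL tip above.
function memoryPayload(content, memoryType, sessionId) {
  const payload = { content, memory_type: memoryType, session_id: sessionId };
  if (memoryType === "episodic") {
    payload.ttl_days = 30; // auto-expire conversation turns after a month
  }
  return payload; // semantic memories carry no TTL: long-term user identity
}
```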

Flowise version note: The HTTP Request node behaviour varies between Flowise versions. This guide was written for Flowise ≥ 1.8. In older versions, use the Custom Tool node with an inline JavaScript fetch call instead.
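For the older-version fallback, the Custom Tool body could look roughly like the sketch below. In a real Custom Tool node the user message arrives in a tool input variable; here it is stubbed so the sketch runs standalone, and the variable names, stub values, and commented fetch are all assumptions about how you would wire it, not verbatim Flowise syntax.

```javascript
// Sketch of a Custom Tool fallback for Flowise < 1.8, mirroring the
// inject-context HTTP Request node. $input is stubbed for a standalone run.
const $input = "What did I order last time?"; // stand-in for the tool's input

const url = "https://api.kronvex.io/api/v1/agents/YOUR_AGENT_ID/inject-context";
const init = {
  method: "POST",
  headers: { "X-API-Key": "kv-your-api-key", "Content-Type": "application/json" },
  body: JSON.stringify({ query: $input, top_k: 5, session_id: "user-42" }),
};

// Inside the Custom Tool node you would then return the context string:
// const data = await (await fetch(url, init)).json();
// return data.context;
```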