Changelog

Oct 29, 2025

New Feature: Web Search Mode

Backboard now supports live web search directly through both the API and the demo chat. This update gives any Backboard-powered system access to current, real-world information while maintaining the same consistent memory, thread, and model routing structure.

How it works

The new web_search parameter enables on-demand access to live data sources.
You can now toggle it just like memory or send_to_llm in your API calls.

Example usage

POST /v1/chat
Content-Type: multipart/form-data

model_name=gpt-4o
memory=Auto
web_search=Auto
send_to_llm=true
content=What are the latest results for AI memory benchmarks in 2025?

When web_search is set to "Auto", the LLM can automatically perform web lookups to retrieve live information before responding.
When set to "off", all responses come only from memory and model context.
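As a sketch, the request above can be built with Python's `requests` library. The base URL here is a placeholder for illustration; the endpoint path and field names follow the example request, and passing fields via `files=` is how `requests` produces a multipart/form-data body:

```python
import requests

# Hypothetical base URL for illustration -- substitute your Backboard host.
BASE_URL = "https://api.backboard.example"

# requests encodes fields as multipart/form-data when passed via `files=`;
# the (None, value) tuple marks a plain text field with no filename.
fields = {
    "model_name": (None, "gpt-4o"),
    "memory": (None, "Auto"),
    "web_search": (None, "Auto"),  # "Auto" enables live lookups; "off" disables them
    "send_to_llm": (None, "true"),
    "content": (None, "What are the latest results for AI memory benchmarks in 2025?"),
}

# Build and inspect the request without sending it.
request = requests.Request("POST", f"{BASE_URL}/v1/chat", files=fields).prepare()
# To actually send it: response = requests.Session().send(request)
```

Building the request with `Request(...).prepare()` lets you inspect the encoded body and headers before dispatching it.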

Parameter Summary

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| web_search | Web Search | "Auto" (enable live search) or "off" (disable). | "off" |
| memory | Memory | "Auto" (read/write), "Readonly" (read-only), or "off". | "off" |
| send_to_llm | Boolean | Whether to generate a response using the LLM. | true |
| model_name | String | LLM used (e.g., gpt-4o, claude-3-sonnet). | gpt-4o |
| metadata | JSON | Optional structured context, timestamps, or custom fields. | null |
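Since web_search and memory default to "off", send_to_llm to true, and model_name to gpt-4o, a minimal request only needs content. The sketch below assumes the same multipart endpoint and placeholder host as before, and assumes the optional metadata field is sent as a JSON string:

```python
import json
import requests

BASE_URL = "https://api.backboard.example"  # placeholder host for illustration

# Only non-default parameters need to be supplied; everything else
# falls back to the defaults listed in the parameter summary.
fields = {
    "content": (None, "Summarize the key points from our last thread."),
    # metadata is optional structured context; assumed here to be
    # serialized as a JSON string form field.
    "metadata": (None, json.dumps({"source": "docs-example", "ts": "2025-10-29"})),
}

request = requests.Request("POST", f"{BASE_URL}/v1/chat", files=fields).prepare()
```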

In the Demo Chat

The same capability is available through Backboard’s demo chat. Simply toggle the Web Search (globe) icon and ask any live query; Backboard will fetch real-time data and merge it with contextual memory before generating a response.

Why it matters

Web Search makes Backboard agents contextual and current.
Instead of relying solely on static data, developers can now:

  • Retrieve fresh, verifiable web results for any query

  • Combine retrieval and persistent memory in a single request

  • Build research assistants, monitoring agents, and contextual chatbots with zero extra infrastructure