
Changelog
Oct 29, 2025
New Feature: Web Search Mode
Backboard now supports live web search directly through both the API and the demo chat. This update gives any Backboard-powered system access to current, real-world information while keeping the same memory, thread, and model-routing structure.
How it works
The new web_search parameter enables on-demand access to live data sources.
You can now toggle it just like memory or send_to_llm in your API calls.
Example usage
When web_search is set to “Auto”, the LLM can automatically perform web lookups to retrieve live information before responding.
When set to “off”, all responses come only from memory and model context.
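As a minimal sketch, a request that enables web search might look like the Python example below. The endpoint URL, authentication header, and response shape are assumptions for illustration only; web_search, memory, and send_to_llm are the parameter names referenced above.

```python
import requests

# Hypothetical endpoint and auth header -- check the Backboard API docs
# for the actual URL, authentication scheme, and response schema.
API_URL = "https://api.backboard.example/v1/messages"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "message": "What is the current price of Bitcoin?",
    "web_search": "Auto",   # "Auto" lets the LLM perform live web lookups; "off" disables them
    "memory": True,         # toggled like web_search, per the text above (assumed Boolean)
    "send_to_llm": True,    # whether to generate a response using the LLM
}

response = requests.post(API_URL, json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())
```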
Parameter Summary
| Parameter | Type | Description | Default |
|---|---|---|---|
| Web Search | String | Enables on-demand web lookups. “Auto” lets the LLM search the web before responding; “off” restricts responses to memory and model context. | |
| Memory | Boolean | Toggles persistent memory for the request. | |
| send_to_llm | Boolean | Whether to generate a response using the LLM. | |
| | String | LLM used (e.g., …). | |
| | JSON | Optional structured context, timestamps, or custom fields. | |
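For reference, a request body combining these parameters might look like the sketch below. The field names model and metadata are hypothetical placeholders for the last two rows (the excerpt lists only their types), and the values are illustrative.

```python
# Sketch of a fuller request body using the parameters summarized above.
# "model" and "metadata" are placeholder field names, not confirmed by the changelog.
payload = {
    "message": "Summarize today's top AI research news.",
    "web_search": "Auto",          # allow live web lookups
    "memory": True,                # persistent memory toggle
    "send_to_llm": True,           # Boolean: generate a response using the LLM
    "model": "your-model-id",      # String: LLM used (placeholder value)
    "metadata": {                  # JSON: optional structured context, timestamps, or custom fields
        "source": "changelog-demo",
        "timestamp": "2025-10-29T12:00:00Z",
    },
}
```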
In the Demo Chat
The same capability is available through Backboard’s demo chat. Simply toggle the Web Search (globe) icon and ask a live query; Backboard will fetch real-time data and merge it with contextual memory before generating a response.
Why it matters
Web Search makes Backboard agents contextual and current.
Instead of relying solely on static data, developers can now:
Retrieve fresh, verifiable web results for any query
Combine retrieval and persistent memory in a single request
Build research assistants, monitoring agents, and contextual chatbots with zero extra infrastructure