Stateful Agents, Real Orchestration: 6 Backboard Releases You Can Use Today
CATEGORY
Announcement
PUBLISHED
Mar 24, 2026
This is a walkthrough of 6 updates we just shipped at Backboard.io.
And if you’re coming from Major League Hacking or DEV, there’s a serious perk: we’re releasing a free state-management-for-life tier on Backboard (limited to state features), plus $5 in dev credits (about one free month).
No catch. No expiration on the state tier. Powered by MLH.
Combined with our existing BYOK (bring your own keys) feature, this means every major platform’s API is now stateful for free:
OpenAI
Anthropic
OpenRouter
Cohere
Stateful. Free. Yup. LFG.
Now, the actual shipping.
The 6 Updates
Adaptive context management – truncate, summarize, reshape, automatically.
Memory tiers (Light vs Pro) – cost, latency, accuracy tradeoffs you control.
New navigation + organizations + docs overhaul – faster to build, fewer dead ends.
Custom memory orchestration per assistant – natural language rules for memory.
Manual memory search via API – inspect and query what your agent stored.
Portable parallel stateful tool calling – the orchestration layer nobody else ships.
If you only read one section, read #6.
1) Adaptive Context Management (Stop Losing the Plot)
The crisis: context windows are finite, and your product is not.
Long-running threads silently degrade. The model still sounds confident, but it’s missing key facts. So teams hack around it:
truncating old messages manually
bolting on custom summarizers
re-injecting user profile facts on every call
praying the important stuff stays in-window
We shipped adaptive context management so your agent can truncate, summarize, and reshape the payload automatically before it hits the model.
What that gives you:
Less token waste on irrelevant or redundant history
Fewer hallucinations caused by missing context
Better performance on long-running conversations
Less custom logic in your app to manage history
Your agent keeps the story straight without you hand-authoring context pipelines.
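The shape of this is easy to picture. Here’s a minimal, local sketch of the kind of context reshaping Backboard automates: keep recent messages verbatim and collapse older ones into a summary. Token counting here is a naive word count and the summary is a stub; this is an illustration of the pattern, not Backboard’s implementation.

```python
# Sketch: fit a message history into a token budget by keeping recent
# messages verbatim and summarizing the overflow.

def reshape_context(messages, budget=50):
    """Return a history that fits `budget` tokens, summarizing overflow."""
    def tokens(msg):
        # Naive stand-in for a real model tokenizer.
        return len(msg["content"].split())

    kept, used = [], 0
    # Walk newest-to-oldest, keeping messages until the budget is spent.
    for msg in reversed(messages):
        if used + tokens(msg) > budget:
            break
        kept.append(msg)
        used += tokens(msg)
    kept.reverse()

    dropped = messages[: len(messages) - len(kept)]
    if dropped:
        # In production this would be an LLM-generated summary, not a stub.
        summary = {"role": "system",
                   "content": f"Summary of {len(dropped)} earlier messages."}
        return [summary] + kept
    return kept
```

The point of the managed version is that you never write this function, or the much harder real one, yourself.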
Docs: Backboard docs
Hook for what’s next: context control is useless if memory is too expensive or too slow. That’s why we shipped tiers.
2) New Memory Versions: Light vs Pro (Cost, Latency, Accuracy)
Most teams hit the same wall:
“You want memory everywhere… until you see the bill or feel the latency.”
So we shipped two memory versions:
Memory Light
~1/10th the cost of Pro
Still message-level memory
Built for teams that want speed and affordability without giving up persistent behavior
Memory Pro
Our highest accuracy and depth
Built for mission‑critical use cases where memory precision actually matters
For workflows where “close enough” is not acceptable
You choose what matters per product stage and per assistant:
Early: default to Light to explore and ship fast
Later: graduate to Pro where correctness and recall quality are critical
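In code, that choice is just a per-assistant setting. The field name `memory_version` below is an assumption for illustration; check the Backboard docs for the real schema.

```python
# Hypothetical sketch: selecting a memory tier per assistant at creation
# time, defaulting early-stage products to Light and production to Pro.
import json

def assistant_payload(name, stage):
    # "light" for speed/affordability; "pro" where precision matters.
    tier = "pro" if stage == "production" else "light"
    return {"name": name, "memory_version": tier}

payload = assistant_payload("support-bot", "prototype")
print(json.dumps(payload))
```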
Docs: Backboard docs
Hook for what’s next: even with good memory, teams fail if they can’t find the right knobs quickly. So we rebuilt the surface area.
3) New Navigation, Organizations, and a Docs Overhaul (So You Can Actually Ship)
This one came straight from user feedback.
Organizations
You can now create and manage organizations in the dashboard:
Structured workspaces for teams
Cleaner separation between projects
No more awkward account sharing
New Navigation
We rebuilt navigation so you can get to what matters, fast:
Assistants
Conversations
Documents
Memory
Keys
Settings
The goal: reduce the “wait, where was that again?” factor and make it obvious how to move from idea → prototype → shipped agent.
Documentation Overhaul
We also went through the docs and made them:
More detailed
More example-heavy
Clearer in architecture (what talks to what, and when)
Less confusing for first-time integration
We want you to be able to open the docs and wire something up in one sitting.
Docs: Backboard docs
Hook for what’s next: even with great docs, memory feels like a black box unless you control the rules. That’s the next shipment.
4) Custom Memory Orchestration (Per Assistant, Natural Language)
Most platforms give you “memory” as a checkbox.
We treat memory as a system you can design.
You can now define custom memory rules per assistant, using natural language prompts.
When you create an assistant, you can pass:
custom_fact_extraction_prompt (string): how to extract durable facts from interaction history
custom_update_memory_prompt (string): how to decide when to create / update / ignore facts
This is the difference between:
“My assistant stores random stuff sometimes”
and
“My assistant stores exactly what I consider durable, useful signal”
Examples
Support agent
Remembers: plan, product, environment, known bugs
Ignores: jokes, sarcasm, off-topic chatter
Sales agent
Remembers: stakeholders, objections, decision criteria, timeline
Ignores: banter, filler, irrelevant side conversations
Recruiting agent
Remembers: location, tech stack, compensation targets, notice period
Can justify why it updated a candidate’s profile
You get a controllable memory plane, not just a magical “it remembers stuff” toggle.
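Concretely, the support-agent example above could be configured like this. The two prompt fields are the parameters named in this post; the surrounding payload shape is an assumption, so check the docs for the exact creation call.

```python
# Sketch: a support agent's memory rules, written as plain natural language
# and passed as the two per-assistant orchestration prompts.
assistant = {
    "name": "support-agent",
    "custom_fact_extraction_prompt": (
        "Extract durable facts about the user's plan, product, environment, "
        "and known bugs. Ignore jokes, sarcasm, and off-topic chatter."
    ),
    "custom_update_memory_prompt": (
        "Create or update a fact only when new information adds to or "
        "contradicts what is stored; ignore transient conversational remarks."
    ),
}
```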
Docs: Backboard docs
Hook for what’s next: once you write memory rules, you’ll ask the obvious question: “What did it actually store?” So we shipped search.
5) Manual Memory Search via API (Stop Guessing What Your Agent Knows)
If you’ve ever debugged memory, you know the pain:
“Why is it bringing that up?”
“Why did it forget that?”
“Did it store the wrong fact?”
We shipped manual memory search via API so you can directly inspect what your agent knows.
What this unlocks:
Debugging and QA
See the exact facts your agent stored
Validate your extraction and update prompts
Internal tooling and admin dashboards
Build control panels where ops and support can inspect and edit memory
User-facing “what I remember about you” views
Let users see and manage what the system has learned
Compliance and governance
Audit and review stored data
Reason about retention and deletion with real visibility
In short: memory becomes queryable, not mystical.
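To make that concrete, here’s a local stand-in for what a memory query does: filter the facts an agent has stored. The real capability is an API call (see the docs for the endpoint and parameters); this sketch only illustrates the shape of the result you’d build dashboards and “what I remember about you” views on top of.

```python
# Sketch: query stored facts the way a memory-search endpoint would,
# returning the matching fact records.
def search_memory(facts, query):
    q = query.lower()
    return [f for f in facts if q in f["text"].lower()]

facts = [
    {"id": "m1", "text": "User is on the Pro plan"},
    {"id": "m2", "text": "Prefers dark mode"},
]
hits = search_memory(facts, "plan")
```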
Docs: Backboard docs
Hook for what’s next: memory is half the battle. The other half is orchestration, tool calling, and state. That’s where most agents break.
6) Portable Parallel Stateful Tool Calling (The Thing Big Providers Still Do Not Offer)
This is the upgrade that changes what “agent” even means.
As of right now, no major AI provider offers portable, parallel, stateful tool calling as a first-class capability.
We do.
Here is what that actually means, in plain terms.
Parallel
Your assistant can request multiple tool calls at the same time, each with a unique tool_call_id.
If the agent needs to:
query a CRM
pull documents
check a billing system
run a heavy calculation
…it doesn’t have to do those serially. It can do them concurrently.
Result: faster, more realistic workflows without you manually juggling multiple concurrent conversations.
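On your side, handling a batch of parallel tool calls looks roughly like this. The tool-call shape (a list of entries keyed by `tool_call_id`) mirrors what’s described above; the handler logic is a simulated placeholder.

```python
# Sketch: execute a batch of parallel tool calls concurrently and return
# one result per tool_call_id.
import asyncio

async def run_tool(call):
    # A real handler would hit a CRM, billing system, etc.; we simulate.
    await asyncio.sleep(0)
    return {"tool_call_id": call["tool_call_id"],
            "output": f"result of {call['name']}"}

async def run_all(calls):
    # gather() runs every tool call concurrently, not serially.
    return await asyncio.gather(*(run_tool(c) for c in calls))

calls = [
    {"tool_call_id": "call_1", "name": "query_crm", "arguments": {}},
    {"tool_call_id": "call_2", "name": "check_billing", "arguments": {}},
]
results = asyncio.run(run_all(calls))
```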
Stateful
The assistant keeps the chain of reasoning intact across:
tool calls
multiple rounds
parallel branches
That state does not live in your app code.
You are not rebuilding workflow state machines in your backend just to keep track of:
which tools have run
which branch succeeded
what still needs to be done
Backboard carries that state for you as a first-class capability.
Portable
This state is not trapped inside one provider’s ecosystem.
It travels with the assistant across:
environments
model providers
model versions
You can move between OpenAI, Anthropic, OpenRouter, etc., without rewriting your orchestration logic.
Loop Until COMPLETED
The assistant can chain tool calls across rounds and keep going until:
status == COMPLETED
That means:
Multi-step workflows
Multi-tool dependencies
Long-running jobs
…without you writing glue code and polling loops around each step.
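From the client’s point of view, the loop is this simple. `fake_step` below stands in for a Backboard round trip, and the intermediate status name is an assumption; only `COMPLETED` is the terminal status named in this post.

```python
# Sketch: keep submitting tool results until the run reports COMPLETED.
def fake_step(round_number):
    # Pretend the assistant needs two tool rounds before finishing.
    if round_number < 2:
        return {"status": "NEEDS_TOOLS", "tool_calls": []}
    return {"status": "COMPLETED", "output": "done"}

def run_until_completed():
    rounds = 0
    response = fake_step(rounds)
    while response["status"] != "COMPLETED":
        rounds += 1
        # Here you would execute the requested tools and send outputs back.
        response = fake_step(rounds)
    return rounds, response

rounds, final = run_until_completed()
```

The pitch is that Backboard runs this loop, carries the state between rounds, and you only implement the tools themselves.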
This is the difference between:
“a chat that can call one tool”
and
“a system that can actually execute a workflow”
Docs: Backboard docs
Hook for what’s next: if you want to try this without over-committing, the on-ramp is free.
Free State Management for Life (Powered by MLH and DEV)
We partnered with Major League Hacking and DEV because builders need a real environment to ship in, not a 7‑day trial that dies mid‑project.
Through this partnership, participants get:
Free state management on Backboard for life
Limited to state management features
No expiration
$5 in dev credits
Roughly one free month on the full platform for many small projects
If you’re building for hackathons, hack weeks, or DEV challenges, this is designed to:
remove friction
keep your agents stateful
let you focus on shipping, not budgeting every API call
Start here: Backboard.io
Docs here: Backboard docs
Why This Matters (If You Are Building Under Pressure)
If you’re in an “information crisis” building AI products, it’s usually not because you “can’t prompt.”
It’s because you’re drowning in:
context limits
memory ambiguity
orchestration glue
tool call complexity
state bugs and race conditions
These six shipments are us taking that burden off your plate:
smarter context
tunable memory
better surface area
controllable memory rules
inspectable storage
real orchestration and state
If you want help:
picking the right memory tier,
designing orchestration prompts, or
validating an agent workflow,
build something small and send it to us.
We’re optimizing for builders who ship.

Rob Imbeault