PERSISTENT STATE NEWS

Cached Thoughts

Our thoughts and updates, cached for later retrieval.



Announcement

Mar 31, 2026

Why Hackathon Organizers Love Working With Backboard

Over the past year, we’ve supported a couple dozen hackathons across Canada and beyond. From UofT Hacks and McHacks to KingHacks, SF Hacks, Waterloo events, and AI Collective Paris, we’ve seen the same pattern again and again:

When hackers get real AI infra on day one, they build way more than they thought possible by day three.

This post is for hackathon organizers who want their participants to ship ambitious AI projects without spending half the weekend debugging infrastructure.

What We Bring to Hackathons

Our goal at every event is simple:
Make it possible for teams to build stateful, multi-agent, RAG-powered apps in a weekend.

We’ve brought this to:

  • UofT Hacks

  • McHacks (McGill)

  • KingHacks

  • SF Hacks

  • AI Collective Paris

  • Waterloo hackathons (like CxC Waterloo and UW Listen)

  • …and many more university and community events

Our typical support has included:

  • Free Backboard credits for participants

  • Quickstart templates so teams start with working code

  • Workshops and office hours to get hackers unstuck fast

  • Judging and special prizes for best use of Backboard

Under the hood, the thing hackers actually feel is:

Backboard is a stateful AI platform: built-in state, memory, and RAG so teams can ship multi-step, multi-agent apps fast.

Instead of wiring multiple providers, standing up custom vector stores, and bolting on “memory” at 3 a.m., teams call one API and focus on their idea.

How Participants Benefit

The comments we hear from participants are remarkably consistent:

  • “We had our entire infra up and running in minutes.”

  • “We were able to build way more than we thought!”

  • “I wouldn’t have pulled this off without Backboard.”

In a 24–48 hour sprint, that matters.

Infra in Minutes, Not Hours

With Backboard, teams can:

  • Spin up stateful conversations with built-in memory

  • Plug in RAG over their own docs and data

  • Orchestrate multi-step, multi-agent flows from a single API

That means less time fighting setup, and more time on UX, features, and polish.

From “Simple Bot” to Multi-Agent Swarms

At events like UofT Hacks, McHacks, and Waterloo hackathons, we’ve seen the same journey:

  • Teams show up planning to build a basic chatbot.

  • A few hours in, they realize they can build multi-agent swarms where:

    • Each agent has its own LLM and role,

    • All agents share a persistent memory,

    • The system improves iteratively over the weekend.

In one project, a team used Backboard’s shared memory to train and refine agents over the course of the hackathon. That kind of system is non-trivial to wire from scratch. With Backboard, they just pointed their agents at the same memory and iterated on behavior instead of infrastructure.
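The shared-memory pattern above can be sketched in a few lines of Python. Everything here (the `SharedMemory` class, the `Agent` class, and the roles) is illustrative, not Backboard's actual API: the point is simply that each agent keeps its own role while all of them read and write one store.

```python
class SharedMemory:
    """A minimal memory store that every agent reads and writes."""
    def __init__(self):
        self.entries = []

    def write(self, agent, fact):
        self.entries.append((agent, fact))

    def read_all(self):
        return [f"{agent}: {fact}" for agent, fact in self.entries]


class Agent:
    """Each agent has its own role but points at the same memory."""
    def __init__(self, name, role, memory):
        self.name, self.role, self.memory = name, role, memory

    def act(self, observation):
        # In a real system this would call the agent's own LLM;
        # here we just record what the agent observed.
        self.memory.write(self.name, f"({self.role}) saw: {observation}")
        return self.memory.read_all()


memory = SharedMemory()
researcher = Agent("researcher", "find facts", memory)
writer = Agent("writer", "draft copy", memory)

researcher.act("users want dark mode")
log = writer.act("drafting release notes")
# The writer sees the researcher's entry because memory is shared.
```

Because both agents point at one `memory` object, iterating on behavior means changing `act`, not re-plumbing storage.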

Those are the projects that tend to land on the podium.

How Organizers Benefit

For organizers, the value is direct:

When builders can ship more, your hackathon looks better.

Better, More Polished AI Projects

Because teams start with production-grade AI infra, final demos are:

  • More robust (fewer “it worked on my laptop at 5 a.m.” moments)

  • More feature-complete (real flows, real RAG, real state)

  • Easier to showcase in recap posts and highlight reels

Your hackathon becomes known as the place where serious AI projects get started, not just toy demos.

Happier, More Empowered Builders

Participants tell us they:

  • Got farther than they expected in the time they had

  • Built something they’re excited to keep hacking on

  • Became genuine Backboard superfans—and remember the event that introduced them to it

Organizers see the knock-on effects: better feedback, stronger community, and more repeat participation.

A Partner That Actually Shows Up

We don’t just drop credits and a logo.

At many events, we’ve:

  • Run live workshops on building stateful, multi-agent, and RAG-powered apps

  • Hosted office hours during crunch time

  • Judged final projects and sponsored “Best Use of Backboard” prizes

  • Shared standout projects and events across our channels

We can also share high-level usage insights (how many teams built on Backboard, what they built) and extend credits for teams that want to keep going post-hackathon.

The end result: your community gets more out of the weekend, and your event brand gets stronger.

Why Backboard Is a Great Fit for AI-Heavy Hackathons

If you’re running an AI-focused event—or you know most teams will be using LLMs—just handing out API keys isn’t enough anymore.

Backboard gives your hackers:

  • The Stateful AI Platform
    Built-in state and memory, so they can build multi-step workflows, persistent assistants, and multi-agent systems without standing up their own infra.

  • RAG Out of the Box
    Let teams attach their own docs and data and build genuinely “smart” apps in a weekend.

  • A Unified Surface for Ambitious Ideas
    One place to experiment with agents, tools, memory, and retrieval—and still have a demo-ready project by closing ceremony.

That combination is why Backboard keeps showing up in winning and finalist projects at the hackathons we support.

Want Backboard at Your Next Hackathon?

If you’re organizing a university or community hackathon and want your builders to ship more advanced AI projects in less time, we’d love to talk.

We offer:

  • Free credits for your participants

  • Hackathon-ready templates and quickstarts

  • Optional workshops, office hours, and judging for select events

  • Extra visibility for standout projects and events via our networks

You can:

  • Email us directly at: hackathon at backboard.io

Tell us a bit about your event and dates, and we’ll see how we can give your builders the stateful AI platform they deserve.

Announcement

Mar 24, 2026

New: Adaptive Context Window Management Across 17,000+ Models

Backboard now includes Adaptive Context Management, a system that automatically manages conversation state when your application moves between models with different context window sizes.

With access to 17,000+ LLMs on the platform, model switching is common. But context limits vary widely across models. What fits in one model may overflow another.

Until now, developers had to handle that manually.

Adaptive Context Management removes that burden. And it’s included for free with Backboard.

The Problem: Context Windows Are Inconsistent

Different models support different context window sizes. Some allow large conversations. Others are much smaller.

If an application starts a session on a large-context model and later routes a request to a smaller one, the total state can exceed what the new model can handle.

That state typically includes more than just chat messages:

  • system prompts

  • recent conversation turns

  • tool calls and tool responses

  • RAG context

  • web search results

  • runtime metadata

When that information exceeds the model’s limit, something must be removed or compressed.

Most platforms leave this responsibility to developers. That means writing logic for truncation, prioritization, summarization, and overflow handling.

In multi-model systems, that quickly becomes fragile.

Introducing Adaptive Context Management

Backboard now automatically handles context transitions when models change.

When a request is routed to a new model, Backboard dynamically budgets the available context window.

The system works as follows:

  • 20% of the model’s context window is reserved for raw state

  • The remaining 80% is reclaimed through intelligent summarization of older history

Backboard first calculates how many tokens fit inside the 20% allocation. Within that space we prioritize the most important live inputs:

  • system prompt

  • recent messages

  • tool calls

  • RAG results

  • web search context

Whatever fits inside this budget is passed directly to the model.

Everything else is compressed.
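The budgeting step described above amounts to a greedy fill: compute the raw-state allocation, pack the prioritized items into it in order, and mark everything that overflows for summarization. This is an illustrative reconstruction, not Backboard's implementation; `count_tokens` is a crude stand-in for a real tokenizer.

```python
RAW_STATE_FRACTION = 0.20  # share of the context window kept as raw state


def count_tokens(text):
    # Stand-in tokenizer: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)


def budget_context(items, context_limit):
    """Greedily fit priority-ordered items into the raw-state budget.

    `items` is a list of strings, highest priority first: system prompt,
    then recent messages, tool calls, RAG results, web search context.
    Returns (kept, to_summarize).
    """
    budget = int(context_limit * RAW_STATE_FRACTION)
    kept, to_summarize, used = [], [], 0
    for item in items:
        tokens = count_tokens(item)
        if used + tokens <= budget:
            kept.append(item)
            used += tokens
        else:
            to_summarize.append(item)
    return kept, to_summarize


items = ["system prompt " * 10, "recent message " * 50, "old history " * 3000]
kept, compress = budget_context(items, context_limit=8191)
# The system prompt and recent messages fit in the 20% budget;
# the long history overflows and is routed to summarization.
```

Ordering the items by priority before filling is what guarantees the system prompt and recent turns are never the things that get compressed.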

Intelligent Summarization

When compression is required, Backboard summarizes the remaining conversation automatically.

The summarization pipeline follows a simple rule:

  1. First we attempt summarization using the model the user is switching to.

  2. If the summary still cannot fit within the available context, we fall back to the larger model previously in use to generate a more efficient summary.

This approach preserves the most important information while ensuring the final state fits inside the new model’s limits.

The process happens automatically inside the Backboard runtime.
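The two-step fallback rule above can be sketched as: try the target model first, and only reach back to the previous (larger) model when the result still overflows. `summarize_with` and `fake_summarize` are stand-ins for real model calls, not Backboard functions.

```python
def count_tokens(text):
    # Stand-in tokenizer: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)


def summarize(history, target_model, previous_model, available_tokens,
              summarize_with):
    """Two-step summarization fallback.

    1. Attempt a summary with the model being switched to.
    2. If it still overflows, retry with the larger previous model.
    """
    summary = summarize_with(target_model, history)
    if count_tokens(summary) <= available_tokens:
        return summary
    # Fallback: the larger previous model produces a tighter summary.
    return summarize_with(previous_model, history)


def fake_summarize(model, history):
    # Demo stand-in: the "large" model compresses 5x harder.
    ratio = 10 if model == "large-model" else 2
    return history[: max(1, len(history) // ratio)]


summary = summarize("x" * 4000, "small-model", "large-model",
                    available_tokens=200, summarize_with=fake_summarize)
```

Here the small model's summary (500 tokens) overflows the 200-token allowance, so the sketch falls back to the larger model's tighter 100-token summary.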

You Should Rarely Hit 100% Context Again

Because Adaptive Context Management runs continuously during requests and tool calls, the system proactively reshapes the state before a context window is exhausted.

In practice this means your application should rarely reach the full limit of a model’s context window, even when switching models mid-conversation.

Backboard keeps the system stable so developers do not need to constantly monitor token overflow.

Developers Can See Exactly What Is Happening

We also expose context usage directly in the msg endpoint so developers can track how their application is using context in real time.

Example response:

"context_usage": {
  "used_tokens": 1302,
  "context_limit": 8191,
  "percent": 19.9,
  "summary_tokens": 0,
  "model": "gpt-4"
}

This makes it easy to monitor:

  • how much context is currently being used

  • how close a request is to the model’s limit

  • how many tokens were generated by summarization

  • which model is currently managing the context

Developers gain visibility without needing to build their own tracking systems.
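A simple client-side check against the `context_usage` block shown above might look like this. The field names are taken from the example response; the 80% warning threshold is our own choice, not a platform default.

```python
def check_context(response, warn_at=80.0):
    """Flag requests that are approaching the model's context limit."""
    usage = response["context_usage"]
    pct = usage["percent"]
    if pct >= warn_at:
        return (f"WARNING: {usage['model']} at {pct:.1f}% "
                f"({usage['used_tokens']}/{usage['context_limit']} tokens)")
    return f"OK: {usage['model']} at {pct:.1f}%"


response = {
    "context_usage": {
        "used_tokens": 1302,
        "context_limit": 8191,
        "percent": 19.9,
        "summary_tokens": 0,
        "model": "gpt-4",
    }
}
status = check_context(response)
```

Since the percentage arrives precomputed, the client never needs its own tokenizer to know how close a request is to the limit.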

The Bigger Idea

Backboard was designed so developers can treat models as interchangeable infrastructure.

But that only works if state moves safely with the user.

Adaptive Context Management is another step toward that goal. Applications can move freely across thousands of models while Backboard ensures the conversation state always fits the model being used.

Developers focus on building. Backboard handles the context.

Next Steps

Adaptive Context Management is available today through the Backboard API.

Start building at docs.backboard.io

Announcement

Feb 19, 2026

Understanding Backboard's AI Ecosystem: State, RAG, and Memory

We get this question a lot, so I thought I'd put together a brief definition of, and distinction between, state, RAG, and memory.

In the rapidly evolving world of AI, understanding the core components that power advanced systems is crucial. At Backboard, we're building on a foundation of sophisticated AI capabilities, and three key concepts are central to our approach: State, RAG (Retrieval-Augmented Generation), and Memory. While these terms are often used in AI discussions, their specific application and integration within Backboard's ecosystem are what set our technology apart.

What is State?

In essence, State refers to the current condition or status of an application or system at any given moment. Think of it as the immediate context. In the realm of AI, this often pertains to the ongoing conversation, the current configuration of an AI agent, or the immediate data it's processing. Our recent launch of Alpha (Stateful API + RAG) in late 2025 underscores Backboard's commitment to effectively managing and utilizing this dynamic state, ensuring our AI can operate with real-time awareness.

What is RAG (Retrieval-Augmented Generation)?

RAG is a powerful technique that significantly enhances the knowledge base of Large Language Models (LLMs). It works by allowing an LLM to retrieve relevant information from an external data source before it generates a response. This is critical because it enables our LLMs to access and incorporate up-to-date, domain-specific, or proprietary information that they weren't originally trained on. For Backboard, integrating RAG means our AI can provide more accurate, relevant, and contextually aware outputs, drawing from the most pertinent information available.
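The retrieve-then-generate loop described above can be sketched with a toy keyword retriever. A real system would use embeddings and an actual LLM call; here a keyword-overlap score and an echo function stand in for both, just to show that retrieval happens before generation.

```python
def retrieve(query, documents, top_k=2):
    """Score documents by keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]


def rag_answer(query, documents, generate):
    # Retrieval happens *before* generation, so the model sees
    # external knowledge it was never trained on.
    context = retrieve(query, documents)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return generate(prompt)


docs = [
    "Backboard launched its Stateful API in late 2025.",
    "RAG retrieves external data before generation.",
    "Bananas are yellow.",
]
answer = rag_answer("When did the Stateful API launch?", docs,
                    generate=lambda prompt: prompt)  # echo stand-in for an LLM
```

The relevant fact reaches the prompt while the irrelevant document is filtered out, which is exactly the property that lets RAG ground responses in up-to-date or proprietary data.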

What is Memory?

Memory is a broader and more encompassing concept than RAG. In AI, Memory refers to a system's ability to store, process, and recall past information, interactions, or experiences. This capability is fundamental for enabling:

  • Conversational Continuity: Remembering previous turns in a dialogue.

  • Personalized Interactions: Tailoring responses based on past user preferences or behaviors.

  • Learning Over Time: Improving performance and understanding through accumulated experience.


Backboard's strategic roadmap prominently features advancements in Memory, with the planned releases of Portable Memory in October 2025 and Infinite Memory in December 2025. These initiatives highlight our dedication to developing sophisticated memory systems that allow our AI to learn, adapt, and retain context over extended periods.

The Interplay: How They Differ and Work Together

While RAG, State, and Memory are distinct, they are deeply interconnected and essential for building intelligent AI systems:

  • RAG is a specific method for enriching an LLM's immediate response by accessing external data.

  • Memory is a more comprehensive system for preserving and recalling past information, enabling long-term context and learning.

  • State describes the current condition of the system at any given point in time, which is influenced by both RAG's retrieval and Memory's recall.

Backboard leverages the synergy between these components. RAG provides immediate, relevant data, while Memory ensures that the AI understands the ongoing context and can recall past interactions. The State of the system is continuously updated by these processes, allowing Backboard's AI to be both knowledgeable in the moment and contextually aware over time.

By mastering the interplay of State, RAG, and Memory, Backboard is building AI that is not only intelligent but also deeply understanding and continuously learning. This forms the backbone of our mission to deliver unparalleled AI solutions.
