Feature

Docs, MCP, and Multimodal RAG Are Live

What's New

Three major platform additions ship today: comprehensive API documentation, Model Context Protocol (MCP) integration, and multimodal RAG support for images, PDFs, Word documents, and PowerPoint files.

New Documentation
  • Interactive API playground — test every endpoint directly in the browser with your API key.

  • Quickstart guides for Python, TypeScript, and cURL with copy-paste examples.

  • Architecture diagrams showing how memory, RAG, and routing interact internally.

  • Migration guides from OpenAI Assistants, LangChain, and LlamaIndex.
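To give a feel for the copy-paste quickstart style, here is a minimal Python sketch of assembling an authenticated API call. The base URL, endpoint path, and payload fields are illustrative placeholders, not the documented Backboard API surface; see the quickstart guides for the real endpoints.

```python
import json

# Hypothetical values: the real base URL and routes live in the API docs.
BASE_URL = "https://api.backboard.example/v1"

def build_chat_request(api_key: str, thread_id: str, message: str) -> dict:
    """Assemble the pieces of a message-creation call: URL, auth header, body."""
    return {
        "url": f"{BASE_URL}/threads/{thread_id}/messages",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"content": message}),
    }

request = build_chat_request("sk-demo", "thread_123", "Hello, Backboard!")
print(request["url"])
```

The interactive playground runs the same requests in the browser, so the payloads you prototype there translate directly to code like this.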

MCP Integration
  • Backboard now exposes a Model Context Protocol server for tool-use workflows.

  • Connect any MCP-compatible client (Claude Desktop, Cursor, custom agents) to Backboard.

  • Expose memory search, document retrieval, and thread management as MCP tools.

  • Agents can read and write memories, query RAG, and manage state through MCP.
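Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages, with tool invocations carried by the `tools/call` method. The sketch below builds one such request; the tool name `memory_search` and its arguments are hypothetical examples of the memory-search tool Backboard exposes.

```python
import json
from itertools import count

# JSON-RPC request IDs must be unique per connection; a counter suffices here.
_ids = count(1)

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request that invokes one tool on an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = mcp_tool_call("memory_search", {"query": "deployment checklist", "limit": 5})
print(msg)
```

Any MCP-compatible client (Claude Desktop, Cursor, a custom agent) emits messages of exactly this shape, which is why no Backboard-specific client library is required.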

Multimodal RAG

RAG now supports non-text content natively:

  • Images: Extract text via OCR, describe visual content, embed for semantic search.

  • PDFs: Full text extraction with layout preservation, table parsing, and figure detection.

  • Word documents: .docx parsing with heading structure, styles, and embedded images.

  • PowerPoint: Slide-by-slide content extraction including speaker notes and embedded media.
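Each format gets its own extraction path (OCR for images, layout-aware parsing for PDFs, and so on). A minimal sketch of that dispatch, with placeholder parser names standing in for the actual extraction stages:

```python
from pathlib import Path

# Parser names are illustrative labels, not real pipeline identifiers.
PARSERS = {
    ".png": "image_ocr",
    ".jpg": "image_ocr",
    ".pdf": "pdf_layout",
    ".docx": "docx_structure",
    ".pptx": "pptx_slides",
}

def route_file(path: str) -> str:
    """Pick the extraction pipeline for a file based on its extension."""
    suffix = Path(path).suffix.lower()
    try:
        return PARSERS[suffix]
    except KeyError:
        raise ValueError(f"unsupported file type: {suffix}")

print(route_file("quarterly-report.PDF"))
```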

Processing Pipeline
  • Automatic chunking optimized per file type (paragraph-level for docs, slide-level for PPT).

  • Vision model analysis for charts, diagrams, and screenshots.

  • Metadata extraction (author, date, page count) stored alongside embeddings.

  • Processing status webhooks for async file ingestion.
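As a rough illustration of the paragraph-level strategy used for text documents, here is a simplified chunker: split on blank lines, then merge adjacent paragraphs up to a size budget. The real pipeline also handles slide-level splitting, tables, and embedding-model token limits; the `max_chars` budget here is an arbitrary stand-in.

```python
import re

def chunk_paragraphs(text: str, max_chars: int = 1000) -> list[str]:
    """Split on blank lines, then pack paragraphs into chunks of at most max_chars."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "Intro paragraph.\n\nSecond paragraph.\n\nThird."
print(chunk_paragraphs(doc, max_chars=30))
```

Because ingestion is asynchronous, the status webhooks fire once chunking, vision analysis, and embedding complete for a given file.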
