Blueberry MCP
Blueberry includes a built-in MCP server and CLI that give any AI coding agent full context and control over your entire workspace: your editor, terminal, preview browser, canvas, and pinned apps.
What is Blueberry MCP?
Blueberry MCP is a local Model Context Protocol server that runs inside Blueberry. It provides workspace context through an MCP tool and a full CLI for taking actions.
When you run an AI agent from a Blueberry terminal, the MCP server connects automatically and the `blueberry` CLI is available on PATH. The agent can then read your code, check terminal output, capture screenshots, draw on the canvas, and more, all without you copying, pasting, or switching windows.
This works with any MCP-compatible coding agent, including Claude Code, Codex, and Gemini CLI.
What Your AI Can See
When an AI agent runs from a Blueberry terminal, it has visibility into every part of your workspace:
Editor
- The currently open file and its contents
- All open tabs
- Cursor position and selection
- Active diff views
Terminal
- All terminal tabs and their output
- Running processes and server status
- Build errors and test results
Preview Browser
- The active URL and page content
- All preview tabs with their URLs and titles
- Console logs from the page
- Visual screenshots of your running app
Canvas
- All drawing elements and their properties
- Mermaid diagrams rendered as native elements
Pinned Apps
- Your pinned web apps (GitHub, Vercel, PostHog, etc.)
- Their current content and state
How It Works
Blueberry uses a two-part architecture for AI integration:
- MCP Context Tool (`blueberry_get_context`): returns the full workspace state as structured data, including available CLI commands
- CLI Commands (`blueberry`): 16+ commands for taking actions like opening files, running terminal commands, drawing on the canvas, and capturing screenshots
The typical agent workflow is:
- Call `blueberry_get_context` to understand the workspace
- Use `blueberry` CLI commands (via bash) for all further actions
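As an illustration only, a short agent session might chain the CLI commands from the Key CLI Commands table like this. The file path, shell command, and URL are placeholders, not part of Blueberry:

```shell
# Illustrative sketch of an agent session, assuming a Blueberry terminal
# where the `blueberry` CLI is on PATH. Paths and URLs are placeholders.
blueberry open src/index.ts              # open a file in the editor
blueberry terminal "npm run dev"         # send a command to a terminal tab
blueberry preview http://localhost:3000  # load the running app in the preview browser
blueberry capture:webview                # screenshot the preview for the agent
```

Because the agent runs these through bash, it can interleave them freely with its usual tools (reading files, editing code) rather than round-tripping through you.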
Key CLI Commands
| Command | What it does |
|---|---|
| `blueberry open <file>` | Open a file in the editor |
| `blueberry preview <url>` | Open a URL in the preview browser |
| `blueberry terminal <command>` | Send a command to a terminal tab |
| `blueberry capture:webview` | Screenshot the preview or a pinned app |
| `blueberry capture:content` | Get page text as an accessibility tree |
| `blueberry canvas:mermaid <file>` | Render a Mermaid diagram on the canvas |
| `blueberry canvas:draw <file>` | Draw elements on the canvas from JSON |
| `blueberry canvas:data` | Get canvas elements as JSON |
| `blueberry help` | List all available commands |
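For example, `canvas:mermaid` takes a file containing standard Mermaid syntax. The filename and diagram below are hypothetical, shown only to illustrate the shape of the input:

```mermaid
flowchart LR
    Agent[AI agent] -->|MCP context + CLI| Workspace[Blueberry workspace]
    Workspace --> Editor
    Workspace --> Terminal
    Workspace --> Preview[Preview browser]
```

Saved as, say, `architecture.mmd`, this would be rendered with `blueberry canvas:mermaid architecture.mmd`.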
See the MCP & CLI Reference for full command documentation.
Why This Matters
No More Copy-Paste
Stop copying code snippets, error logs, and screenshots into chat. Your AI can see your workspace directly and pull the context it needs.
Better Responses
With full visibility into your editor, terminal, preview, and canvas, your AI understands the complete picture. It sees the error in your terminal, the code causing it, and the result in your browser, all at once.
Direct Actions
Your AI doesn’t just tell you what to do: it can do it. Open files, run commands, navigate your preview, draw diagrams, and check your deployments without you lifting a finger.
Getting Started
- Open a terminal tab in Blueberry
- Start your AI agent (e.g. `claude`, `codex`, `gemini`)
- The agent automatically has MCP access and the `blueberry` CLI on PATH
See Agent Setup for detailed setup instructions.
Learn More
- MCP & CLI Reference: full command documentation
- Best Practices: tips for getting the most out of AI agents in Blueberry