diff --git a/docs/research-agent.md b/docs/research-agent.md
new file mode 100644
index 000000000..7f7276fd5
--- /dev/null
+++ b/docs/research-agent.md
@@ -0,0 +1,245 @@
+# Research Agent: Key Features
+
+The `examples/research-agent` example shows how to build a research agent on top of the MCP Python SDK. It demonstrates three features that are common in production agent workflows:
+
+- **Collaborative Planning** — the agent presents a step-by-step research plan for user approval before doing any work.
+- **MCP Support** — the agent connects to remote MCP servers at startup and incorporates their tools into the research workflow.
+- **Visualizations** — the agent extracts numeric findings from research results and returns a chart as a base64-encoded SVG image.
+
+All three features are implemented using standard MCP primitives already present in the SDK (tasks, elicitation, sampling, `ClientSessionGroup`, and `ImageContent`).
+
+---
+
+## Running the example
+
+```bash
+cd examples/research-agent
+uv run mcp-research-agent --port 8000
+```
+
+The server listens at `http://127.0.0.1:8000/mcp`.
+
+---
+
+## Feature 1 — Collaborative Planning
+
+### What it does
+
+When a client calls the `research` tool with `"collaborative_planning": true`, the agent:
+
+1. Uses LLM sampling (`create_message`) to draft a step-by-step research plan.
+2. Presents the plan to the user via **elicitation** and waits for approval.
+3. Proceeds with research only if the user approves; otherwise returns a cancellation message.
+
+This gives users full visibility into — and control over — what the agent intends to do before it expends any effort.
+
+### How to use it
+
+```json
+{
+ "name": "research",
+ "arguments": {
+ "query": "Latest advances in battery technology",
+ "collaborative_planning": true
+ }
+}
+```
+
+The client receives an elicitation request with a schema like:
+
+```json
+{
+ "type": "object",
+ "properties": {
+ "approved": { "type": "boolean" },
+ "feedback": { "type": "string" }
+ },
+ "required": ["approved"]
+}
+```
+
+Set `"approved": true` (and optionally provide `"feedback"`) to continue; any other response cancels the task.
+
+### Implementation
+
+Collaborative planning uses the existing [Tasks and Elicitation](experimental/tasks.md) infrastructure:
+
+```python
+elicit_result = await task.elicit(
+ message=f"Research Plan\n{'─' * 40}\n{plan_text}\n\nApprove to begin?",
+ requestedSchema={
+ "type": "object",
+ "properties": {
+ "approved": {"type": "boolean"},
+ "feedback": {"type": "string"},
+ },
+ "required": ["approved"],
+ },
+)
+
+approved = (
+    elicit_result.action == "accept"
+    and elicit_result.content is not None
+    and elicit_result.content.get("approved")
+)
+if not approved:
+    return types.CallToolResult(content=[types.TextContent(type="text", text="Research cancelled.")])
+```
+
+---
+
+## Feature 2 — MCP Support
+
+### What it does
+
+The research agent connects to one or more **remote MCP servers** at startup. Their tools are discovered, aggregated, and queried during the research task — giving the agent access to private data sources, internal APIs, or custom domain tools.
+
+### Configuration
+
+Set the `MCP_SERVERS` environment variable to a JSON array of server descriptors before starting the server:
+
+```bash
+# stdio server
+MCP_SERVERS='[{"type":"stdio","command":"python","args":["-m","my_data_server"]}]' \
+ uv run mcp-research-agent
+
+# SSE server
+MCP_SERVERS='[{"type":"sse","url":"http://internal-api/sse"}]' \
+ uv run mcp-research-agent
+
+# StreamableHTTP server
+MCP_SERVERS='[{"type":"streamable_http","url":"http://internal-api/mcp"}]' \
+ uv run mcp-research-agent
+
+# Multiple servers
+MCP_SERVERS='[
+ {"type":"stdio","command":"python","args":["-m","my_db_server"]},
+ {"type":"sse","url":"http://analytics/sse"}
+]' uv run mcp-research-agent
+```
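
Each descriptor must carry the fields its transport needs. A rough standalone sketch of the validation this implies, mirroring the dispatch in `_parse_mcp_servers` (the plain-dict return is for illustration only; the real helper builds SDK parameter objects):

```python
import json


def parse_server_descriptors(raw: str) -> list[dict]:
    """Parse an MCP_SERVERS-style JSON array, keeping only usable entries."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return []
    valid: list[dict] = []
    for entry in entries:
        kind = entry.get("type", "stdio")  # "stdio" is the default type
        # stdio servers need a command; HTTP-based servers need a URL.
        if kind == "stdio" and "command" in entry:
            valid.append(entry)
        elif kind in ("sse", "streamable_http") and "url" in entry:
            valid.append(entry)
    return valid
```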
+
+### Inspecting connected tools
+
+Use the `list_mcp_tools` tool to see what is available:
+
+```json
+{ "name": "list_mcp_tools", "arguments": {} }
+```
+
+### Implementation
+
+`ClientSessionGroup` manages concurrent connections to all configured servers. It aggregates their tools into a single `dict[str, Tool]`:
+
+```python
+async with ClientSessionGroup() as group:
+ for params in server_params:
+ await group.connect_to_server(params)
+
+ # Call a tool from any connected server by name
+ result = await group.call_tool("my_tool", {"arg": "value"})
+```
+
+The research agent stores the group as a module-level reference during the ASGI lifespan, so all task handlers can reach it.
+
+---
+
+## Feature 3 — Visualizations
+
+### What it does
+
+When `"visualization": "auto"` is passed, the agent:
+
+1. Scans the research summary for labelled numeric values (e.g. `"Market size: $4.5B"`, `"Growth rate: 12%"`).
+2. Generates a bar chart from the extracted metrics using pure-Python SVG.
+3. Appends the chart to the result as an `ImageContent` block with `mimeType: "image/svg+xml"` and base64-encoded `data`.
+
+No external charting library is required.
+
+### How to use it
+
+```json
+{
+ "name": "research",
+ "arguments": {
+ "query": "Global EV market overview",
+ "visualization": "auto"
+ }
+}
+```
+
+The `CallToolResult` will contain two content blocks: a `TextContent` with the written summary and an `ImageContent` with the chart.
+
+### Metric extraction patterns
+
+The extractor recognises patterns like:
+
+| Text in summary | Extracted metric |
+|---|---|
+| `Market size: $4.5B` | `Market size ($B): 4.5` |
+| `Growth rate: 12%` | `Growth rate (%): 12` |
+| `Active users: 1.2M` | `Active users ($M): 1.2` |
+| `Revenue: $800K` | `Revenue (K): 800` |
+
+Up to six metrics are extracted and displayed. If no numeric patterns are found, the chart is omitted.
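
The extraction can be approximated in a few lines. A simplified standalone sketch (the real `extract_metrics` in `visualization.py` also recognises spelled-out units like `billion` and caps the result at six entries):

```python
import re


def extract_simple_metrics(text: str) -> dict[str, float]:
    """Pull 'Label: $4.5B'-style figures out of prose."""
    # Label starts with a capital letter; a lookahead (not \b) ends the
    # match so non-word units like "%" are kept.
    pattern = r"([A-Z][a-zA-Z \-]{2,25}?):\s*\$?([\d,]+(?:\.\d+)?)\s*(%|B|M|K)?(?![A-Za-z0-9])"
    metrics: dict[str, float] = {}
    for label, num, unit in re.findall(pattern, text):
        suffix = {"B": " ($B)", "M": " ($M)", "K": " (K)", "%": " (%)"}.get(unit, "")
        metrics[label.strip() + suffix] = float(num.replace(",", ""))
    return metrics
```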
+
+### Implementation
+
+```python
+from mcp_research_agent.visualization import extract_metrics, generate_bar_chart
+
+metrics = extract_metrics(summary) # dict[str, float]
+chart_b64 = generate_bar_chart(metrics, title=query[:50])
+
+content.append(
+ types.ImageContent(
+ type="image",
+ data=chart_b64,
+ mimeType="image/svg+xml",
+ )
+)
+```
+
+`generate_bar_chart` returns a base64-encoded UTF-8 SVG string. Clients that support `ImageContent` (e.g. Claude Desktop) will render the chart inline.
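
A client that stores rather than renders the chart can round-trip the payload. A small sketch of decoding an `ImageContent` payload back into SVG text, using a stand-in payload rather than real `generate_bar_chart` output:

```python
import base64
import xml.etree.ElementTree as ET


def decode_svg(data_b64: str) -> str:
    """Decode a base64 image payload and sanity-check it as XML."""
    svg = base64.b64decode(data_b64).decode("utf-8")
    ET.fromstring(svg)  # raises ParseError if the payload is not well-formed
    return svg


# Stand-in payload, base64-encoded the same way generate_bar_chart encodes its output.
chart_b64 = base64.b64encode(b'<svg xmlns="http://www.w3.org/2000/svg"/>').decode()
svg_text = decode_svg(chart_b64)
```

The decoded text can be written to a `.svg` file and opened in any browser.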
+
+---
+
+## Combining all three features
+
+```json
+{
+ "name": "research",
+ "arguments": {
+ "query": "Renewable energy market growth 2024",
+ "collaborative_planning": true,
+ "visualization": "auto"
+ }
+}
+```
+
+With this call the agent will:
+
+1. Draft a research plan and pause for your approval.
+2. Query any configured remote MCP servers for relevant data.
+3. Conduct the research and produce a written summary.
+4. Return the summary together with a bar chart of the key metrics it found.
+
+---
+
+## Architecture overview
+
+```
+Client
+ │
+ │ call_tool("research", {...})
+ ▼
+StreamableHTTP transport
+ │
+ ▼
+research-agent Server (examples/research-agent/mcp_research_agent/server.py)
+ │
+ ├── ServerTaskContext.create_message() ──▶ LLM (plan generation, research)
+ ├── ServerTaskContext.elicit() ──▶ Client UI (plan approval)
+ ├── ClientSessionGroup.call_tool() ──▶ Remote MCP servers (MCP support)
+ └── generate_bar_chart() ──▶ ImageContent (visualization)
+```
+
+## See also
+
+- [Tasks and Elicitation](experimental/tasks.md) — the async task and elicitation primitives used by collaborative planning.
+- [`ClientSessionGroup`](https://github.com/modelcontextprotocol/python-sdk/blob/main/src/mcp/client/session_group.py) — multi-server connection management used for MCP support.
+- [`ImageContent`](https://github.com/modelcontextprotocol/python-sdk/blob/main/src/mcp/types.py) — the MCP type used to carry base64-encoded image data.
diff --git a/examples/research-agent/mcp_research_agent/__init__.py b/examples/research-agent/mcp_research_agent/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/examples/research-agent/mcp_research_agent/__main__.py b/examples/research-agent/mcp_research_agent/__main__.py
new file mode 100644
index 000000000..5eb177f29
--- /dev/null
+++ b/examples/research-agent/mcp_research_agent/__main__.py
@@ -0,0 +1,5 @@
+import sys
+
+from mcp_research_agent.server import main
+
+sys.exit(main()) # type: ignore[call-arg]
diff --git a/examples/research-agent/mcp_research_agent/server.py b/examples/research-agent/mcp_research_agent/server.py
new file mode 100644
index 000000000..555d3fcfa
--- /dev/null
+++ b/examples/research-agent/mcp_research_agent/server.py
@@ -0,0 +1,364 @@
+"""Research agent MCP server demonstrating three key features:
+
+1. Collaborative Planning — set ``collaborative_planning=True`` to present a
+ research plan for user approval (via MCP elicitation) before work begins.
+2. MCP Support — configure ``MCP_SERVERS`` (JSON) to connect remote
+ MCP servers; their tools are queried as part of the research workflow.
+3. Visualizations — set ``visualization="auto"`` to receive a bar chart
+ of key metrics as a base64-encoded ``image/svg+xml`` ImageContent block.
+"""
+
+import json
+import logging
+import os
+from collections.abc import AsyncIterator
+from contextlib import asynccontextmanager
+from typing import Any, Literal
+
+import click
+import mcp.types as types
+import uvicorn
+from mcp.client.session_group import ClientSessionGroup, SseServerParameters, StreamableHttpParameters
+from mcp.client.stdio import StdioServerParameters
+from mcp.server.experimental.task_context import ServerTaskContext
+from mcp.server.lowlevel import Server
+from mcp.server.streamable_http_manager import StreamableHTTPSessionManager
+from starlette.applications import Starlette
+from starlette.routing import Mount
+
+from mcp_research_agent.visualization import extract_metrics, generate_bar_chart
+
+logger = logging.getLogger(__name__)
+
+# ---------------------------------------------------------------------------
+# Low-level MCP server with task support enabled
+# ---------------------------------------------------------------------------
+
+server = Server("research-agent")
+server.experimental.enable_tasks()
+
+# Module-level reference to the shared ClientSessionGroup, populated during
+# ASGI lifespan so tool handlers can reach connected MCP servers.
+_session_group: ClientSessionGroup | None = None
+
+
+# ---------------------------------------------------------------------------
+# MCP server connection helpers
+# ---------------------------------------------------------------------------
+
+
+def _parse_mcp_servers() -> list[StdioServerParameters | SseServerParameters | StreamableHttpParameters]:
+ """Parse the ``MCP_SERVERS`` environment variable (JSON array).
+
+ Each element must have a ``"type"`` key (``"stdio"``, ``"sse"``, or
+ ``"streamable_http"``) plus the fields required by the corresponding
+ parameter class.
+
+ Example::
+
+ MCP_SERVERS='[{"type":"stdio","command":"python","args":["-m","my_server"]}]'
+ """
+ raw = os.environ.get("MCP_SERVERS", "[]")
+ try:
+ configs: list[dict[str, Any]] = json.loads(raw)
+ except json.JSONDecodeError:
+ logger.exception("Failed to parse MCP_SERVERS — expected a JSON array")
+ return []
+
+ result: list[StdioServerParameters | SseServerParameters | StreamableHttpParameters] = []
+ for cfg in configs:
+ server_type = cfg.get("type", "stdio")
+ if server_type == "stdio":
+ result.append(
+ StdioServerParameters(
+ command=cfg["command"],
+ args=cfg.get("args", []),
+ env=cfg.get("env"),
+ )
+ )
+ elif server_type == "sse":
+ result.append(SseServerParameters(url=cfg["url"], headers=cfg.get("headers")))
+ elif server_type == "streamable_http":
+ result.append(StreamableHttpParameters(url=cfg["url"], headers=cfg.get("headers")))
+ else:
+ logger.warning("Unknown MCP server type %r — skipping", server_type)
+ return result
+
+
+# ---------------------------------------------------------------------------
+# Tool definitions
+# ---------------------------------------------------------------------------
+
+
+@server.list_tools()
+async def list_tools() -> list[types.Tool]:
+ """Advertise the tools provided by this research agent."""
+ return [
+ types.Tool(
+ name="research",
+ description=(
+ "Research a topic using LLM sampling and any connected MCP servers. "
+ "Supports collaborative planning (user approves the plan before work starts) "
+ "and automatic chart generation from extracted metrics."
+ ),
+ inputSchema={
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "The research question or topic to investigate.",
+ },
+ "collaborative_planning": {
+ "type": "boolean",
+ "default": False,
+ "description": (
+ "When true, present a step-by-step research plan for user "
+ "approval before any research is performed."
+ ),
+ },
+ "visualization": {
+ "type": "string",
+ "enum": ["auto", "off"],
+ "default": "off",
+ "description": (
+ "When 'auto', append a bar chart of key numeric findings "
+ "as a base64-encoded SVG ImageContent block."
+ ),
+ },
+ },
+ "required": ["query"],
+ },
+ execution=types.ToolExecution(taskSupport=types.TASK_REQUIRED),
+ ),
+ types.Tool(
+ name="list_mcp_tools",
+ description="List the tools available from all connected remote MCP servers.",
+ inputSchema={"type": "object", "properties": {}},
+ ),
+ ]
+
+
+# ---------------------------------------------------------------------------
+# Tool handlers
+# ---------------------------------------------------------------------------
+
+
+@server.call_tool()
+async def handle_call_tool(
+ name: str,
+ arguments: dict[str, Any],
+) -> types.CallToolResult | types.CreateTaskResult:
+ """Dispatch incoming tool calls to the appropriate handler."""
+ if name == "research":
+ return await _handle_research(arguments)
+ if name == "list_mcp_tools":
+ return _handle_list_mcp_tools()
+ return types.CallToolResult(
+ content=[types.TextContent(type="text", text=f"Unknown tool: {name}")],
+ isError=True,
+ )
+
+
+def _handle_list_mcp_tools() -> types.CallToolResult:
+ """Return a summary of tools reachable via connected MCP servers."""
+ if _session_group is None or not _session_group.tools:
+ return types.CallToolResult(
+ content=[types.TextContent(type="text", text="No remote MCP servers are connected.")]
+ )
+
+ lines = ["Tools available from connected MCP servers:\n"]
+ for tool in _session_group.tools.values():
+ lines.append(f" • {tool.name}: {tool.description or '(no description)'}")
+ return types.CallToolResult(content=[types.TextContent(type="text", text="\n".join(lines))])
+
+
+async def _handle_research(arguments: dict[str, Any]) -> types.CreateTaskResult:
+ """Implement the ``research`` tool with all three key features."""
+ ctx = server.request_context
+ ctx.experimental.validate_task_mode(types.TASK_REQUIRED)
+
+ query: str = arguments.get("query", "")
+ collaborative_planning: bool = bool(arguments.get("collaborative_planning", False))
+ visualization: Literal["auto", "off"] = "auto" if arguments.get("visualization") == "auto" else "off"
+
+ async def work(task: ServerTaskContext) -> types.CallToolResult:
+ # ── Feature 1 (part 1): generate a research plan via LLM sampling ──
+ await task.update_status("Generating research plan…")
+ plan_response = await task.create_message(
+ messages=[
+ types.SamplingMessage(
+ role="user",
+ content=types.TextContent(
+ type="text",
+ text=(
+ f"Create a concise step-by-step research plan for this query: '{query}'. "
+ "List 3–5 concrete steps. Be brief and specific."
+ ),
+ ),
+ )
+ ],
+ max_tokens=300,
+ )
+ plan_text: str = (
+ plan_response.content.text
+ if isinstance(plan_response.content, types.TextContent)
+ else f"Research plan for: {query}"
+ )
+
+ # ── Feature 1 (part 2): collaborative planning — elicit user approval ──
+ if collaborative_planning:
+ await task.update_status("Awaiting plan approval…")
+ elicit_result = await task.elicit(
+ message=(f"Research Plan\n{'─' * 40}\n{plan_text}\n\nApprove this plan to begin research?"),
+ requestedSchema={
+ "type": "object",
+ "properties": {
+ "approved": {
+ "type": "boolean",
+ "description": "Set to true to proceed with the research.",
+ },
+ "feedback": {
+ "type": "string",
+ "description": "Optional feedback or requested modifications.",
+ },
+ },
+ "required": ["approved"],
+ },
+ )
+
+ approved = (
+ elicit_result.action == "accept"
+ and elicit_result.content is not None
+ and bool(elicit_result.content.get("approved"))
+ )
+ if not approved:
+ feedback = (elicit_result.content or {}).get("feedback", "")
+ cancel_msg = f"Research cancelled.\nFeedback: {feedback}" if feedback else "Research cancelled by user."
+ return types.CallToolResult(content=[types.TextContent(type="text", text=cancel_msg)])
+
+ # ── Feature 2: MCP support — query tools from connected servers ──────
+ mcp_context = ""
+ if _session_group is not None and _session_group.tools:
+ await task.update_status("Querying connected MCP servers…")
+ snippets: list[str] = []
+ for tool_name in list(_session_group.tools.keys())[:3]:
+ try:
+ result = await _session_group.call_tool(tool_name, {})
+ for block in result.content:
+ if isinstance(block, types.TextContent):
+ snippets.append(f"[{tool_name}]: {block.text[:400]}")
+ except Exception:
+ logger.exception("Failed to call remote MCP tool %r", tool_name)
+ if snippets:
+ mcp_context = "\n\nData from connected MCP servers:\n" + "\n".join(snippets)
+
+ # ── Execute the research via LLM sampling ─────────────────────────────
+ await task.update_status("Conducting research…")
+ research_response = await task.create_message(
+ messages=[
+ types.SamplingMessage(
+ role="user",
+ content=types.TextContent(
+ type="text",
+ text=(
+ f"Research the following query: '{query}'\n\n"
+ f"Follow this plan:\n{plan_text}"
+ f"{mcp_context}\n\n"
+ "Provide a comprehensive summary with key findings. "
+ "Include specific labelled metrics where possible "
+ "(e.g. 'Market size: $4.5B', 'Growth rate: 12%', 'Users: 1.2M') "
+ "so they can be charted automatically."
+ ),
+ ),
+ )
+ ],
+ max_tokens=1024,
+ )
+ summary: str = (
+ research_response.content.text
+ if isinstance(research_response.content, types.TextContent)
+ else "Research complete."
+ )
+
+ # ── Feature 3: visualizations — SVG bar chart of extracted metrics ────
+ content: list[types.ContentBlock] = [types.TextContent(type="text", text=summary)]
+
+ if visualization == "auto":
+ await task.update_status("Generating visualization…")
+ metrics = extract_metrics(summary)
+ if metrics:
+ chart_b64 = generate_bar_chart(metrics, title=query[:50])
+ content.append(
+ types.ImageContent(
+ type="image",
+ data=chart_b64,
+ mimeType="image/svg+xml",
+ )
+ )
+
+ return types.CallToolResult(content=content)
+
+ return await ctx.experimental.run_task(work)
+
+
+# ---------------------------------------------------------------------------
+# ASGI application with lifespan for MCP server connections
+# ---------------------------------------------------------------------------
+
+
+def create_app(session_manager: StreamableHTTPSessionManager) -> Starlette:
+ """Build the Starlette ASGI app.
+
+ The lifespan opens a :class:`~mcp.client.session_group.ClientSessionGroup`
+ that connects to any MCP servers listed in the ``MCP_SERVERS`` env var,
+ making their tools available throughout the request lifetime.
+ """
+
+ @asynccontextmanager
+ async def app_lifespan(_app: Starlette) -> AsyncIterator[None]:
+ global _session_group
+ async with ClientSessionGroup() as group:
+ _session_group = group
+ for params in _parse_mcp_servers():
+ try:
+ await group.connect_to_server(params)
+ logger.info("Connected to MCP server: %s", params)
+ except Exception:
+ logger.exception("Failed to connect to MCP server: %s", params)
+
+ tool_count = len(group.tools)
+ if tool_count:
+ logger.info("MCP support: %d remote tool(s) available", tool_count)
+ else:
+ logger.info("MCP support: no remote servers configured (set MCP_SERVERS)")
+
+ async with session_manager.run():
+ yield
+
+ _session_group = None
+
+ return Starlette(
+ routes=[Mount("/mcp", app=session_manager.handle_request)],
+ lifespan=app_lifespan,
+ )
+
+
+# ---------------------------------------------------------------------------
+# CLI entry point
+# ---------------------------------------------------------------------------
+
+
+@click.command()
+@click.option("--port", default=8000, show_default=True, help="Port to listen on.")
+@click.option("--host", default="127.0.0.1", show_default=True, help="Host to bind to.")
+def main(port: int, host: str) -> None:
+ """Start the research agent MCP server.
+
+ Set the ``MCP_SERVERS`` environment variable (JSON array) to connect remote
+ MCP servers and expose their tools during research tasks.
+ """
+ logging.basicConfig(level=logging.INFO, format="%(levelname)s %(name)s: %(message)s")
+ session_manager = StreamableHTTPSessionManager(app=server)
+ starlette_app = create_app(session_manager)
+ logger.info("Research agent starting on http://%s:%d/mcp", host, port)
+ uvicorn.run(starlette_app, host=host, port=port)
diff --git a/examples/research-agent/mcp_research_agent/visualization.py b/examples/research-agent/mcp_research_agent/visualization.py
new file mode 100644
index 000000000..fdaa55d5d
--- /dev/null
+++ b/examples/research-agent/mcp_research_agent/visualization.py
@@ -0,0 +1,125 @@
+"""Pure-Python SVG chart generation for research visualizations.
+
+No external dependencies — SVG is generated as XML and returned base64-encoded
+so it can be embedded directly in an MCP ImageContent block.
+"""
+
+import base64
+import html
+import re
+
+
+def extract_metrics(text: str) -> dict[str, float]:
+ """Extract labelled numeric metrics from research text.
+
+ Recognises patterns like "Market size: $4.5B", "Growth rate: 12%",
+ "Users: 1.2M" and returns a dict suitable for charting.
+ """
+ metrics: dict[str, float] = {}
+    # Match "Label: [$]number[unit]" — label starts with a capital letter.
+    # A lookahead ends the match instead of \b: \b never matches after a
+    # non-word unit character such as "%", so "%"-suffixed metrics would
+    # silently lose their unit.
+    pattern = (
+        r"([A-Z][a-zA-Z\s\-]{2,25}?):\s*"
+        r"\$?([\d,]+(?:\.\d+)?)\s*"
+        r"(%|B|M|K|billion|million|thousand)?(?![A-Za-z0-9])"
+    )
+ for match in re.finditer(pattern, text):
+ label = match.group(1).strip()
+ value_str = match.group(2).replace(",", "")
+ unit = (match.group(3) or "").lower()
+
+ try:
+ value = float(value_str)
+ except ValueError:
+ continue
+
+ if unit in ("b", "billion"):
+ metrics[f"{label[:18]} ($B)"] = value
+ elif unit in ("m", "million"):
+ metrics[f"{label[:18]} ($M)"] = value
+ elif unit == "%":
+ metrics[f"{label[:18]} (%)"] = value
+ elif unit in ("k", "thousand"):
+ metrics[f"{label[:18]} (K)"] = value
+ else:
+ metrics[label[:24]] = value
+
+ if len(metrics) >= 6:
+ break
+
+ return metrics
+
+
+def generate_bar_chart(
+ data: dict[str, float],
+ title: str = "Research Findings",
+ width: int = 640,
+ height: int = 400,
+) -> str:
+ """Generate a bar chart and return it as a base64-encoded SVG string.
+
+ Args:
+ data: Mapping of label → numeric value.
+ title: Chart title displayed at the top.
+ width: SVG canvas width in pixels.
+ height: SVG canvas height in pixels.
+
+ Returns:
+ Base64-encoded UTF-8 SVG string (mimeType ``image/svg+xml``).
+ """
+ if not data:
+        svg = (
+            f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
+            f'<text x="{width / 2:.0f}" y="{height / 2:.0f}" text-anchor="middle" '
+            'font-size="14" fill="#666">No metrics found</text></svg>'
+        )
+ return base64.b64encode(svg.encode()).decode()
+
+ pad = 60
+ title_h = 40
+ chart_w = width - 2 * pad
+ chart_h = height - 2 * pad - title_h
+
+ max_val = max(data.values()) or 1.0
+ n = len(data)
+ slot_w = chart_w / n
+ bar_w = slot_w * 0.65
+ colors = ["#4285f4", "#34a853", "#fbbc04", "#ea4335", "#9c27b0", "#00bcd4"]
+
+    baseline_y = pad + title_h + chart_h
+    rects: list[str] = []
+    value_labels: list[str] = []
+    x_labels: list[str] = []
+
+    for i, (key, val) in enumerate(data.items()):
+        bar_h = (val / max_val) * chart_h
+        x = pad + i * slot_w + (slot_w - bar_w) / 2
+        y = pad + title_h + chart_h - bar_h
+        color = colors[i % len(colors)]
+
+        rects.append(f'<rect x="{x:.1f}" y="{y:.1f}" width="{bar_w:.1f}" height="{bar_h:.1f}" fill="{color}" rx="3"/>')
+        value_labels.append(
+            f'<text x="{x + bar_w / 2:.1f}" y="{y - 6:.1f}" text-anchor="middle" '
+            f'font-size="12" fill="#333">{val:.1f}</text>'
+        )
+        short = html.escape(key[:14]) + ("…" if len(key) > 14 else "")
+        x_labels.append(
+            f'<text x="{x + bar_w / 2:.1f}" y="{baseline_y + 18}" text-anchor="middle" '
+            f'font-size="11" fill="#555">{short}</text>'
+        )
+
+    svg_lines = [
+        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}" '
+        f'viewBox="0 0 {width} {height}">',
+        f'<rect width="{width}" height="{height}" fill="#ffffff"/>',
+        f'<text x="{width / 2:.0f}" y="{pad / 2 + 10:.0f}" text-anchor="middle" '
+        f'font-size="16" font-weight="bold" fill="#222">{html.escape(title)}</text>',
+        f'<line x1="{pad}" y1="{baseline_y}" x2="{width - pad}" y2="{baseline_y}" stroke="#999"/>',
+        *rects,
+        *value_labels,
+        *x_labels,
+        "</svg>",
+    ]
+    return base64.b64encode("\n".join(svg_lines).encode()).decode()
diff --git a/examples/research-agent/pyproject.toml b/examples/research-agent/pyproject.toml
new file mode 100644
index 000000000..4a4b00361
--- /dev/null
+++ b/examples/research-agent/pyproject.toml
@@ -0,0 +1,43 @@
+[project]
+name = "mcp-research-agent"
+version = "0.1.0"
+description = "Research agent demonstrating collaborative planning, MCP support, and visualizations"
+readme = "README.md"
+requires-python = ">=3.10"
+authors = [{ name = "Anthropic, PBC." }]
+keywords = ["mcp", "llm", "agent", "research", "visualization", "planning"]
+license = { text = "MIT" }
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: MIT License",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.10",
+]
+dependencies = ["anyio>=4.5", "click>=8.0", "mcp", "starlette", "uvicorn"]
+
+[project.scripts]
+mcp-research-agent = "mcp_research_agent.server:main"
+
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[tool.hatch.build.targets.wheel]
+packages = ["mcp_research_agent"]
+
+[tool.pyright]
+include = ["mcp_research_agent"]
+venvPath = "."
+venv = ".venv"
+
+[tool.ruff.lint]
+select = ["E", "F", "I"]
+ignore = []
+
+[tool.ruff]
+line-length = 120
+target-version = "py310"
+
+[dependency-groups]
+dev = ["pyright>=1.1.378", "ruff>=0.6.9"]