Reflect ships three integration surfaces optimized for AI agents and the IDEs that host them:

llms.txt

The entire docs site as plain text. Fetch with curl or urllib, drop into Cursor @docs or any RAG indexer.

Skills

Agent skills installed via npx skills. The integrate-reflect skill walks a coding agent through adding Reflect to a project end-to-end.

MCP server

retrieve_memories and create_memory tools over the Model Context Protocol. Compatible with Cursor, Claude Code, Cline, Continue, Windsurf, Zed, and more.

llms.txt — docs as plain text

The docs site auto-generates two plain-text bundles for ingestion by AI agents:
  • /llms.txt: curated index with markdown links to every page (~2 KB)
  • /llms-full.txt: full docs site concatenated into one file (~50 KB)
Both are served as text/plain.
Fetch with curl or urllib, not a JS-rendered HTML fetcher. The HTML console at reflect.starlight-search.com is a single-page app and won’t yield content to plain HTTP fetchers — the text bundles above are the right entry point for any agent.
curl -sS https://docs.starlight-search.com/llms-full.txt -o reflect-docs.txt
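The same fetch works from Python with only the standard library. The index parser below is a sketch that assumes /llms.txt uses ordinary markdown link syntax, as the curated index described above does:

```python
import re
import urllib.request

def fetch_text(url: str) -> str:
    """Fetch one of the plain-text bundles over plain HTTP (no JS rendering needed)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from the markdown links in llms.txt."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", text)

# Example usage (requires network access):
# full_docs = fetch_text("https://docs.starlight-search.com/llms-full.txt")
# pages = parse_llms_index(fetch_text("https://docs.starlight-search.com/llms.txt"))
```

The parsed (title, url) pairs drop straight into a RAG indexer's document list.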

Wiring into IDEs

Add the URL via Cursor’s docs feature:
@docs add reflect https://docs.starlight-search.com/llms-full.txt
Then reference it in any chat with @reflect.

Skills — installable workflow guides

Reflect publishes agent skills under StarlightSearch/reflect-skills on GitHub. Skills are installed with the skills CLI and work in Claude Code, Cursor, Codex, Gemini CLI, Antigravity, Deep Agents, Pi, Qwen Code, and other agent hosts.

integrate-reflect

Walks a coding agent through adding Reflect to a Python agent project end-to-end: SDK install, framework-specific loop placement, parameter tuning, LLM-as-judge wiring, and a mandatory smoke test that proves the loop closes. Covers OpenAI Agents SDK, Claude Agent SDK, LangGraph, Pydantic AI, and a generic-loop fallback. Handles both fresh projects (scaffolds a starter agent) and existing codebases (overlays Reflect onto existing loops). Install globally:
npx skills add StarlightSearch/reflect-skills@integrate-reflect -g -y
Triggers automatically when the user says:
  • “add Reflect to my agent”
  • “give my agent memory”
  • “build an agent with Reflect”
  • “wire up client.trace”
  • “ctx.memories is empty” / “q-values aren’t moving”
Browse the source and contribute on GitHub.
Reflect also has a project-level Skill API that distills your reviewed traces into a unified guide for your specific agent. That’s a different feature than the installable workflow skills here. The installable skills teach a coding agent how to integrate Reflect; the Skill API is what Reflect generates from your traces once you’re integrated.

MCP server

The Reflect MCP server exposes retrieve_memories and create_memory over the Model Context Protocol. Connect it to any MCP-capable client and your AI assistant will automatically query past lessons before hard tasks and record new ones after each run.
Hosted endpoint: https://api.starlight-search.com/mcp
Auth: Authorization: Bearer <your-api-key>
Transport: Streamable HTTP
Get your API key from the Reflect console.
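Under the hood, MCP tool calls are JSON-RPC 2.0 messages POSTed over the Streamable HTTP transport. Your client builds these for you; as a sketch of what goes over the wire, here is the body for a retrieve_memories call (the `query` argument name is an assumption for illustration, not the documented schema):

```python
import json

def mcp_tool_call_body(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 body for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A client POSTs this to https://api.starlight-search.com/mcp with the
# Authorization: Bearer header and Content-Type: application/json.
body = mcp_tool_call_body("retrieve_memories", {"query": "flaky auth test"})
```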

Quick connect — JSON-based clients

Most MCP-capable IDEs use the same JSON config block. Drop this into the right file for your client:
{
  "mcpServers": {
    "reflect": {
      "url": "https://api.starlight-search.com/mcp",
      "headers": {
        "Authorization": "Bearer rf_live_..."
      }
    }
  }
}
Config file locations by client:
  • Cursor: .cursor/mcp.json (project) or ~/.cursor/mcp.json (global)
  • Cline (VS Code): cline_mcp_settings.json (Cline → Settings → MCP Servers → Configure)
  • Continue (VS Code/JetBrains): ~/.continue/config.json under experimental.modelContextProtocolServers
  • Windsurf: ~/.codeium/windsurf/mcp_config.json
  • Zed: ~/.config/zed/settings.json under "context_servers" (no mcpServers wrapper)

Quick connect — Claude Code

claude mcp add --transport http reflect https://api.starlight-search.com/mcp \
  --header "Authorization: Bearer rf_live_..."

Stdio transport (alternative)

If your client doesn’t support HTTP MCP servers, run the local stdio version with uvx:
{
  "mcpServers": {
    "reflect": {
      "command": "uvx",
      "args": ["--from", "reflect-mcp-server", "reflect-mcp-server", "--transport", "stdio"],
      "env": {
        "REFLECT_API_KEY": "rf_live_...",
        "REFLECT_PROJECT_ID": "your-project-id",
        "REFLECT_API_URL": "https://api.starlight-search.com"
      }
    }
  }
}
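To smoke-test the stdio server outside an IDE, the same command and env vars from the config above can be run directly in a shell (the server then waits for MCP messages on stdin):

```shell
export REFLECT_API_KEY="rf_live_..."
export REFLECT_PROJECT_ID="your-project-id"
export REFLECT_API_URL="https://api.starlight-search.com"
uvx --from reflect-mcp-server reflect-mcp-server --transport stdio
```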
For self-hosting, advanced options, and the full env-var reference, see the MCP server guide.

Quick reference

  • Read all the docs in one fetch: https://docs.starlight-search.com/llms-full.txt
  • Browse the docs index: https://docs.starlight-search.com/llms.txt
  • Add Reflect to a Python agent project: npx skills add StarlightSearch/reflect-skills@integrate-reflect
  • Connect any MCP client to Reflect: https://api.starlight-search.com/mcp (Bearer auth)
  • Browse all skills: StarlightSearch/reflect-skills
  • Get a project + API key: Reflect console