llms.txt
The entire docs site as plain text. Fetch with curl or urllib, drop into Cursor @docs or any RAG indexer.

Skills
Agent skills installed via npx skills. The integrate-reflect skill walks a coding agent through adding Reflect to a project end-to-end.

MCP server
retrieve_memories and create_memory tools over the Model Context Protocol. Compatible with Cursor, Claude Code, Cline, Continue, Windsurf, Zed, and more.

llms.txt — docs as plain text
The docs site auto-generates two plain-text bundles for ingestion by AI agents:

| URL | Purpose | Size |
|---|---|---|
/llms.txt | Curated index with markdown links to every page | ~2 KB |
/llms-full.txt | Full docs site concatenated into one file | ~50 KB |
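For example, to pull the full bundle into a local file for a RAG indexer (the URL is the one listed in the quick reference at the end of this page):

```shell
# Grab the full docs bundle as a single plain-text file
curl -s https://docs.starlight-search.com/llms-full.txt -o reflect-docs.txt
```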
Both bundles are served as text/plain.
Wiring into IDEs
The docs page shows per-client instructions for Cursor, Claude Code, and any other agent. In Cursor, add the URL via the docs feature, then reference it in any chat with @reflect.

Skills — installable workflow guides
Reflect publishes agent skills under StarlightSearch/reflect-skills on GitHub. Skills are installed with the skills CLI and work in Claude Code, Cursor, Codex, Gemini CLI, Antigravity, Deep Agents, Pi, Qwen Code, and other agent hosts.
integrate-reflect
Walks a coding agent through adding Reflect to a Python agent project end-to-end: SDK install, framework-specific loop placement, parameter tuning, LLM-as-judge wiring, and a mandatory smoke test that proves the loop closes. Covers OpenAI Agents SDK, Claude Agent SDK, LangGraph, Pydantic AI, and a generic-loop fallback. Handles both fresh projects (scaffolds a starter agent) and existing codebases (overlays Reflect onto existing loops).
Install globally with the skills CLI; the skill then triggers on prompts like:
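The install command, as given in the quick-reference table at the end of this page:

```shell
# Install the integrate-reflect skill from the StarlightSearch/reflect-skills repo
npx skills add StarlightSearch/reflect-skills@integrate-reflect
```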
- “add Reflect to my agent”
- “give my agent memory”
- “build an agent with Reflect”
- “wire up client.trace”
- “ctx.memories is empty” / “q-values aren’t moving”
Reflect also has a project-level Skill API that distills your reviewed traces into a unified guide for your specific agent. That is a separate feature from the installable workflow skills described here: the installable skills teach a coding agent how to integrate Reflect; the Skill API is what Reflect generates from your traces once you’re integrated.
MCP server
The Reflect MCP server exposes retrieve_memories and create_memory over the Model Context Protocol. Connect it to any MCP-capable client and your AI assistant will automatically query past lessons before hard tasks and record new ones after each run.
Hosted endpoint: https://api.starlight-search.com/mcp
Auth: Authorization: Bearer <your-api-key>
Transport: Streamable HTTP
Get your API key from the Reflect console.
Quick connect — JSON-based clients
Most MCP-capable IDEs use the same JSON config block. Drop this into the right file for your client:

| Client | Config file |
|---|---|
| Cursor | .cursor/mcp.json (project) or ~/.cursor/mcp.json (global) |
| Cline (VS Code) | cline_mcp_settings.json (Cline → Settings → MCP Servers → Configure) |
| Continue (VS Code/JetBrains) | ~/.continue/config.json under experimental.modelContextProtocolServers |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| Zed | ~/.config/zed/settings.json under "context_servers" (no mcpServers wrapper) |
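The original config snippet did not survive extraction; below is a representative block, assuming the standard mcpServers shape shared by Cursor, Cline, Continue, and Windsurf. The server name reflect is arbitrary; replace YOUR_API_KEY with your key. For Zed, nest the same server entry under "context_servers" instead, as noted in the table above.

```json
{
  "mcpServers": {
    "reflect": {
      "url": "https://api.starlight-search.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```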
Quick connect — Claude Code
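The connect command did not survive extraction; a sketch using Claude Code's claude mcp add CLI, assuming its current --transport and --header flags (verify against your installed version):

```shell
# Register the hosted Reflect server with Claude Code over Streamable HTTP
claude mcp add --transport http reflect https://api.starlight-search.com/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
```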
Stdio transport (alternative)
If your client doesn’t support HTTP MCP servers, run the local stdio version with uvx:
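The original command did not survive extraction; a sketch with a hypothetical package name (check the Reflect console or docs for the published package and its environment variables):

```shell
# "reflect-mcp" and REFLECT_API_KEY are placeholders, not confirmed names
REFLECT_API_KEY=YOUR_API_KEY uvx reflect-mcp
```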
Quick reference
| What you want | Where to go |
|---|---|
| Read all the docs in one fetch | https://docs.starlight-search.com/llms-full.txt |
| Browse the docs index | https://docs.starlight-search.com/llms.txt |
| Add Reflect to a Python agent project | npx skills add StarlightSearch/reflect-skills@integrate-reflect |
| Connect any MCP client to Reflect | https://api.starlight-search.com/mcp (Bearer auth) |
| Browse all skills | StarlightSearch/reflect-skills |
| Get a project + API key | Reflect console |