Works across workflows
Agent agnostic
Use Reflect with any agent that can call the API. It is not tied to a single model, framework, or agent runtime.
Harness agnostic
Plug Reflect into your existing setup, whether you run custom scripts, evaluators, agent loops, or lightweight MCP-based tooling.
Task agnostic
Store and retrieve memories for debugging, implementation, documentation, testing, refactoring, and other kinds of work.
Cross-functional memory reuse
Memories created in one workflow can help with another. A reflection from a coding task can resurface later in testing, documentation, or review work whenever it is relevant.
How retrieval works
Reflect ranks memories by learned utility (Q-value) from real outcomes.
- Q-value (utility) captures whether using a memory previously helped or hurt.
- Higher Q-value memories are prioritized in retrieval.
- Lower Q-value memories are deprioritized over time.
Outcomes (pass / fail) are recorded in context, then fed back into future retrieval.
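The ranking behavior above can be sketched in plain Python. Note that the field names (`q_value`, `text`) and the `rank_by_utility` helper are illustrative assumptions for this sketch, not the Reflect SDK's actual schema or API.

```python
# Illustrative sketch of utility-based ranking; field names are
# assumptions, not the actual Reflect schema.
memories = [
    {"text": "Pin the dependency version before building", "q_value": 0.8},
    {"text": "Retry flaky network calls with backoff", "q_value": 0.3},
    {"text": "Skip the lint step to save time", "q_value": -0.4},
]

def rank_by_utility(memories: list[dict], top_k: int = 2) -> list[dict]:
    """Return the top_k memories with the highest learned utility."""
    return sorted(memories, key=lambda m: m["q_value"], reverse=True)[:top_k]

# Higher Q-value memories surface first; the negative-utility memory
# (one that previously hurt) falls out of the top_k entirely.
top = rank_by_utility(memories)
print([m["text"] for m in top])
```

As outcomes accumulate, re-scoring Q-values and re-ranking like this is what deprioritizes low-utility memories over time.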
The learning loop
Query memories
Before executing a task, retrieve relevant reflections from past runs. Memories are ranked by learned utility (Q-value).
Augment your prompt
Append retrieved memories to the task text. The SDK formats successful and failed reflections into sections your LLM can use as context.
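A minimal sketch of this augmentation step, assuming a simple success/failure split. The section headers, the `outcome` field, and the `augment_prompt` helper are hypothetical; the SDK's actual formatting may differ.

```python
# Sketch of prompt augmentation; section headers and memory fields
# are illustrative, not the SDK's exact output format.
def augment_prompt(task: str, memories: list[dict]) -> str:
    """Append past reflections to the task text, split by outcome."""
    successes = [m["text"] for m in memories if m["outcome"] == "success"]
    failures = [m["text"] for m in memories if m["outcome"] == "failure"]
    sections = [task]
    if successes:
        sections.append("Strategies that worked before:\n" +
                        "\n".join(f"- {s}" for s in successes))
    if failures:
        sections.append("Pitfalls from past failures:\n" +
                        "\n".join(f"- {f}" for f in failures))
    return "\n\n".join(sections)

memories = [
    {"text": "Run the test suite before refactoring", "outcome": "success"},
    {"text": "Editing generated files by hand", "outcome": "failure"},
]
print(augment_prompt("Refactor the auth module", memories))
```

The augmented string is then passed to your LLM as the task prompt, so the model sees both what worked and what failed on similar past runs.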
Record a trace
Store the full trajectory — task, steps, final response, and which memories were used.
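A trace like the one described above might look like the following. Every key here is an illustrative assumption, not the Reflect API's actual field names; the point is only what a full trajectory carries.

```python
# Sketch of a trace payload; keys and values are illustrative
# assumptions, not the Reflect API's actual schema.
trace = {
    "task": "Fix the failing login test",
    "steps": [
        {"action": "read_file", "detail": "tests/test_login.py"},
        {"action": "edit_file", "detail": "src/auth.py"},
    ],
    "final_response": "Patched the token expiry check; tests pass.",
    "memories_used": ["mem_123", "mem_456"],
    "outcome": "pass",  # fed back into retrieval as a Q-value signal
}
print(sorted(trace))
```

Because the trace links the outcome to the specific memories that were used, Reflect can credit or penalize each memory's Q-value for the next retrieval.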
Requirements
- Python 3.12+
- A Reflect API instance (local or deployed)
- A project and API key from the Reflect console or the bootstrap endpoint
Next steps
Installation
Install the SDK with pip.
Quickstart
Create a client, query memories, and record a trace.
Memories
Query and augment tasks with past reflections.
Reference
Full ReflectClient method reference.