
Pattern: @reflect_trace Decorator

The decorator pattern is the lowest-friction way to add Reflect to an existing agent function. Annotate your function with @reflect_trace and Reflect handles memory retrieval, trace submission, and retrieved_memory_ids tracking; you just write the agent logic.
@reflect_trace  →  ctx.augmented_task  →  agent runs  →  TraceResult returned  →  trace auto-submitted
This example uses the OpenAI Agents SDK with a WebSearchTool.

Prerequisites

Set the required environment variables:
export REFLECT_API_KEY=rf_live_...
export REFLECT_PROJECT_ID=your-project-id
export OPENAI_API_KEY=sk-...
Install dependencies:
pip install reflect-sdk openai-agents

Full example

openai_agents_reflect_simple.py
"""Simple OpenAI Agents SDK + Reflect SDK example (no CLI).

Prerequisites:
- Reflect API running (default: http://localhost:8000)
- REFLECT_PROJECT_ID and REFLECT_API_KEY set
- OPENAI_API_KEY set
"""

from __future__ import annotations

import asyncio
import os

from agents import Agent, Runner, WebSearchTool
from reflect_sdk import ReflectClient, TraceContext, TraceResult, reflect_trace
from reflect_sdk.converters import from_openai_agents

REFLECT_API_URL = os.getenv("REFLECT_API_URL", "http://localhost:8000")
REFLECT_API_KEY = os.getenv("REFLECT_API_KEY")
MODEL = os.getenv("OPENAI_MODEL", "gpt-5.4-mini")

TASK = "What are the stock prices for Apple and Nvidia?"


async def main() -> None:
    if not REFLECT_API_KEY:
        raise RuntimeError("Set REFLECT_API_KEY.")
    if not os.getenv("OPENAI_API_KEY"):
        raise RuntimeError("Set OPENAI_API_KEY.")

    reflect = ReflectClient(
        base_url=REFLECT_API_URL,
        api_key=REFLECT_API_KEY,
        project_id=os.getenv("REFLECT_PROJECT_ID", "example"),
        timeout=120.0,
    )

    @reflect_trace(reflect, task=lambda question: question)
    async def answer(ctx: TraceContext, question: str) -> TraceResult:
        agent = Agent(
            name="Research assistant",
            model=MODEL,
            tools=[WebSearchTool()],
            instructions=(
                "You are a concise research assistant. Use relevant memories from the prompt if present. "
                "Return a short, factual answer."
            ),
        )

        result = await Runner.run(agent, input=ctx.augmented_task)
        final_response = str(result.final_output).strip()
        trajectory = from_openai_agents(result)
        return TraceResult(
            output=final_response,  # returned to the caller
            trajectory=trajectory,
            model=MODEL,
            metadata={"source": "openai_agents_reflect_simple"},
        )

    answer_result = await answer(question=TASK)
    print(answer_result)


if __name__ == "__main__":
    asyncio.run(main())

Run it

python openai_agents_reflect_simple.py

How it works

1. Create the client

One ReflectClient instance is shared across all decorated functions in your app.
reflect = ReflectClient(
    base_url=os.getenv("REFLECT_API_URL", "http://localhost:8000"),
    api_key=os.getenv("REFLECT_API_KEY"),
    project_id=os.getenv("REFLECT_PROJECT_ID", "example"),
)
2. Annotate your function with @reflect_trace

The decorator intercepts the call, retrieves relevant memories, and injects a TraceContext as the first argument. The task parameter tells Reflect how to extract the task string from your function’s arguments.
@reflect_trace(reflect, task=lambda question: question)
async def answer(ctx: TraceContext, question: str) -> TraceResult:
    ...
ctx.augmented_task contains the original question with past memories appended. Pass this to the agent instead of the raw question.
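To make the mechanics concrete, here is a toy, framework-free sketch of what a decorator like this does, using only the standard library. ToyContext, retrieve_memories, and toy_trace are illustrative names invented for this sketch, not part of the Reflect SDK; the real decorator also submits the trace and tracks retrieved_memory_ids after your function returns.

```python
import functools
from dataclasses import dataclass


@dataclass
class ToyContext:
    """Stand-in for TraceContext: carries the memory-augmented task."""
    augmented_task: str


def retrieve_memories(task: str) -> list[str]:
    # Stand-in for real retrieval; a real client would query Reflect's memory store.
    return ["Earlier runs: exact ticker symbols improved search quality."]


def toy_trace(task_fn):
    """Decorator factory: task_fn extracts the task string from the call's arguments."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            task = task_fn(*args, **kwargs)
            memories = retrieve_memories(task)
            augmented = task + "\n\nRelevant memories:\n" + "\n".join(memories)
            ctx = ToyContext(augmented_task=augmented)
            # The real decorator would also submit the trace after fn returns.
            return fn(ctx, *args, **kwargs)
        return wrapper
    return decorator


@toy_trace(lambda question: question)
def answer(ctx: ToyContext, question: str) -> str:
    # An agent would consume ctx.augmented_task here.
    return ctx.augmented_task
```

Calling answer(question="...") runs the wrapper: it extracts the task, prepends retrieved memories, and hands your function a context object as the first argument, which is why the decorated signature starts with ctx.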
3. Run the agent with augmented context

The agent receives the memory-augmented prompt, so it can draw on what worked (or didn’t) in previous runs.
result = await Runner.run(agent, input=ctx.augmented_task)
4. Convert the agent result to a trajectory

from_openai_agents converts the agent’s message history into the format Reflect expects.
from reflect_sdk.converters import from_openai_agents

trajectory = from_openai_agents(result)
Reflect ships converters for popular agent frameworks so you don’t have to map message formats manually.
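Conceptually, a converter normalizes framework-specific run items into a uniform sequence of steps. The sketch below illustrates that idea only; to_trajectory, the input item shapes, and the role/content output format are assumptions for this example, not the Reflect SDK's actual trajectory schema.

```python
def to_trajectory(raw_items: list[dict]) -> list[dict]:
    """Map framework-specific run items to a normalized role/content trajectory.

    Hypothetical converter for illustration: messages pass through with their
    role, and tool calls are flattened into a readable tool step.
    """
    trajectory = []
    for item in raw_items:
        if item.get("type") == "message":
            trajectory.append({"role": item["role"], "content": item["content"]})
        elif item.get("type") == "tool_call":
            trajectory.append(
                {"role": "tool", "content": f"{item['name']}({item['arguments']})"}
            )
    return trajectory
```

The point of a converter layer is that your agent code keeps its framework-native result object, and only the boundary with Reflect deals in the normalized format.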
5. Return a TraceResult

Return a TraceResult from the decorated function. The decorator uses it to submit the trace and passes output back to the original caller.
return TraceResult(
    output=final_response,  # what answer() returns to the caller
    trajectory=trajectory,
    model=MODEL,
    metadata={"source": "openai_agents_reflect_simple"},
)
To add a review at submission time, include result="pass" or result="fail" and optionally feedback_text.

Adding a review

To close the learning loop immediately, include result in TraceResult:
return TraceResult(
    output=final_response,
    trajectory=trajectory,
    model=MODEL,
    result="fail",   # or "pass"
    feedback_text="The answer was incomplete.",  # only needed on fail
)
Without result, the trace is stored with a pending review status and can be reviewed later from the dashboard.