This guide walks through the full learning loop: query memories, augment a prompt, record a trace, and submit a review.
1. Create a client

You need an API key and project ID from the Reflect console.
from reflect_sdk import ReflectClient

client = ReflectClient(
    base_url="https://api.starlight-search.com",
    api_key="rf_live_...",
    project_id="my-project",
)
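
If you'd rather not hard-code credentials, the same constructor arguments can come from environment variables. This is a minimal sketch; the variable names are illustrative, not something the SDK requires.

import os

from reflect_sdk import ReflectClient

client = ReflectClient(
    base_url="https://api.starlight-search.com",
    api_key=os.environ["REFLECT_API_KEY"],  # illustrative variable name
    project_id=os.environ.get("REFLECT_PROJECT_ID", "my-project"),
)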
2. Run the learning loop

The client.trace() context manager handles the full loop in one block. It queries memories on entry and auto-submits the trace with the correct retrieved_memory_ids on exit:
with client.trace("How do I implement retry logic with exponential backoff?") as ctx:
    # ctx.augmented_task contains the task + any relevant memory blocks
    # ctx.memories contains the retrieved Memory objects
    response = my_agent(ctx.augmented_task)

    ctx.set_output(
        trajectory=[
            {"role": "user", "content": ctx.augmented_task},
            {"role": "assistant", "content": response},
        ],
        result="pass",
    )
# Trace auto-submitted with retrieved_memory_ids tracked for you
If no memories exist yet, ctx.augmented_task contains the original task unchanged.
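
The my_agent call above stands in for whatever agent or model call you already have. A minimal placeholder with the same shape (a callable that takes the augmented task string and returns the assistant's reply) might look like this:

def my_agent(augmented_task: str) -> str:
    # Placeholder agent: in practice this would call your LLM or agent
    # framework with the augmented task and return its reply as a string.
    return f"(stub reply to) {augmented_task}"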
3. Query memories again

The reflection generated from that trace now appears in subsequent queries:
memories = client.query_memories(
    task="What is the best approach for retrying failed requests?",
    limit=5,
)
for m in memories:
    print(f"{m.reflection} (score: {m.score:.2f})")
Feedback text is typically attached by judge workflows or through the platform review UI. SDK/API calls usually submit only the review result (pass or fail).
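
Inside the SDK loop, that pass-or-fail outcome is the result field passed to ctx.set_output. For example, you might derive it from your own check of the agent's response; checks_pass below is a hypothetical user-defined helper, not an SDK call:

with client.trace("Add retry logic to the HTTP client") as ctx:
    response = my_agent(ctx.augmented_task)
    ctx.set_output(
        trajectory=[
            {"role": "user", "content": ctx.augmented_task},
            {"role": "assistant", "content": response},
        ],
        # checks_pass is a hypothetical validation helper you supply.
        result="pass" if checks_pass(response) else "fail",
    )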
See Traces and reviews for decorator and explicit call patterns.