This page summarizes two runnable SDK examples from the examples/ directory:
  • deepagents_exa_quickstart.py
  • interactive_feedback_cli.py
Both examples follow the same core pattern:
  1. Retrieve memories for the current task
  2. Run an agent/model with memory-augmented context
  3. Store the run as a trace
  4. Optionally review the run so that memory utility (q_value) updates can be learned
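The four steps above can be sketched as a toy, in-memory stand-in. Nothing here is the real Reflect SDK: `ToyReflect`, its retrieval scoring, and the q_value update rule are all hypothetical, purely to show how retrieval, trace storage, and review-driven utility updates fit together.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    id: str
    text: str
    q_value: float = 0.0  # learned utility, nudged on review

class ToyReflect:
    """Hypothetical local stand-in for the retrieve -> run -> trace -> review loop."""

    def __init__(self):
        self.memories: list[Memory] = []
        self.traces: dict[str, list[str]] = {}  # trace id -> retrieved memory ids

    def retrieve(self, task: str, limit: int = 3) -> list[Memory]:
        # Step 1: naive word-overlap relevance plus the learned utility term.
        words = set(task.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: len(words & set(m.text.lower().split())) + m.q_value,
            reverse=True,
        )
        return scored[:limit]

    def create_trace(self, trace_id: str, retrieved_memory_ids: list[str]) -> None:
        # Step 3: store the run as a trace, remembering which memories it used.
        self.traces[trace_id] = retrieved_memory_ids

    def review(self, trace_id: str, passed: bool, lr: float = 0.1) -> None:
        # Step 4: move q_value of the retrieved memories toward +1 (pass) or -1 (fail).
        target = 1.0 if passed else -1.0
        for m in self.memories:
            if m.id in self.traces[trace_id]:
                m.q_value += lr * (target - m.q_value)

client = ToyReflect()
client.memories = [
    Memory("m1", "use exa search for news"),
    Memory("m2", "format answers as bullets"),
]
hits = client.retrieve("latest news in AI")          # step 1
# step 2 (run the agent with the retrieved memories) is elided here
client.create_trace("t1", [m.id for m in hits])      # step 3
client.review("t1", passed=True)                     # step 4
```

After the passing review, the retrieved memories carry a higher q_value, so they rank higher on the next retrieval for a similar task.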
Example 1: Deep Agents + Exa quickstart

Source: deepagents_exa_quickstart.py. This example wires Reflect into a Deep Agents research workflow with an Exa search tool.

What it demonstrates

  • augment_with_memories(...) before agent execution
  • Passing memory-augmented task text to the agent
  • Converting agent messages into Reflect trajectory format
  • create_trace(...) with retrieved_memory_ids so future utility updates can be applied

Required environment

export EXA_API_KEY=...
export OPENAI_API_KEY=...
export REFLECT_PROJECT_ID=...
export REFLECT_API_KEY=...
Optional:
export REFLECT_API_URL=https://api.starlight-search.com
export DEEPAGENT_MODEL=openai:gpt-5.4-mini
export RESEARCH_QUERY="Latest news in AI"

Run

python examples/deepagents_exa_quickstart.py

Core flow

augmented = reflect.augment_with_memories(QUERY, limit=3, lambda_=0.5)

result = agent.invoke({"messages": [{"role": "user", "content": augmented.augmented_task}]})
final_response = _extract_final_text(result)
trajectory = _to_reflect_trajectory(result["messages"])

trace = reflect.create_trace(
    task=QUERY,
    trajectory=trajectory,
    final_response=final_response,
    retrieved_memory_ids=[memory.id for memory in augmented.memories],
    model=MODEL,
    metadata={"source": "deepagents_exa_quickstart"},
    review_result=None,
)

Example 2: Interactive feedback CLI

Source: interactive_feedback_cli.py. This example runs a single interactive loop where you can pass, fail, or defer the review after seeing the model output.

What it demonstrates

  • Memory retrieval before generation
  • Interactive review decision (pass, fail, or defer)
  • create_trace_and_wait(...) for synchronous persistence
  • Utility learning update after reviewed runs

Required environment

export OPENAI_API_KEY=...
export REFLECT_PROJECT_ID=...
export REFLECT_API_KEY=...

Run

python examples/interactive_feedback_cli.py

Core flow

augmented = client.augment_with_memories(
    task=args.task,
    limit=args.limit,
    lambda_=args.lambda_,
)

assistant_response = solve_with_openai(...)
review_action = prompt_for_review_action()

trace = client.create_trace_and_wait(
    task=args.task,
    trajectory=trajectory,
    final_response=assistant_response,
    retrieved_memory_ids=[memory.id for memory in augmented.memories],
    model=args.model,
    metadata={"source": "interactive_feedback_cli"},
    review_result=None if review_action == "defer" else review_action,
    feedback_text=None,
)
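The `prompt_for_review_action` helper is not shown above; its real implementation may differ. A minimal sketch, with an injectable `input_fn` so the loop is testable:

```python
def prompt_for_review_action(input_fn=input) -> str:
    """Ask for pass/fail/defer until a valid choice is entered (assumed behavior)."""
    while True:
        choice = input_fn("Review [pass/fail/defer]: ").strip().lower()
        if choice in {"pass", "fail", "defer"}:
            return choice
        print("Please answer pass, fail, or defer.")
```

A "defer" answer maps to `review_result=None`, so the trace is stored without a review and no utility update is learned until the run is reviewed later.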

Which example should you start with?

  • Use Deep Agents + Exa if you want a research/tool-using agent workflow.
  • Use Interactive feedback CLI if you want the fastest way to test memory retrieval + trace/review loops manually.