# Examples

This page walks through two runnable examples:

- `deepagents_exa_quickstart.py`
- `interactive_feedback_cli.py`

Both follow the same core loop:
- Retrieve memories for the current task
- Run an agent/model with memory-augmented context
- Store the run as a trace
- Optionally review the run so utility (`q_value`) updates are learned
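The loop above can be sketched in a few lines. Since the real Reflect client's signatures are not shown on this page, the sketch below uses a stand-in client: the method names (`augment_with_memories`, `create_trace`) come from these examples, but their parameters, return shapes, and the `review_trace` call are assumptions for illustration only.

```python
# Hypothetical sketch of the retrieve -> run -> trace -> review loop.
# FakeReflectClient stands in for the real Reflect client; its return
# shapes and the review_trace method are assumptions, not the real API.

class FakeReflectClient:
    """Illustration-only stand-in for the Reflect client."""

    def augment_with_memories(self, task: str) -> dict:
        # The real client would retrieve memories relevant to the task.
        return {
            "augmented_task": f"[memories]\n{task}",
            "retrieved_memory_ids": ["mem-1"],
        }

    def create_trace(self, trajectory, retrieved_memory_ids):
        # The real client persists the run as a trace and returns its id.
        return "trace-123"

    def review_trace(self, trace_id, outcome):
        # The real client applies a pass/fail review so q_value updates learn.
        return {"trace_id": trace_id, "outcome": outcome}


def run_with_memory(client, task: str, model_fn):
    aug = client.augment_with_memories(task)            # 1. retrieve memories
    output = model_fn(aug["augmented_task"])            # 2. run with augmented context
    trace_id = client.create_trace(                     # 3. store the run as a trace
        trajectory=[{"role": "assistant", "content": output}],
        retrieved_memory_ids=aug["retrieved_memory_ids"],
    )
    client.review_trace(trace_id, outcome="pass")       # 4. optional review step
    return trace_id
```

The important linkage is step 3: passing `retrieved_memory_ids` when the trace is created is what lets a later review update the utility of the memories that were actually used.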
## Example 1: Deep Agents + Exa search
Source: `deepagents_exa_quickstart.py` (see the Reflect SDK examples on GitHub)
This example wires Reflect into a Deep Agents research workflow with an Exa search tool.
### What it demonstrates
- Calling `augment_with_memories(...)` before agent execution
- Passing memory-augmented task text to the agent
- Converting agent messages into Reflect trajectory format
- Calling `create_trace(...)` with `retrieved_memory_ids` so future utility updates can be applied
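The message-conversion step above can be sketched as follows. The `{"role", "content"}` turn shape is an assumption based on common agent message formats, not a confirmed Reflect trajectory schema; check the example source for the real conversion.

```python
# Hypothetical sketch: flatten agent chat messages (assumed to be dicts)
# into the list-of-turns shape handed to create_trace(...).

def to_trajectory(messages):
    """Map agent messages into Reflect-style trajectory turns.

    Assumes each message is a dict; missing roles default to 'assistant'.
    """
    return [
        {"role": m.get("role", "assistant"), "content": m.get("content", "")}
        for m in messages
    ]
```

A tool-using agent run typically yields a mix of user, assistant, and tool messages, so the conversion keeps role labels intact rather than collapsing everything into one string.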
### Required environment

### Run

### Core flow
## Example 2: Interactive feedback CLI
Source: `interactive_feedback_cli.py`
This example runs a single interactive loop where you can mark a run as pass, fail, or defer after seeing the model output.
### What it demonstrates
- Memory retrieval before generation
- Interactive review decision (`pass`, `fail`, or `defer`)
- `create_trace_and_wait(...)` for synchronous persistence
- Utility learning update after reviewed runs
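The review-decision step can be sketched like this. Treating `defer` as "skip the utility update for now" is an assumption about the example's behavior, and the tiny client below is a stand-in with an assumed `review_trace` signature, not the real Reflect API.

```python
# Hypothetical sketch of the pass/fail/defer decision in the CLI loop.

VALID_CHOICES = {"pass", "fail", "defer"}


def decide_review(raw: str) -> str:
    """Normalize user input to pass/fail/defer; unknown input defers."""
    choice = raw.strip().lower()
    return choice if choice in VALID_CHOICES else "defer"


def apply_review(client, trace_id: str, choice: str):
    """Apply the review unless deferred (assumed behavior: defer = no update)."""
    if choice == "defer":
        return None  # no utility update yet; the trace can be reviewed later
    return client.review_trace(trace_id, outcome=choice)
```

Deferring matters because the trace has already been persisted synchronously via `create_trace_and_wait(...)`: nothing is lost by postponing the review, and the `q_value` update happens only once a pass/fail verdict arrives.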
### Required environment

### Run

### Core flow
## Which example should you start with?
- Use Deep Agents + Exa if you want a research/tool-using agent workflow.
- Use Interactive feedback CLI if you want the fastest way to test memory retrieval + trace/review loops manually.