Overview
A memory is a concise, LLM-generated reflection distilled from a past agent run. It captures what the agent did, what worked, what went wrong, and what to do differently next time - then stores that knowledge so future runs can benefit from it.

Memories exist because LLMs are stateless. Each call starts from scratch with no awareness of what happened last time. Reflect solves this by maintaining a project-level memory bank that accumulates experience across runs, users, and sessions. Before each task, your agent queries this bank and receives the most relevant past reflections, ranked by both semantic similarity (is this about the same kind of problem?) and utility (did this advice actually lead to good outcomes?).

This dual ranking is the core design choice behind Reflect’s memory system. Pure semantic search returns relevant results, but it can’t distinguish between a reflection that led to a correct answer and one that didn’t. Utility scores add a learned quality signal that improves over time as more traces are reviewed.

How memories are created
Memories are never written directly. They are always generated from a reviewed trace:

- Your agent completes a task and you submit the trace with a review ("pass" or "fail")
- An LLM reads the trace (task, trajectory, outcome, feedback) and generates a reflection
- The task is embedded and stored in a memory bank along with the reflection, with an initial q_value of 0.5
- On future runs, the memory is retrieved when the query is semantically similar and the utility score is high enough.
How utility scores evolve
Utility scores are updated every time a memory is retrieved and the run that used it is reviewed. A memory starts at q_value = 0.5. If it’s retrieved in a run that passes, its utility nudges upward. If retrieved in a run that fails, it nudges downward. Over many reviews, useful memories converge toward 1.0 and unhelpful ones toward 0.0.
Utility scores only update for memories that were retrieved and used in a run. If a memory exists but wasn’t retrieved for a particular trace, its utility is unaffected by that trace’s review.
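To make the convergence concrete, here is a small illustrative simulation of the update rule described in Tuning Q-value learning rate with alpha below (q_new = q_old + alpha * (reward - q_old), default alpha 0.3). The helper function is illustrative only, not part of the SDK:

```python
# Illustrative only: simulate how a memory's q_value drifts across reviews,
# assuming the Bellman-style update described in the alpha section below
# (q_new = q_old + alpha * (reward - q_old), default alpha = 0.3).
def update_q(q_old: float, passed: bool, alpha: float = 0.3) -> float:
    reward = 1.0 if passed else 0.0
    return q_old + alpha * (reward - q_old)

q = 0.5  # every new memory starts here
for passed in [True, True, False, True, True]:
    q = update_q(q, passed)
    print(f"review={'pass' if passed else 'fail'}  q_value={q:.3f}")
# Runs of passes push q_value toward 1.0; repeated fails pull it toward 0.0.
```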
Retrieving memories
query_memories - raw retrieval
Returns a ranked list of Memory objects without modifying the task text. Use this when you want full control over how memories are injected into your prompt.
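A minimal sketch of the raw-retrieval path, assuming an already-constructed client. The parameter names and Memory fields follow the tables below; the wrapper function and prompt format are illustrative, not a documented pattern:

```python
def retrieve_and_build_prompt(client, task: str) -> tuple[str, list[str]]:
    """Raw retrieval with query_memories, then manual prompt assembly.
    `client` is an already-constructed Reflect client (constructor not shown)."""
    memories = client.query_memories(task=task, limit=5, lambda_=0.5)

    # Build your own prompt block from the returned Memory objects.
    lines = []
    for m in memories:
        if m.success is None:
            status = "unreviewed"
        else:
            status = "passed" if m.success else "failed"
        lines.append(f"- [{status}, score={m.score:.2f}] {m.reflection}")
    prompt = task + "\n\nRelevant past lessons:\n" + "\n".join(lines)

    # Keep the IDs so they can be passed as retrieved_memory_ids when the trace is created.
    return prompt, [m.id for m in memories]
```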
augment_with_memories - retrieval + formatting
Queries memories and appends them to the task as a structured text block. This is the most common method - it returns a ready-to-use prompt that you pass directly to the LLM.
When no memories match, augmented_task is simply the original task unchanged - so you can always use it safely without checking.
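A sketch of the common path, assuming an already-constructed client and your own call_llm function. augmented_task is documented above; reading the retrieved memories off the result as result.memories is an assumption made for illustration:

```python
def run_with_memories(client, call_llm, task: str) -> tuple[str, list[str]]:
    """Retrieval + formatting in one call, then pass the result to your LLM."""
    result = client.augment_with_memories(task=task, limit=5)

    # Safe even when nothing matched: augmented_task is then just the original task.
    answer = call_llm(result.augmented_task)

    # Remember the memory IDs so the trace created later can credit or blame them.
    return answer, [m.id for m in result.memories]  # result.memories is an assumed attribute
```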
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| task | str | required | The task text to search against. The API embeds this and finds semantically similar memories. |
| limit | int | 10 | Maximum number of memories to return. |
| lambda_ | float | 0.5 | Blend weight between similarity and utility (see below). |
| mmr_lambda | float | 0.7 | Maximal Marginal Relevance weight for diversity-aware selection. 1.0 disables MMR (pure utility ranking); 0.0 is pure diversity. See Diversity-aware retrieval with mmr_lambda. |
| metadata_filter | dict \| None | None | Optional metadata key/value pairs that memories must match. See Filtering by metadata. |
| similarity_threshold | float \| None | None | Minimum cosine similarity a candidate must reach. Overrides the server default. See Tuning the similarity threshold. |
The Memory object
| Field | Type | Description |
|---|---|---|
| id | str | Unique identifier - pass these as retrieved_memory_ids when creating traces |
| task | str | The past task this memory was generated from |
| reflection | str | LLM-generated reflection text |
| q_value | float | Learned quality score (0-1, higher = better track record) |
| similarity | float | Cosine similarity to the query task |
| score | float | Final ranking score: (1 - lambda_) * similarity + lambda_ * q_value |
| success | bool \| None | Whether the source trace passed review (None if unreviewed) |
| summary | str | One-sentence description of the past task |
| key_mistake | str | Specific wrong action or omission (empty for successful memories) |
| correct_action | str | Specific right action — tool name + argument patterns |
| applicable_tools | list[str] | Tools this lesson is about (LLM-chosen) |
| guidance | str | One-paragraph general strategy |
| tools_used | list[str] | Tools that actually appeared in the trajectory (server-extracted) |
The fields summary, key_mistake, correct_action, applicable_tools, guidance, and tools_used are populated for memories reviewed after the structured-reflection upgrade. Older memories will have these as empty strings / empty lists but retain their original reflection text.

Tuning retrieval with lambda_
The lambda_ parameter controls the balance between semantic relevance and learned quality when ranking memories:
| Value | Behavior | When to use |
|---|---|---|
| 0.0 | Pure semantic similarity | Early in a project when you have few reviewed traces and utility scores haven’t differentiated yet |
| 0.5 | Equal weight (default) | General-purpose starting point - works well for most projects |
| 0.7–0.9 | Favor utility | Mature projects with many reviewed traces - surface memories with the best track records |
| 1.0 | Pure utility | Only retrieve the historically most successful memories, regardless of semantic match |
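To see how the blend shifts rankings, here is the documented score formula ((1 - lambda_) * similarity + lambda_ * q_value) applied to two made-up candidates:

```python
# Blended ranking score from the Memory table above.
def blended_score(similarity: float, q_value: float, lambda_: float) -> float:
    return (1 - lambda_) * similarity + lambda_ * q_value

# Two hypothetical candidates: A is more on-topic, B has the better track record.
a = {"similarity": 0.82, "q_value": 0.35}  # relevant-sounding but historically unhelpful
b = {"similarity": 0.68, "q_value": 0.90}  # less similar but reliably useful

for lam in (0.0, 0.5, 0.9):
    sa = blended_score(a["similarity"], a["q_value"], lam)
    sb = blended_score(b["similarity"], b["q_value"], lam)
    print(f"lambda_={lam}: A={sa:.2f}  B={sb:.2f}  winner={'A' if sa > sb else 'B'}")
# lambda_=0.0 ranks A first (pure similarity); 0.5 and above rank B first.
```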
When to adjust
- Increase lambda_ if your agent keeps retrieving relevant-sounding memories that lead to bad outcomes. The memories are topically similar but not actually helpful - utility scores will down-rank them.
- Decrease lambda_ if your agent needs broader context from different past tasks. Strict utility ranking can narrow retrieval too much, especially when the most successful memories are about a different subtopic.
- Keep 0.5 if you’re unsure. The default works well until you have enough reviewed traces to notice a pattern.
Diversity-aware retrieval with mmr_lambda
lambda_ decides which memories are worth showing. mmr_lambda decides whether the top-k you return are redundant with each other.
Without diversity re-ranking, top-k can return five paraphrases of the same lesson — useful exactly once, then wasted prompt tokens. Reflect re-ranks the top candidates using Maximal Marginal Relevance (MMR), which iteratively picks the memory that best balances relevance to the query against similarity to the memories already selected.
| Value | Behavior | When to use |
|---|---|---|
| 1.0 | MMR disabled — pure utility ranking | Reproduce pre-MMR behavior, or when k is very small (1–2) |
| 0.7 | Default — relevance-leaning with mild redundancy suppression | General use |
| 0.5 | Balanced relevance and diversity | Banks with many near-duplicate reflections |
| 0.0 | Pure diversity | Almost never useful in production — diversity without quality is noise |
MMR runs after the lambda_ blend and the similarity threshold, so it only re-orders candidates that already passed quality filtering. If your bank is small or your queries return naturally diverse results, MMR is a no-op. The cost is negligible — pairwise cosines on at most limit × 5 candidates per query.
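Since the exact formula is not spelled out here, the sketch below uses the standard MMR formulation (mmr_lambda * relevance - (1 - mmr_lambda) * max similarity to already-selected items) over candidates that already carry a blended score and an embedding. It illustrates the re-ranking step, not Reflect's internal implementation:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mmr_rerank(candidates: list[dict], k: int, mmr_lambda: float = 0.7) -> list[dict]:
    """Standard MMR over candidates shaped like {"score": float, "embedding": np.ndarray}.
    With mmr_lambda=1.0 this reduces to plain score ranking (MMR disabled)."""
    selected: list[dict] = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(c: dict) -> float:
            # Penalize candidates that look like something we already picked.
            redundancy = max(
                (cosine(c["embedding"], s["embedding"]) for s in selected),
                default=0.0,
            )
            return mmr_lambda * c["score"] - (1 - mmr_lambda) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```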
Filtering by metadata
Any key/value pairs you pass on metadata when creating a trace are stored on the resulting memory and become filterable at retrieval time.
The metadata filter is applied on top of the standard project_id / user_id / status filter, so callers cannot reach across projects. Metadata keys can take any JSON-serializable value. Filter values are scalar; when the stored field is a list, Qdrant matches if the scalar value is a member of the list — useful for tagging a single memory with multiple categories:
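A sketch of the tag-then-filter flow, assuming an already-constructed client; passing metadata through client.trace() in exactly this shape is an assumption, and the tag values are made up:

```python
# Tag the trace when it is created; these key/value pairs are stored on the
# resulting memory. The keyword shape of client.trace() here is an assumption.
with client.trace(
    task="Summarize the quarterly sales report",
    metadata={"category": ["reporting", "finance"], "region": "emea"},
) as trace:
    ...  # run your agent

# Later, retrieve only memories whose metadata matches. Because the stored
# "category" field is a list, the scalar filter matches by list membership.
memories = client.query_memories(
    task="Summarize the annual revenue report",
    metadata_filter={"category": "finance"},
)
```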
Memories created before you start passing metadata won’t have any fields to match against. A metadata_filter that works on new memories will exclude older ones.

Tuning the similarity threshold
Every retrieved candidate must clear a minimum cosine-similarity floor before it can be re-ranked and returned. The server’s default is set in config.toml ([memory].similarity_threshold, typically 0.5). Clients can override per-call:
| Value | Behavior |
|---|---|
| 0.0 | Disable the floor — every candidate is considered. Useful when bootstrapping a small memory bank where all cosines are low. |
| 0.3–0.4 | Permissive. Lets weak but possibly-relevant memories through. Good for heterogeneous domains where embeddings collide. |
| 0.5 (default) | Moderate. Filters obviously-unrelated memories. |
| 0.7+ | Strict. Only near-duplicate retrievals pass. Use when the memory bank is large and dense. |
Pair the threshold with metadata_filter when the bank spans multiple task types: the metadata filter does coarse partitioning (same category only), and the similarity threshold does fine filtering within that partition.
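A sketch of that coarse-then-fine pattern, assuming an already-constructed client; the tag values and task strings are made up:

```python
# Coarse partition by task type, then a stricter similarity floor inside it.
memories = client.query_memories(
    task="Fix the failing unit test in the payments module",
    metadata_filter={"category": "debugging"},  # coarse: same task type only
    similarity_threshold=0.7,                   # fine: only near-duplicates within it
    limit=5,
)

# When bootstrapping a small bank, drop the floor so low-cosine memories still return.
bootstrap = client.query_memories(
    task="Fix the failing unit test in the payments module",
    similarity_threshold=0.0,
)
```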
Tuning Q-value learning rate with alpha
Each time a memory is retrieved and its source trace is reviewed, Reflect updates the memory’s q_value via a Bellman-style step:

q_new = q_old + alpha * (reward - q_old)

reward is 1.0 for "pass", 0.0 for "fail". alpha ∈ [0, 1] controls how aggressively the Q-value tracks each new review. The server default is 0.3 (matches the MemRL paper’s configs), but you can override per-review through the SDK:
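A hedged sketch of per-review overrides. The review method name (review_trace) and its trace_id argument are assumptions; only the alpha parameter and the "pass"/"fail" review values are documented on this page:

```python
# Hypothetical review calls - review_trace and trace_id are assumed names.

# Trusted human expert: let this review move the Q-value more.
client.review_trace(trace_id=trace_id, review="pass", alpha=0.5)

# Noisy LLM judge: damp its influence.
client.review_trace(trace_id=other_trace_id, review="fail", alpha=0.1)

# Omit alpha to use the server default (0.3, from [q_learning].alpha in config.toml).
client.review_trace(trace_id=third_trace_id, review="pass")
```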
| alpha | Behavior | When to use |
|---|---|---|
| 0.05–0.1 | Slow, smooth Q-value updates | High-volume projects with consistent task distributions; you want stable rankings and many reviews per memory |
| 0.3 (default) | Balanced — Q-values differentiate within ~10–20 reviews per memory | General starting point; matches MemRL’s published value across all four of their benchmarks |
| 0.5–0.7 | Aggressive — Q-values shift sharply per review | Small memory banks where you have few reviews per memory and want them to count for more |
| 1.0 | Pure overwrite — q_new = reward | Rare; effectively disables historical averaging |
When to adjust per-review
Most callers should leave alpha unset and let the server default apply. Per-review override is useful when:

- Authoritative reviews vs noisy ones. Pass alpha=0.5 for reviews from a trusted human expert and alpha=0.1 for reviews from a less-reliable source like an LLM judge.
- Bootstrapping a new project. The first few hundred reviews can use alpha=0.5+ to differentiate memory quality fast, then drop to 0.3 once the bank has matured.
- Penalizing pivotal failures. A review with strong evidence the memory caused harm can use alpha=0.5+ to drop its Q-value sharply.
The server default lives in config.toml at [q_learning].alpha and can also be overridden per-deployment via the Q_LEARNING_ALPHA environment variable.
Best practices
Always pass retrieved_memory_ids when creating traces
This is what connects the learning loop. When you create a trace, pass the IDs of the memories that were retrieved for that run. Without them, Reflect can’t update utility scores when the trace is reviewed - the memory ranking won’t improve.

The context manager (client.trace()) and decorator (@reflect_trace) handle this automatically. If you use create_trace directly, you must pass them yourself.
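A sketch of the manual path, assuming an already-constructed client and your own call_llm function; keyword names other than task and retrieved_memory_ids are assumptions:

```python
def run_and_record(client, call_llm, task: str) -> str:
    """Manual path with create_trace. With client.trace() or @reflect_trace,
    this bookkeeping happens automatically."""
    result = client.augment_with_memories(task=task, limit=5)
    answer = call_llm(result.augmented_task)

    client.create_trace(
        task=task,
        output=answer,  # assumed keyword for the run's result
        retrieved_memory_ids=[m.id for m in result.memories],  # closes the learning loop
    )
    return answer
```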
Write descriptive task strings
The task string is what Reflect embeds and matches against when retrieving memories. Vague tasks like "do the thing" will match poorly. Descriptive tasks like "Parse the uploaded CSV, validate column types, and return the first 5 rows" will retrieve more relevant memories.

The task is also included in the reflection prompt - a clear task helps the LLM generate better reflections.
Start with a small limit and increase as needed
More memories means more context in the prompt, which costs tokens and can dilute the signal. Start with limit=3 to limit=5 and increase if the agent seems to be missing relevant context.
Review traces to make memories useful
Memories start with q_value=0.5 and only differentiate through reviews. An unreviewed project has flat utility scores - every memory is ranked equally. Reviews are what make the system learn. Even a few dozen reviews can significantly improve retrieval quality.
Don't filter by success in your prompt - Reflect does it for you
augment_with_memories already groups memories into “Successful”, “Failed”, and “Other” sections. The LLM sees the distinction naturally. You don’t need to filter out failed memories - they contain valuable “what not to do” context.