MemQ: Integrating Q-Learning into Self-Evolving Memory Agents over Provenance DAGs
Abstract
MemQ enhances LLM agents by applying TD(λ) eligibility traces to memory Q-values, enabling credit propagation through memory provenance DAGs and improving performance on multi-step tasks.
Episodic memory allows LLM agents to accumulate and retrieve experience, but current methods treat each memory independently, evaluating retrieval quality in isolation without accounting for the dependency chains through which memories enable the creation of future memories. We introduce MemQ, which applies TD(λ) eligibility traces to memory Q-values, propagating credit backward through a provenance DAG that records which memories were retrieved when each new memory was created. Credit weight decays as (γλ)^d with DAG depth d, replacing temporal distance with structural proximity. We formalize the setting as an Exogenous-Context MDP (EC-MDP), whose factored transition decouples the exogenous task stream from the endogenous memory store. Across six benchmarks, spanning OS interaction, function calling, code generation, multimodal reasoning, embodied reasoning, and expert-level QA, MemQ achieves the highest success rate on all six in both generalization evaluation and runtime learning, with gains largest on multi-step tasks that produce deep, relevant provenance chains (up to +5.7 pp) and smallest on single-step classification (+0.77 pp), where single-step updates already suffice. We further study how γ and λ interact with the EC-MDP structure, providing principled guidance for parameter selection and future research. Code is available at https://github.com/jwliao-ai/MemQ.
Community
Hi everyone! We are excited to share MemQ, a framework that brings reinforcement learning to LLM agent memory — without any weight updates.
The Problem
Current episodic memory systems for LLM agents treat each memory independently, evaluating retrieval quality in isolation. But memories don't exist in isolation — they form dependency chains where retrieving one memory enables creating future ones. Existing approaches miss this structural relationship entirely.
Our Approach
MemQ applies TD(λ) eligibility traces to memory Q-values, propagating credit backward through a provenance DAG: a directed acyclic graph that records which memories were retrieved when each new memory was created. Instead of decaying with temporal distance, credit weight decays as (γλ)^d with DAG depth d, capturing structural proximity between memories.
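Concretely, one way to implement this backward pass is a breadth-first walk over the provenance DAG, scaling each ancestor's update by (γλ)^d. The sketch below is purely illustrative: the function name, data layout, and default hyperparameters are our assumptions, not MemQ's actual API.

```python
from collections import deque

def propagate_credit(q, parents, start_id, td_error,
                     gamma=0.9, lam=0.8, alpha=0.1, max_depth=5):
    """Propagate a TD error backward through a provenance DAG.

    q        : dict memory_id -> Q-value (updated in place)
    parents  : dict memory_id -> ids of memories retrieved when it was created
    start_id : memory whose retrieval produced the TD error
    td_error : scalar TD error from the current step

    Credit decays as (gamma * lam) ** d with DAG depth d, so structural
    proximity in the provenance graph replaces temporal proximity.
    """
    visited = {start_id}
    frontier = deque([(start_id, 0)])  # BFS over (memory, depth) pairs
    while frontier:
        mem_id, depth = frontier.popleft()
        weight = (gamma * lam) ** depth          # structural eligibility trace
        q[mem_id] = q.get(mem_id, 0.0) + alpha * weight * td_error
        if depth < max_depth:
            for parent in parents.get(mem_id, []):
                if parent not in visited:        # each ancestor updated once
                    visited.add(parent)
                    frontier.append((parent, depth + 1))
    return q
```

With γ=0.9 and λ=0.8, a grandparent memory two hops up the DAG receives (0.72)^2 ≈ 0.52 of the credit the retrieved memory receives.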
Key ideas:
- EC-MDP Formalization: We formalize self-evolving memory as an Exogenous-Context MDP, factoring state into an exogenous task stream and an endogenous memory store
- Q-Integrated Retrieval: Two-phase retrieval — locality filtering (cosine similarity) followed by Q-guided ε-greedy selection
- Provenance DAG Credit Assignment: BFS backward through memory creation chains for multi-step credit propagation
- No Weight Updates: The LLM backbone stays frozen — all learning happens through Q-value updates on episodic memories
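The two-phase retrieval from the list above can be sketched roughly as follows. This is a toy sketch with assumed names and a plain-Python cosine similarity; a real system would presumably use a vector index over learned embeddings.

```python
import math
import random

def retrieve(memories, query_vec, q_values, k=3, top_n=10,
             epsilon=0.1, rng=random):
    """Two-phase retrieval: locality filtering, then Q-guided epsilon-greedy.

    memories : dict memory_id -> embedding vector (list of floats)
    q_values : dict memory_id -> learned Q-value
    Phase 1 keeps the top_n memories closest to the query by cosine
    similarity; phase 2 picks k of them, each greedily by Q-value with
    probability 1 - epsilon, or uniformly at random with probability epsilon.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Phase 1: locality filtering by cosine similarity to the query.
    candidates = sorted(memories,
                        key=lambda m: cosine(memories[m], query_vec),
                        reverse=True)[:top_n]

    # Phase 2: epsilon-greedy selection by Q-value among the candidates.
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        if rng.random() < epsilon:
            choice = rng.choice(pool)            # explore
        else:
            choice = max(pool, key=lambda m: q_values.get(m, 0.0))  # exploit
        selected.append(choice)
        pool.remove(choice)
    return selected
```

The similarity filter keeps retrieval on-topic, while the ε-greedy phase lets learned Q-values (rather than raw similarity) decide which of the relevant memories actually enter the context.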
Algorithm
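The full algorithm is given in the paper; as a rough sketch of the core TD-style update, here is how a single step's TD error could be turned into depth-discounted Q-value increments over the provenance ancestors. All names, signatures, and defaults here are our assumptions, not the paper's API.

```python
def td_deltas(reward, q_current, q_next_best, depths,
              gamma=0.9, lam=0.8, alpha=0.1):
    """Per-memory Q-value increments for one agent step.

    reward      : reward observed after acting with the retrieved memory
    q_current   : Q-value of the retrieved memory
    q_next_best : best Q-value among candidate memories at the next step
    depths      : dict memory_id -> provenance depth d
                  (0 = the retrieved memory itself, 1 = its parents, ...)
    """
    # Standard one-step TD error for the retrieved memory...
    td_error = reward + gamma * q_next_best - q_current
    # ...shared with ancestors, scaled by the structural trace (gamma*lam)^d.
    return {m: alpha * (gamma * lam) ** d * td_error
            for m, d in depths.items()}
```

Because the LLM backbone is frozen, this dictionary of increments is the entire learning signal: applying it to the memory store is the only state that changes between episodes.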
Results
MemQ achieves the highest success rate on all six benchmarks in both generalization evaluation and runtime learning:
Generalization Evaluation on Held-out Test Tasks
Runtime Learning Results
Benchmarks covered:
- Lifelong Agent Bench (OS Interaction / Database)
- Berkeley Function Calling Leaderboard (BFCL)
- Graduate-Level Google-Proof QA (GPQA)
- Embodied Reasoning QA (ERQA)
- MMMU Pro (Multimodal Understanding)
- LiveCodeBench (Code Generation)
Resources
- Paper: arXiv:2605.08374
- Code: jwliao-ai/MemQ | SII-MemQ/MemQ (maintained in parallel)
We welcome any questions, discussions, or feedback. If you find this work useful, please consider giving us a star!