How Much Is One Recurrence Worth? Iso-Depth Scaling Laws for Looped Language Models
Abstract
This work quantifies the computational value of recurrence in looped language models through a scaling law with a recurrence-equivalence exponent of φ = 0.46, indicating that each additional recurrence provides partial but measurable capacity gains.
We measure how much one extra recurrence is worth to a looped (depth-recurrent) language model, in equivalent unique parameters. From an iso-depth sweep of 116 pretraining runs across recurrence counts r ∈ {1, 2, 4, 8} spanning ~50× in training compute, we fit a joint scaling law L = E + A (N_once + r^φ N_rec)^{−α} + B D^{−β} and recover a new recurrence-equivalence exponent φ = 0.46. Intuitively, φ tells us whether looping a block r times is equivalent in validation loss to r unique blocks of a non-looped model (full equivalence, φ = 1) or to a single block run repeatedly with no capacity gain (φ = 0). Our φ = 0.46 sits in between, so each additional recurrence predictably increases validation loss at matched training compute. For example, at r = 4 a 410M looped model performs on par with a 580M non-looped model, but incurs the training cost of a 1B non-looped one. We demonstrate the utility of φ as a measurement tool on two probes. Truncated backpropagation lowers φ to 0.38, indicating that the loop mechanism is poorly trained under truncation, even though validation loss decreases. Conversely, hyperconnections raise φ to 0.65, a genuine capacity gain. Our method applies to any looped LM and separates true loop improvements from token-budget gains.
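The fitted law above can be sketched in a few lines of Python. Only φ = 0.46 is taken from the abstract; the constants E, A, B, α, β below are placeholder values for illustration, not the paper's fitted coefficients, and the parameter split into N_once (non-looped) and N_rec (looped block) follows the formula's notation.

```python
PHI = 0.46  # recurrence-equivalence exponent reported in the abstract

def effective_params(n_once: float, n_rec: float, r: int, phi: float = PHI) -> float:
    """Effective unique-parameter count of a looped model:
    non-looped params plus the looped block's params scaled by r^phi."""
    return n_once + (r ** phi) * n_rec

def loss(n_once: float, n_rec: float, r: int, d: float,
         E: float = 1.7, A: float = 400.0, B: float = 1800.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Validation loss L = E + A * N_eff^-alpha + B * D^-beta
    (all constants here are placeholders, not the paper's fit)."""
    n_eff = effective_params(n_once, n_rec, r, phi=PHI)
    return E + A * n_eff ** -alpha + B * d ** -beta
```

At φ = 0.46, looping a block 4 times multiplies its effective capacity by roughly 4^0.46 ≈ 1.9×, while its training compute scales by the full 4× — the gap behind the 410M-acts-like-580M-but-costs-1B example.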
Community
We measure how much one extra recurrence is worth to a looped (depth-recurrent) language model, in equivalent unique parameters. We quantify this by fitting iso-depth scaling laws across multiple recurrence counts. At r = 4, a 410M looped model performs on par with a 580M non-looped model, but incurs the training cost of a 1B non-looped one.
Our method can quantify how much unique parameter capacity a specific looped LM architecture recovers. We show that the commonly applied method of truncated backpropagation through time weakens the loops due to inaccurate gradients, while hyperconnections between loop states substantially strengthen them.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Parcae: Scaling Laws For Stable Looped Language Models (2026)
- Test-Time Scaling Makes Overtraining Compute-Optimal (2026)
- Beyond Linearity in Attention Projections: The Case for Nonlinear Queries (2026)
- Rethinking Language Model Scaling under Transferable Hypersphere Optimization (2026)
- The Recurrent Transformer: Greater Effective Depth and Efficient Decoding (2026)
- The Spectral Geometry of Thought: Phase Transitions, Instruction Reversal, Token-Level Dynamics, and Perfect Correctness Prediction in How Transformers Reason (2026)
- Expert Upcycling: Shifting the Compute-Efficient Frontier of Mixture-of-Experts (2026)