arxiv:2603.15500

Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty

Published on Mar 16 · Submitted by JeonghyeKim on Mar 17
Abstract

LLMs often exhibit "Aha" moments during reasoning, such as apparent self-correction following tokens like "Wait," yet the underlying mechanisms remain unclear. We introduce an information-theoretic framework that decomposes reasoning into procedural information and epistemic verbalization, the explicit externalization of uncertainty that supports downstream control actions. We show that purely procedural reasoning can become informationally stagnant, whereas epistemic verbalization enables continued information acquisition and is critical for achieving information sufficiency. Empirical results demonstrate that strong reasoning performance is driven by uncertainty externalization rather than by specific surface tokens. Our framework unifies prior findings on "Aha" moments and post-training experiments, and offers insights for future reasoning model design.
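To make the decomposition concrete, here is a minimal sketch in standard notation, assuming per-token self-information under the model's own distribution and a partition of trace positions into procedural and epistemic-verbalization subsets. The symbols below (p_theta, P, E) are illustrative assumptions, not the paper's notation.

```latex
% Self-information (surprisal) of token x_t under the model p_theta:
I(x_t) = -\log p_\theta(x_t \mid x_{<t})

% Assumed partition of trace positions 1..T into procedural tokens
% (t in P) and epistemic-verbalization tokens (t in E):
\sum_{t=1}^{T} I(x_t)
  = \underbrace{\sum_{t \in \mathcal{P}} I(x_t)}_{\text{procedural}}
  + \underbrace{\sum_{t \in \mathcal{E}} I(x_t)}_{\text{epistemic verbalization}}
```

On this reading, "informationally stagnant" procedural reasoning would correspond to the first term plateauing as the trace grows, while epistemic verbalization keeps the total accruing, though the paper's precise definitions may differ.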

Community

Paper author · Paper submitter

We introduce an information-theoretic framework that decomposes self-information in LLM reasoning into procedural information and epistemic verbalization—the explicit externalization of uncertainty that supports downstream control—showing that epistemic verbalization is crucial for improving reasoning performance and understanding post-training mechanisms.
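As a rough illustration of how per-token self-information might be measured in practice, the sketch below scores a reasoning trace with a Hugging Face causal LM and tallies surprisal over tokens matched by a crude keyword heuristic. The model name, the marker list, and the keyword-based split are placeholder assumptions for illustration; the paper's actual criterion for epistemic verbalization is not given on this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model for illustration; the paper's evaluated models are not
# listed on this page.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_self_information(text: str):
    """Return (tokens, surprisal) with surprisal[t] = -log p(x_t | x_<t)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position t-1 predict the token at position t, so shift by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal = -log_probs[torch.arange(targets.numel()), targets]
    return tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal

# Crude keyword heuristic for "epistemic verbalization" tokens -- an
# assumption for illustration only, not the paper's criterion.
EPISTEMIC_MARKERS = {"wait", "hmm", "actually", "maybe", "unsure"}

trace = "The answer is 42. Wait, let me re-check the arithmetic."
tokens, surprisal = token_self_information(trace)
epistemic = sum(
    s.item() for tok, s in zip(tokens, surprisal)
    if tok.strip("Ġ▁,.").lower() in EPISTEMIC_MARKERS
)
print(f"total self-information: {surprisal.sum().item():.2f} nats; "
      f"epistemic-marker share: {epistemic:.2f} nats")
```

A natural extension, if one wanted to probe the paper's claim, would be to track the cumulative surprisal of each subset as the trace lengthens and check whether the procedural portion flattens while the epistemic portion continues to grow.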

