BEAVER: A Training-Free Hierarchical Prompt Compression Method via Structure-Aware Page Selection
Abstract
BEAVER is a training-free framework that improves long-context LLM inference by using structure-aware hierarchical selection and dense tensor mapping to reduce latency while maintaining semantic integrity.
The exponential expansion of context windows in LLMs has unlocked capabilities for long-document understanding but introduced severe bottlenecks in inference latency and information utilization. Existing compression methods often suffer from high training costs or semantic fragmentation due to aggressive token pruning. In this paper, we propose BEAVER, a novel training-free framework that shifts compression from linear token removal to structure-aware hierarchical selection. BEAVER maximizes hardware parallelism by mapping variable-length contexts into dense page-level tensors via dual-path pooling, and preserves discourse integrity through a hybrid planner combining semantic and lexical dual-branch selection with sentence smoothing. Extensive evaluations on four long-context benchmarks demonstrate that BEAVER achieves performance comparable to state-of-the-art (SOTA) methods such as LongLLMLingua. Notably, on the RULER benchmark, BEAVER maintains high fidelity in multi-needle retrieval where baselines deteriorate. Regarding efficiency, BEAVER reduces latency by 26.4× on 128k-token contexts, offering a scalable solution for high-throughput applications. Our code is available at https://cslikai.cn/BEAVER/.
Community
This paper introduces BEAVER (a training-free hierarchical prompt compression method), which addresses the computational challenges of processing long documents with LLMs.
Method
BEAVER shifts from linear token removal to structure-aware hierarchical selection:
Page-level tensor mapping: Uses dual-path pooling to map variable-length contexts into dense page-level tensors, maximizing hardware parallelism
Hybrid planner: Combines semantic and lexical dual-branch selection with sentence smoothing to preserve discourse integrity
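The two steps above can be illustrated with a minimal sketch. Note that BEAVER's actual pooling operators, scoring functions, and parameter names are not specified here; everything below (`page_tensors`, `select_pages`, the fixed `page_size`, the mixing weight `alpha`) is a hypothetical illustration of the general idea: pool each page into a fixed-size feature vector (mean + max dual-path pooling), then score pages with a semantic branch (embedding similarity) blended with a lexical branch (query token overlap), keeping the top-k pages in document order.

```python
import numpy as np

def page_tensors(token_embs, page_size=4):
    # Split token embeddings into fixed-size pages, then apply dual-path
    # pooling: concatenate mean- and max-pooled features per page so every
    # page maps to the same dense feature size regardless of its length.
    pages = [token_embs[i:i + page_size] for i in range(0, len(token_embs), page_size)]
    return np.stack([np.concatenate([p.mean(axis=0), p.max(axis=0)]) for p in pages])

def select_pages(pages_text, token_embs, query_emb, query_tokens,
                 page_size=4, top_k=2, alpha=0.5):
    feats = page_tensors(token_embs, page_size)
    # Duplicate the query embedding so it matches the [mean; max] feature layout.
    q = np.concatenate([query_emb, query_emb])
    # Semantic branch: cosine similarity in the pooled embedding space.
    sem = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-9)
    # Lexical branch: fraction of query tokens appearing in each page.
    lex = np.array([
        len(set(t.split()) & set(query_tokens)) / max(len(query_tokens), 1)
        for t in pages_text
    ])
    score = alpha * sem + (1 - alpha) * lex
    # Keep the top-k pages, restored to original order to preserve discourse flow.
    keep = np.sort(np.argsort(-score)[:top_k])
    return [pages_text[i] for i in keep]
```

Because all pages map to equal-size feature vectors, the scoring step is a single dense matrix product, which is the kind of hardware-friendly parallelism the page-level tensor mapping is aiming for.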
Results
- Performance: Comparable to SOTA methods like LongLLMLingua
- RULER benchmark: Maintains high fidelity in multi-needle retrieval tasks where baselines deteriorate
- Efficiency: Achieves a 26.4× latency reduction on 128k-token contexts
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- COMI: Coarse-to-fine Context Compression via Marginal Information Gain (2026)
- Read As Human: Compressing Context via Parallelizable Close Reading and Skimming (2026)
- LooComp: Leverage Leave-One-Out Strategy to Encoder-only Transformer for Efficient Query-aware Context Compression (2026)
- Decoupled Reasoning with Implicit Fact Tokens (DRIFT): A Dual-Model Framework for Efficient Long-Context Inference (2026)
- SONIC: Segmented Optimized Nexus for Information Compression in Key-Value Caching (2026)
- Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection (2026)
- S3-Attention: Attention-Aligned Endogenous Retrieval for Memory-Bounded Long-Context Inference (2026)
Spaces citing this paper: 1. No models, datasets, or collections link this paper.
