ReAttention: Training-Free Infinite Context with Finite Attention Scope (ICLR 2025)

In this work, we propose ReAttention, a training-free approach that enables LLMs based on the self-attention mechanism to break the maximum supported context length in length extrapolation and to support an infinite context with a finite attention scope, given sufficient memory resources. It performs a position-agnostic top-k attention before the ordinary position-aware self-attention, freeing LLMs from the length-extrapolation issue; a minimal sketch of this two-stage process is given below.
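The snippet below is a minimal, hypothetical PyTorch illustration of the two-stage idea, not the code in this repository: `reattention_sketch` and the `apply_rope` callable are assumed names, the KV cache is assumed to be stored without positional encoding, and single-step decoding is shown for brevity.

```python
# A minimal, illustrative sketch (assumptions noted above); it does not mirror
# the actual ReAttention implementation or its Triton kernels.
import torch

def reattention_sketch(q, k_cache, v_cache, top_k, apply_rope):
    """
    q:          (batch, heads, 1, dim)     current query, no position applied
    k_cache:    (batch, heads, seq, dim)   cached keys WITHOUT positional encoding
    v_cache:    (batch, heads, seq, dim)   cached values
    top_k:      number of cache entries each head attends to
    apply_rope: callable(x, positions) -> x with rotary embedding applied (assumed interface)
    """
    # Stage 1: position-agnostic scoring over the full cache, keep the top-k entries.
    scores = torch.matmul(q, k_cache.transpose(-1, -2)) / q.shape[-1] ** 0.5   # (b, h, 1, seq)
    top_idx = scores.topk(min(top_k, scores.shape[-1]), dim=-1).indices        # (b, h, 1, k)

    # Gather the selected keys/values for each head.
    idx = top_idx.squeeze(-2).unsqueeze(-1).expand(-1, -1, -1, k_cache.shape[-1])  # (b, h, k, d)
    k_sel = k_cache.gather(2, idx)
    v_sel = v_cache.gather(2, idx)

    # Stage 2: ordinary position-aware self-attention, but only over the selected
    # entries; positions are re-assigned within the finite attention scope.
    k_pos = torch.arange(k_sel.shape[2], device=q.device)
    q_rot = apply_rope(q, k_pos[-1:])          # query sits at the last in-scope position
    k_rot = apply_rope(k_sel, k_pos)
    attn = torch.softmax(
        torch.matmul(q_rot, k_rot.transpose(-1, -2)) / q.shape[-1] ** 0.5, dim=-1
    )
    return torch.matmul(attn, v_sel)
```

In the actual method the top-k selection is fused into efficient Triton kernels; the sketch only conveys the ordering of the two stages, with selection done without positions and the final attention done with them.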

We validate the performance of ReAttention on LongBench, L-Eval, and InfiniteBench and demonstrate that it is on par with traditional methods. Furthermore, we apply ReAttention to mainstream LLMs, including LLaMA3.1-8B and Mistral-v0.3-7B, enabling them to support context lengths of at least 1M tokens, and even expand the context length of Qwen2-1.5B by 128x, to 4M tokens, in Needle-In-A-Haystack without any further training.

We also improve the efficiency of ReAttention with Triton and achieve length extrapolation without additional overhead. If you have questions about this work, please feel free to open an issue or send an email to [email protected]. If you find our paper useful, please consider citing it:

@misc{liu2025spakelongcontextlargelanguage,
    title={ReAttention: Training-Free Infinite Context with Finite Attention Scope},
    author={Liu, Xiaoran and Li, Ruixiao and Guo, Qipeng and Liu, Zhigeng and Song, Yuerong and Lv, Kai and Yan, Hang and Li, Linlin and Liu, Qun and Qiu, Xipeng},
    year={2024},
    eprint={2407.15176},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2407.15176}, 
}

News

Todo

  • Implement the exact Triton kernel for top-k attention.
  • Release the code for efficiency analysis.
  • Organize the evaluation code and add the necessary comments.
  • Release the approximate Triton kernel for top-k attention reported in our paper.
  • Release the code for long-context evaluation.
