


Da Ma, Lu Chen, Situo Zhang, Yuxun Miao, Su Zhu, Zhi Chen, Hongshen Xu, Hanqi Li, Shuai Fan, Lei Pan, Kai Yu: Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity. CoRR abs/2412.02252 (2024)
