"SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching."

Yuxuan Zhu et al. (2025)


DOI: 10.48550/ARXIV.2504.00970

access: open

type: Informal or Other Publication (arXiv preprint)

metadata version: 2025-10-27