"Compressing KV Cache for Long-Context LLM Inference with Inter-Layer ..."

Da Ma et al. (2024)

DOI: 10.48550/ARXIV.2412.02252

access: open

type: Informal or Other Publication

metadata version: 2025-01-14