LMCache is an extension layer for LLM serving engines that accelerates inference, especially over long contexts, by storing and reusing key-value (KV) attention caches across requests. Instead of recomputing KV states for repeated or shared text segments, LMCache persists them across multiple tiers (GPU memory, CPU DRAM, and local disk), retrieves them on later requests, and injects them into the engine to reduce time to first token (TTFT) and increase throughput. Its design supports reuse beyond strict prefix matching and enables sharing across serving instances, which improves efficiency under real multi-tenant traffic.

The broader project includes examples, tests, a server component, and public posts describing cross-engine sharing and inter-GPU KV transfers. These capabilities aim to lower latency, cut GPU cycles, and stabilize performance for production workloads with overlapping prompts or retrieval-augmented contexts. The result is a cache fabric for LLMs that complements serving engines rather than replacing them.
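To make the reuse flow concrete, the sketch below models the lookup path in plain Python: a request's tokens are split into chunks, each chunk is hashed into a cache key, the tiers are probed from fastest to slowest, and prefill runs only for the misses. This is an illustrative model of the idea, not LMCache's actual API; the class, function names, and the 256-token chunk size are assumptions made for the example.

```python
import hashlib
from typing import Optional

CHUNK_TOKENS = 256  # assumed chunk granularity for this sketch


def chunk_key(token_ids: list[int]) -> str:
    """Hash a chunk of token ids into a content-addressed cache key."""
    return hashlib.sha256(str(token_ids).encode()).hexdigest()


class TieredKVCache:
    """Probe fast tiers first, fall back to slower ones, promote hits."""

    def __init__(self) -> None:
        self.gpu: dict[str, bytes] = {}   # hottest, smallest tier
        self.cpu: dict[str, bytes] = {}   # DRAM tier
        self.disk: dict[str, bytes] = {}  # stand-in for local disk

    def get(self, key: str) -> Optional[bytes]:
        for tier in (self.gpu, self.cpu, self.disk):
            if key in tier:
                kv = tier[key]
                self.gpu[key] = kv  # promote so the next hit is cheapest
                return kv
        return None

    def put(self, key: str, kv: bytes) -> None:
        # A real system would apply per-tier capacity and eviction policies.
        self.cpu[key] = kv


def prefill_with_reuse(token_ids: list[int], cache: TieredKVCache) -> list[bytes]:
    """Return KV blocks for every chunk, computing only the cache misses."""
    blocks = []
    for i in range(0, len(token_ids), CHUNK_TOKENS):
        key = chunk_key(token_ids[i:i + CHUNK_TOKENS])
        kv = cache.get(key)
        if kv is None:
            kv = b"computed-kv"  # placeholder for the engine's real prefill output
            cache.put(key, kv)
        blocks.append(kv)
    return blocks
```

Because keys are derived from chunk contents rather than from a request-level prefix, any request that contains the same chunk can hit the cache, which is the intuition behind reuse beyond strict prefix matching.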
Features
- KV cache reuse across queries to cut TTFT and boost throughput
- Multi-tier storage across GPU, CPU, and local disk
- Cross-engine sharing for interoperability between serving stacks (sketched after this list)
- Non-prefix reuse to exploit any overlapping text segments
- Inter-GPU KV transfer for advanced pipeline/disaggregated setups
- Example suites, tests, and a server to ease integration
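The cross-instance sharing bullet can be pictured as a shared backing tier that every serving instance consults after its local tiers miss. The sketch below is again illustrative only: the in-process dict stands in for whatever networked backend (for example, the project's server component) actually holds the shared KV data, and the byte string stands in for real KV tensors.

```python
from typing import Optional

# Illustrative sketch of cross-instance KV sharing; names are invented, not LMCache's API.


class SharedKVStore:
    """Stands in for a cache backend shared by multiple serving instances."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def put(self, key: str, kv: bytes) -> None:
        self._data[key] = kv


class ServingInstance:
    """One engine instance with a private local tier backed by the shared store."""

    def __init__(self, remote: SharedKVStore) -> None:
        self.local: dict[str, bytes] = {}
        self.remote = remote

    def kv_for_chunk(self, key: str) -> bytes:
        if key in self.local:              # local hit: cheapest path
            return self.local[key]
        kv = self.remote.get(key)
        if kv is None:                     # global miss: compute once
            kv = b"computed-kv"            # placeholder for real prefill output
            self.remote.put(key, kv)       # publish for other instances
        self.local[key] = kv               # keep a local copy for next time
        return kv


shared = SharedKVStore()
a, b = ServingInstance(shared), ServingInstance(shared)
a.kv_for_chunk("chunk-hash-123")  # instance A computes and publishes the KV block
b.kv_for_chunk("chunk-hash-123")  # instance B reuses A's work, skipping that prefill
```

The same pattern generalizes to inter-GPU or disaggregated setups, where the "remote" tier is another GPU's memory or a prefill node rather than a shared server.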