KVCache-Factory is an open-source research framework for exploring and implementing unified key-value (KV) cache compression techniques for autoregressive transformer models. In large language models, the KV cache stores the attention keys and values of previously processed tokens so that each new token can be generated without recomputing them; because the cache grows linearly with context length, it can dominate GPU memory at long sequence lengths. KVCache-Factory provides a single platform for implementing and evaluating compression strategies that reduce this memory footprint while preserving model performance. The framework integrates several state-of-the-art methods such as PyramidKV, SnapKV, H2O, and StreamingLLM, allowing researchers to compare and experiment with different approaches within the same environment. It also supports advanced inference configurations such as Flash Attention v2 and multi-GPU inference setups for very large models.
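The memory pressure motivating this work is easy to quantify: a model with a given number of layers, KV heads, and head dimension must cache one key vector and one value vector per token per layer. The sketch below is illustrative only (the shapes are Llama-2-7B-like assumptions, not values taken from KVCache-Factory):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch=1, bytes_per_elem=2):
    """Uncompressed KV cache size in bytes.

    The factor of 2 accounts for storing both keys and values;
    bytes_per_elem=2 assumes fp16/bf16 storage.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Llama-2-7B-like shapes (assumed): 32 layers, 32 KV heads, head_dim 128.
# At a 32K-token context the cache alone occupies 16 GiB in fp16.
gib = kv_cache_bytes(32, 32, 128, seq_len=32_768) / 2**30
```

Numbers like these are why cache compression matters: at long contexts the cache can exceed the memory needed for the model weights themselves.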

Features

  • Unified framework for experimenting with multiple KV-cache compression methods
  • Support for algorithms such as PyramidKV, SnapKV, H2O, and StreamingLLM
  • Integration with modern attention implementations including Flash Attention v2
  • Multi-GPU inference support for large transformer models
  • Benchmarking tools for evaluating long-context performance and memory usage
  • Visualization utilities for analyzing attention patterns and cache behavior
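As a concrete illustration of one of the supported algorithm families, a StreamingLLM-style policy keeps a few initial "attention sink" tokens plus a sliding window of the most recent tokens, evicting everything in between. The sketch below is a minimal reimplementation of that idea for intuition only; the function and parameter names are hypothetical and do not reflect KVCache-Factory's actual API:

```python
def streaming_llm_keep(seq_len, n_sink=4, window=1020):
    """Token indices retained by a StreamingLLM-style eviction policy.

    Keeps the first `n_sink` tokens (attention sinks) and the most
    recent `window` tokens; all indices in between are evicted.
    """
    if seq_len <= n_sink + window:
        return list(range(seq_len))  # cache not full yet, keep everything
    return list(range(n_sink)) + list(range(seq_len - window, seq_len))

# With a 2048-token context, only 4 + 1020 = 1024 cache entries remain,
# halving KV-cache memory regardless of how long generation continues.
kept = streaming_llm_keep(2048)
```

Methods such as SnapKV and H2O differ mainly in how they score which middle tokens to keep (e.g. by accumulated attention weight) rather than evicting them purely by position.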

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python Large Language Models (LLM)

Registered

2026-03-09