Related Products

  • KrakenD (71 Ratings)
  • LM-Kit.NET (23 Ratings)
  • Convesio (53 Ratings)
  • Vertex AI (783 Ratings)
  • RunPod (180 Ratings)
  • Sogolytics (863 Ratings)
  • StackAI (42 Ratings)
  • Google AI Studio (11 Ratings)
  • OORT DataHub (13 Ratings)
  • Cloudflare (1,903 Ratings)

About LMCache

LMCache is an open source Knowledge Delivery Network (KDN): a caching layer for large language model serving that accelerates inference by reusing KV (key-value) caches across repeated or overlapping computations. It enables fast prompt caching: an LLM "prefills" recurring text only once, and the stored KV caches are then reused, even at non-prefix positions, across multiple serving instances. This reduces time to first token (TTFT), saves GPU cycles, and increases throughput in scenarios such as multi-round question answering and retrieval-augmented generation (RAG). LMCache supports KV cache offloading (moving caches from GPU to CPU or disk), cache sharing across instances, and disaggregated prefill, which separates the prefill and decoding phases for resource efficiency. It is compatible with inference engines such as vLLM and TGI, and supports compressed storage, cache-blending techniques for merging caches, and multiple storage backends.
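
The core idea of reusing prefill work across requests can be illustrated with a toy sketch. This is not LMCache's actual API: real KV caches are per-layer attention tensors rather than strings, real systems must handle positional dependence of cached chunks (which is what LMCache's blending addresses), and the chunk size and helper names below are illustrative assumptions.

```python
# Toy sketch of chunk-level KV-cache reuse, in the spirit of prompt caching.
# All names and the chunk size are illustrative assumptions, not LMCache's API.
import hashlib

CHUNK = 4  # tokens per cacheable chunk (illustrative)

class KVCacheStore:
    """Maps a hash of a token chunk to its (simulated) KV cache."""
    def __init__(self):
        self.store = {}
        self.prefill_calls = 0  # counts expensive prefill computations

    def _key(self, chunk):
        return hashlib.sha256(" ".join(chunk).encode()).hexdigest()

    def _prefill(self, chunk):
        # Stand-in for running the model's prefill pass over this chunk.
        self.prefill_calls += 1
        return f"kv({' '.join(chunk)})"

    def prefill_with_reuse(self, tokens):
        """Compute KV caches chunk by chunk, reusing any previously cached chunk."""
        kvs = []
        for i in range(0, len(tokens), CHUNK):
            chunk = tuple(tokens[i:i + CHUNK])
            key = self._key(chunk)
            if key not in self.store:       # cache miss: run prefill once
                self.store[key] = self._prefill(chunk)
            kvs.append(self.store[key])     # cache hit: reuse stored KV state
        return kvs

store = KVCacheStore()
doc = "the quick brown fox jumps over the lazy dog today".split()  # 10 tokens
store.prefill_with_reuse(doc)       # first request: 3 chunks computed
first = store.prefill_calls
store.prefill_with_reuse(doc)       # repeated request: every chunk reused
print(first, store.prefill_calls)   # -> 3 3
```

The second request triggers no prefill work at all, which is the mechanism behind the TTFT and GPU-cycle savings described above.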

About Tensormesh

Tensormesh is a caching layer built specifically for large language model inference workloads. It lets organizations reuse intermediate computations, sharply reducing GPU usage, time to first token, and overall latency. It works by capturing and reusing the KV (key-value) cache states that are normally discarded after each inference request, cutting redundant compute; the vendor claims "up to 10x faster inference" with substantially lower GPU load. It supports deployment in the public cloud or on-premises, with full observability and enterprise-grade controls, SDKs/APIs and dashboards for integration into existing inference pipelines, and out-of-the-box compatibility with inference engines such as vLLM. Tensormesh emphasizes performance at scale, including sub-millisecond repeated queries, and optimizes every layer of inference from caching through computation.
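
Tensormesh's SDK is not documented in this listing, so the shape of an inference-side cache layer can only be sketched hypothetically. The class, method names, and LRU eviction policy below are all assumptions for illustration; the point is how a cache hit skips the expensive GPU path entirely, which is where sub-millisecond repeated queries come from.

```python
# Hypothetical sketch of an inference-side cache layer; not Tensormesh's API.
from collections import OrderedDict

class InferenceCache:
    """LRU cache over (model, prompt) -> response, with hit/miss stats."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def serve(self, model, prompt, run_model):
        key = (model, prompt)
        if key in self.entries:
            self.entries.move_to_end(key)      # refresh LRU position
            self.hits += 1
            return self.entries[key]           # fast path: no GPU work
        self.misses += 1
        out = run_model(prompt)                # slow path: real inference
        self.entries[key] = out
        if len(self.entries) > self.capacity:  # evict least recently used
            self.entries.popitem(last=False)
        return out

calls = []
def run_model(prompt):                         # stand-in for GPU inference
    calls.append(prompt)
    return prompt.upper()

cache = InferenceCache(capacity=2)
cache.serve("m", "hello", run_model)           # miss: runs the model
cache.serve("m", "hello", run_model)           # hit: no model call
print(len(calls), cache.hits, cache.misses)    # -> 1 1 1
```

A production layer would key on KV-cache state rather than whole responses and add observability hooks, but the hit/miss accounting shown here is the same bookkeeping a dashboard would surface.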

Platforms Supported (LMCache)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Tensormesh)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (LMCache)

AI engineers and infrastructure teams looking for a tool to lower latency, reduce compute cost, and scale throughput

Audience (Tensormesh)

Enterprises and AI infrastructure teams wanting a tool to reduce latency and cost while maintaining full control over deployment and data

Support (LMCache)

Phone Support
24/7 Live Support
Online

Support (Tensormesh)

Phone Support
24/7 Live Support
Online

API (LMCache)

Offers API

API (Tensormesh)

Offers API

Pricing (LMCache)

Free
Free Version
Free Trial

Pricing (Tensormesh)

No information available.
Free Version
Free Trial

Reviews/Ratings (LMCache)

This software hasn't been reviewed yet.

Reviews/Ratings (Tensormesh)

This software hasn't been reviewed yet.

Training (LMCache)

Documentation
Webinars
Live Online
In Person

Training (Tensormesh)

Documentation
Webinars
Live Online
In Person

Company Information (LMCache)

LMCache
United States
lmcache.ai/

Company Information (Tensormesh)

Tensormesh
Founded: 2025
United States
www.tensormesh.ai/

Alternatives

  • DeepSeek-V2 (DeepSeek)
  • PrimoCache (Romex Software)

Categories

Integrations (LMCache)

No info available.

Integrations (Tensormesh)

No info available.