Edgee vs. LTM-2-mini

About

Edgee is an AI gateway that sits between your application and large language model providers. It acts as an edge intelligence layer that compresses prompts before they reach the model, reducing token usage, lowering costs, and improving latency without changes to your existing code. Applications call Edgee through a single OpenAI-compatible API; Edgee applies edge-level policies such as intelligent token compression, routing, privacy controls, retries, caching, and cost governance before forwarding each request to the selected provider (OpenAI, Anthropic, Gemini, xAI, or Mistral). Its token compression engine removes redundant input tokens while preserving semantic intent and context, achieving up to 50% input-token reduction, which is especially valuable for long contexts, RAG pipelines, and multi-turn agents. Edgee also lets you tag requests with custom metadata to track usage and spending by feature, team, project, or environment, and raises cost alerts when spending spikes.
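As a rough illustration of the "single OpenAI-compatible API" idea, the sketch below builds a standard chat-completion request and points it at a gateway base URL instead of the provider's. The base URL, header name, and metadata keys here are assumptions for illustration, not documented Edgee values.

```python
import json

# Hypothetical gateway endpoint -- an assumption, not a documented value.
GATEWAY_BASE_URL = "https://api.example-gateway.com/v1"

def build_request(model: str, prompt: str, tags: dict) -> dict:
    """Build an OpenAI-style chat-completion request routed via a gateway.

    Only the URL changes versus calling the provider directly; the body
    keeps the standard OpenAI shape, so existing client code still works.
    """
    return {
        "url": f"{GATEWAY_BASE_URL}/chat/completions",
        "headers": {
            "Content-Type": "application/json",
            # Hypothetical metadata header for per-team cost tracking.
            "X-Gateway-Tags": json.dumps(tags),
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("gpt-4o-mini", "Summarize this document.",
                    {"team": "search", "env": "prod"})
```

Because the request body is unchanged, swapping the gateway in or out is a one-line configuration change rather than a code migration.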

About

LTM-2-mini is a 100M token context model; 100M tokens is roughly 10 million lines of code or about 750 novels. For each decoded token, LTM-2-mini's sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for a 100M token context window. The contrast in memory requirements is even larger: running Llama 3.1 405B with a 100M token context requires 638 H100s per user just to store a single 100M token KV cache. In contrast, LTM requires a small fraction of a single H100's HBM per user for the same context.
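The hundreds-of-H100s figure can be sanity-checked with back-of-envelope arithmetic. Assuming Llama 3.1 405B's published shape (126 layers, grouped-query attention with 8 KV heads of dimension 128) and fp16 KV entries, the cache costs about 0.5 MB per token, so 100M tokens need tens of terabytes of HBM. The exact GPU count depends on how much of each H100's 80 GB you count as usable, so this estimate lands near, not exactly at, the cited 638.

```python
# Back-of-envelope KV-cache sizing for Llama 3.1 405B at a 100M token
# context. Model shape follows the published config; the byte accounting
# (fp16, no paging or quantization) is a simplifying assumption.
LAYERS = 126
KV_HEADS = 8           # grouped-query attention: 8 KV heads, not 128
HEAD_DIM = 128
BYTES_PER_VALUE = 2    # fp16
CONTEXT_TOKENS = 100_000_000
H100_HBM_BYTES = 80e9  # 80 GB of HBM per H100

# A key and a value vector, for every layer and every KV head.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
total_bytes = bytes_per_token * CONTEXT_TOKENS
gpus_needed = total_bytes / H100_HBM_BYTES

print(f"{bytes_per_token} bytes/token")      # ~0.5 MB per decoded token
print(f"{total_bytes / 1e12:.1f} TB total")
print(f"~{gpus_needed:.0f} H100s just to hold the KV cache")
```

Under these assumptions the cache alone is about 51.6 TB, i.e. on the order of 640 H100s before any weights or activations are stored, consistent with the figure quoted above.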

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Engineering teams and AI product builders who need a unified gateway to compress prompts, control costs, route traffic, and manage LLM providers efficiently in production

Audience

AI developers interested in an LLM with a 100M token context window

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

Free
Free Version
Free Trial

Pricing

No information available.
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Edgee
Founded: 2024
United States
www.edgee.ai/

Company Information

Magic AI
Founded: 2022
United States
magic.dev/

Alternatives

  • MiniMax M1 (MiniMax)
  • GPT-5 mini (OpenAI)
  • Koog (JetBrains)
  • GPT-4o mini (OpenAI)
  • DeepSeek-V2 (DeepSeek)

Integrations

Claude
Gemini
Grok
Mistral AI
OpenAI

Integrations

Claude
Gemini
Grok
Mistral AI
OpenAI