Alternatives to Edgee

Compare Edgee alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Edgee in 2026. Compare features, ratings, user reviews, and pricing from Edgee competitors and alternatives to make an informed decision for your business.

  • 1
    OpenRouter

    OpenRouter is a unified interface for LLMs. OpenRouter scouts for the lowest prices and best latencies/throughputs across dozens of providers, and lets you choose how to prioritize them. No need to change your code when switching between models or providers. You can even let users choose and pay for their own. Evals are flawed; instead, compare models by how often they're used for different purposes. Chat with multiple at once in the chatroom. Model usage can be paid by users, developers, or both, and may shift in availability. You can also fetch models, prices, and limits via API. OpenRouter routes requests to the best available providers for your model, given your preferences. By default, requests are load-balanced across the top providers to maximize uptime, but you can customize how this works using the provider object in the request body. Prioritize providers that have not seen significant outages in the last 10 seconds.
    Starting Price: $2 one-time payment
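    As a sketch of the provider-routing behavior described above, the request below asks OpenRouter to prefer the cheapest providers while keeping fallbacks enabled. The provider fields follow OpenRouter's published routing options; verify names against the current docs.

    ```python
    import requests

    # Sketch of provider routing: prefer the cheapest providers, keep
    # fallbacks on for uptime. Field names follow OpenRouter's published
    # provider-routing options; verify against the current docs.
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
        json={
            "model": "openai/gpt-4o-mini",
            "messages": [{"role": "user", "content": "Hello!"}],
            "provider": {
                "sort": "price",          # rank candidate providers by price
                "allow_fallbacks": True,  # load-balance away from outages
            },
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```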
  • 2
    FastRouter

    FastRouter is a unified API gateway that enables AI applications to access many large language, image, and audio models (like GPT-5, Claude 4 Opus, Gemini 2.5 Pro, Grok 4, etc.) through a single OpenAI-compatible endpoint. It features automatic routing, which dynamically picks the optimal model per request based on factors like cost, latency, and output quality. It supports massive scale (no imposed QPS limits) and ensures high availability via instant failover across model providers. FastRouter also includes cost control and governance tools to set budgets, rate limits, and model permissions per API key or project, and it delivers real-time analytics on token usage, request counts, and spending trends. The integration process is minimal; you simply swap your OpenAI base URL to FastRouter’s endpoint and configure preferences in the dashboard; the routing, optimization, and failover functions then run transparently.
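    A minimal sketch of the base-URL swap FastRouter describes, using the OpenAI Python SDK; the endpoint URL and model ID below are placeholders, not taken from FastRouter's documentation.

    ```python
    from openai import OpenAI

    # Sketch of the base-URL swap FastRouter describes. The endpoint URL and
    # model ID below are placeholders, not taken from FastRouter's docs.
    client = OpenAI(
        base_url="https://fastrouter.example.com/v1",  # hypothetical endpoint
        api_key="<FASTROUTER_API_KEY>",
    )

    # Existing OpenAI-style code keeps working; routing, optimization, and
    # failover happen on the gateway side.
    chat = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Summarize this release note."}],
    )
    print(chat.choices[0].message.content)
    ```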
  • 3
    LLM Gateway

    LLM Gateway is a fully open source, unified API gateway that lets you route, manage, and analyze requests to any large language model provider (OpenAI, Anthropic, Google Vertex AI, and more) through a single, OpenAI-compatible endpoint. It offers multi-provider support with seamless migration and integration, dynamic model orchestration that routes each request to the optimal engine, and comprehensive usage analytics to track requests, token consumption, response times, and costs in real time. Built-in performance monitoring lets you compare models’ accuracy and cost-effectiveness, while secure key management centralizes API credentials under role-based controls. You can deploy LLM Gateway on your own infrastructure under the MIT license or use the hosted service as a progressive web app. Integration is simple: you only need to change your API base URL, and your existing code in any language or framework (cURL, Python, TypeScript, Go, etc.) continues to work without modification.
    Starting Price: $50 per month
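    A sketch of the single-endpoint idea: one client, models from several providers, with only the model string changing. The base URL and model IDs are illustrative placeholders, not a documented catalog.

    ```python
    from openai import OpenAI

    # Sketch of the single-endpoint idea: one client, models from several
    # providers. Base URL and model IDs are illustrative placeholders.
    client = OpenAI(base_url="https://llm-gateway.example.com/v1", api_key="<KEY>")

    for model in ("gpt-4o", "claude-sonnet-4", "gemini-2.5-pro"):
        out = client.chat.completions.create(
            model=model,  # only the model string changes between providers
            messages=[{"role": "user", "content": "One sentence on MoE routing."}],
        )
        print(model, "->", out.choices[0].message.content)
    ```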
  • 4
    Storm MCP

    Storm MCP is a gateway built around the Model Context Protocol (MCP) that lets AI applications connect to multiple verified MCP servers with one-click deployment, offering enterprise-grade security, observability, and simplified tool integration without requiring custom integration work. It enables you to standardize AI connections by exposing only selected tools from each MCP server, thereby reducing token usage and improving model tool selection. Through Lightning deployment, you can connect to over 30 secure MCP servers, while Storm handles OAuth-based access, full usage logs, rate limiting, and monitoring. It’s designed to bridge AI agents with external context sources in a secure, managed fashion, letting developers avoid building and maintaining MCP servers themselves. Built for AI agent developers, workflow builders, and indie hackers, Storm MCP positions itself as a composable, configurable API gateway that abstracts away infrastructure overhead and provides reliable context.
    Starting Price: $29 per month
  • 5
    Koog
    JetBrains

    Koog is a Kotlin‑based framework for building and running AI agents entirely in idiomatic Kotlin, supporting both single‑run agents that process individual inputs and complex workflow agents with custom strategies and configurations. It features pure Kotlin implementation, seamless Model Context Protocol (MCP) integration for enhanced model management, vector embeddings for semantic search, and a flexible system for creating and extending tools that access external systems and APIs. Ready‑to‑use components address common AI engineering challenges, while intelligent history compression optimizes token usage and preserves context. A powerful streaming API enables real‑time response processing and parallel tool calls. Persistent memory allows agents to retain knowledge across sessions and between agents, and comprehensive tracing facilities provide detailed debugging and monitoring.
    Starting Price: Free
  • 6
    DeepSeek-V2
    DeepSeek

    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
    Starting Price: Free
  • 7
    AI Spend

    Keep track of your OpenAI usage and costs with AI Spend and never be surprised again. AI Spend offers user-friendly cost tracking with a dashboard and notifications that passively monitor your usage and costs. The analytics and charts provide insights that help you optimize your OpenAI usage and avoid billing surprises. Get daily, weekly, and monthly notifications of your spending. Discover which models you're using and how many tokens they consume. Get clear insights into how much OpenAI is costing you.
    Starting Price: $6.61 per month
  • 8
    Repo Prompt

    Repo Prompt is a macOS-native AI coding assistant and context engineering tool that helps developers interact with, refine, and modify codebases using large language models. It lets users select specific files or folders, build structured prompts with exactly the relevant context, and review and apply AI-generated code changes as diffs rather than rewriting entire files, ensuring precise, auditable modifications. It provides a visual file explorer for project navigation, an intelligent context builder, CodeMaps that reduce token usage and help models understand project structure, and multi-model support so users can bring their own API keys for providers like OpenAI, Anthropic, Gemini, Azure, or others, keeping all processing local and private unless the user explicitly sends code to an LLM. Repo Prompt works as both a standalone chat/workflow interface and an MCP (Model Context Protocol) server for integration with AI editors.
    Starting Price: $14.99 per month
  • 9
    TensorBlock

    TensorBlock is an open source AI infrastructure platform designed to democratize access to large language models through two complementary components. The first is a self-hosted, privacy-first API gateway that unifies connections to any LLM provider under a single, OpenAI-compatible endpoint, with encrypted key management, dynamic model routing, usage analytics, and cost-optimized orchestration. The second, TensorBlock Studio, delivers a lightweight, developer-friendly multi-LLM interaction workspace featuring a plugin-based UI, extensible prompt workflows, real-time conversation history, and integrated natural-language APIs for seamless prompt engineering and model comparison. Built on a modular, scalable architecture and guided by principles of openness, composability, and fairness, TensorBlock enables organizations to experiment, deploy, and manage AI agents with full control and minimal infrastructure overhead.
    Starting Price: Free
  • 10
    GPT-5 mini
    GPT-5 mini is a streamlined, faster, and more affordable variant of OpenAI’s GPT-5, optimized for well-defined tasks and precise prompts. It supports text and image inputs and delivers high-quality text outputs with a 400,000-token context window and up to 128,000 output tokens. This model excels at rapid response times, making it suitable for applications requiring fast, accurate language understanding without the full overhead of GPT-5. Pricing is cost-effective, with input tokens at $0.25 per million and output tokens at $2 per million, providing savings over the flagship model. GPT-5 mini supports advanced features like streaming, function calling, structured outputs, and fine-tuning, but does not support audio input or image generation. It integrates well with various API endpoints including chat completions, responses, and embeddings, making it versatile for many AI-powered tasks.
    Starting Price: $0.25 per 1M tokens
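    At the quoted rates, per-request cost is straightforward arithmetic; a quick sketch with illustrative token counts:

    ```python
    # Cost sketch at the rates quoted above for GPT-5 mini:
    # $0.25 per 1M input tokens, $2.00 per 1M output tokens.
    INPUT_RATE = 0.25 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 2.00 / 1_000_000  # dollars per output token

    # Illustrative request: a long prompt, a short completion.
    input_tokens, output_tokens = 12_000, 800
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    print(f"${cost:.4f}")  # $0.0046
    ```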
  • 11
    Qwen3.5-Plus
    Qwen3.5-Plus is a high-performance native vision-language model designed for efficient text generation, deep reasoning, and multimodal understanding. Built on a hybrid architecture that combines linear attention with a sparse mixture-of-experts design, it delivers strong performance while optimizing inference efficiency. The model supports text, image, and video inputs and produces text outputs, making it suitable for complex multimodal workflows. With a massive 1 million token context window and up to 64K output tokens, Qwen3.5-Plus enables long-form reasoning and large-scale document analysis. It includes advanced capabilities such as structured outputs, function calling, web search, and tool integration via the Responses API. The model supports prefix continuation, caching, batch processing, and fine-tuning for flexible deployment. Designed for developers and enterprises, Qwen3.5-Plus provides scalable, high-throughput AI performance with OpenAI-compatible API access.
    Starting Price: $0.4 per 1M tokens
  • 12
    Kong AI Gateway
    Kong AI Gateway is a semantic AI gateway designed to run and secure Large Language Model (LLM) traffic, enabling faster adoption of Generative AI (GenAI) through new semantic AI plugins for Kong Gateway. It allows users to easily integrate, secure, and monitor popular LLMs. The gateway enhances AI requests with semantic caching and security features, introducing advanced prompt engineering for compliance and governance. Developers can power existing AI applications written using SDKs or AI frameworks by simply changing one line of code, simplifying migration. Kong AI Gateway also offers no-code AI integrations, allowing users to transform, enrich, and augment API responses without writing code, using declarative configuration. It implements advanced prompt security by determining allowed behaviors and enables the creation of better prompts with AI templates compatible with the OpenAI interface.
  • 13
    GPT-5 nano
    GPT-5 nano is OpenAI’s fastest and most affordable version of the GPT-5 family, designed for high-speed text processing tasks like summarization and classification. It supports text and image inputs, generating high-quality text outputs with a large 400,000-token context window and up to 128,000 output tokens. GPT-5 nano offers very fast response times, making it ideal for applications requiring quick turnaround without sacrificing quality. Pricing is extremely competitive, with input tokens costing $0.05 per million and output tokens $0.40 per million, making it accessible for budget-conscious projects. The model supports advanced API features such as streaming, function calling, structured outputs, and fine-tuning. While it supports image input, it does not handle audio input or web search, focusing on core text tasks efficiently.
    Starting Price: $0.05 per 1M tokens
  • 14
    Parallel

    The Parallel Search API is a web-search tool engineered specifically for AI agents, designed from the ground up to provide the most information-dense, token-efficient context for large-language models and automated workflows. Unlike traditional search engines optimized for human browsing, this API supports declarative semantic objectives, allowing agents to specify what they want rather than merely keywords. It returns ranked URLs and compressed excerpts tailored for model context windows, enabling higher accuracy, fewer search steps, and lower token cost per result. Its infrastructure includes a proprietary crawler, live-index updates, freshness policies, domain-filtering controls, and SOC 2 Type 2 security compliance. The API is built to fit seamlessly within agent workflows: developers can control parameters like maximum characters per result, select custom processors, adjust output size, and orchestrate retrieval directly into AI reasoning pipelines.
    Starting Price: $5 per 1,000 requests
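    A hypothetical sketch of a declarative search call shaped around the parameters named above (objective, processors, max characters per result); the endpoint, header, and field names are assumptions rather than Parallel's documented API.

    ```python
    import requests

    # Hypothetical request shape built from the parameters named above
    # (objective, processors, max characters per result). The URL, header,
    # and field names are assumptions, NOT Parallel's documented API.
    resp = requests.post(
        "https://api.parallel.example.com/v1/search",  # placeholder endpoint
        headers={"x-api-key": "<PARALLEL_API_KEY>"},
        json={
            "objective": "Recent peer-reviewed results on KV-cache compression",
            "processor": "base",           # hypothetical processor name
            "max_results": 5,
            "max_chars_per_result": 1500,  # cap excerpt size for the model
        },
        timeout=30,
    )
    for result in resp.json().get("results", []):  # assumed response keys
        print(result.get("url"), "-", (result.get("excerpt") or "")[:80])
    ```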
  • 15
    Cohere Embed
    Cohere's Embed is a leading multimodal embedding platform designed to transform text, images, or a combination of both into high-quality vector representations. These embeddings are optimized for semantic search, retrieval-augmented generation, classification, clustering, and agentic AI applications. The latest model, embed-v4.0, supports mixed-modality inputs, allowing users to combine text and images into a single embedding. It offers Matryoshka embeddings with configurable dimensions of 256, 512, 1024, or 1536, enabling flexibility in balancing performance and resource usage. With a context length of up to 128,000 tokens, embed-v4.0 is well-suited for processing large documents and complex data structures. It also supports compressed embedding types, including float, int8, uint8, binary, and ubinary, facilitating efficient storage and faster retrieval in vector databases. Multilingual support spans over 100 languages, making it a versatile tool for global applications.
    Starting Price: $0.47 per image
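    A minimal sketch of requesting a reduced-dimension embedding from embed-v4.0 via Cohere's Python SDK; parameter and attribute names are as best recalled from the SDK, so verify against the current reference.

    ```python
    import cohere

    # Sketch: embed text with embed-v4.0 at a reduced Matryoshka dimension.
    # Parameter and attribute names follow Cohere's Python SDK as best
    # recalled; verify against the current reference.
    co = cohere.ClientV2(api_key="<COHERE_API_KEY>")
    res = co.embed(
        model="embed-v4.0",
        input_type="search_document",  # use "search_query" at query time
        texts=["Quarterly revenue grew 12% on strong API demand."],
        embedding_types=["float"],     # int8/binary types also available
        output_dimension=512,          # 256/512/1024/1536 supported
    )
    print(len(res.embeddings.float_[0]))  # 512
    ```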
  • 16
    APIPark

    APIPark is an open-source, all-in-one AI gateway and API developer portal that helps developers and enterprises easily manage, integrate, and deploy AI services. No matter which AI model you use, APIPark provides a one-stop integration solution, unifying the management of all authentication information and tracking the costs of API calls. It standardizes the request data format for all AI models, so switching AI models or modifying prompts won’t affect your app or microservices, simplifying your AI usage and reducing maintenance costs. You can quickly combine AI models and prompts into new APIs; for example, using OpenAI GPT-4 and custom prompts, you can create sentiment analysis, translation, or data analysis APIs. API lifecycle management helps standardize the process of managing APIs, including traffic forwarding, load balancing, and managing different versions of publicly accessible APIs, improving API quality and maintainability.
    Starting Price: Free
  • 17
    Stableoutput

    Stableoutput is a user-friendly AI chat client that allows users to interact with popular AI models like OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet without requiring coding knowledge. It operates on a bring-your-own-key model, meaning users utilize their own API keys, which are securely stored in the browser's local storage; these keys are not transmitted to Stableoutput's servers, ensuring privacy and security. The platform offers features such as cloud synchronization, a usage tracker to monitor API consumption, customization options for system prompts, and model settings like temperature and maximum tokens. Users can upload PDFs, images, and code files for AI analysis, facilitating more personalized and context-aware interactions. Additional functionalities include pinning and sharing chats with controlled visibility and managing message requests to optimize API usage. Stableoutput provides lifetime access with a one-time payment.
    Starting Price: $29 one-time payment
  • 18
    ManagePrompt

    Unleash your AI dream project in hours, not months. Imagine: this electrifying message was crafted by AI and beamed directly to you; welcome to a live demo experience like no other. With us, forget the hassle of rate-limiting, authentication, analytics, spend management, and juggling multiple top-tier AI models. We've got it all under control, so you can zero in on creating the ultimate AI masterpiece. We provide the tools to help you build and deploy your AI projects faster. We take care of the infrastructure so you can focus on what you do best. Using our workflows, you can tweak prompts, update models, and deliver changes to your users instantly. Filter and control malicious requests with our security features such as single-use tokens and rate limiting. Use multiple models through the same API, including models from OpenAI, Meta, Google, Mixtral, and Anthropic. Prices are per 1,000 tokens; you can think of tokens as pieces of words, where 1,000 tokens are about 750 words.
    Starting Price: $0.01 per 1K tokens per month
  • 19
    Qwen Code
    Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results on Agentic Coding, Browser‑Use, and Tool‑Use tasks comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) unleashes Qwen3‑Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and more.
    Starting Price: Free
  • 20
    Qwen3-Coder
    Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning, scaling test‑case generation for diverse coding challenges, and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) unleashes Qwen3‑Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables.
    Starting Price: Free
  • 21
    GPT-4o mini
    A small model with superior textual intelligence and multimodal reasoning. GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots). Today, GPT-4o mini supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective.
  • 22
    AudioCraft
    Meta AI

    AudioCraft is a single-stop code base for all your generative audio needs: music, sound effects, and compression after training on raw audio signals. With AudioCraft, we simplify the overall design of generative models for audio compared to prior work. Both MusicGen and AudioGen consist of a single autoregressive Language Model (LM) that operates over streams of compressed discrete music representation, i.e., tokens. We introduce a simple approach to leverage the internal structure of the parallel streams of tokens and show that, with a single model and elegant token interleaving pattern, our approach efficiently models audio sequences, simultaneously capturing the long-term dependencies in the audio and allowing us to generate high-quality audio. Our models leverage the EnCodec neural audio codec to learn the discrete audio tokens from the raw waveform. EnCodec maps the audio signal to one or several parallel streams of discrete tokens.
  • 23
    MiMo-V2-Flash
    Xiaomi Technology

    MiMo-V2-Flash is an open weight large language model developed by Xiaomi based on a Mixture-of-Experts (MoE) architecture that blends high performance with inference efficiency. It has 309 billion total parameters but activates only 15 billion active parameters per inference, letting it balance reasoning quality and computational efficiency while supporting extremely long context handling, for tasks like long-document understanding, code generation, and multi-step agent workflows. It incorporates a hybrid attention mechanism that interleaves sliding-window and global attention layers to reduce memory usage and maintain long-range comprehension, and it uses a Multi-Token Prediction (MTP) design that accelerates inference by processing batches of tokens in parallel. MiMo-V2-Flash delivers very fast generation speeds (up to ~150 tokens/second) and is optimized for agentic applications requiring sustained reasoning and multi-turn interactions.
    Starting Price: Free
  • 24
    Mistral NeMo
    Mistral AI

    Mistral NeMo, our new best small model. A state-of-the-art 12B model with 128k context length, and released under the Apache 2.0 license. Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications. It is trained on function calling and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
  • 25
    LiteLLM

    LiteLLM is a versatile platform designed to streamline interactions with over 100 Large Language Models (LLMs) through a unified interface. It offers both a Proxy Server (LLM Gateway) and a Python SDK, enabling developers to integrate various LLMs seamlessly into their applications. The Proxy Server facilitates centralized management across multiple providers, allowing for load balancing, cost tracking across projects, and consistent input/output formatting compatible with OpenAI standards. It ensures robust observability by generating unique call IDs for each request, aiding in precise tracking and logging across systems. Developers can leverage pre-defined callbacks to log data using various tools. For enterprise users, LiteLLM offers advanced features like Single Sign-On (SSO), user management, and professional support through dedicated channels like Discord and Slack.
    Starting Price: Free
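    A minimal sketch of LiteLLM's Python SDK, which keeps the OpenAI response shape across providers; model IDs are illustrative, and provider keys are read from environment variables.

    ```python
    from litellm import completion

    # Minimal LiteLLM SDK sketch: one call shape, different providers chosen
    # by the model string. Provider keys (OPENAI_API_KEY, ANTHROPIC_API_KEY)
    # are read from the environment; model IDs are illustrative.
    messages = [{"role": "user", "content": "Two-sentence summary of MoE models."}]

    openai_reply = completion(model="gpt-4o-mini", messages=messages)
    claude_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

    # Responses follow the OpenAI schema regardless of provider.
    print(openai_reply.choices[0].message.content)
    print(claude_reply.choices[0].message.content)
    ```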
  • 26
    AudioLM
    Google

    AudioLM is a pure audio language model that generates high‑fidelity, long‑term coherent speech and piano music by learning from raw audio alone, without requiring any text transcripts or symbolic representations. It represents audio hierarchically using two types of discrete tokens, semantic tokens extracted from a self‑supervised model to capture phonetic or melodic structure and global context, and acoustic tokens from a neural codec to preserve speaker characteristics and fine waveform details, and chains three Transformer stages to predict first semantic tokens for high‑level structure, then coarse and finally fine acoustic tokens for detailed synthesis. The resulting pipeline allows AudioLM to condition on a few seconds of input audio and produce seamless continuations that retain voice identity, prosody, and recording conditions in speech or melody, harmony, and rhythm in music. Human evaluations show that synthetic continuations are nearly indistinguishable from real recordings.
  • 27
    Reka Flash 3
    Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
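    The memory figures follow from parameter count times bytes per parameter; a quick back-of-envelope check:

    ```python
    # Back-of-envelope check of the memory figures quoted for Reka Flash 3.
    params = 21e9                # 21-billion-parameter model

    fp16_bytes = params * 2      # 2 bytes per parameter at fp16
    int4_bytes = params * 0.5    # 0.5 bytes per parameter at 4-bit quantization

    GiB = 2**30
    print(f"fp16 : {fp16_bytes / GiB:.1f} GiB")  # ~39.1 GiB -> the 39GB figure
    print(f"4-bit: {int4_bytes / GiB:.1f} GiB")  # ~9.8 GiB; ~11GB with overhead
    ```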
  • 28
    Helicone

    Track costs, usage, and latency for GPT applications with one line of code. Trusted by leading companies building with OpenAI; support for Anthropic, Cohere, Google AI, and more is coming soon. Stay on top of your costs, usage, and latency. Integrate models like GPT-4 with Helicone to track API requests and visualize results. Get an overview of your application with an in-built dashboard tailor-made for generative AI applications. View all of your requests in one place. Filter by time, users, and custom properties. Track spending on each model, user, or conversation. Use this data to optimize your API usage and reduce costs. Cache requests to save on latency and money, proactively track errors in your application, and handle rate limits and reliability concerns with Helicone.
    Starting Price: $1 per 10,000 requests
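    Helicone's documented OpenAI integration is a base-URL swap plus an auth header; a minimal sketch (model ID illustrative):

    ```python
    from openai import OpenAI

    # Helicone's one-line integration for OpenAI traffic: swap the base URL
    # to Helicone's proxy and add the Helicone-Auth header (both per its
    # documented OpenAI setup). Model ID is illustrative.
    client = OpenAI(
        api_key="<OPENAI_API_KEY>",
        base_url="https://oai.helicone.ai/v1",
        default_headers={"Helicone-Auth": "Bearer <HELICONE_API_KEY>"},
    )

    # Requests now show up in the Helicone dashboard with cost and latency.
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
    )
    ```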
  • 29
    Claude Sonnet 3.5
    Claude Sonnet 3.5 sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude Sonnet 3.5 operates at twice the speed of Claude Opus 3. This performance boost, combined with cost-effective pricing, makes Claude Sonnet 3.5 ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows. Claude Sonnet 3.5 is now available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. It is also available via the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window.
  • 30
    Toolspend

    Toolspend is an AI-powered spend management platform designed to give organizations complete visibility into their AI and SaaS costs through a unified, automated dashboard. It connects directly to AI providers and financial data sources to reveal real usage patterns, show which teams drive consumption, and reconcile token metrics with actual billing. It goes beyond simple subscription tracking by analyzing usage behavior to identify underutilized licenses, duplicate tools across departments, and potential overpayments. It provides real-time monitoring, anomaly alerts for unusual spikes, and month-end forecasting so teams can anticipate costs before invoices arrive. It also delivers AI-driven recommendations such as switching to cheaper models or pausing idle resources, helping companies reduce waste and control budget growth.
    Starting Price: $14.99 per month
  • 31
    Gemini 2.0 Flash-Lite
    Gemini 2.0 Flash-Lite is Google DeepMind's lighter AI model, designed to offer a cost-effective solution without compromising performance. As the most economical model in the Gemini 2.0 lineup, Flash-Lite is tailored for developers and businesses seeking efficient AI capabilities at a lower cost. It supports multimodal inputs and features a context window of one million tokens, making it suitable for a variety of applications. Flash-Lite is currently available in public preview, allowing users to explore its potential in enhancing their AI-driven projects.
  • 32
    Composer 1.5
    Composer 1.5 is the latest agentic coding model from Cursor that balances speed and intelligence for everyday code tasks by scaling reinforcement learning approximately 20x more than its predecessor, enabling stronger performance on real-world programming challenges. It’s designed as a “thinking model” that generates internal reasoning tokens to analyze a user’s codebase and plan next steps, responding quickly to simple problems and engaging deeper reasoning on complex ones, while remaining interactive and fast for daily development workflows. To handle long-running tasks, Composer 1.5 introduces self-summarization, allowing the model to compress and carry forward context when it reaches context limits, which helps maintain accuracy across varying input lengths. Internal benchmarks show it surpasses Composer 1 in coding tasks, especially on more difficult issues, making it more capable for interactive use within Cursor’s environment.
    Starting Price: $20 per month
  • 33
    LTM-2-mini
    Magic AI

    LTM-2-mini is a 100M-token context model; 100M tokens equals ~10 million lines of code or ~750 novels. For each decoded token, LTM-2-mini’s sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for a 100M-token context window. The contrast in memory requirements is even larger: running Llama 3.1 405B with a 100M-token context requires 638 H100s per user just to store a single 100M-token KV cache. In contrast, LTM requires a small fraction of a single H100’s HBM per user for the same context.
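    The H100 figure can be roughly reproduced from Llama 3.1 405B's published attention configuration; a back-of-envelope sketch assuming fp16 cache entries (the exact 638 figure presumably reflects Magic's own accounting):

    ```python
    # Back-of-envelope KV-cache sizing for Llama 3.1 405B at a 100M-token
    # context, from its published attention config (126 layers, 8 KV heads,
    # head dim 128), assuming fp16 (2-byte) cache entries.
    layers, kv_heads, head_dim, dtype_bytes = 126, 8, 128, 2
    tokens = 100_000_000

    kv_bytes = 2 * layers * kv_heads * head_dim * dtype_bytes * tokens  # K + V
    h100_hbm = 80 * 2**30  # 80GB of HBM per H100
    print(f"{kv_bytes / 2**40:.0f} TiB -> ~{kv_bytes / h100_hbm:.0f} H100s")
    # ~47 TiB -> ~601 H100s, the same order as the 638 quoted above
    ```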
  • 34
    Pixtral Large
    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free
  • 35
    Portkey
    Portkey.ai

    Launch production-ready apps with the LMOps stack for monitoring, model management, and more. Replace your OpenAI or other provider APIs with the Portkey endpoint. Manage prompts, engines, parameters, and versions in Portkey. Switch, test, and upgrade models with confidence! View your app performance and user-level aggregate metrics to optimise usage and API costs. Keep your user data secure from attacks and inadvertent exposure. Get proactive alerts when things go bad. A/B test your models in the real world and deploy the best performers. We built apps on top of LLM APIs for the past two and a half years and realised that while building a PoC took a weekend, taking it to production and managing it was a pain! We're building Portkey to help you succeed in deploying large language model APIs in your applications. Whether or not you end up trying Portkey, we're always happy to help!
    Starting Price: $49 per month
  • 36
    LM Studio

    Use models through the in-app Chat UI or an OpenAI-compatible local server. Minimum requirements: M1/M2/M3 Mac, or a Windows PC with a processor that supports AVX2. Linux is available in beta. One of the main reasons for using a local LLM is privacy, and LM Studio is designed for that. Your data remains private and local to your machine. You can use LLMs you load within LM Studio via an API server running on localhost.
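    A minimal sketch of hitting LM Studio's local OpenAI-compatible server with the OpenAI SDK; port 1234 is LM Studio's default at the time of writing, and the key is a placeholder since nothing leaves your machine.

    ```python
    from openai import OpenAI

    # Sketch: point the OpenAI SDK at LM Studio's local server. Port 1234 is
    # LM Studio's default; the API key is a placeholder since nothing leaves
    # your machine. Model ID depends on what you've loaded in the app.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    reply = client.chat.completions.create(
        model="local-model",  # placeholder for whichever model is loaded
        messages=[{"role": "user", "content": "Why run an LLM locally?"}],
    )
    print(reply.choices[0].message.content)
    ```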
  • 37
    Gemini Live API
    The Gemini Live API is a preview feature that enables low-latency, bidirectional voice and video interactions with Gemini. It allows end users to experience natural, human-like voice conversations and provides the ability to interrupt the model's responses using voice commands. The model can process text, audio, and video input, and it can provide text and audio output. New capabilities include two new voices and 30 new languages with configurable output language, configurable image resolutions (66/256 tokens), configurable turn coverage (send all inputs all the time or only when the user is speaking), configurable interruption settings, configurable voice activity detection, new client events for end-of-turn signaling, token counts, a client event for signaling the end of stream, text streaming, configurable session resumption with session data stored on the server for 24 hours, and longer session support with a sliding context window.
  • 38
    StarCoder
    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned StarCoderBase model for 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
    Starting Price: Free
  • 39
    nebulaONE
    Cloudforce

    nebulaONE is a secure, private generative AI gateway built on Microsoft Azure that lets organizations harness leading AI models and build custom AI agents without code, all within their own cloud environment. It aggregates top AI models from providers like OpenAI, Anthropic, Meta, and others into a unified interface so users can safely ingest sensitive data, generate organization-aligned content, and automate routine tasks while keeping data fully under institutional control. Designed to replace insecure public AI tools, nebulaONE emphasizes enterprise-grade security, compliance with regulatory standards such as HIPAA, FERPA, and GDPR, and seamless integration with existing systems. It supports custom AI chatbot creation, no-code development of personalized assistants, and rapid prototyping of new generative use cases, helping educational, healthcare, and enterprise teams accelerate innovation, streamline operations, and enhance productivity.
  • 40
    RouteLLM
    Developed by LM-SYS, RouteLLM is an open-source toolkit that allows users to route tasks between different large language models to improve efficiency and manage resources. It supports strategy-based routing, helping developers balance speed, accuracy, and cost by selecting the best model for each input dynamically.
  • 41
    LMCache

    LMCache is an open source Knowledge Delivery Network (KDN) designed as a caching layer for large language model serving that accelerates inference by reusing KV (key-value) caches across repeated or overlapping computations. It enables fast prompt caching, allowing LLMs to “prefill” recurring text only once and then reuse those stored KV caches, even in non-prefix positions, across multiple serving instances. This approach reduces time to first token, saves GPU cycles, and increases throughput in scenarios such as multi-round question answering or retrieval augmented generation. LMCache supports KV cache offloading (moving cache from GPU to CPU or disk), cache sharing across instances, and disaggregated prefill, which separates the prefill and decoding phases for resource efficiency. It is compatible with inference engines like vLLM and TGI and supports compressed storage, blending techniques to merge caches, and multiple backend storage options.
    Starting Price: Free
  • 42
    DoCoreAI
    MobiLights

    DoCoreAI is an AI prompt optimization and telemetry platform designed for AI-first product teams, SaaS companies, and developers working with large language models (LLMs) from providers like OpenAI and Groq. With a local-first Python client and secure telemetry engine, DoCoreAI enables teams to collect LLM usage metrics without exposing original prompts, ensuring data privacy. Key capabilities:
    - Prompt Optimization → improve the efficiency and reliability of LLM prompts.
    - LLM Usage Monitoring → track tokens, response times, and performance trends.
    - Cost Analytics → monitor and optimize LLM costs across teams.
    - Developer Productivity Dashboards → identify time savings and usage bottlenecks.
    - AI Telemetry → collect detailed insights while maintaining user privacy.
    DoCoreAI helps businesses save on token costs, improve AI model performance, and give developers a single place to understand how prompts behave in production.
    Starting Price: $9/month
  • 43
    GPT-4.1 mini
    GPT-4.1 mini is a compact version of OpenAI’s powerful GPT-4.1 model, designed to provide high performance while significantly reducing latency and cost. With a smaller size and optimized architecture, GPT-4.1 mini still delivers impressive results in tasks such as coding, instruction following, and long-context processing. It supports up to 1 million tokens of context, making it an efficient solution for applications that require fast responses without sacrificing accuracy or depth.
    Starting Price: $0.40 per 1M tokens (input)
  • 44
    Qwen2.5-1M
    Alibaba

    Qwen2.5-1M is an open-source language model developed by the Qwen team, designed to handle context lengths of up to one million tokens. This release includes two model variants, Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, marking the first time Qwen models have been upgraded to support such extensive context lengths. To facilitate efficient deployment, the team has also open-sourced an inference framework based on vLLM, integrated with sparse attention methods, enabling processing of 1M-token inputs with a 3x to 7x speed improvement. Comprehensive technical details, including design insights and ablation experiments, are available in the accompanying technical report.
    Starting Price: Free
  • 45
    GPT-4.1 nano
    GPT-4.1 nano is the smallest and most efficient version of OpenAI's GPT-4.1 model, optimized for low-latency, cost-effective AI processing. Despite its compact size, GPT-4.1 nano delivers strong performance with a 1 million token context window, making it ideal for applications like classification, autocompletion, and smaller-scale tasks that require fast responses. It provides a highly efficient solution for businesses and developers who need an AI model that balances speed, cost, and performance.
    Starting Price: $0.10 per 1M tokens (input)
  • 46
    Nansen.ai
    Surface the Signal in Blockchain Data. Nansen analyzes 50M+ labeled Ethereum wallets and their activity. So you can separate the signal from the noise in blockchain data. Billions of on-chain data points, millions of wallet labels, thousands of entities. Dashboards let you see exactly what’s happening, requiring no technical knowledge. Users consult Nansen before making their investment decisions. 50M+ labeled wallets give you the full context you need to understand the flow of ETH, stablecoins, and tokens. With Nansen you get an executive summary of where funds are moving. And if you want details, you can trace transactions down to the most granular level. Nansen tracks exchanges, token teams, and funds, which means you can see exactly which entities are accumulating - or selling off - a specific token. Token metrics on usage, engagement, and liquidity are available so you can make informed decisions before investing in a new token.
    Starting Price: $149 per month
  • 47
    AI Gateway

    AI Gateway is an all-in-one secure and centralized AI management solution designed to unlock employee potential and drive productivity. It offers centralized AI services, allowing employees to access authorized AI tools via a single, user-friendly platform, streamlining workflows and boosting productivity. AI Gateway ensures data governance by removing sensitive information before it reaches AI providers, safeguarding data and upholding compliance with regulations. Additionally, AI Gateway provides cost control and monitoring features, enabling businesses to monitor usage, manage employee access, and control costs, promoting optimized and cost-effective access to AI. Control cost, roles, and access while enabling employees to interact with modern AI technology. Streamline utilization of AI tools, save time, and boost efficiency. It protects data by cleaning Personally Identifiable Information (PII), commercial, or sensitive data before sending it to AI providers.
    Starting Price: $100 per month
  • 48
    Yi-Lightning

    Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It boasts a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. This model has excelled in various domains, achieving top rankings in categories like Chinese, math, coding, and hard prompts on the chatbot arena, where it secured the 6th position overall and 9th in style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
  • 49
    Mistral Small 3.1
    Mistral Small 3.1 is a state-of-the-art, multimodal, and multilingual AI model released under the Apache 2.0 license. Building upon Mistral Small 3, this enhanced version offers improved text performance and advanced multimodal understanding, and supports an expanded context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, delivering inference speeds of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in tasks such as instruction following, conversational assistance, image understanding, and function calling, making it suitable for both enterprise and consumer-grade AI applications. Its lightweight architecture allows it to run efficiently on a single RTX 4090 or a Mac with 32GB RAM, facilitating on-device deployments. It is available for download on Hugging Face, accessible via Mistral AI's developer playground, and integrated into platforms like Google Cloud Vertex AI, with availability on NVIDIA NIM.
    Starting Price: Free
  • 50
    MagicVest

    MagicVest is an AI-driven crypto intelligence platform designed to help traders spot high-potential memecoins early and avoid scams before they strike. It leverages its signature MagicRadar and MagicDip engines to scan hundreds of newly launched tokens in real time, tracking metrics like social volume spikes, whale wallet movements, developer activity, token age, liquidity health, and contract integrity, to identify promising tokens before they pump and flag rug pulls with advanced predictive modeling. Every token is assigned a MagicScore, a proprietary risk-performance score derived from over 18 critical signals (including sentiment, volatility, insider behavior, and fake social metrics), which clearly indicates safety levels: green for safe with potential, orange for caution, and red for risky. Users receive instant, AI-generated buy/sell alerts, visual trend predictions (e.g., buy zones, danger zones), and can monitor a curated watchlist 24/7 with real-time notifications.
    Starting Price: $49.99 per month