Alternatives to Amazon Nova 2 Pro

Compare Amazon Nova 2 Pro alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Amazon Nova 2 Pro in 2025. Compare features, ratings, user reviews, pricing, and more from Amazon Nova 2 Pro competitors and alternatives in order to make an informed decision for your business.

  • 1
    Amazon Nova 2 Lite
    Nova 2 Lite is a lightweight, high-speed reasoning model designed to handle everyday AI workloads across text, images, and video. It can generate clear, context-aware responses and lets users fine-tune how much internal reasoning the model performs before producing an answer. This adjustable “thinking depth” gives teams the flexibility to choose faster replies or more detailed problem-solving depending on the task. It stands out for customer service bots, automated document handling, and general business workflow support. Nova 2 Lite delivers strong performance across standard evaluation tests. It performs on par with or better than comparable compact models in most benchmark categories, demonstrating reliable comprehension and response quality. Its strengths include interpreting complex documents, pulling accurate insights from video content, generating usable code, and delivering grounded answers based on provided information.
  • 2
    Amazon Nova 2 Omni
    Nova 2 Omni is a fully unified multimodal reasoning and generation model capable of understanding and producing content across text, images, video, and speech. It can take in extremely large inputs, ranging from hundreds of thousands of words to hours of audio and lengthy videos, while maintaining coherent analysis across formats. This allows it to digest full product catalogs, long-form documents, customer testimonials, and complete video libraries all at the same time, giving teams a single system that replaces the need for multiple specialized models. With its ability to handle mixed media in one workflow, Nova 2 Omni opens new possibilities for creative and operational automation. A marketing team, for example, can feed in product specs, brand guidelines, reference images, and video content and instantly generate an entire campaign, including messaging, social content, and visuals, in one pass.
  • 3
    Amazon Nova 2 Sonic
    Nova 2 Sonic is Amazon’s real-time speech-to-speech model designed to deliver natural, flowing voice interactions without relying on separate systems for text and audio. It combines speech recognition, speech generation, and text processing in a single model, enabling smooth, human-like conversations that can shift effortlessly between voice and text. With expanded multilingual support and expressive voice options, it produces responses that sound more lifelike and contextually aware. Its one-million-token context window allows for long, continuous interactions without losing track of prior details. It supports asynchronous task handling, meaning users can continue speaking, change topics, or ask follow-up questions while background tasks, such as searching for information or completing a request, continue uninterrupted. This makes voice experiences feel more fluid and less bound by traditional turn-based dialog constraints.
  • 4
    GPT-5.1
    GPT-5.1 is OpenAI's latest update in the GPT-5 series, designed to make ChatGPT dramatically smarter and more conversational. The release introduces two distinct model variants: GPT-5.1 Instant, described as the most-used model and now warmer, better at following instructions, and more intelligent; and GPT-5.1 Thinking, the advanced reasoning engine, tuned to be easier to understand, faster on straightforward tasks, and more persistent on complex ones. User queries are routed automatically to the variant best suited to the task. The update emphasizes not just improved raw intelligence but also communication style; the models are tuned to be more natural, more enjoyable to talk to, and better aligned with user intent. The system card addendum notes that GPT-5.1 Instant uses "adaptive reasoning" to decide when to think more deeply before responding, while GPT-5.1 Thinking adapts its thinking time more precisely to the question at hand.
  • 5
    GPT-5.1 Pro
    GPT-5.1 Pro is the highest-performance version of the GPT-5.1 model family, designed for research-grade reasoning and advanced analytical workloads. It delivers deeper, more structured thinking, making it ideal for complex problem-solving across coding, science, finance, law, and technical research. Unlike the Instant and Thinking versions, GPT-5.1 Pro is built to maintain accuracy under heavy cognitive load, producing clearer logic and more reliable multi-step reasoning. Pro users also gain access to extended context windows, allowing significantly longer inputs and deeper information processing. While it supports the full range of ChatGPT features, GPT-5.1 Pro is optimized for precision, rigor, and high-stakes tasks. It is available exclusively to ChatGPT Pro and Business customers.
  • 6
    GPT-5.2
    GPT-5.2 is OpenAI's newest evolution of the GPT-5 series, engineered to deliver even greater intelligence, adaptability, and conversational depth. This release introduces enhanced model variants that refine how ChatGPT reasons, communicates, and responds to complex user intent. GPT-5.2 Instant remains the primary, high-usage model, now faster, more context-aware, and more precise in following instructions. GPT-5.2 Thinking takes advanced reasoning further, offering clearer step-by-step logic, improved consistency on multi-stage problems, and more efficient handling of long or intricate tasks. The system automatically routes each query to the most suitable variant, ensuring optimal performance without requiring user selection. Beyond raw intelligence gains, GPT-5.2 emphasizes more natural dialogue flow, stronger intent alignment, and a smoother, more humanlike communication style.
  • 7
    GPT-5.2 Instant
    GPT-5.2 Instant is the fast, capable variant of OpenAI’s GPT-5.2 model family designed for everyday work and learning with clear improvements in information-seeking questions, how-tos and walkthroughs, technical writing, and translation compared to prior versions. It builds on the warmer conversational tone introduced in GPT-5.1 Instant and produces clearer explanations that surface key information upfront, making it easier for users to get concise, accurate answers quickly. GPT-5.2 Instant delivers speed and responsiveness for typical tasks like answering queries, generating summaries, assisting with research, and helping with writing and editing, while incorporating broader enhancements from the GPT-5.2 series in reasoning, long-context handling, and factual grounding. As part of the GPT-5.2 lineup, it shares the same foundational improvements that boost overall reliability and performance across a wide range of everyday activities.
  • 8
    GPT-5.2 Pro
    GPT-5.2 Pro is the highest-capability variant of OpenAI’s latest GPT-5.2 model family, built to deliver professional-grade reasoning, complex task performance, and enhanced accuracy for demanding knowledge work, creative problem-solving, and enterprise-level applications. It builds on the foundational improvements of GPT-5.2, including stronger general intelligence, superior long-context understanding, better factual grounding, and improved tool use, while using more compute and deeper processing to produce more thoughtful, reliable, and context-rich responses for users with intricate, multi-step requirements. GPT-5.2 Pro is designed to handle challenging workflows such as advanced coding and debugging, deep data analysis, research synthesis, extensive document comprehension, and complex project planning with greater precision and fewer errors than lighter variants.
  • 9
    GPT-5.2 Thinking
    GPT-5.2 Thinking is the dedicated reasoning configuration in OpenAI's GPT-5.2 model family, engineered for deep, expert-level reasoning, complex task execution, and advanced problem solving across long contexts and professional domains. Built on the foundational GPT-5.2 architecture with improvements in grounding, stability, and reasoning quality, this variant applies more compute and reasoning effort to generate responses that are more accurate, structured, and contextually rich when handling highly intricate workflows, multi-step analysis, and domain-specific challenges. GPT-5.2 Thinking excels at tasks that require sustained logical coherence, such as detailed research synthesis, advanced coding and debugging, complex data interpretation, strategic planning, and sophisticated technical writing, and it outperforms lighter variants on benchmarks that test professional skills and deep comprehension.
  • 10
    Gemini 2.5 Pro
    Gemini 2.5 Pro is an advanced AI model designed to handle complex tasks with enhanced reasoning and coding capabilities. Leading common benchmarks, it excels in math, science, and coding, demonstrating strong performance in tasks like web app creation and code transformation. Built on the Gemini 2.5 foundation, it features a 1 million token context window, enabling it to process vast datasets from various sources such as text, images, and code repositories. Available now in Google AI Studio, Gemini 2.5 Pro is optimized for more sophisticated applications and supports advanced users with improved performance for complex problem-solving.
  • 11
    Gemini 3 Pro
    Gemini 3 Pro is Google’s most advanced multimodal AI model, built for developers who want to bring ideas to life with intelligence, precision, and creativity. It delivers breakthrough performance across reasoning, coding, and multimodal understanding—surpassing Gemini 2.5 Pro in both speed and capability. The model excels in agentic workflows, enabling autonomous coding, debugging, and refactoring across entire projects with long-context awareness. With superior performance in image, video, and spatial reasoning, Gemini 3 Pro powers next-generation applications in development, robotics, XR, and document intelligence. Developers can access it through the Gemini API, Google AI Studio, or Vertex AI, integrating seamlessly into existing tools and IDEs. Whether generating code, analyzing visuals, or building interactive apps from a single prompt, Gemini 3 Pro represents the future of intelligent, multimodal AI development.
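Below is a minimal sketch of calling Gemini 3 Pro through the Gemini API with the google-genai Python SDK; the model identifier "gemini-3-pro-preview" is an assumption and should be checked against the current model list in Google AI Studio.

```python
# Hedged sketch: Gemini 3 Pro via the Gemini API (google-genai SDK).
# Assumption: the model identifier "gemini-3-pro-preview"; verify in Google AI Studio.
from google import genai

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Review this function for bugs and suggest a refactor: def mean(xs): return sum(xs) / len(xs)",
)
print(response.text)
```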
  • 12
    Claude Opus 4.5
    Claude Opus 4.5 is Anthropic’s newest flagship model, delivering major improvements in reasoning, coding, agentic workflows, and real-world problem solving. It outperforms previous models and leading competitors on benchmarks such as SWE-bench, multilingual coding tests, and advanced agent evaluations. Opus 4.5 also introduces stronger safety features, including significantly higher resistance to prompt injection and improved alignment across sensitive tasks. Developers gain new controls through the Claude API—like effort parameters, context compaction, and advanced tool use—allowing for more efficient, longer-running agentic workflows. Product updates across Claude, Claude Code, the Chrome extension, and Excel integrations expand how users interact with the model for software engineering, research, and everyday productivity. Overall, Claude Opus 4.5 marks a substantial step forward in capability, reliability, and usability for developers, enterprises, and end users.
  • 13
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 14
    Grok 4.1
    Grok 4.1 is an advanced AI model developed by Elon Musk’s xAI, designed to push the limits of reasoning and natural language understanding. Built on the powerful Colossus supercomputer, it processes multimodal inputs including text and images, with upcoming support for video. The model delivers exceptional accuracy in scientific, technical, and linguistic tasks. Its architecture enables complex reasoning and nuanced response generation that rivals the best AI systems in the world. Enhanced moderation ensures more responsible and unbiased outputs than earlier versions. Grok 4.1 is a breakthrough in creating AI that can think, interpret, and respond more like a human.
  • 15
    Grok 4.1 Fast
    Grok 4.1 Fast is the newest xAI model designed to deliver advanced tool-calling capabilities with a massive 2-million-token context window. It excels at complex real-world tasks such as customer support, finance, troubleshooting, and dynamic agent workflows. The model pairs seamlessly with the new Agent Tools API, which enables real-time web search, X search, file retrieval, and secure code execution. This combination gives developers the power to build fully autonomous, production-grade agents that plan, reason, and use tools effectively. Grok 4.1 Fast is trained with long-horizon reinforcement learning, ensuring stable multi-turn accuracy even across extremely long prompts. With its speed, cost-efficiency, and high benchmark scores, it sets a new standard for scalable enterprise-grade AI agents.
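As a rough illustration of the tool-calling workflow described above, the sketch below drives the model through xAI's OpenAI-compatible endpoint with a single custom tool; the base URL, the model name "grok-4.1-fast", and the lookup_order tool are assumptions, and the hosted Agent Tools (web search, X search, file retrieval, code execution) are configured separately and not shown here.

```python
# Hedged sketch: tool calling through xAI's OpenAI-compatible endpoint.
# Assumptions: the base_url, the model name "grok-4.1-fast", and the
# lookup_order tool (hypothetical); the hosted Agent Tools API is not shown.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch the status of a customer order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="grok-4.1-fast",
    messages=[{"role": "user", "content": "Where is order 8812?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # the tool call the model wants to make
```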
  • 16
    Grok 4.1 Thinking
    Grok 4.1 Thinking is xAI’s advanced reasoning-focused AI model designed for deeper analysis, reflection, and structured problem-solving. It uses explicit thinking tokens to reason through complex prompts before delivering a response, resulting in more accurate and context-aware outputs. The model excels in tasks that require multi-step logic, nuanced understanding, and thoughtful explanations. Grok 4.1 Thinking demonstrates a strong, coherent personality while maintaining analytical rigor and reliability. It has achieved the top overall ranking on the LMArena Text Leaderboard, reflecting strong human preference in blind evaluations. The model also shows leading performance in emotional intelligence and creative reasoning benchmarks. Grok 4.1 Thinking is built for users who value clarity, depth, and defensible reasoning in AI interactions.
  • 17
    MiniMax M2
    MiniMax M2 is an open source foundation model built specifically for agentic applications and coding workflows, striking a new balance of performance, speed, and cost. It excels in end-to-end development scenarios, handling programming, tool-calling, and complex, long-chain workflows with capabilities such as Python integration, while delivering inference speeds of around 100 tokens per second and offering API pricing at just ~8% of the cost of comparable proprietary models. The model supports “Lightning Mode” for high-speed, lightweight agent tasks, and “Pro Mode” for in-depth full-stack development, report generation, and web-based tool orchestration; its weights are fully open source and available for local deployment with vLLM or SGLang. MiniMax M2 positions itself as a production-ready model that enables agents to complete independent tasks, such as data analysis, programming, tool orchestration, and large-scale multi-step logic at real organizational scale.
    Starting Price: $0.30 per million input tokens
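A minimal local-deployment sketch with vLLM, one of the inference engines named above; the Hugging Face repo ID "MiniMaxAI/MiniMax-M2" is an assumption, and a model of this size needs a multi-GPU node.

```python
# Hedged sketch: local deployment of MiniMax M2 with vLLM.
# Assumptions: the Hugging Face repo ID "MiniMaxAI/MiniMax-M2" and an
# 8-GPU node; adjust tensor_parallel_size to your hardware.
from vllm import LLM, SamplingParams

llm = LLM(model="MiniMaxAI/MiniMax-M2", tensor_parallel_size=8, trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(["Write a Python function that merges two sorted lists."], params)
print(outputs[0].outputs[0].text)
```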
  • 18
    Qwen3-Max
    Qwen3-Max is Alibaba's latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there, including mixed thinking and non-thinking modes, a "thinking budget" mechanism, and support for dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and performs strongly on coding, multi-step reasoning, and agent benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
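Because the model is exposed through OpenAI-compatible interfaces, a call can look like the hedged sketch below; the base URL and the model name "qwen3-max" are assumptions to verify against Alibaba Cloud's documentation.

```python
# Hedged sketch: Qwen3-Max through an OpenAI-compatible endpoint.
# Assumptions: the DashScope compatible-mode base_url and the model name "qwen3-max".
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

resp = client.chat.completions.create(
    model="qwen3-max",
    messages=[{"role": "user", "content": "Outline a multi-step plan to migrate a monolith to microservices."}],
)
print(resp.choices[0].message.content)
```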
  • 19
    Amazon Nova
    Amazon Nova is a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry-leading price performance, available exclusively on Amazon Bedrock. Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are understanding models that accept text, image, or video inputs and generate text output. They provide a broad selection of capability, accuracy, speed, and cost operating points. Amazon Nova Micro is a text-only model that delivers the lowest-latency responses at very low cost. Amazon Nova Lite is a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Pro's capabilities, coupled with its industry-leading speed and cost efficiency, make it a compelling model for almost any task, including video summarization, Q&A, math, and more.
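Since the Nova understanding models are served through Amazon Bedrock, a request can be issued with boto3's Converse API as in the sketch below; the model ID "amazon.nova-pro-v1:0" follows Bedrock's published naming but should be confirmed for your region.

```python
# Sketch: invoking Amazon Nova Pro on Amazon Bedrock with boto3's Converse API.
# Confirm the model ID "amazon.nova-pro-v1:0" for your region; swap in the
# Lite or Micro IDs for cheaper, faster operation points.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-pro-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize this meeting transcript: ..."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
)
print(response["output"]["message"]["content"][0]["text"])
```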
  • 20
    Amazon Nova Premier
    Amazon Nova Premier is the most advanced model in Amazon's Nova family, designed to handle complex tasks and act as a teacher for model distillation. Available on Amazon Bedrock, Nova Premier can process text, images, and video inputs, making it capable of managing intricate workflows, multi-step planning, and the precise execution of tasks across various data sources. The model features a context length of one million tokens, enabling it to handle large-scale documents and codebases efficiently. Furthermore, Nova Premier allows users to distill its capabilities into smaller, faster, and more cost-effective models, such as Nova Pro and Nova Micro, tailored to specific use cases.
  • 21
    Amazon Nova Pro
    Amazon Nova Pro is a versatile, multimodal AI model designed for a wide range of complex tasks, offering an optimal combination of accuracy, speed, and cost efficiency. It excels in video summarization, Q&A, software development, and AI agent workflows that require executing multi-step processes. With advanced capabilities in text, image, and video understanding, Nova Pro supports tasks like mathematical reasoning and content generation, making it ideal for businesses looking to implement cutting-edge AI in their operations.
  • 22
    Amazon Nova Micro
    Amazon Nova Micro is an AI model designed for high-speed, low-cost text processing and generation. It excels in language understanding, translation, code completion, and mathematical problem-solving, providing fast responses with a generation speed of over 200 tokens per second. The model supports fine-tuning for text input and is ideal for applications requiring real-time processing and efficiency. With support for 200+ languages and a maximum of 128k tokens, Nova Micro is perfect for interactive AI applications that prioritize speed and affordability.
  • 23
    Amazon Nova Lite
    Amazon Nova Lite is a cost-efficient, multimodal AI model designed for rapid processing of image, video, and text inputs. It delivers impressive performance at an affordable price, making it ideal for interactive, high-volume applications where cost is a key consideration. With support for fine-tuning across text, image, and video inputs, Nova Lite excels in a variety of tasks that require fast, accurate responses, such as content generation and real-time analytics.
  • 24
    Arcee-SuperNova
    Our new flagship model is a small language model (SLM) with all the power and performance of leading closed-source LLMs. It excels at generalized tasks, instruction following, and alignment with human preferences, and is the best 70B model on the market. SuperNova can be utilized for any generalized task, much like OpenAI's GPT-4o, Claude Sonnet 3.5, and models from Cohere. Trained with the most advanced learning and optimization techniques, SuperNova generates highly accurate responses in human-like text. It's the most flexible, secure, and cost-effective language model on the market, saving customers up to 95% on total deployment costs vs. traditional closed-source models. Use SuperNova to integrate AI into apps and products, for general chat purposes, and for diverse use cases. Regularly update your models with the latest open-source tech, ensuring you're never locked into any one solution. Protect your data with industry-leading privacy measures.
  • 25
    Sky-T1
    Sky-T1-32B-Preview is an open source reasoning model developed by the NovaSky team at UC Berkeley's Sky Computing Lab. It matches the performance of proprietary models like o1-preview on reasoning and coding benchmarks, yet was trained for under $450, showcasing the feasibility of cost-effective, high-level reasoning capabilities. The model was fine-tuned from Qwen2.5-32B-Instruct using a curated dataset of 17,000 examples across diverse domains, including math and coding. The training was completed in 19 hours on eight H100 GPUs with DeepSpeed Zero-3 offloading. All aspects of the project, including data, code, and model weights, are fully open-source, empowering the academic and open-source communities to replicate and enhance the model's performance.
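Because the weights are fully open, the model can be loaded with Hugging Face Transformers as sketched below; the repo ID "NovaSky-AI/Sky-T1-32B-Preview" is an assumption, and a 32B model requires multiple GPUs or aggressive quantization.

```python
# Hedged sketch: loading the open-weight checkpoint with Hugging Face Transformers.
# Assumption: the repo ID "NovaSky-AI/Sky-T1-32B-Preview"; device_map="auto"
# shards the 32B model across whatever GPUs are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```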
  • 26
    Mistral Large 3
    Mistral Large 3 is a next-generation, open multimodal AI model built with a powerful sparse Mixture-of-Experts architecture featuring 41B active parameters out of 675B total. Designed from scratch on NVIDIA H200 GPUs, it delivers frontier-level reasoning, multilingual performance, and advanced image understanding while remaining fully open-weight under the Apache 2.0 license. The model achieves top-tier results on modern instruction benchmarks, positioning it among the strongest permissively licensed foundation models available today. With native support across vLLM, TensorRT-LLM, and major cloud providers, Mistral Large 3 offers exceptional accessibility and performance efficiency. Its design enables enterprise-grade customization, letting teams fine-tune or adapt the model for domain-specific workflows and proprietary applications. Mistral Large 3 represents a major advancement in open AI, offering frontier intelligence without sacrificing transparency or control.
  • 27
    Claude Sonnet 4
    Claude Sonnet 4, the latest evolution of Anthropic’s language models, offers a significant upgrade in coding, reasoning, and performance. Designed for diverse use cases, Sonnet 4 builds upon the success of its predecessor, Claude Sonnet 3.7, delivering more precise responses and better task execution. With a state-of-the-art 72.7% performance on the SWE-bench, it stands out in agentic scenarios, offering enhanced steerability and clear reasoning capabilities. Whether handling software development, multi-feature app creation, or complex problem-solving, Claude Sonnet 4 ensures higher code quality, reduced errors, and a smoother development process.
    Starting Price: $3 / 1 million tokens (input)
  • 28
    Claude Sonnet 3.7
    Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. This innovative model allows users to toggle between quick, efficient responses and more thoughtful, reflective answers, making it ideal for complex problem-solving. By allowing Claude to self-reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding. With its ability to engage in deeper thought processes, Claude Sonnet 3.7 enhances tasks such as coding, natural language processing, and critical thinking applications. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.
  • 29
    Claude Haiku 4.5
    Anthropic has launched Claude Haiku 4.5, its latest small language model designed to deliver near-frontier performance at significantly lower cost. The model provides coding and reasoning quality similar to the company's mid-tier Sonnet 4, yet it runs at roughly one-third of the cost and more than twice the speed. In benchmarks cited by Anthropic, Haiku 4.5 meets or exceeds Sonnet 4's performance in key tasks such as code generation and multi-step "computer use" workflows. It is optimized for real-time, low-latency scenarios such as chat assistants, customer service agents, and pair-programming support. Haiku 4.5 is made available via the Claude API under the identifier "claude-haiku-4-5" and supports large-scale deployments where cost, responsiveness, and near-frontier intelligence matter. Claude Haiku 4.5 is available now on Claude Code and our apps. Its efficiency means you can accomplish more within your usage limits while maintaining premium model performance.
    Starting Price: $1 per million input tokens
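A minimal sketch of calling the model through the Anthropic Python SDK using the identifier quoted above ("claude-haiku-4-5"); the prompt and token limit are placeholders.

```python
# Minimal sketch: Claude Haiku 4.5 via the Anthropic Python SDK, using the
# identifier cited above ("claude-haiku-4-5"). Prompt and limits are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=512,
    messages=[{"role": "user", "content": "Draft a friendly reply to a customer asking about a delayed refund."}],
)
print(message.content[0].text)
```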
  • 30
    Claude Sonnet 3.5
    Claude Sonnet 3.5 sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude Sonnet 3.5 operates at twice the speed of Claude Opus 3. This performance boost, combined with cost-effective pricing, makes Claude Sonnet 3.5 ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows. Claude Sonnet 3.5 is now available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. It is also available via the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window.
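Using the prices quoted above ($3 per million input tokens, $15 per million output tokens), a quick back-of-the-envelope cost estimate for a single request looks like this (illustrative only):

```python
# Back-of-the-envelope request cost using the prices quoted above
# ($3 per million input tokens, $15 per million output tokens).
INPUT_PRICE_PER_MILLION = 3.00
OUTPUT_PRICE_PER_MILLION = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION

# Example: a 20,000-token document summarized into a 1,000-token answer.
print(f"${estimate_cost(20_000, 1_000):.4f}")  # -> $0.0750
```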
  • 31
    Gemini 3 Flash
    Gemini 3 Flash is Google’s latest AI model built to deliver frontier intelligence with exceptional speed and efficiency. It combines Pro-level reasoning with Flash-level latency, making advanced AI more accessible and affordable. The model excels in complex reasoning, multimodal understanding, and agentic workflows while using fewer tokens for everyday tasks. Gemini 3 Flash is designed to scale across consumer apps, developer tools, and enterprise platforms. It supports rapid coding, data analysis, video understanding, and interactive application development. By balancing performance, cost, and speed, Gemini 3 Flash redefines what fast AI can achieve.
  • 32
    Claude Opus 4
    Claude Opus 4 represents a revolutionary leap in AI model performance, setting a new standard for coding and reasoning capabilities. As the world’s best coding model, Opus 4 excels in handling long-running, complex tasks, and agent workflows. With sustained performance that can run for hours, it outperforms all prior models—including the Sonnet series—making it ideal for demanding coding projects, research, and AI agent applications. It’s the model of choice for organizations looking to enhance their software engineering, streamline workflows, and improve productivity with remarkable precision. Now available on Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI, Opus 4 offers unparalleled support for coding, debugging, and collaborative agent tasks.
    Starting Price: $15 / 1 million tokens (input)
  • 33
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
  • 34
    ERNIE X1 Turbo
    ERNIE X1 Turbo, developed by Baidu, is an advanced deep reasoning AI model introduced at the Baidu Create 2025 conference. Designed to handle complex multi-step tasks such as problem-solving, literary creation, and code generation, this model outperforms competitors like DeepSeek R1 in terms of reasoning abilities. With a focus on multimodal capabilities, ERNIE X1 Turbo supports text, audio, and image processing, making it an incredibly versatile AI solution. Despite its cutting-edge technology, it is priced at just a fraction of the cost of other top-tier models, offering a high-value solution for businesses and developers.
    Starting Price: $0.14 per 1M tokens
  • 35
    Claude Haiku 3.5
    Claude Haiku 3.5 is the next generation of our fastest model, delivering advanced coding, tool use, and reasoning at an accessible price. For a similar speed to Claude Haiku 3, Claude Haiku 3.5 improves across every skill set and surpasses Claude Opus 3, the largest model in our previous generation, on many intelligence benchmarks. Claude Haiku 3.5 is available across our first-party API, Amazon Bedrock, and Google Cloud's Vertex AI, initially as a text-only model, with image input to follow.
  • 36
    GLM-4.6
    GLM-4.6 is Zhipu AI's successor to GLM-4.5, with stronger reasoning, coding, and agentic capabilities: it demonstrates clear improvements in inferential performance, supports tool use during inference, and integrates more effectively into agent frameworks. In benchmark tests spanning reasoning, coding, and agents, GLM-4.6 outperforms GLM-4.5 and shows competitive strength against models such as DeepSeek-V3.2-Exp and Claude Sonnet 4, though it still trails Claude Sonnet 4.5 in pure coding performance. In real-world tests using an extended "CC-Bench" suite across front-end development, tool building, data analysis, and algorithmic tasks, GLM-4.6 beats GLM-4.5 and approaches parity with Claude Sonnet 4, winning ~48.6% of head-to-head comparisons while also achieving ~15% better token efficiency. GLM-4.6 is available via the Z.ai API, and developers can integrate it as an LLM backend or agent core using the platform's API.
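For integration as an LLM backend, the hedged sketch below goes through an OpenAI-compatible client; the Z.ai base URL and the model name "glm-4.6" are assumptions to confirm in the Z.ai API documentation.

```python
# Hedged sketch: GLM-4.6 as an LLM backend through an OpenAI-compatible client.
# Assumptions: the Z.ai base_url and the model name "glm-4.6"; confirm both
# in the Z.ai API documentation.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["ZAI_API_KEY"], base_url="https://api.z.ai/api/paas/v4")

resp = client.chat.completions.create(
    model="glm-4.6",
    messages=[{"role": "user", "content": "Plan the steps to build a CSV-to-chart web tool."}],
)
print(resp.choices[0].message.content)
```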
  • 37
    QwQ-Max-Preview
    QwQ-Max-Preview is an advanced AI model built on the Qwen2.5-Max architecture, designed to excel in deep reasoning, mathematical problem-solving, coding, and agent-related tasks. This preview version offers a sneak peek at its capabilities, which include improved performance in a wide range of general-domain tasks and the ability to handle complex workflows. QwQ-Max-Preview is slated for an official open-source release under the Apache 2.0 license, offering further advancements and refinements in its full version. It also paves the way for a more accessible AI ecosystem, with the upcoming launch of the Qwen Chat app and smaller variants of the model like QwQ-32B, aimed at developers seeking local deployment options.
  • 38
    Amazon Nova Sonic
    Amazon Nova Sonic is a state-of-the-art speech-to-speech model that delivers real-time, human-like voice conversations with industry-leading price performance. It unifies speech understanding and generation into a single model, enabling developers to create natural, expressive conversational AI experiences with low latency. Nova Sonic adapts its responses based on the prosody of input speech, such as pace and timbre, resulting in more natural dialogue. It supports function calling and agentic workflows to interact with external services and APIs, including knowledge grounding with enterprise data using Retrieval-Augmented Generation (RAG). It provides robust speech understanding for American and British English across various speaking styles and acoustic conditions, with additional languages coming soon. Nova Sonic handles user interruptions gracefully without dropping conversational context and is robust to background noise.
  • 39
    Gemini 2.0
    Gemini 2.0 is an advanced AI-powered model developed by Google, designed to offer groundbreaking capabilities in natural language understanding, reasoning, and multimodal interactions. Building on the success of its predecessor, Gemini 2.0 integrates large language processing with enhanced problem-solving and decision-making abilities, enabling it to interpret and generate human-like responses with greater accuracy and nuance. Unlike traditional AI models, Gemini 2.0 is trained to handle multiple data types simultaneously, including text, images, and code, making it a versatile tool for research, business, education, and creative industries. Its core improvements include better contextual understanding, reduced bias, and a more efficient architecture that ensures faster, more reliable outputs. Gemini 2.0 is positioned as a major step forward in the evolution of AI, pushing the boundaries of human-computer interaction.
  • 40
    Amazon Nova Forge
    Amazon Nova Forge is a groundbreaking service that enables organizations to build their own frontier models by leveraging early Nova checkpoints and proprietary data. It provides complete flexibility across the full training lifecycle, including pre-training, mid-training, supervised fine-tuning, and reinforcement learning. With access to Nova-curated datasets and responsible AI tooling, customers can create powerful and safer custom models tailored to their domain. Nova Forge allows teams to mix their own datasets at the peak learning stage to maximize accuracy while preventing catastrophic forgetting. Companies across industries—from Reddit to Sony—use Nova Forge to consolidate ML workflows, accelerate innovation, and outperform specialized models. Hosted securely on AWS, it offers the most cost-effective, streamlined path to building next-generation AI systems.
  • 41
    DeepSeek-V3.2-Speciale
    DeepSeek-V3.2-Speciale is a high-compute variant of the DeepSeek-V3.2 model, created specifically for deep reasoning and advanced problem-solving tasks. It builds on DeepSeek Sparse Attention (DSA), a custom long-context attention mechanism that reduces computational overhead while preserving high performance. Through a large-scale reinforcement learning framework and extensive post-training compute, the Speciale variant surpasses GPT-5 on reasoning benchmarks and matches the capabilities of Gemini-3.0-Pro. The model achieved gold-medal performance in the International Mathematical Olympiad (IMO) 2025 and International Olympiad in Informatics (IOI) 2025. DeepSeek-V3.2-Speciale does not support tool-calling, making it purely optimized for uninterrupted reasoning and analytical accuracy. Released under the MIT license, it provides researchers and developers an open, state-of-the-art model focused entirely on high-precision reasoning.
  • 42
    Gemini 2.5 Pro Deep Think
    Gemini 2.5 Pro Deep Think is a cutting-edge AI model with enhanced reasoning capabilities, offering improved performance and accuracy. This advanced version of the Gemini 2.5 series incorporates a feature called "Deep Think," allowing the model to reason through its thoughts before responding. It excels in coding, handling complex prompts, and multimodal tasks, offering smarter, more efficient execution. Whether for coding tasks, visual reasoning, or handling long-context input, Gemini 2.5 Pro Deep Think provides unparalleled performance. It also introduces features like native audio for more expressive conversations and optimizations that make it faster and more accurate than previous versions.
  • 43
    OpenAI o1-pro
    OpenAI o1-pro is the enhanced version of OpenAI's o1 model, designed to tackle more complex and demanding tasks with greater reliability. It features significant performance improvements over its predecessor, the o1 preview, with a notable 34% reduction in major errors and the ability to think 50% faster. This model excels in areas like math, physics, and coding, where it can provide detailed and accurate solutions. Additionally, the o1-pro mode can process multimodal inputs, including text and images, and is particularly adept at reasoning tasks that require deep thought and problem-solving. It's accessible through a ChatGPT Pro subscription, offering unlimited usage and enhanced capabilities for users needing advanced AI assistance.
  • 44
    Mistral Medium 3.1
    Mistral Medium 3.1 is the latest frontier-class multimodal foundation model released in August 2025, designed to deliver advanced reasoning, coding, and multimodal capabilities while dramatically reducing deployment complexity and costs. It builds on the highly efficient architecture of Mistral Medium 3, renowned for offering state-of-the-art performance at up to 8-times lower cost than leading large models, enhancing tone consistency, responsiveness, and accuracy across diverse tasks and modalities. The model supports deployment across hybrid environments, on-premises systems, and virtual private clouds, and it achieves competitive performance relative to high-end models such as Claude Sonnet 3.7, Llama 4 Maverick, and Cohere Command A. Ideal for professional and enterprise use cases, Mistral Medium 3.1 excels in coding, STEM reasoning, language understanding, and multimodal comprehension, while maintaining broad compatibility with custom workflows and infrastructure.
  • 45
    Gemini 2.5 Flash-Lite
    Gemini 2.5 Flash-Lite is the most cost-efficient member of Gemini 2.5, Google DeepMind's latest generation AI model family, designed to deliver advanced reasoning and native multimodality with a long context window. The family improves performance and accuracy by reasoning through its thoughts before responding, and it offers versions tailored for complex coding tasks, fast everyday performance, and cost-efficient high-volume workloads, with Flash-Lite targeting the latter. Gemini 2.5 supports multiple data types including text, images, video, audio, and PDFs, enabling versatile AI applications. It features adaptive thinking budgets and fine-grained control for developers to balance cost and output quality. Available via Google AI Studio and the Gemini API, Gemini 2.5 powers next-generation AI experiences.
  • 46
    Gemini 3 Deep Think
    The most advanced model from Google DeepMind, Gemini 3, sets a new bar for model intelligence by delivering state-of-the-art reasoning and multimodal understanding across text, image, and video. It surpasses its predecessor on key AI benchmarks and excels at deeper problems such as scientific reasoning, complex coding, spatial logic, and visual and video understanding. The new "Deep Think" mode pushes the boundaries even further, offering enhanced reasoning for very challenging tasks and outperforming Gemini 3 Pro on benchmarks like Humanity's Last Exam and ARC-AGI. Gemini 3 is now available across Google's ecosystem, enabling users to learn, build, and plan at new levels of sophistication. With context windows up to one million tokens, more granular media-processing options, and specialized configurations for tool use, the model brings better precision, depth, and flexibility for real-world workflows.
  • 47
    Olmo 3
    Olmo 3 is a fully open model family spanning 7-billion and 32-billion-parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also exposure of the entire model flow, including raw training data, intermediate checkpoints, training code, long-context support (a 65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimization, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning.
  • 48
    Phi-4
    Phi-4 is a 14B-parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what's possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face. Phi-4 outperforms comparable and larger models on math-related reasoning thanks to advancements throughout the process, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size versus quality.
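Once the checkpoint is on Hugging Face, a run can look like the sketch below; the repo ID "microsoft/phi-4" and a recent Transformers release that accepts chat-style message lists are assumptions.

```python
# Sketch: running Phi-4 from Hugging Face with the Transformers pipeline.
# Assumptions: the repo ID "microsoft/phi-4" and a recent Transformers
# release that accepts chat-style message lists in text-generation pipelines.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/phi-4",
                     device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "A train covers 180 km in 2.5 hours. What is its average speed?"}]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```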
  • 49
    Nova-3
    ​Deepgram's Nova-3 is an advanced speech-to-text model that sets new standards in accuracy and performance for complex, real-world scenarios. It offers real-time multilingual transcription, enabling seamless processing of conversations spanning multiple languages, a critical advancement for global customer support and emergency response services. Nova-3 also provides self-serve customization through Keyterm Prompting, allowing users to instantly adapt up to 100 domain-specific terms without the need for model retraining. This feature enhances the recognition of specialized vocabulary and technical terminology, making it highly adaptable to various industries. Additionally, Nova-3 delivers industry-leading performance with a 54.3% reduction in word error rate for streaming and 47.4% for batch processing compared to competitors. These advancements make Nova-3 a versatile solution for organizations seeking to enhance their speech recognition capabilities across diverse applications.
    Starting Price: $4,000 per year
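A hedged sketch of pre-recorded transcription with Nova-3 and the Keyterm Prompting feature described above, issued against Deepgram's REST API; the "keyterm" query parameter and the response path are assumptions to verify against Deepgram's documentation.

```python
# Hedged sketch: pre-recorded transcription with Nova-3 plus Keyterm Prompting
# via Deepgram's REST API. The "keyterm" query parameter and the response path
# below are assumptions to verify against Deepgram's documentation.
import os
import requests

url = "https://api.deepgram.com/v1/listen"
params = {"model": "nova-3", "keyterm": ["acetaminophen", "tachycardia"]}  # domain-specific terms
headers = {
    "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
    "Content-Type": "audio/wav",
}

with open("call_recording.wav", "rb") as audio:
    resp = requests.post(url, params=params, headers=headers, data=audio)

print(resp.json()["results"]["channels"][0]["alternatives"][0]["transcript"])
```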
  • 50
    DeepScaleR
    DeepScaleR is the Agentica Project's 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases its context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets like AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3 percentage point boost over the base model, and surpasses the performance of the proprietary o1-preview model despite its much smaller size. It also posts strong results on a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.
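At 1.5B parameters the open-source model runs on a single GPU; a loading sketch with Hugging Face Transformers follows, where the repo ID "agentica-org/DeepScaleR-1.5B-Preview" is an assumption.

```python
# Hedged sketch: loading DeepScaleR with Hugging Face Transformers.
# Assumption: the repo ID "agentica-org/DeepScaleR-1.5B-Preview"; at 1.5B
# parameters the model fits on a single consumer GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentica-org/DeepScaleR-1.5B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

question = "Find the remainder when 7^2024 is divided by 100."
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```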