Compare the Top AI Models in the USA as of February 2026 - Page 17

  • 1
    Veo 3.1 Fast
    Veo 3.1 Fast is Google’s upgraded video-generation model, released in paid preview within the Gemini API alongside Veo 3.1. It enables developers to create cinematic, high-quality videos from text prompts or reference images at a much faster processing speed. The model introduces native audio generation with natural dialogue, ambient sound, and synchronized effects for lifelike storytelling. Veo 3.1 Fast also supports advanced controls such as “Ingredients to Video,” allowing up to three reference images, “Scene Extension” for longer sequences, and “First and Last Frame” transitions for seamless shot continuity. Built for efficiency and realism, it delivers improved image-to-video quality and character consistency across multiple scenes. With direct integration into Google AI Studio and Vertex AI, Veo 3.1 Fast empowers developers to bring creative video concepts to life in record time.
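    As a rough illustration, the sketch below shows how a fast Veo variant might be called through the google-genai Python SDK; the model ID and the exact response fields are assumptions to verify against the current Gemini API documentation.

      # Rough sketch only: text-to-video with a fast Veo variant via the google-genai SDK.
      # The model ID below is an assumption; generation runs as a long-running operation.
      import time
      from google import genai

      client = genai.Client()  # reads GEMINI_API_KEY from the environment

      operation = client.models.generate_videos(
          model="veo-3.1-fast-generate-preview",  # hypothetical model ID
          prompt="A slow dolly shot through a rain-soaked neon alley with ambient city sound",
      )

      # Poll the long-running operation until the video is ready.
      while not operation.done:
          time.sleep(10)
          operation = client.operations.get(operation)

      video = operation.response.generated_videos[0]
      client.files.download(file=video.video)
      video.video.save("alley.mp4")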
  • 2
    Microsoft Foundry Models
    Microsoft Foundry Models is a unified model catalog that gives enterprises access to more than 11,000 AI models from Microsoft, OpenAI, Anthropic, Mistral AI, Meta, Cohere, DeepSeek, xAI, and others. It allows teams to explore, test, and deploy models quickly using a task-centric discovery experience and integrated playground. Organizations can fine-tune models with ready-to-use pipelines and evaluate performance using their own datasets for more accurate benchmarking. Foundry Models provides secure, scalable deployment options with serverless and managed compute choices tailored to enterprise needs. With built-in governance, compliance, and Azure’s global security framework, businesses can safely operationalize AI across mission-critical workflows. The platform accelerates innovation by enabling developers to build, iterate, and scale AI solutions from one centralized environment.
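    As a hedged example, the sketch below shows one common way to call a deployed catalog model through the azure-ai-inference Python package; the endpoint, key, and deployment name are placeholders, not values from this listing.

      # Minimal sketch: calling a model deployed from the catalog behind an Azure AI
      # inference endpoint. Endpoint, key, and model/deployment name are placeholders.
      import os
      from azure.ai.inference import ChatCompletionsClient
      from azure.ai.inference.models import SystemMessage, UserMessage
      from azure.core.credentials import AzureKeyCredential

      client = ChatCompletionsClient(
          endpoint=os.environ["AZURE_AI_ENDPOINT"],
          credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
      )

      response = client.complete(
          messages=[
              SystemMessage(content="You are a concise assistant."),
              UserMessage(content="Summarize our Q3 support tickets in three bullets."),
          ],
          model="my-deployed-model",  # deployment name chosen in the Foundry portal
      )
      print(response.choices[0].message.content)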
  • 3
    Grok 4.1
    Grok 4.1 is an advanced AI model developed by Elon Musk’s xAI, designed to push the limits of reasoning and natural language understanding. Built on the powerful Colossus supercomputer, it processes multimodal inputs including text and images, with upcoming support for video. The model delivers exceptional accuracy in scientific, technical, and linguistic tasks. Its architecture enables complex reasoning and nuanced response generation that rivals the best AI systems in the world. Enhanced moderation ensures more responsible and unbiased outputs than earlier versions. Grok 4.1 is a breakthrough in creating AI that can think, interpret, and respond more like a human.
  • 4
    FLUX.2
    Black Forest Labs
    FLUX.2 is built for real production workflows, delivering high-quality visuals while maintaining character, product, and style consistency across multiple reference images. It handles structured prompts, brand-safe layouts, complex text rendering, and detailed logos with precision. The model supports multi-reference inputs, editing at up to 4 megapixels, and generates both photorealistic scenes and highly stylized compositions. With a focus on reliability, FLUX.2 processes real-world creative tasks—such as infographics, product shots, and UI mockups—with exceptional stability. It represents Black Forest Labs’ open-core approach, pairing frontier-level capability with open-weight models that invite experimentation. Across its variants, FLUX.2 provides flexible options for studios, developers, and researchers who need scalable, customizable visual intelligence.
  • 5
    Amazon Nova 2 Omni
    Nova 2 Omni is a fully unified multimodal reasoning and generation model capable of understanding and producing content across text, images, video, and speech. It can take in extremely large inputs, ranging from hundreds of thousands of words to hours of audio and lengthy videos, while maintaining coherent analysis across formats. This allows it to digest full product catalogs, long-form documents, customer testimonials, and complete video libraries all at the same time, giving teams a single system that replaces the need for multiple specialized models. With its ability to handle mixed media in one workflow, Nova 2 Omni opens new possibilities for creative and operational automation. A marketing team, for example, can feed in product specs, brand guidelines, reference images, and video content and instantly generate an entire campaign, including messaging, social content, and visuals, in one pass.
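    For illustration, the sketch below sends mixed text and image input through the Amazon Bedrock Converse API with boto3; the Nova 2 Omni model ID shown is an assumption and should be taken from the Bedrock console for your region.

      # Minimal sketch: mixed text + image input through the Bedrock Converse API.
      import boto3

      bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

      with open("product.png", "rb") as f:
          image_bytes = f.read()

      response = bedrock.converse(
          modelId="us.amazon.nova-2-omni-v1:0",  # hypothetical model ID
          messages=[{
              "role": "user",
              "content": [
                  {"text": "Draft three ad headlines that match this product photo."},
                  {"text": "Brand guidelines: friendly, plain language, no superlatives."},
                  {"image": {"format": "png", "source": {"bytes": image_bytes}}},
              ],
          }],
      )
      print(response["output"]["message"]["content"][0]["text"])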
  • 6
    Amazon Nova 2 Sonic
    Nova 2 Sonic is Amazon’s real-time speech-to-speech model designed to deliver natural, flowing voice interactions without relying on separate systems for text and audio. It combines speech recognition, speech generation, and text processing in a single model, enabling smooth, human-like conversations that can shift effortlessly between voice and text. With expanded multilingual support and expressive voice options, it produces responses that sound more lifelike and contextually aware. Its one-million-token context window allows for long, continuous interactions without losing track of prior details. It supports asynchronous task handling, meaning users can continue speaking, change topics, or ask follow-up questions while background tasks, such as searching for information or completing a request, continue uninterrupted. This makes voice experiences feel more fluid and less bound by traditional turn-based dialog constraints.
  • 7
    Nano Banana 2 Flash
    Gemini Nano Banana 2 Flash is a cost-efficient AI model now available within Google’s Gemini ecosystem. It delivers performance comparable to Nano Banana 2 Pro while operating at significantly lower costs. Internal naming previously referred to the Pro model as “Ketchup,” while the Flash variant was identified as “Mayo.” Sample outputs show strong parity between Flash and Pro in real-world tasks. This allows users to access high-quality AI capabilities without premium pricing. Gemini Nano Banana 2 Flash is optimized for large-scale and high-volume deployments. The model strengthens Gemini’s balance between performance, efficiency, and accessibility.
  • 8
    Kling 2.5
    Kuaishou Technology
    Kling 2.5 is an AI video generation model designed to create high-quality visuals from text or image inputs. It focuses on producing detailed, cinematic video output with smooth motion and strong visual coherence. Kling 2.5 generates silent visuals, allowing creators to add voiceovers, sound effects, and music separately for full creative control. The model supports both text-to-video and image-to-video workflows for flexible content creation. Kling 2.5 excels at scene composition, camera movement, and visual storytelling. It enables creators to bring ideas to life quickly without complex editing tools. Kling 2.5 serves as a powerful foundation for visually rich AI-generated video content.
  • 9
    Hunyuan Motion 1.0
    Tencent Hunyuan
    Hunyuan Motion (also known as HY-Motion 1.0) is a state-of-the-art text-to-3D motion generation AI model that uses a billion-parameter Diffusion Transformer with flow matching to turn natural language prompts into high-quality, skeleton-based 3D character animation in seconds. It understands descriptive text in English and Chinese and produces smooth, physically plausible motion sequences. These sequences integrate seamlessly into standard 3D animation pipelines, exporting to skeleton formats such as SMPL or SMPL-H and to common file formats such as FBX or BVH for use in Blender, Unity, Unreal Engine, Maya, and other tools. The model’s three-stage training pipeline (large-scale pre-training on thousands of hours of motion data, fine-tuning on curated sequences, and reinforcement learning from human feedback) enhances its ability to follow complex instructions and generate realistic, temporally coherent motion.
  • 10
    Molmo 2
    Molmo 2 is a new suite of state-of-the-art open vision-language models with fully open weights, training data, and training code. It extends the original Molmo family’s grounded image understanding to video and multi-image inputs, enabling advanced video understanding, pointing, tracking, dense captioning, and question answering, all with strong spatial and temporal reasoning across frames. Molmo 2 includes three variants: an 8 billion-parameter model optimized for overall video grounding and QA, a 4 billion-parameter version designed for efficiency, and a 7 billion-parameter Olmo-backed model offering a fully open end-to-end architecture including the underlying language model. These models outperform earlier Molmo versions on core benchmarks and set new open-model high-water marks for image and video understanding tasks, often competing with substantially larger proprietary systems while training on a fraction of the data used by comparable closed models.
  • 11
    Claude Opus 4.6
    Claude Opus 4.6 is Anthropic’s flagship AI model designed to push the boundaries of reasoning, coding, and real-world problem solving. It delivers significant performance gains over previous versions and competing models across key benchmarks. Opus 4.6 excels on SWE-bench, multilingual coding evaluations, and advanced agent-based tests. The model is built to support complex, long-running agentic workflows with greater efficiency. Enhanced safety measures improve resistance to prompt injection and strengthen alignment on sensitive tasks. Developers benefit from new API controls such as effort parameters, context compaction, and advanced tool usage. These improvements make Opus 4.6 more powerful, reliable, and versatile across use cases.
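    As a minimal sketch, the example below calls the model through the anthropic Python SDK; the model ID is a placeholder, and the effort and compaction controls mentioned above are omitted because their exact parameter names should be checked in Anthropic’s API reference.

      # Minimal sketch with the anthropic Python SDK; model ID is hypothetical.
      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      message = client.messages.create(
          model="claude-opus-4-6",  # hypothetical model ID
          max_tokens=2048,
          messages=[{
              "role": "user",
              "content": "Plan a three-step refactor of a 2,000-line Python module into packages.",
          }],
      )
      print(message.content[0].text)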
  • 12
    Llama
    Meta
    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model in keeping with our approach to Responsible AI practices.
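    A minimal sketch of loading a Llama checkpoint with Hugging Face Transformers; the checkpoint name is only an example of a later, openly distributed Llama release (the original LLaMA weights were shared with researchers on request), and access is gated behind Meta’s license.

      # Minimal sketch: load a gated Llama checkpoint and generate text.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "meta-llama/Llama-2-7b-hf"  # example gated checkpoint
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      inputs = tokenizer("The key idea behind foundation models is", return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=60)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))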
  • 13
    OPT
    Meta
    Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
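    The smaller OPT checkpoints are openly hosted on the Hugging Face Hub; a minimal usage sketch with Transformers:

      # Minimal sketch: load the 125M-parameter OPT checkpoint and sample a continuation.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
      model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

      inputs = tokenizer("Large language models are difficult to replicate because", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))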
  • 14
    T5
    Google
    With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
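    A minimal sketch of the text-to-text framing with the public t5-small checkpoint, where the task is selected by a text prefix and the answer always comes back as a string:

      # Minimal sketch: one model, several tasks, all expressed as text in and text out.
      from transformers import T5ForConditionalGeneration, T5Tokenizer

      tokenizer = T5Tokenizer.from_pretrained("t5-small")
      model = T5ForConditionalGeneration.from_pretrained("t5-small")

      for prompt in [
          "translate English to German: The house is wonderful.",
          "summarize: studies have shown that owning a dog is good for you because ...",
          "cola sentence: The course is jumping well.",  # grammatical-acceptability task
      ]:
          ids = tokenizer(prompt, return_tensors="pt").input_ids
          out = model.generate(ids, max_new_tokens=40)
          print(tokenizer.decode(out[0], skip_special_tokens=True))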
  • 15
    PanGu-α
    Huawei
    PanGu-α is developed under the MindSpore framework and trained on a cluster of 2048 Ascend 910 AI processors. The training parallelism strategy is implemented with MindSpore Auto-parallel, which composes five parallelism dimensions (data parallelism, op-level model parallelism, pipeline model parallelism, optimizer model parallelism, and rematerialization) to scale the training task to 2048 processors efficiently. To enhance the generalization ability of PanGu-α, we collect 1.1TB of high-quality Chinese data from a wide range of domains to pretrain the model. We empirically test the generation ability of PanGu-α in various scenarios including text summarization, question answering, and dialogue generation. Moreover, we investigate the effect of model scale on few-shot performance across a broad range of Chinese NLP tasks. The experimental results demonstrate the superior capabilities of PanGu-α in performing various tasks under few-shot or zero-shot settings.
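    As a rough, version-dependent sketch, MindSpore’s auto-parallel context is the mechanism the description refers to; the values below are placeholders and not the settings used for the actual 2048-processor run, and exact API names may differ across MindSpore releases.

      # Rough configuration sketch of MindSpore auto-parallel (values are placeholders).
      from mindspore import context
      from mindspore.context import ParallelMode

      # Target Ascend devices in graph mode, as described for PanGu-α.
      context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

      context.set_auto_parallel_context(
          parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,  # op-level model parallelism
          device_num=2048,
          pipeline_stages=16,                             # pipeline model parallelism
          enable_parallel_optimizer=True,                 # optimizer model parallelism
          full_batch=True,
      )
      # Data parallelism comes from sharding the batch across devices; rematerialization
      # (recompute) is enabled per-cell on the network and is not shown here.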
  • 16
    Megatron-Turing
    Megatron-Turing Natural Language Generation (MT-NLG) is the largest and most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based model improves upon the prior state of the art in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. To accelerate research on the largest English language model to date and enable customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service to the MT-NLG model.
  • 17
    Galactica
    Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. Galactica is a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%.
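    A minimal usage sketch with the openly hosted facebook/galactica-1.3b checkpoint and Transformers; the prompt is illustrative only.

      # Minimal sketch: Galactica loads as a standard causal language model.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
      model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b")

      # Galactica was trained on scientific text, so prompts can use technical notation directly.
      inputs = tokenizer("The Schwarzschild radius is given by", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=40)
      print(tokenizer.decode(outputs[0]))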
  • 18
    PanGu-Σ
    Huawei
    Significant advancements in the field of natural language processing, understanding, and generation have been achieved through the expansion of large language models. This study introduces a system which utilizes Ascend 910 AI processors and the MindSpore framework to train a language model with over a trillion parameters, specifically 1.085T, named PanGu-Σ. This model, which builds upon the foundation laid by PanGu-α, takes the traditionally dense Transformer model and transforms it into a sparse one using a concept known as Random Routed Experts (RRE). The model was efficiently trained on a dataset of 329 billion tokens using a technique called Expert Computation and Storage Separation (ECSS), leading to a 6.3-fold increase in training throughput via heterogeneous computing. Experimentation indicates that PanGu-Σ sets a new standard in zero-shot learning for various downstream Chinese NLP tasks.
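    The Random Routed Experts mechanism itself is not reproduced here; the toy sketch below only illustrates the general idea behind sparse expert layers (each token is dispatched to one expert feed-forward network so that only a fraction of the parameters are active per token), in plain PyTorch.

      # Toy illustration of sparse expert routing; NOT Huawei's RRE implementation.
      import torch
      import torch.nn as nn

      class TinyMoE(nn.Module):
          def __init__(self, d_model=64, n_experts=4):
              super().__init__()
              self.router = nn.Linear(d_model, n_experts)
              self.experts = nn.ModuleList([
                  nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
                  for _ in range(n_experts)
              ])

          def forward(self, x):                 # x: (tokens, d_model)
              scores = self.router(x)           # (tokens, n_experts)
              top1 = scores.argmax(dim=-1)      # each token is routed to one expert
              out = torch.zeros_like(x)
              for i, expert in enumerate(self.experts):
                  mask = top1 == i
                  if mask.any():                # only the selected tokens touch expert i
                      out[mask] = expert(x[mask])
              return out

      moe = TinyMoE()
      tokens = torch.randn(10, 64)
      print(moe(tokens).shape)                  # torch.Size([10, 64])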
  • 19
    OpenELM
    Apple
    OpenELM is an open-source language model family developed by Apple. It uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy compared to existing open language models of similar size. OpenELM is trained on publicly available datasets and achieves state-of-the-art performance for its size.
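    A minimal usage sketch assuming the Hugging Face checkpoints; OpenELM ships without its own tokenizer, so a Llama 2 tokenizer is commonly paired with it, and both names below should be verified for your setup.

      # Minimal sketch: OpenELM loads with trust_remote_code; tokenizer is borrowed.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
      tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated; used only for its tokenizer

      input_ids = tokenizer("Layer-wise scaling allocates parameters", return_tensors="pt").input_ids
      output = model.generate(input_ids, max_new_tokens=40)
      print(tokenizer.decode(output[0], skip_special_tokens=True))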
  • 20
    LTM-2-mini
    Magic AI
    LTM-2-mini is a 100M token context model. 100M tokens equals ~10 million lines of code or ~750 novels. For each decoded token, LTM-2-mini’s sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for a 100M token context window. The contrast in memory requirements is even larger: running Llama 3.1 405B with a 100M token context requires 638 H100s per user just to store a single 100M token KV cache. In contrast, LTM requires a small fraction of a single H100’s HBM per user for the same context.
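    A back-of-the-envelope check of the memory claim, using commonly reported Llama 3.1 405B shape parameters as assumptions (126 layers, 8 KV heads, head dimension 128, fp16 cache):

      # Rough arithmetic: size of a 100M-token KV cache under the assumptions above.
      layers, kv_heads, head_dim = 126, 8, 128
      bytes_per_value = 2                    # fp16
      tokens = 100_000_000                   # 100M-token context

      per_token = layers * 2 * kv_heads * head_dim * bytes_per_value  # keys + values
      total_bytes = per_token * tokens
      h100_hbm = 80 * 1024**3                # 80 GB of HBM per H100

      print(per_token)                       # ~0.5 MB of KV cache per decoded position
      print(total_bytes / 1e12)              # ~52 TB for the full 100M-token cache
      print(total_bytes / h100_hbm)          # ~600 H100s just to hold the cache, the same ballpark as the 638 quoted above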
  • 21
    OpenAI o3-mini-high
    The o3-mini-high model from OpenAI advances AI reasoning by refining deep problem-solving in coding, mathematics, and complex tasks. It features adaptive thinking time with adjustable reasoning modes (low, medium, high) to optimize performance based on task complexity. Outperforming the o1 series by 200 Elo points on Codeforces, it delivers high efficiency at a lower cost while maintaining speed and accuracy. As part of the o3 family, it pushes AI problem-solving boundaries while remaining accessible, offering a free tier and expanded limits for Plus subscribers.
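    A minimal sketch of selecting the reasoning mode through the OpenAI Python SDK’s reasoning_effort setting:

      # Minimal sketch: request the "high" reasoning mode for a hard problem.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="o3-mini",
          reasoning_effort="high",  # the "high" mode this entry refers to
          messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
      )
      print(response.choices[0].message.content)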
  • 22
    Grounded Language Model (GLM)
    Contextual AI introduces its Grounded Language Model (GLM), engineered specifically to minimize hallucinations and deliver highly accurate, source-based responses for retrieval-augmented generation (RAG) and agentic applications. The GLM prioritizes faithfulness to the provided data, ensuring responses are grounded in specific knowledge sources and backed by inline citations. With state-of-the-art performance on the FACTS groundedness benchmark, the GLM outperforms other foundation models in scenarios requiring high accuracy and reliability. The model is designed for enterprise use cases like customer service, finance, and engineering, where trustworthy and precise responses are critical to minimizing risks and improving decision-making.
  • 23
    ERNIE 4.5 Turbo
    ERNIE 4.5 Turbo, unveiled by Baidu at the 2025 Baidu Create conference, is a cutting-edge AI model designed to handle a variety of data inputs, including text, images, audio, and video. It offers powerful multimodal processing capabilities that enable it to perform complex tasks across industries such as customer support automation, content creation, and data analysis. With enhanced reasoning abilities and reduced hallucinations, ERNIE 4.5 Turbo ensures that businesses can achieve higher accuracy and reliability in AI-driven processes. Additionally, this model is priced at just 1% of GPT-4.5’s cost, making it a highly cost-effective alternative for enterprises looking for top-tier AI performance.
  • 24
    ERNIE X1.1
    ERNIE X1.1 is Baidu’s upgraded reasoning model that delivers major improvements over its predecessor. It achieves 34.8% higher factual accuracy, 12.5% better instruction following, and 9.6% stronger agentic capabilities compared to ERNIE X1. In benchmark testing, it surpasses DeepSeek R1-0528 and performs on par with GPT-5 and Gemini 2.5 Pro. Built on the foundation of ERNIE 4.5, it has been enhanced with extensive mid-training and post-training, including reinforcement learning. The model is available through ERNIE Bot, the Wenxiaoyan app, and Baidu’s Qianfan MaaS platform via API. These upgrades are designed to reduce hallucinations, improve reliability, and strengthen real-world AI task performance.
  • 25
    ERNIE 5.0
    ERNIE 5.0 is a next-generation conversational AI platform developed by Baidu, designed to deliver natural, human-like interactions across multiple domains. Built on Baidu’s Enhanced Representation through Knowledge Integration (ERNIE) framework, it fuses advanced natural language processing (NLP) with deep contextual understanding. The model supports multimodal capabilities, allowing it to process and generate text, images, and voice seamlessly. ERNIE 5.0’s refined contextual awareness enables it to handle complex conversations with greater precision and nuance. Its applications span customer service, content generation, and enterprise automation, enhancing both user engagement and productivity. With its robust architecture, ERNIE 5.0 represents a major step forward in Baidu’s pursuit of intelligent, knowledge-driven AI systems.
  • 26
    Grok 4.20
    Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering highly precise and context-aware responses. Its architecture supports deep reasoning and sophisticated problem-solving capabilities. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
  • 27
    Claude Sonnet 5
    Claude Sonnet 5 is Anthropic’s latest frontier AI model built for advanced coding, agentic workflows, and intensive computer-based tasks. It is designed to handle long-horizon work while maintaining strong safety and alignment guarantees. Sonnet 5 delivers state-of-the-art results on the SWE-bench Verified benchmark, demonstrating high-level software engineering capabilities. The model also leads performance on OSWorld, showcasing its strength in computer use and complex task execution. It can sustain focus for over 30 hours on multi-step, demanding workflows. Improvements in tool usage, memory handling, and context processing enable deeper reasoning and understanding across domains. These enhancements make Sonnet 5 especially effective for complex, real-world problem solving.
  • 28
    Chinchilla
    Google DeepMind
    Chinchilla is a large language model. Chinchilla uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
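    A quick arithmetic sketch of the “same compute, more data” trade, using the common approximation that training FLOPs scale as roughly 6 × parameters × tokens, together with the published token counts for Gopher (~300B) and Chinchilla (~1.4T):

      # Rough arithmetic: smaller model + more tokens at a comparable compute budget.
      def flops(n_params, n_tokens):
          return 6 * n_params * n_tokens     # common training-FLOPs approximation

      gopher = flops(280e9, 300e9)           # Gopher: 280B params, ~300B tokens
      chinchilla = flops(70e9, 1.4e12)       # Chinchilla: 70B params, ~1.4T tokens

      print(gopher, chinchilla)              # same order of magnitude of training compute
      print(1.4e12 / 300e9)                  # ~4-5x more tokens for a quarter of the parameters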