Compare the Top AI Models as of May 2026 - Page 9

AI Models
  • 1
    Hailuo 2.3

    Hailuo AI

    Hailuo 2.3 is a next-generation AI video generation model, available through the Hailuo AI platform, that lets users create short videos from text prompts or static images with smooth motion, natural expressions, and cinematic polish. It supports multi-modal workflows: describe a scene in plain language or upload a reference image, then generate vivid, fluid video in seconds, with complex motion such as dynamic dance choreography and lifelike facial micro-expressions handled with improved visual consistency over earlier models. Hailuo 2.3 enhances stylistic stability for anime and artistic video styles, delivers heightened realism in movement and expression, and maintains coherent lighting and motion throughout each generated clip. A Fast mode variant is optimized for speed and lower cost while still producing high-quality results, and the model is tuned to address common challenges in e-commerce and marketing content.
    Starting Price: Free
  • 2
    TranslateGemma
    TranslateGemma is a suite of open machine translation models from Google, built on the Gemma 3 foundation, that lets people and systems communicate across 55 languages with high-quality AI translation while maintaining efficiency and broad deployment flexibility. Available in 4B, 12B, and 27B parameter sizes, TranslateGemma distills advanced multilingual capabilities into compact models that can run on mobile devices, consumer laptops, local machines, or cloud hardware without sacrificing accuracy or performance; technical evaluations show the 12B version can outperform larger baseline models with lower compute demands. The models were developed through a specialized two-stage fine-tuning process combining high-quality human and synthetic translation data with reinforcement learning to optimize translation quality across diverse language families. A hedged usage sketch follows this entry.
    Starting Price: Free
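    Since TranslateGemma ships as open weights on the Gemma 3 base, a plain Hugging Face transformers call is a likely entry point. This is a minimal sketch, assuming a hypothetical repo id and plain-text prompt format; check the official model card for the real values.

    ```python
    # Hedged sketch: translating with an assumed TranslateGemma checkpoint.
    # The repo id "google/translategemma-12b-it" and the prompt format are
    # assumptions, not confirmed by this listing.
    from transformers import pipeline

    translator = pipeline(
        "text-generation",
        model="google/translategemma-12b-it",  # assumed repo id
        device_map="auto",
    )

    prompt = "Translate from English to German: The weather is lovely today."
    result = translator(prompt, max_new_tokens=128)
    print(result[0]["generated_text"])
    ```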
  • 3
    GLM-4.7-Flash
    GLM-4.7-Flash is a lightweight variant of GLM-4.7, Z.ai’s flagship large language model designed for advanced coding, reasoning, and multi-step task execution with strong agentic performance and a very large context window. It is an MoE-based model optimized for efficient inference, balancing performance against resource use so it can be deployed on local machines with moderate memory requirements while retaining deep reasoning, coding, and agentic abilities. GLM-4.7 itself advances over earlier generations with enhanced programming capabilities, stable multi-step reasoning, context preservation across turns, and improved tool-calling workflows, and it supports very long contexts (up to ~200K tokens) for complex tasks that span large inputs or outputs. The Flash variant retains many of these strengths in a smaller footprint, offering competitive benchmark performance in coding and reasoning for models of its size class; a hedged API sketch follows this entry.
    Starting Price: Free
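    Z.ai's GLM models are typically reachable through an OpenAI-compatible API, so a standard client works with a swapped base URL. A minimal sketch, assuming the endpoint and model id shown here; verify both against Z.ai's API documentation.

    ```python
    # Hedged sketch: calling GLM-4.7-Flash via an OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.z.ai/api/paas/v4",  # assumed endpoint
        api_key="YOUR_ZAI_API_KEY",
    )

    resp = client.chat.completions.create(
        model="glm-4.7-flash",  # assumed model id
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)
    ```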
  • 4
    LFM2.5

    Liquid AI

    Liquid AI’s LFM2.5 is the next generation of on-device AI foundation models, designed to deliver high-performance, efficient AI inference on edge devices such as phones, laptops, vehicles, IoT systems, and embedded hardware without relying on cloud compute. It extends the previous LFM2 architecture by significantly increasing the pretraining scale and reinforcement learning stages, yielding a family of hybrid models around 1.2 billion parameters that balance instruction following, reasoning, and multimodal capabilities for real-world agentic use cases. The LFM2.5 family includes Base (for fine-tuning and customization), Instruct (general-purpose instruction-tuned), Japanese-optimized, Vision-Language, and Audio-Language variants, all optimized for fast on-device inference under tight memory constraints and available as open-weight models deployable via frameworks like llama.cpp, MLX, vLLM, and ONNX; a minimal llama.cpp-based sketch follows this entry.
    Starting Price: Free
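    Since llama.cpp is one of the deployment paths named above, llama-cpp-python gives a compact on-device example. A minimal sketch, assuming a locally downloaded GGUF conversion of the Instruct variant; the filename is a placeholder.

    ```python
    # Hedged sketch: running an LFM2.5 Instruct GGUF on-device via llama.cpp
    # bindings. The model path is a placeholder, not a published filename.
    from llama_cpp import Llama

    llm = Llama(
        model_path="lfm2.5-1.2b-instruct.gguf",  # placeholder path
        n_ctx=4096,    # context window for this session
        n_threads=4,   # tune for the target edge device
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize what an IoT gateway does."}],
    )
    print(out["choices"][0]["message"]["content"])
    ```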
  • 5
    Qwen3-TTS

    Alibaba

    Qwen3-TTS is an open-source series of advanced text-to-speech models developed by the Qwen team at Alibaba Cloud under the Apache-2.0 license, offering stable, expressive, real-time speech generation with features such as voice cloning, voice design, and fine-grained control of prosody and acoustic attributes. The models support 10 major languages, including Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian, plus multiple dialectal voice profiles with adaptive control over tone, speaking rate, and emotional expression based on text semantics and instructions. Qwen3-TTS uses efficient tokenization and a dual-track architecture that enables ultra-low-latency streaming synthesis (first audio packet in ~97 ms), making it suitable for interactive and real-time use cases, and includes a range of models with different capabilities (e.g., rapid 3-second voice cloning, custom voice timbres, and instruction-based voice design). An illustrative latency-measurement sketch follows this entry.
    Starting Price: Free
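    The ~97 ms first-packet figure is a streaming property, so the relevant measurement is time-to-first-audio rather than total synthesis time. This sketch only illustrates that measurement; the `stream_tts` generator is hypothetical, standing in for whatever streaming interface the Qwen3-TTS repo actually exposes.

    ```python
    # Hedged sketch: timing the first audio packet from a streaming TTS call.
    # `stream_tts` is a hypothetical generator yielding audio byte chunks.
    import time

    def time_to_first_packet(stream_tts, text: str) -> float:
        """Return seconds elapsed until the first audio chunk arrives."""
        start = time.perf_counter()
        for _chunk in stream_tts(text):
            return time.perf_counter() - start
        raise RuntimeError("stream produced no audio")

    # With the reported ~97 ms latency, a value near 0.097 would be expected.
    ```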
  • 6
    Composer 1
    Composer is Cursor’s custom-built agentic AI model optimized specifically for software engineering tasks and designed to power fast, interactive coding assistance directly within the Cursor IDE, a VS Code-derived editor enhanced with intelligent automation. It is a mixture-of-experts model trained with reinforcement learning (RL) on real-world coding problems across large codebases, so it can produce high-speed, context-aware responses, from code edits and planning to answers that understand project structure, tools, and conventions, with generation speeds roughly four times faster than similar models in benchmarks. Composer is specialized for development workflows, leveraging long-context understanding, semantic search, and limited tool access (like file editing and terminal commands) so it can solve complex engineering requests with efficient and practical outputs.
    Starting Price: $20 per month
  • 7
    Ray3.14

    Luma AI

    Ray3.14 is Luma AI’s most advanced generative video model, designed to deliver high-quality, production-ready video with native 1080p output while significantly improving speed, cost, and stability. It generates video up to four times faster and at roughly one-third the cost of its predecessor, with better prompt adherence and improved motion consistency across frames. The model natively supports 1080p across core workflows such as text-to-video, image-to-video, and video-to-video, eliminating the need for post-upscaling and making outputs suitable for broadcast, streaming, and digital delivery. Ray3.14 enhances temporal motion fidelity and visual stability, especially for animation and complex scenes, addressing artifacts like flicker and drift so creative teams can iterate quickly under real production timelines. It extends the reasoning-based video generation foundation of the earlier Ray3 model; a hedged SDK sketch follows this entry.
    Starting Price: $7.99 per month
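    Luma publishes a Python SDK (`pip install lumaai`) for its Dream Machine API, which is the natural way to drive Ray models programmatically. A minimal sketch, assuming the model id and resolution parameter shown here; the real identifiers and polling flow are in Luma's API docs.

    ```python
    # Hedged sketch: requesting a Ray3.14 text-to-video generation.
    from lumaai import LumaAI

    client = LumaAI(auth_token="YOUR_LUMA_API_KEY")

    generation = client.generations.create(
        prompt="Aerial shot of a coastline at golden hour, slow push-in",
        model="ray-3.14",     # assumed model id
        resolution="1080p",   # assumed parameter; Ray3.14 is natively 1080p
    )
    print(generation.id)  # poll this id until the video asset is ready
    ```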
  • 8
    Z-Image

    Z-Image

    Z-Image is an open source image generation foundation model family developed by Alibaba’s Tongyi-MAI team that uses a Scalable Single-Stream Diffusion Transformer architecture to generate photorealistic and creative images from text prompts with only 6 billion parameters, making it more efficient than many larger models while still delivering competitive quality and instruction following. It includes multiple variants: Z-Image-Turbo, a distilled version optimized for ultra-fast inference with as few as eight function evaluations and sub-second generation on appropriate GPUs; Z-Image, the full foundation model suited for high-fidelity creative generation and fine-tuning; Z-Image-Omni-Base, a versatile base checkpoint for community-driven development; and Z-Image-Edit, tuned for image-to-image editing tasks with strong instruction adherence. A hedged diffusers sketch follows this entry.
    Starting Price: Free
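    For an open-weight diffusion transformer, Hugging Face diffusers is a plausible loading path. A minimal sketch, assuming the repo id and that the generic `DiffusionPipeline` loader resolves the architecture (a custom pipeline class may be required in practice); the eight-step setting mirrors the Turbo variant's quoted function-evaluation count.

    ```python
    # Hedged sketch: fast text-to-image with the distilled Turbo variant.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "Tongyi-MAI/Z-Image-Turbo",  # assumed repo id
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    image = pipe(
        prompt="A photorealistic portrait lit by neon signs, shallow depth of field",
        num_inference_steps=8,  # Turbo is distilled for ~8 evaluations
    ).images[0]
    image.save("z_image_turbo.png")
    ```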
  • 9
    Step 3.5 Flash
    Step 3.5 Flash is an advanced open source foundation language model engineered for frontier reasoning and agentic capabilities with exceptional efficiency, built on a sparse Mixture of Experts (MoE) architecture that selectively activates only about 11 billion of its ~196 billion parameters per token to deliver high-density intelligence and real-time responsiveness (the arithmetic sketch after this entry shows what that ratio implies). Its 3-way Multi-Token Prediction (MTP-3) enables generation throughput in the hundreds of tokens per second for complex multi-step reasoning chains and task execution, and it supports efficient long contexts with a hybrid sliding-window attention approach that reduces computational overhead across large datasets or codebases. It demonstrates robust performance on benchmarks for reasoning, coding, and agentic tasks, rivaling or exceeding many larger proprietary models, and includes a scalable reinforcement learning framework for consistent self-improvement.
    Starting Price: Free
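    The activation numbers above can be made concrete with one line of arithmetic: per-token compute in an MoE scales with the active parameters, not the total.

    ```python
    # Worked arithmetic from the figures quoted above.
    total_params = 196e9   # ~196 B total parameters
    active_params = 11e9   # ~11 B activated per token

    print(f"active fraction per token: {active_params / total_params:.1%}")
    # -> ~5.6%: each token pays roughly the compute of an 11 B dense model
    #    while the router can still draw on 196 B parameters of capacity.
    ```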
  • 10
    Qwen3-Coder-Next
    Qwen3-Coder-Next is an open-weight language model designed specifically for coding agents and local development, delivering advanced coding reasoning, complex tool usage, and robust performance on long-horizon programming tasks with high efficiency, using a mixture-of-experts architecture that balances powerful capabilities with resource-friendly operation. It provides enhanced agentic coding abilities that help software developers, AI system builders, and automated coding workflows generate, debug, and reason about code with deep contextual understanding while recovering from execution errors, making it well suited for autonomous coding agents and development-oriented applications. By matching the performance of much larger models while activating fewer parameters, Qwen3-Coder-Next enables cost-effective deployment for dynamic and complex programming workloads in research and production environments; a minimal vLLM serving sketch follows this entry.
    Starting Price: Free
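    As an open-weight model aimed at local coding agents, vLLM is a typical serving path. A minimal sketch, assuming the repo id shown; the official release notes will name the exact checkpoint and recommended sampling settings.

    ```python
    # Hedged sketch: local offline inference with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(model="Qwen/Qwen3-Coder-Next")  # assumed repo id
    params = SamplingParams(temperature=0.2, max_tokens=512)

    outputs = llm.generate(
        ["Write a Python function that retries a flaky HTTP call with backoff."],
        params,
    )
    print(outputs[0].outputs[0].text)
    ```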
  • 11
    GLM-OCR
    GLM-OCR is a multimodal optical character recognition model and open source repository that provides accurate, efficient, and comprehensive document understanding by combining text and visual modalities in a unified encoder–decoder architecture derived from the GLM-V family. Built with a visual encoder pre-trained on large-scale image–text data and a lightweight cross-modal connector feeding into a GLM-0.5B language decoder, the model supports layout detection, parallel region recognition, and structured output for text, tables, formulas, and complex real-world document formats. It introduces a Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization, achieving state-of-the-art results on major document understanding benchmarks.
    Starting Price: Free
  • 12
    Voxtral Transcribe 2
    Voxtral Transcribe 2 is a next-generation family of speech-to-text models from Mistral AI that delivers ultra-low-latency, high-quality audio transcription and speaker diarization with broad language support. The suite includes Voxtral Mini Transcribe V2, optimized for batch transcription with features such as word-level timestamps, context biasing, and support for 13 languages, and Voxtral Realtime, designed for live, streaming speech recognition with latency configurable down to sub-200 ms for real-time applications. Both models achieve state-of-the-art transcription accuracy while running efficiently and economically, with Mini Transcribe V2 offering leading performance and low error rates, and Realtime available as open source under the Apache 2.0 license so developers can deploy it on edge devices or in private environments. A hedged client sketch follows this entry.
    Starting Price: $14.99 per month
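    Mistral's hosted models are reachable through the official `mistralai` Python client, which includes an audio transcription surface. A minimal sketch, assuming the model id and file-upload shape shown here; both should be checked against Mistral's audio API docs.

    ```python
    # Hedged sketch: batch transcription with the mistralai client.
    from mistralai import Mistral

    client = Mistral(api_key="YOUR_MISTRAL_API_KEY")

    with open("meeting.mp3", "rb") as f:
        resp = client.audio.transcriptions.complete(
            model="voxtral-mini-transcribe-v2",  # assumed model id
            file={"file_name": "meeting.mp3", "content": f},
        )
    print(resp.text)
    ```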
  • 13
    Raven-1

    Tavus

    Raven-1 is a multimodal, real-time perceptual AI model from Tavus designed to bring emotional intelligence to AI systems by interpreting human audio, visual, and temporal signals together instead of reducing communication to text alone. It fuses tone, facial expression, body language, hesitation, and contextual dynamics into a single rich representation of user intent and state, enabling conversational AI to understand how people communicate in real time through nuanced natural-language descriptions rather than static emotion labels. It was engineered to overcome the limitations of traditional systems that rely on transcripts and limited emotion scoring by capturing subtle cues, such as emphasis, sarcasm, engagement shifts, and evolving emotional arcs, and by continuously updating this understanding with low latency so responses align with the true context of the interaction.
    Starting Price: $59 per month
  • 14
    MiniMax M2.5
    MiniMax M2.5 is a frontier AI model engineered for real-world productivity across coding, agentic workflows, search, and office tasks. Extensively trained with reinforcement learning in hundreds of thousands of real-world environments, it achieves state-of-the-art performance in benchmarks such as SWE-Bench Verified and BrowseComp. The model demonstrates strong architectural thinking, decomposing complex problems before generating code across more than ten programming languages. M2.5 operates at high throughput speeds of up to 100 tokens per second, enabling faster completion of multi-step tasks. It is optimized for efficient reasoning, reducing token usage and execution time compared to previous versions. With dramatically lower pricing than competing frontier models, it delivers powerful performance at minimal cost. Integrated into MiniMax Agent, M2.5 supports professional-grade office workflows, financial modeling, and autonomous task execution.
    Starting Price: Free
  • 15
    DeepSeek-V4

    DeepSeek

    DeepSeek-V4 is a next-generation open-source language model designed for high-performance reasoning, coding, and long-context intelligence. It introduces a powerful architecture with a context length of up to one million tokens, enabling seamless handling of large datasets and complex multi-step workflows. The model comes in two variants: DeepSeek-V4-Pro for maximum performance and DeepSeek-V4-Flash for efficiency and speed. DeepSeek-V4-Pro features 1.6 trillion total parameters with 49 billion activated, delivering near state-of-the-art performance comparable to leading closed-source models, and it excels in agentic coding, mathematical reasoning, and world-knowledge tasks. The model integrates advanced attention mechanisms, including token-wise compression and sparse attention, significantly reducing compute and memory costs, and it is optimized for AI agents, supporting tool use and multi-step workflows; a hedged API sketch follows this entry.
    Starting Price: Free
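    DeepSeek's platform has historically exposed an OpenAI-compatible API, so the standard client with a swapped base URL is the likely integration path. A minimal sketch; the "deepseek-v4" model id is an assumption, and the Pro and Flash variants may use distinct identifiers.

    ```python
    # Hedged sketch: chatting with DeepSeek-V4 over an OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.deepseek.com",
        api_key="YOUR_DEEPSEEK_API_KEY",
    )

    resp = client.chat.completions.create(
        model="deepseek-v4",  # assumed model id
        messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    )
    print(resp.choices[0].message.content)
    ```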
  • 16
    Tiny Aya

    Cohere AI

    Tiny Aya is a family of open-weight multilingual language models from Cohere Labs designed to deliver powerful, adaptable AI that can run efficiently on local devices, including phones and laptops, without requiring constant cloud connectivity. It focuses on enabling high-quality text understanding and generation across more than 70 languages, including many lower-resource languages that are often underserved by mainstream models. Built with lightweight architectures around 3.35 billion parameters, Tiny Aya is optimized for balanced multilingual representation and realistic compute constraints, making it suitable for edge deployment and offline use. The models support downstream adaptation and instruction tuning, allowing developers to customize behavior for specific applications while maintaining strong cross-lingual performance.
    Starting Price: Free
  • 17
    Qwen3.5

    Alibaba

    Qwen3.5 is a next-generation open-weight multimodal large language model designed to power native vision-language agents. The flagship release, Qwen3.5-397B-A17B, combines a hybrid linear attention architecture with sparse mixture-of-experts, activating only 17 billion parameters per forward pass out of 397 billion total to maximize efficiency. It delivers strong benchmark performance across reasoning, coding, multilingual understanding, visual reasoning, and agent-based tasks. The model expands language support from 119 to 201 languages and dialects while introducing a 1M-token context window in its hosted version, Qwen3.5-Plus. Built for multimodal tasks, it processes text, images, and video with advanced spatial reasoning and tool integration. Qwen3.5 also incorporates scalable reinforcement learning environments to improve general agent capabilities. Designed for developers and enterprises, it enables efficient, tool-augmented, multimodal AI workflows.
    Starting Price: Free
  • 18
    Alibaba AI Coding Plan
    Alibaba Cloud’s AI Scene Coding campaign introduces a cloud-based development environment designed to help developers write, test, and deploy software faster using advanced AI coding models. It provides access to powerful models such as Qwen3-Coder-Plus and integrates with popular developer tools, including Cline, Claude Code, Qwen Code, and OpenClaw, allowing engineers to use their preferred coding interfaces while leveraging Alibaba Cloud’s AI infrastructure. It is built to streamline software development by combining large language models with cloud computing resources so developers can generate code, analyze projects, and automate development workflows from a unified environment. These AI models are capable of understanding prompts, writing code, debugging programs, and assisting with complex development tasks, allowing applications to be built in minutes rather than through traditional manual coding cycles.
    Starting Price: $3 per month
  • 19
    LTX-2.3

    Lightricks

    LTX-2.3 is an advanced AI video generation model designed to create high-quality videos from text prompts, images, or other media inputs while maintaining strong control over motion, structure, and audiovisual synchronization. It is part of the LTX family of multimodal generative models built for developers and production teams that need scalable tools to generate and edit video programmatically. It builds on earlier LTX models by improving detail rendering, motion consistency, prompt understanding, and audio quality throughout the video generation pipeline, and it features a redesigned latent representation using an upgraded VAE trained on higher-quality datasets, which better preserves fine textures, edges, and small visual elements such as hair, text, and intricate surfaces across frames. A hedged diffusers sketch follows this entry.
    Starting Price: Free
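    Earlier LTX video models are integrated into Hugging Face diffusers, so a similar pipeline is a plausible path for LTX-2.3. A minimal sketch, assuming the repo id and that the existing `LTXPipeline` class covers the 2.3 release; Lightricks' model card is the authority here.

    ```python
    # Hedged sketch: text-to-video with a diffusers LTX pipeline.
    import torch
    from diffusers import LTXPipeline
    from diffusers.utils import export_to_video

    pipe = LTXPipeline.from_pretrained(
        "Lightricks/LTX-2.3",  # assumed repo id
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    frames = pipe(
        prompt="A close-up of rain hitting a window at night, soft bokeh",
        num_frames=121,  # roughly 5 seconds at 24 fps
    ).frames[0]
    export_to_video(frames, "ltx_clip.mp4", fps=24)
    ```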
  • 20
    Kling 3.0 Omni
    Kling 3.0 Omni is a generative video model designed to create imaginative videos from text prompts, images, or reference materials using advanced multimodal AI technology. It generates continuous video clips with flexible durations ranging from roughly 3 to 15 seconds, enabling short cinematic scenes that respond closely to prompt instructions. It supports prompt-based generation as well as reference-based workflows, where users provide images or other visual elements to guide the subject, style, or composition of the generated scene. It improves prompt adherence and subject consistency, keeping characters, objects, and environments stable throughout the generated clip while maintaining realistic motion and visual coherence, and it enhances reference-based generation so that characters or elements introduced through images remain recognizable across frames.
    Starting Price: Free
  • 21
    Mistral Small 4
    Mistral Small 4 is an advanced open-source AI model developed by Mistral AI that combines reasoning, coding, and multimodal capabilities into a single system. It unifies the strengths of previous models such as Magistral for reasoning, Pixtral for multimodal processing, and Devstral for agentic coding tasks. The model can handle both text and image inputs, allowing it to perform tasks ranging from conversational chat to visual analysis and document understanding. Built with a mixture-of-experts architecture, Mistral Small 4 delivers efficient performance while scaling to complex workloads. It also features a configurable reasoning parameter that allows users to switch between fast responses and deeper analytical outputs. With a large context window and optimized inference performance, the model supports long-form interactions and complex workflows.
    Starting Price: Free
  • 22
    Leanstral

    Leanstral

    Mistral AI

    Leanstral is an open-source code agent developed by Mistral AI specifically designed to work with the Lean 4 proof assistant. The model focuses on generating code while formally verifying its correctness against strict mathematical or software specifications. Unlike traditional coding assistants, Leanstral integrates directly with formal proof systems to ensure that generated code satisfies defined logical requirements; a toy Lean 4 example of this kind of verified artifact follows this entry. Its architecture is optimized for proof-engineering tasks and operates efficiently with sparse model parameters. Leanstral is released under the Apache 2.0 license, making it freely accessible for developers, researchers, and organizations to use and customize, and it is designed to operate within real-world formal repositories rather than isolated problem environments. By combining code generation with formal verification, Leanstral aims to reduce the need for manual human review in complex software and mathematical development.
    Starting Price: Free
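    To make "code plus machine-checked proof" concrete, here is a toy Lean 4 artifact of the kind Leanstral is described as producing: a function together with a theorem about it that the proof assistant verifies. The example is illustrative and not taken from the model's outputs.

    ```lean
    -- A function and a proof that it meets its specification.
    def double (n : Nat) : Nat := n + n

    -- Lean checks this proof mechanically; once it compiles, no human
    -- review of the claim "double n = 2 * n" is needed.
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega
    ```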
  • 23
    GLM-5-Turbo
    GLM-5-Turbo is a high-speed variant of Z.ai’s GLM-5 model, designed to deliver efficient, stable performance in agent-driven environments while maintaining strong reasoning and coding capabilities. It is optimized for high-throughput workloads, particularly long-chain agent tasks where multiple steps, tools, and decisions must be executed in sequence with reliability and low latency. It supports advanced agentic workflows, enabling systems to perform multi-step planning, tool calling, and task execution with better responsiveness than larger flagship models. GLM-5-Turbo inherits core capabilities from the GLM-5 family, including strong reasoning, coding performance, and long-context processing, while prioritizing speed, efficiency, and stability in production environments, and it is designed to integrate with agent frameworks like OpenClaw, where it can coordinate actions, process inputs, and execute tasks.
    Starting Price: Free
  • 24
    MiniMax M2.7
    MiniMax M2.7 is an advanced AI model designed to enhance real-world productivity across coding, search, and office workflows. It is trained with reinforcement learning across numerous real-world environments, enabling it to handle complex, multi-step tasks effectively. The model excels in problem-solving by breaking down challenges before generating solutions across multiple programming languages. It delivers high-speed performance with rapid token generation, allowing tasks to be completed efficiently. With optimized reasoning and cost-effective pricing, it provides powerful capabilities while minimizing resource usage. It also achieves strong performance in software engineering benchmarks, reducing incident response time and improving development efficiency. Additionally, it supports advanced agentic workflows and professional-grade office tasks, making it highly versatile for modern work environments.
    Starting Price: Free
  • 25
    MiMo-V2-Pro

    Xiaomi Technology

    Xiaomi MiMo-V2-Pro is a flagship AI foundation model designed to power real-world agentic workflows and complex task execution. It is built to function as the core intelligence behind agent systems, enabling orchestration of multi-step processes and production-level tasks. The model demonstrates strong capabilities in coding, tool usage, and search-based tasks, performing competitively on global benchmarks. With its large-scale architecture and extended context window, it can handle long and complex interactions efficiently. MiMo-V2-Pro is optimized for practical applications, delivering reliable performance across development, automation, and enterprise workflows.
    Starting Price: $1/million tokens
  • 26
    Wan2.2-Animate
    Wan2.2 Animate is a specialized module within the Wan video generation framework designed for high-fidelity character animation and character replacement, enabling users to transform static images into dynamic videos or swap subjects within existing footage while preserving realism and motion consistency. It works by taking two primary inputs: a reference image that defines the character’s appearance and a reference video that provides motion, expressions, and scene context. Using this combination, it can animate a still character by replicating body movements, gestures, and facial expressions from the source video, or replace the original subject in a video while maintaining the original lighting, camera movement, and environment for seamless integration. It relies on advanced techniques such as spatially aligned skeleton signals and implicit facial feature extraction to accurately reproduce motion and expressions.
    Starting Price: $5 per month
  • 27
    Trinity-Large-Thinking
    Trinity Large Thinking is a frontier open source reasoning model developed by Arcee AI, designed specifically for complex, multi-step problem solving and autonomous agent workflows that require long-horizon planning and tool use. Built on a sparse Mixture-of-Experts architecture with roughly 400 billion total parameters but only about 13 billion active per token, the model achieves high efficiency while maintaining strong reasoning performance across tasks such as mathematical problem solving, code generation, and multi-step analysis. It introduces extended chain-of-thought reasoning capabilities, generating intermediate “thinking traces” before producing final answers, which improves accuracy and reliability in complex scenarios (a sketch for separating these traces from the answer follows this entry). Trinity Large Thinking supports a very large context window of up to 262K tokens, enabling it to process long documents, maintain state across extended interactions, and operate effectively in continuous agent loops.
    Starting Price: Free
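    Consuming a model that emits thinking traces usually means splitting the trace from the final answer. A minimal sketch; the `<think>...</think>` delimiter is an assumption about the output format, so check Arcee AI's model card for the actual convention.

    ```python
    # Hedged sketch: separating an assumed <think> trace from the answer.
    import re

    def split_reasoning(raw: str) -> tuple[str, str]:
        """Return (thinking_trace, final_answer) from a raw completion."""
        match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
        if match is None:
            return "", raw.strip()
        return match.group(1).strip(), raw[match.end():].strip()

    trace, answer = split_reasoning("<think>plan the steps first</think>Done.")
    print(answer)  # -> "Done."
    ```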
  • 28
    MAI-Transcribe-1
    MAI-Transcribe-1 is a state-of-the-art speech-to-text model developed by Microsoft and available through Azure AI Foundry, designed to deliver high-accuracy transcription for real-world audio across enterprise and developer use cases. It supports 25 major languages and is optimized to handle diverse accents, dialects, and speaking styles, maintaining consistent performance even in challenging conditions such as background noise, low-quality recordings, or overlapping speech. It is built by Microsoft’s AI Superintelligence team with a dual focus on accuracy and efficiency, enabling fast batch transcription and scalable deployment for production environments. MAI-Transcribe-1 powers a wide range of applications, including meeting transcription, live captions, accessibility tools, call center analytics, and voice-driven agents, making it a foundational component for voice-enabled systems.
    Starting Price: Free
  • 29
    Gemini Audio
    Gemini Audio is a set of advanced real-time audio models built on Gemini's architecture, designed to enable natural, fluid voice interaction and expressive audio generation through simple language prompts. It supports conversational experiences where users can speak, listen, and interact with AI in a seamless loop, combining understanding, reasoning, and response generation in audio form. The models can both analyze and generate audio, enabling applications such as speech-to-text transcription, translation, speaker identification, emotion detection, and detailed audio content analysis. They are optimized for low-latency, real-time use cases, making them suitable for live assistants, voice agents, and interactive systems that require continuous, multi-turn dialogue. Gemini Audio also integrates advanced capabilities like function calling, enabling the model to trigger external tools and incorporate real-time data into responses.
    Starting Price: Free
  • 30
    Mercury Edit 2
    Mercury Edit 2 is part of Inception Labs’ Mercury family of AI models, designed to perform high-speed reasoning, coding, and editing tasks using a fundamentally different architecture from traditional large language models. It builds on Mercury 2, a diffusion-based reasoning model that generates and refines entire outputs in parallel rather than producing text token by token, enabling significantly faster performance and more responsive editing workflows. Instead of acting like a sequential “typewriter,” the system behaves more like an editor, starting with a rough draft and iteratively improving it across many tokens at once, which allows for real-time interaction and rapid iteration in tasks such as code editing, content generation, and agent-based workflows. This architecture delivers throughput of up to around 1,000 tokens per second, several times faster than conventional models, while maintaining competitive reasoning quality across benchmarks; a hedged API sketch follows this entry.
    Starting Price: $0.25 per 1M input tokens
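    Inception Labs has offered its Mercury models behind an OpenAI-compatible API, so integration looks like any chat-completions call even though the model diffuses rather than decodes token by token. A minimal sketch; the base URL and model id are assumptions to verify against the official docs.

    ```python
    # Hedged sketch: a code-editing request to a Mercury endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.inceptionlabs.ai/v1",  # assumed endpoint
        api_key="YOUR_INCEPTION_API_KEY",
    )

    resp = client.chat.completions.create(
        model="mercury-edit-2",  # assumed model id
        messages=[{
            "role": "user",
            "content": "Rewrite as a list comprehension: "
                       "result = []\nfor x in xs:\n    result.append(x * x)",
        }],
    )
    print(resp.choices[0].message.content)
    ```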