Alternatives to GLM-4.5V
Compare GLM-4.5V alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to GLM-4.5V in 2025. Compare features, ratings, user reviews, pricing, and more from GLM-4.5V competitors and alternatives in order to make an informed decision for your business.
1
GPT-5.2
OpenAI
GPT-5.2 is the newest evolution in the GPT-5 series, engineered to deliver even greater intelligence, adaptability, and conversational depth. This release introduces enhanced model variants that refine how ChatGPT reasons, communicates, and responds to complex user intent. GPT-5.2 Instant remains the primary, high-usage model, now faster, more context-aware, and more precise in following instructions. GPT-5.2 Thinking takes advanced reasoning further, offering clearer step-by-step logic, improved consistency on multi-stage problems, and more efficient handling of long or intricate tasks. The system automatically routes each query to the most suitable variant, ensuring optimal performance without requiring user selection. Beyond raw intelligence gains, GPT-5.2 emphasizes more natural dialogue flow, stronger intent alignment, and a smoother, more humanlike communication style.
2
GLM-4.1V
Zhipu AI
GLM-4.1V is a powerful, compact vision-language model designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64k-token context window and accepts high-resolution inputs (up to 4K images, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots, recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks. Starting Price: Free
3
GLM-4.5V-Flash
Zhipu AI
GLM-4.5V-Flash is an open source vision-language model, designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized. Starting Price: Free
4
Qwen2
Alibaba
Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning. Starting Price: Free
5
Ministral 3
Mistral AI
Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, not only text, but also image understanding, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects. Starting Price: Free
6
GigaChat 3 Ultra
Sberbank
GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original, giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world. Starting Price: Free
7
Llama 4 Maverick
Meta
Llama 4 Maverick is one of the most advanced multimodal AI models from Meta, featuring 17 billion active parameters and 128 experts. It surpasses its competitors like GPT-4o and Gemini 2.0 Flash in a broad range of benchmarks, especially in tasks related to coding, reasoning, and multilingual capabilities. Llama 4 Maverick combines image and text understanding, enabling it to deliver industry-leading results in image-grounding tasks and precise, high-quality output. With its efficient performance at a reduced parameter size, Maverick offers exceptional value, especially in general assistant and chat applications. Starting Price: Free
8
DeepSeek R1
DeepSeek
DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture of experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation. Starting Price: Free
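The efficiency claim is easy to sanity-check from the figures above: with 671 billion total parameters but only 37 billion activated per token, roughly 5.5% of the network participates in any single forward pass. A quick check in Python, using only the numbers quoted in this entry (the FLOPs-per-parameter shorthand is a common rule of thumb, not a DeepSeek-published figure):

```python
# Sanity-check the sparsity implied by DeepSeek-R1's published MoE figures.
total_params = 671e9   # total parameters (from the entry above)
active_params = 37e9   # parameters activated per token

print(f"active fraction per token: {active_params / total_params:.1%}")  # ~5.5%

# Rough rule of thumb (an assumption, not a published figure): a decoder
# forward pass costs about 2 FLOPs per *active* parameter per token.
print(f"~{2 * active_params / 1e9:.0f} GFLOPs per generated token")
```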
9
Llama 4 Scout
Meta
Llama 4 Scout is a powerful 17 billion active parameter multimodal AI model that excels in both text and image processing. With an industry-leading context length of 10 million tokens, it outperforms its predecessors, including Llama 3, in tasks such as multi-document summarization and parsing large codebases. Llama 4 Scout is designed to handle complex reasoning tasks while maintaining high efficiency, making it perfect for use cases requiring long-context comprehension and image grounding. It offers cutting-edge performance in image-related tasks and is particularly well-suited for applications requiring both text and visual understanding. Starting Price: Free
10
Mistral Large 3
Mistral AI
Mistral Large 3 is a next-generation, open multimodal AI model built with a powerful sparse Mixture-of-Experts architecture featuring 41B active parameters out of 675B total. Designed from scratch on NVIDIA H200 GPUs, it delivers frontier-level reasoning, multilingual performance, and advanced image understanding while remaining fully open-weight under the Apache 2.0 license. The model achieves top-tier results on modern instruction benchmarks, positioning it among the strongest permissively licensed foundation models available today. With native support across vLLM, TensorRT-LLM, and major cloud providers, Mistral Large 3 offers exceptional accessibility and performance efficiency. Its design enables enterprise-grade customization, letting teams fine-tune or adapt the model for domain-specific workflows and proprietary applications. Mistral Large 3 represents a major advancement in open AI, offering frontier intelligence without sacrificing transparency or control. Starting Price: Free
11
GLM-4.5
Z.ai
GLM-4.5 is Z.ai’s latest flagship model in the GLM family, engineered with 355 billion total parameters (32 billion active) and a companion GLM-4.5-Air variant (106 billion total, 12 billion active) to unify advanced reasoning, coding, and agentic capabilities in one architecture. It operates in a “thinking” mode for complex, multi-step reasoning and tool use, and a “non-thinking” mode for instant responses, supporting up to a 128K-token context length and native function calling. Available via the Z.ai chat platform and API, with open weights on HuggingFace and ModelScope, GLM-4.5 ingests diverse inputs to handle general problem-solving, common-sense reasoning, coding from scratch or within existing projects, and end-to-end agent workflows such as web browsing and slide generation. Built on a Mixture-of-Experts design with loss-free balance routing, grouped-query attention, and an MTP layer for speculative decoding, it delivers enterprise-grade performance.
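Since the weights are open on Hugging Face, local inference follows the standard transformers loading pattern. A minimal sketch, with the caveats that the repo id below is an assumption to verify against the model card, and that a 355B-parameter MoE realistically needs a multi-GPU server:

```python
# Hedged sketch: load GLM-4.5's open weights with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5"  # assumed repo id; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",     # shard the MoE across available GPUs
    torch_dtype="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Outline a slide deck on MoE routing."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For single-node experiments, the GLM-4.5-Air variant (12 billion active parameters) described above is the more practical target.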
12
Pixtral Large
Mistral AI
Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications. Starting Price: Free
13
DeepSeek-V2
DeepSeek
DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models. Starting Price: Free
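To make the 93.3% KV-cache figure concrete, here is a rough worked example. The per-token cache size uses the standard 2 x layers x heads x head_dim x bytes formula; the layer and head shapes below are illustrative assumptions, and only the 93.3% reduction and the 128K context length come from the entry above:

```python
# Illustrative KV-cache arithmetic; the shape constants are assumptions.
layers, kv_heads, head_dim = 60, 128, 128  # hypothetical dense baseline
bytes_per_elem = 2                          # fp16/bf16

# A standard cache stores one key and one value vector per layer, per token.
baseline_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
mla_per_token = baseline_per_token * (1 - 0.933)  # 93.3% reduction (stated)

ctx = 128_000  # stated context length
print(f"baseline cache at 128K ctx: {baseline_per_token * ctx / 2**30:.0f} GiB")
print(f"MLA cache at 128K ctx:      {mla_per_token * ctx / 2**30:.0f} GiB")
```

The point of the compression is that at long sequence lengths the KV cache, not the weights, dominates serving memory, so shrinking it is what makes 128K contexts practical.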
14
HunyuanOCR
Tencent
Tencent Hunyuan is a large-scale, multimodal AI model family developed by Tencent that spans text, image, video, and 3D modalities, designed for general-purpose AI tasks like content generation, visual reasoning, and business automation. Its model lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image & video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (like hybrid “mamba-transformer” designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports “thinking-on-image”, enabling deep multimodal understanding and reasoning on images, video frames, diagrams, or spatial data.
15
Kimi K2 Thinking
Moonshot AI
Kimi K2 Thinking is an advanced open source reasoning model developed by Moonshot AI, designed specifically for long-horizon, multi-step workflows where the system interleaves chain-of-thought processes with tool invocation across hundreds of sequential tasks. The model uses a mixture-of-experts architecture with a total of 1 trillion parameters, yet only about 32 billion parameters are activated per inference pass, optimizing efficiency while maintaining vast capacity. It supports a context window of up to 256,000 tokens, enabling the handling of extremely long inputs and reasoning chains without losing coherence. Native INT4 quantization is built in, which reduces inference latency and memory usage without performance degradation. Kimi K2 Thinking is explicitly built for agentic workflows; it can autonomously call external tools, manage sequential logic steps (typically 200 to 300 tool calls in a single chain), and maintain consistent reasoning. Starting Price: Free
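In practice, a long-horizon run like that is driven by a loop that executes each requested tool and feeds the result back until the model stops asking. Below is a minimal sketch using a generic OpenAI-style chat-completions client; the base URL, model id, and the single stub tool are placeholders, not Moonshot's documented API:

```python
# Hedged sketch of an agentic tool loop (endpoint, model id, tool are stubs).
import json
from openai import OpenAI

def web_search(query: str) -> str:
    """Stub tool; swap in a real implementation."""
    return f"(stub) top results for: {query}"

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return a short summary.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Research topic X and write a brief."}]
for _ in range(300):  # cap mirrors the 200-300 call range described above
    msg = client.chat.completions.create(
        model="kimi-k2-thinking",  # assumed model id
        messages=messages,
        tools=tools,
    ).choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        break  # the model produced a final answer
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": web_search(**args),
        })
print(msg.content)
```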
16
Gemini 3 Deep Think
Google
The most advanced model from Google DeepMind, Gemini 3, sets a new bar for model intelligence by delivering state-of-the-art reasoning and multimodal understanding across text, image, and video. It surpasses its predecessor on key AI benchmarks and excels at deeper problems such as scientific reasoning, complex coding, spatial logic, and visual-/video-based understanding. The new “Deep Think” mode pushes the boundaries even further, offering enhanced reasoning for very challenging tasks, outperforming Gemini 3 Pro on benchmarks like Humanity’s Last Exam and ARC-AGI. Gemini 3 is now available across Google’s ecosystem, enabling users to learn, build, and plan at new levels of sophistication. With context windows up to one million tokens, more granular media-processing options, and specialized configurations for tool use, the model brings better precision, depth, and flexibility for real-world workflows.
17
Kimi K2
Moonshot AI
Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while its 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines broaden deployment options. Starting Price: Free
18
Janus-Pro-7B
DeepSeek
Janus-Pro-7B is an innovative open-source multimodal AI model from DeepSeek, designed to excel in both understanding and generating content across text, images, and videos. It leverages a unique autoregressive architecture with separate pathways for visual encoding, enabling high performance in tasks ranging from text-to-image generation to complex visual comprehension. This model outperforms competitors like DALL-E 3 and Stable Diffusion in various benchmarks, offering scalability with versions from 1 billion to 7 billion parameters. Licensed under the MIT License, Janus-Pro-7B is freely available for both academic and commercial use, providing a significant leap in AI capabilities while being accessible on major operating systems like Linux, MacOS, and Windows through Docker. Starting Price: Free
19
Gemini 3 Pro
Google
Gemini 3 Pro is Google’s most advanced multimodal AI model, built for developers who want to bring ideas to life with intelligence, precision, and creativity. It delivers breakthrough performance across reasoning, coding, and multimodal understanding, surpassing Gemini 2.5 Pro in both speed and capability. The model excels in agentic workflows, enabling autonomous coding, debugging, and refactoring across entire projects with long-context awareness. With superior performance in image, video, and spatial reasoning, Gemini 3 Pro powers next-generation applications in development, robotics, XR, and document intelligence. Developers can access it through the Gemini API, Google AI Studio, or Vertex AI, integrating seamlessly into existing tools and IDEs. Whether generating code, analyzing visuals, or building interactive apps from a single prompt, Gemini 3 Pro represents the future of intelligent, multimodal AI development. Starting Price: $19.99/month
20
GLM-4.6V
Zhipu AI
GLM-4.6V is a state-of-the-art open source multimodal vision-language model from the Z.ai (GLM-V) family designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V supports a native context window of up to 128K tokens during training, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action.” This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generating image-annotated responses). Starting Price: Free
21
Hunyuan-Vision-1.5
Tencent
HunyuanVision is a cutting-edge vision-language model developed by Tencent’s Hunyuan team. It uses a mamba-transformer hybrid architecture to deliver strong performance and efficient inference in multimodal reasoning tasks. The version Hunyuan-Vision-1.5 is designed for “thinking on images,” meaning it not only understands vision+language content, but can perform deeper reasoning that involves manipulating or reflecting on image inputs, such as cropping, zooming, pointing, box drawing, or drawing on the image to acquire additional knowledge. It supports a variety of vision tasks (image + video recognition, OCR, diagram understanding), visual reasoning, and even 3D spatial comprehension, all in a unified multilingual framework. The model is built to work seamlessly across languages and tasks and is intended to be open sourced (including checkpoints, technical report, inference support) to encourage the community to experiment and adopt. Starting Price: Free
22
Llama 4 Behemoth
Meta
Llama 4 Behemoth is Meta's most powerful AI model to date, featuring a massive 288 billion active parameters. It excels in multimodal tasks, outperforming previous models like GPT-4.5 and Gemini 2.0 Pro across multiple STEM-focused benchmarks such as MATH-500 and GPQA Diamond. As the teacher model for the Llama 4 series, Behemoth sets the foundation for models like Llama 4 Maverick and Llama 4 Scout. While still in training, Llama 4 Behemoth demonstrates unmatched intelligence, pushing the boundaries of AI in fields like math, multilinguality, and image understanding. Starting Price: Free
23
Phi-2
Microsoft
We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
24
Reka Flash 3
Reka
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
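The quoted memory footprints follow almost directly from the parameter count, as a quick check shows; the small gap on the 4-bit figure is quantization overhead (scales, zero-points), which the naive estimate below ignores:

```python
# Check Reka Flash 3's stated memory footprints against its parameter count.
params = 21e9  # 21B parameters, from the entry above

fp16_gib = params * 2 / 2**30    # 2 bytes per parameter
int4_gib = params * 0.5 / 2**30  # 0.5 bytes per parameter, no overhead

print(f"fp16 : ~{fp16_gib:.0f} GiB (entry states 39GB)")
print(f"4-bit: ~{int4_gib:.1f} GiB (entry states ~11GB incl. overhead)")
```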
25
K2 Think
Institute of Foundation Models
K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite only having 32 billion parameters, it delivers performance comparable to flagship models with many more parameters. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration, and its parameter efficiency makes it a breakthrough in compact architectures for advanced AI reasoning. Starting Price: Free
26
LLaVA
LLaVA
LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single 8-A100 node, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively. Starting Price: Free
27
DeepSeek-Coder-V2
DeepSeek
DeepSeek-Coder-V2 is an open source code language model designed to excel in programming and mathematical reasoning tasks. It features a Mixture-of-Experts (MoE) architecture with 236 billion total parameters and 21 billion activated parameters per token, enabling efficient processing and high performance. The model was trained on an extensive dataset of 6 trillion tokens, enhancing its capabilities in code generation and mathematical problem-solving. DeepSeek-Coder-V2 supports over 300 programming languages and has demonstrated superior performance on coding and math benchmarks, surpassing other models. It is available in multiple variants, including DeepSeek-Coder-V2-Instruct, optimized for instruction-based tasks; DeepSeek-Coder-V2-Base, suitable for general text generation; and lightweight versions like DeepSeek-Coder-V2-Lite-Base and DeepSeek-Coder-V2-Lite-Instruct, designed for environments with limited computational resources.
28
QwQ-32B
Alibaba
QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency is achieved through optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community. Starting Price: Free
29
ChatGLM
Zhipu AI
ChatGLM-6B is an open-source, Chinese-English bilingual dialogue language model based on the General Language Model (GLM) architecture with 6.2 billion parameters. Combined with model quantization technology, users can deploy it locally on consumer-grade graphics cards (only 6GB of video memory is required at the INT4 quantization level). ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese Q&A and dialogue. After training on about 1T tokens of bilingual Chinese and English text, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B generates answers that align well with human preferences. Starting Price: Free
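The consumer-GPU deployment described above takes only a few lines with transformers. This sketch follows the project's published quickstart; the half(), quantize(4), and chat() helpers come from ChatGLM-6B's own remote modeling code, hence trust_remote_code=True:

```python
# Minimal local ChatGLM-6B deployment at INT4, per the project quickstart.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = (
    AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    .half()
    .quantize(4)  # INT4: fits the ~6GB VRAM budget mentioned above
    .cuda()
    .eval()
)

# chat() is a convenience method provided by the model's remote code.
response, history = model.chat(tokenizer, "Hello, introduce yourself.", history=[])
print(response)
```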
30
DeepSeek R2
DeepSeek
DeepSeek R2 is the anticipated successor to DeepSeek R1, a groundbreaking AI reasoning model launched in January 2025 by the Chinese AI startup DeepSeek. Building on R1’s success, which disrupted the AI industry with its cost-effective performance rivaling top-tier models like OpenAI’s o1, R2 promises a quantum leap in capabilities. It is expected to deliver exceptional speed and human-like reasoning, excelling in complex tasks such as advanced coding and high-level mathematical problem-solving. Leveraging DeepSeek’s innovative Mixture-of-Experts architecture and efficient training methods, R2 aims to outperform its predecessor while maintaining a low computational footprint, potentially expanding its reasoning abilities to languages beyond English. Starting Price: Free
31
Olmo 3
Ai2
Olmo 3 is a fully open model family spanning 7 billion and 32 billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also exposure of the entire model flow, including raw training data, intermediate checkpoints, training code, long-context support (65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimization, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning. Starting Price: Free
32
PaliGemma 2
Google
PaliGemma 2, the next evolution in tunable vision-language models, builds upon the performant Gemma 2 models, adding the power of vision and making it easier than ever to fine-tune for exceptional performance. With PaliGemma 2, these models can see, understand, and interact with visual input, opening up a world of new possibilities. It offers scalable performance with multiple model sizes (3B, 10B, 28B parameters) and resolutions (224px, 448px, 896px). PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene. Our research demonstrates leading performance in chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation, as detailed in the technical report. Upgrading to PaliGemma 2 is a breeze for existing PaliGemma users.
33
Amazon Nova 2 Omni
Amazon
Nova 2 Omni is a fully unified multimodal reasoning and generation model capable of understanding and producing content across text, images, video, and speech. It can take in extremely large inputs, ranging from hundreds of thousands of words to hours of audio and lengthy videos, while maintaining coherent analysis across formats. This allows it to digest full product catalogs, long-form documents, customer testimonials, and complete video libraries all at the same time, giving teams a single system that replaces the need for multiple specialized models. With its ability to handle mixed media in one workflow, Nova 2 Omni opens new possibilities for creative and operational automation. A marketing team, for example, can feed in product specs, brand guidelines, reference images, and video content and instantly generate an entire campaign, including messaging, social content, and visuals, in one pass.
34
Amazon Nova Pro
Amazon
Amazon Nova Pro is a versatile, multimodal AI model designed for a wide range of complex tasks, offering an optimal combination of accuracy, speed, and cost efficiency. It excels in video summarization, Q&A, software development, and AI agent workflows that require executing multi-step processes. With advanced capabilities in text, image, and video understanding, Nova Pro supports tasks like mathematical reasoning and content generation, making it ideal for businesses looking to implement cutting-edge AI in their operations.
35
Qwen
Alibaba
Qwen is a powerful, free AI assistant built on the advanced Qwen model series, designed to help anyone with creativity, research, problem-solving, and everyday tasks. While Qwen Chat is the main interface for most users, Qwen itself powers a broad range of intelligent capabilities including image generation, deep research, website creation, advanced reasoning, and context-aware search. Its multimodal intelligence enables Qwen to understand and process text, images, audio, and video simultaneously for richer insights. Qwen is available on web, desktop, and mobile, ensuring seamless access across all devices. For developers, the Qwen API provides OpenAI-compatible endpoints, making integration simple and allowing Qwen’s intelligence to power apps, services, and automation. Whether you're chatting through Qwen Chat or building with the Qwen API, Qwen delivers fast, flexible, and highly capable AI support. Starting Price: Free
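Because the endpoints are OpenAI-compatible, the stock openai Python client works with only a base-URL swap. The URL and model id below reflect Alibaba Cloud's DashScope compatible mode but should be treated as assumptions to verify against the current docs:

```python
# Hedged sketch: call the Qwen API via its OpenAI-compatible interface.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # verify
)
resp = client.chat.completions.create(
    model="qwen-plus",  # assumed model id; check the published model list
    messages=[{"role": "user", "content": "Draft a product launch checklist."}],
)
print(resp.choices[0].message.content)
```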
36
Ferret
Apple
Ferret is an end-to-end MLLM that accepts any-form referring and grounds anything in response. Ferret Model - hybrid region representation plus a spatial-aware visual sampler enable fine-grained and open-vocabulary referring and grounding in an MLLM. GRIT Dataset (~1.1M) - a large-scale, hierarchical, robust ground-and-refer instruction-tuning dataset. Ferret-Bench - a multimodal evaluation benchmark that jointly requires referring/grounding, semantics, knowledge, and reasoning. Starting Price: Free
37
Amazon Nova 2 Pro
Amazon
Amazon Nova 2 Pro is Amazon’s most advanced reasoning model, designed to handle highly complex, multimodal tasks across text, images, video, and speech with exceptional accuracy. It excels in deep problem-solving scenarios such as agentic coding, multi-document analysis, long-range planning, and advanced math. With benchmark performance equal or superior to leading models like Claude Sonnet 4.5, GPT-5.1, and Gemini Pro, Nova 2 Pro delivers top-tier intelligence across a wide range of enterprise workloads. The model includes built-in web grounding and code execution, ensuring responses remain factual, current, and contextually accurate. Nova 2 Pro can also serve as a “teacher model,” enabling knowledge distillation into smaller, purpose-built variants for specific domains. It is engineered for organizations that require precision, reliability, and frontier-level reasoning in mission-critical AI applications.
38
Ai2 OLMoE
The Allen Institute for Artificial Intelligence
Ai2 OLMoE is a fully open source mixture-of-experts language model that is capable of running completely on-device, allowing you to try our model privately and securely. Our app is intended to help researchers better explore how to make on-device intelligence better and to enable developers to quickly prototype new AI experiences, all with no cloud connectivity required. OLMoE is a highly efficient mixture-of-experts version of the Ai2 OLMo family of models. Experience which real-world tasks state-of-the-art local models are capable of. Research how to improve small AI models. Test your own models locally using our open-source codebase. Integrate OLMoE into other iOS applications. The Ai2 OLMoE app provides privacy and security by operating completely on-device. Easily share the output of your conversations with friends or colleagues. The OLMoE model and the application code are fully open source. Starting Price: Free
39
DBRX
Databricks
Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B in programming, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in training and inference performance. DBRX advances the state-of-the-art in efficiency among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. Inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in terms of both total and active parameter counts.
40
Qwen2.5-VL
Alibaba
Qwen2.5-VL is the latest vision-language model from the Qwen series, representing a significant advancement over its predecessor, Qwen2-VL. This model excels in visual understanding, capable of recognizing a wide array of objects, including text, charts, icons, graphics, and layouts within images. It functions as a visual agent, capable of reasoning and dynamically directing tools, enabling applications such as computer and phone usage. Qwen2.5-VL can comprehend videos exceeding one hour in length and can pinpoint relevant segments within them. Additionally, it accurately localizes objects in images by generating bounding boxes or points and provides stable JSON outputs for coordinates and attributes. The model also supports structured outputs for data like scanned invoices, forms, and tables, benefiting sectors such as finance and commerce. Available in base and instruct versions across 3B, 7B, and 72B sizes, Qwen2.5-VL is accessible through platforms like Hugging Face and ModelScope. Starting Price: Free
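The “stable JSON outputs for coordinates and attributes” typically arrive as a list of labeled boxes. Below is a sketch of parsing such a response; the bbox_2d/label keys follow the format commonly shown in Qwen's grounding examples, but the exact schema is prompt-dependent, so treat it as illustrative:

```python
# Illustrative parsing of a grounding-style JSON response from Qwen2.5-VL.
# Keys (bbox_2d, label) are the commonly shown format; verify per prompt.
import json

raw = """[
  {"bbox_2d": [120, 88, 430, 512], "label": "invoice total"},
  {"bbox_2d": [40, 20, 300, 70], "label": "vendor name"}
]"""

for det in json.loads(raw):
    x1, y1, x2, y2 = det["bbox_2d"]
    w, h = x2 - x1, y2 - y1
    print(f"{det['label']}: ({x1},{y1})-({x2},{y2}), {w}x{h} px")
```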
41
DeepSeek V3.1
DeepSeek
DeepSeek V3.1 is a groundbreaking open-weight large language model featuring a massive 685 billion parameters and an extended 128,000-token context window, enabling it to process documents equivalent to 400-page books in a single prompt. It delivers integrated capabilities for chat, reasoning, and code generation within a unified hybrid architecture, seamlessly blending these functions into one coherent model. V3.1 supports a variety of tensor formats to give developers flexibility in optimizing performance across different hardware. Early benchmark results show robust performance, including a 71.6% score on the Aider coding benchmark, putting it on par with or ahead of systems like Claude Opus 4 and doing so at a far lower cost. Made available under an open source license on Hugging Face with minimal fanfare, DeepSeek V3.1 is poised to reshape access to high-performance AI, challenging traditional proprietary models. Starting Price: Free
42
Grok 4.1
xAI
Grok 4.1 is an advanced AI model developed by Elon Musk’s xAI, designed to push the limits of reasoning and natural language understanding. Built on the powerful Colossus supercomputer, it processes multimodal inputs including text and images, with upcoming support for video. The model delivers exceptional accuracy in scientific, technical, and linguistic tasks. Its architecture enables complex reasoning and nuanced response generation that rivals the best AI systems in the world. Enhanced moderation ensures more responsible and unbiased outputs than earlier versions. Grok 4.1 is a breakthrough in creating AI that can think, interpret, and respond more like a human.
43
VideoPoet
Google
VideoPoet is a simple modeling method that can convert any autoregressive language model or large language model (LLM) into a high-quality video generator. It contains a few simple components. An autoregressive language model learns across video, image, audio, and text modalities to autoregressively predict the next video or audio token in the sequence. A mixture of multimodal generative learning objectives are introduced into the LLM training framework, including text-to-video, text-to-image, image-to-video, video frame continuation, video inpainting and outpainting, video stylization, and video-to-audio. Furthermore, such tasks can be composed together for additional zero-shot capabilities. This simple recipe shows that language models can synthesize and edit videos with a high degree of temporal consistency.
44
Grok 4
xAI
Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
45
Megatron-Turing
NVIDIA
The Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and the most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon the prior state-of-the-art models in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. With the intent of accelerating research on the largest English language model to date and enabling customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service to the MT-NLG model.
46
Mixtral 8x22B
Mistral AI
Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish. It has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K-token context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio within models provided by the community. Mixtral 8x22B is a natural continuation of our open model family. Its sparse activation patterns make it faster than any dense 70B model. Starting Price: Free
47
Qwen3-VL
Alibaba
Qwen3-VL is the newest vision-language model in the Qwen family (by Alibaba Cloud), designed to fuse powerful text understanding/generation with advanced visual and video comprehension into one unified multimodal model. It accepts inputs in mixed modalities, text, images, and video, and handles long, interleaved contexts natively (up to 256K tokens, with extensibility beyond). Qwen3-VL delivers major advances in spatial reasoning, visual perception, and multimodal reasoning; the model architecture incorporates several innovations such as Interleaved-MRoPE (for robust spatio-temporal positional encoding), DeepStack (to leverage multi-level features from its Vision Transformer backbone for refined image-text alignment), and text–timestamp alignment (for precise reasoning over video content and temporal events). These upgrades enable Qwen3-VL to interpret complex scenes, follow dynamic video sequences, read and reason about visual layouts. Starting Price: Free
48
Grok 4 Heavy
xAI
Grok 4 Heavy is the most powerful AI model offered by xAI, designed as a multi-agent system to deliver cutting-edge reasoning and intelligence. Built on the Colossus supercomputer, it achieves a 50% score on the challenging HLE benchmark, outperforming many competitors. This advanced model supports multimodal inputs including text and images, with plans to add video capabilities. Grok 4 Heavy targets power users such as developers, researchers, and technical enthusiasts who require top-tier AI performance. Access is provided through the premium “SuperGrok Heavy” subscription priced at $300 per month. xAI has enhanced moderation and removed problematic system prompts to ensure responsible and ethical AI use.
49
Qwen2.5-Max
Alibaba
Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model developed by the Qwen team, pretrained on over 20 trillion tokens and further refined through Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). In evaluations, it outperforms models like DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro. Qwen2.5-Max is accessible via API through Alibaba Cloud and can be explored interactively on Qwen Chat. Starting Price: Free
50
Amazon Nova
Amazon
Amazon Nova is a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry leading price-performance, available exclusively on Amazon Bedrock. Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are understanding models that accept text, image, or video inputs and generate text output. They provide a broad selection of capability, accuracy, speed, and cost operation points. Amazon Nova Micro is a text only model that delivers the lowest latency responses at very low cost. Amazon Nova Lite is a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Pro’s capabilities, coupled with its industry-leading speed and cost efficiency, makes it a compelling model for almost any task, including video summarization, Q&A, math & more.