Alternatives to GLM-4.1V
Compare GLM-4.1V alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to GLM-4.1V in 2025. Compare features, ratings, user reviews, pricing, and more from GLM-4.1V competitors and alternatives in order to make an informed decision for your business.
1
HunyuanOCR
Tencent
Tencent Hunyuan is a large-scale, multimodal AI model family developed by Tencent that spans text, image, video, and 3D modalities, designed for general-purpose AI tasks like content generation, visual reasoning, and business automation. Its model lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image & video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (like hybrid “mamba-transformer” designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports “thinking-on-image”, enabling deep multimodal understanding and reasoning on images, video frames, diagrams, or spatial data. -
2
Hunyuan-Vision-1.5
Tencent
HunyuanVision is a cutting-edge vision-language model developed by Tencent’s Hunyuan team. It uses a mamba-transformer hybrid architecture to deliver strong performance and efficient inference in multimodal reasoning tasks. The version Hunyuan-Vision-1.5 is designed for “thinking on images,” meaning it not only understands vision+language content, but can perform deeper reasoning that involves manipulating or reflecting on image inputs, such as cropping, zooming, pointing, box drawing, or drawing on the image to acquire additional knowledge. It supports a variety of vision tasks (image + video recognition, OCR, diagram understanding), visual reasoning, and even 3D spatial comprehension, all in a unified multilingual framework. The model is built to work seamlessly across languages and tasks and is intended to be open sourced (including checkpoints, technical report, inference support) to encourage the community to experiment and adopt.Starting Price: Free -
3
GLM-4.6V
Zhipu AI
GLM-4.6V is a state-of-the-art open source multimodal vision-language model from the Z.ai (GLM-V) family designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V supports a native context window of up to 128K tokens during training, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action.” This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generation of image-annotated responses). Starting Price: Free
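As a rough illustration of what function calling with visual input looks like in practice, the hedged sketch below sends an image and a tool schema through an OpenAI-compatible chat endpoint using the openai Python SDK. The base_url, the model identifier, and the create_ticket tool are placeholders, not documented GLM-4.6V values.

```python
# Hypothetical sketch: calling a GLM vision model with a tool definition through an
# OpenAI-compatible endpoint. The base_url and model name are assumptions, not
# documented values; substitute whatever your provider actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v4",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "create_ticket",  # hypothetical tool the model may decide to call
        "description": "File a bug ticket for a UI issue found in a screenshot.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "severity": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["title", "severity"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Inspect this screenshot and file a ticket if the layout is broken."},
        ],
    }],
    tools=tools,
)

# If the model decided to act, the tool call arrives as structured arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```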
4
GLM-4.5V-Flash
Zhipu AI
GLM-4.5V-Flash is an open source vision-language model, designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.Starting Price: Free -
5
UI-TARS
ByteDance
UI-TARS is an advanced vision-language model designed for seamless interaction with graphical user interfaces (GUIs) by integrating perception, reasoning, grounding, and memory into a unified system. It processes multimodal inputs, such as text and images, to understand interfaces and execute tasks in real time without predefined workflows. Supporting desktop, mobile, and web platforms, UI-TARS automates complex, multi-step tasks using advanced reasoning and planning. Its use of large-scale datasets enhances generalization and robustness, making it a cutting-edge solution for GUI automation.Starting Price: Free -
6
Qwen3-VL
Alibaba
Qwen3-VL is the newest vision-language model in the Qwen family (by Alibaba Cloud), designed to fuse powerful text understanding/generation with advanced visual and video comprehension into one unified multimodal model. It accepts mixed-modality inputs (text, images, and video) and handles long, interleaved contexts natively (up to 256K tokens, with extensibility beyond). Qwen3-VL delivers major advances in spatial reasoning, visual perception, and multimodal reasoning; the model architecture incorporates several innovations such as Interleaved-MRoPE (for robust spatio-temporal positional encoding), DeepStack (to leverage multi-level features from its Vision Transformer backbone for refined image-text alignment), and text–timestamp alignment (for precise reasoning over video content and temporal events). These upgrades enable Qwen3-VL to interpret complex scenes, follow dynamic video sequences, and read and reason about visual layouts. Starting Price: Free
7
Moondream
Moondream
Moondream is an open source vision language model designed for efficient image understanding across various devices, including servers, PCs, mobile phones, and edge devices. It offers two primary variants, Moondream 2B, a 1.9-billion-parameter model providing robust performance for general-purpose tasks, and Moondream 0.5B, a compact 500-million-parameter model optimized for resource-constrained hardware. Both models support quantization formats like fp16, int8, and int4, allowing for reduced memory usage without significant performance loss. Moondream's capabilities include generating detailed image captions, answering visual queries, performing object detection, and pinpointing specific items within images. Its design emphasizes versatility and accessibility, enabling deployment across a wide range of platforms. Starting Price: Free -
8
Pixtral Large
Mistral AI
Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.Starting Price: Free -
9
PaliGemma 2
Google
PaliGemma 2, the next evolution in tunable vision-language models, builds upon the performant Gemma 2 models, adding the power of vision and making it easier than ever to fine-tune for exceptional performance. With PaliGemma 2, these models can see, understand, and interact with visual input, opening up a world of new possibilities. It offers scalable performance with multiple model sizes (3B, 10B, 28B parameters) and resolutions (224px, 448px, 896px). PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene. Our research demonstrates leading performance in chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation, as detailed in the technical report. Upgrading to PaliGemma 2 is a breeze for existing PaliGemma users. -
10
ModelMatch
ModelMatch
ModelMatch is an online platform that allows users to compare top open source vision-language models for image-understanding tasks without the need for coding. Users can upload up to four images and input specific prompts to receive detailed analyses from multiple models simultaneously. It evaluates models ranging from 1 billion to 12 billion parameters, all of which are open source with commercial licenses. For each model, ModelMatch provides a quality score (1-10) based on the model's performance for the given use case, processing time metrics, and real-time status updates during processing.Starting Price: Free -
11
Florence-2
Microsoft
Florence-2-large is an advanced vision foundation model developed by Microsoft, capable of handling a wide variety of vision and vision-language tasks, such as captioning, object detection, segmentation, and OCR. Built with a sequence-to-sequence architecture, it uses the FLD-5B dataset containing over 5 billion annotations and 126 million images to master multi-task learning. Florence-2-large excels in both zero-shot and fine-tuned settings, providing high-quality results with minimal training. The model supports tasks including detailed captioning, object detection, and dense region captioning, and can process images with text prompts to generate relevant responses. It offers great flexibility by handling diverse vision-related tasks through prompt-based approaches, making it a competitive tool for AI-powered visual tasks. The model is available on Hugging Face with pre-trained weights, enabling users to quickly get started with image processing and task execution. Starting Price: Free
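Since Florence-2 selects its behavior through task-prompt tokens, a minimal inference sketch helps make the prompt-based approach concrete. The code below follows the usage pattern published on the Hugging Face model card; the processor and its post_process_generation helper are loaded via trust_remote_code, so the exact helper names come from that repository and should be verified against the current model card.

```python
# Sketch of Florence-2's prompt-based task interface, modeled on the Hugging Face
# model card; helper names come from the trust_remote_code repo and may change.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
task = "<OD>"  # task-prompt token selects the behavior, here object detection

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# The card's post-processing helper turns the raw string into labels and boxes.
result = processor.post_process_generation(raw, task=task, image_size=image.size)
print(result)
```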
12
Devstral Small 2
Mistral AI
Devstral Small 2 is the compact, 24 billion-parameter variant of the new coding-focused model family from Mistral AI, released under the permissive Apache 2.0 license to enable both local deployment and API use. Alongside its larger sibling (Devstral 2), this model brings “agentic coding” capabilities to environments with modest compute: it supports a large 256K-token context window, enabling it to understand and make changes across entire codebases. On the SWE-Bench Verified software-engineering benchmark, Devstral Small 2 scores around 68.0%, placing it alongside open-weight models many times its size. Because of its reduced size and efficient design, Devstral Small 2 can run on a single GPU or even CPU-only setups, making it practical for developers, small teams, or hobbyists without access to data-center hardware. Despite its compact footprint, Devstral Small 2 retains key capabilities of larger models: it can reason across multiple files and track dependencies. Starting Price: Free
13
Reka Flash 3
Reka
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization. -
14
DeepSeek-VL
DeepSeek
DeepSeek-VL is an open source Vision-Language (VL) model designed for real-world vision and language understanding applications. The approach is structured around three key dimensions. First, the training data is diverse, scalable, and extensively covers real-world scenarios, including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Second, a use-case taxonomy was built from real user scenarios and an instruction-tuning dataset constructed accordingly; fine-tuning with this dataset substantially improves the model's user experience in practical applications. Third, considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024) while maintaining a relatively low computational overhead. Starting Price: Free
15
NVIDIA Cosmos
NVIDIA
NVIDIA Cosmos is a developer-first platform of state-of-the-art generative World Foundation Models (WFMs), advanced video tokenizers, guardrails, and an accelerated data processing and curation pipeline designed to supercharge physical AI development. It enables developers working on autonomous vehicles, robotics, and video analytics AI agents to generate photorealistic, physics-aware synthetic video data, trained on an immense dataset including 20 million hours of real-world and simulated video, to rapidly simulate future scenarios, train world models, and fine-tune custom behaviors. It includes three core WFM types: Cosmos Predict, capable of generating up to 30 seconds of continuous video from multimodal inputs; Cosmos Transfer, which adapts simulations across environments and lighting for versatile domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for planning and decision-making. Starting Price: Free
16
Phi-2
Microsoft
We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models. -
17
GLM-4.5V
Zhipu AI
GLM-4.5V builds on the GLM-4.5-Air foundation, using a Mixture-of-Experts (MoE) architecture with 106 billion total parameters and 12 billion activation parameters. It achieves state-of-the-art performance among open-source VLMs of similar scale across 42 public benchmarks, excelling in image, video, document, and GUI-based tasks. It supports a broad range of multimodal capabilities, including image reasoning (scene understanding, spatial recognition, multi-image analysis), video understanding (segmentation, event recognition), complex chart and long-document parsing, GUI-agent workflows (screen reading, icon recognition, desktop automation), and precise visual grounding (e.g., locating objects and returning bounding boxes). GLM-4.5V also introduces a “Thinking Mode” switch, allowing users to choose between fast responses or deeper reasoning when needed.Starting Price: Free -
18
Qwen Code
Qwen
Qwen3-Coder is an agentic code model available in multiple sizes, led by the 480B-parameter Mixture-of-Experts variant (35B active) that natively supports 256K-token contexts (extendable to 1M) and achieves state-of-the-art results on Agentic Coding, Browser-Use, and Tool-Use tasks comparable to Claude Sonnet 4. Pre-training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5-Coder optimized both coding proficiency and general abilities, while post-training employs large-scale, execution-driven reinforcement learning and long-horizon RL across 20,000 parallel environments to excel on multi-turn software-engineering benchmarks like SWE-Bench Verified without test-time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini Code) unleashes Qwen3-Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and more. Starting Price: Free
19
Magistral
Mistral AI
Magistral is Mistral AI’s first reasoning‑focused language model family, released in two sizes: Magistral Small, a 24 B‑parameter open‑weight model under Apache 2.0 (downloadable on Hugging Face), and Magistral Medium, a more capable enterprise version available via Mistral’s API, Le Chat platform, and major cloud marketplaces. Built for domain‑specific, transparent, multilingual reasoning across tasks like math, physics, structured calculations, programmatic logic, decision trees, and rule‑based systems, Magistral produces chain‑of‑thought outputs in the user’s language that you can follow and verify. This launch marks a shift toward compact yet powerful transparent AI reasoning. Magistral Medium is currently available in preview on Le Chat, the API, SageMaker, WatsonX, Azure AI, and Google Cloud Marketplace. Magistral is ideal for general-purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. -
20
SmolVLM
Hugging Face
SmolVLM-Instruct is a compact, AI-powered multimodal model that combines the capabilities of vision and language processing, designed to handle tasks like image captioning, visual question answering, and multimodal storytelling. It works with both text and image inputs, providing highly efficient results while being optimized for smaller, resource-constrained environments. Built with SmolLM2 as its text decoder and SigLIP as its image encoder, the model offers improved performance for tasks that require integration of both textual and visual information. SmolVLM-Instruct can be fine-tuned for specific applications, offering businesses and developers a versatile tool for creating intelligent, interactive systems that require multimodal inputs.Starting Price: Free -
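To make the description concrete, here is a minimal inference sketch in the style of the published SmolVLM-Instruct model card, using the Transformers chat-template pattern for interleaved image and text input; the model identifier and exact template behavior are assumptions to verify against the card.

```python
# Minimal SmolVLM-Instruct inference sketch based on the published model card;
# identifiers and the chat-template pattern are assumptions if your version differs.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("chart.png")  # placeholder local image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what this chart shows."},
    ],
}]

# Build the multimodal prompt, then run standard generate/decode.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```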
21
Phi-4-mini-flash-reasoning
Microsoft
Phi-4-mini-flash-reasoning is a 3.8 billion‑parameter open model in Microsoft’s Phi family, purpose‑built for edge, mobile, and other resource‑constrained environments where compute, memory, and latency are tightly limited. It introduces the SambaY decoder‑hybrid‑decoder architecture with Gated Memory Units (GMUs) interleaved alongside Mamba state‑space and sliding‑window attention layers, delivering up to 10× higher throughput and a 2–3× reduction in latency compared to its predecessor without sacrificing advanced math and logic reasoning performance. Supporting a 64 K‑token context length and fine‑tuned on high‑quality synthetic data, it excels at long‑context retrieval, reasoning tasks, and real‑time inference, all deployable on a single GPU. Phi-4-mini-flash-reasoning is available today via Azure AI Foundry, NVIDIA API Catalog, and Hugging Face, enabling developers to build fast, scalable, logic‑intensive applications. -
22
Qwen2.5-VL
Alibaba
Qwen2.5-VL is the latest vision-language model from the Qwen series, representing a significant advancement over its predecessor, Qwen2-VL. This model excels in visual understanding, capable of recognizing a wide array of objects, including text, charts, icons, graphics, and layouts within images. It functions as a visual agent, capable of reasoning and dynamically directing tools, enabling applications such as operating computers and phones. Qwen2.5-VL can comprehend videos exceeding one hour in length and can pinpoint relevant segments within them. Additionally, it accurately localizes objects in images by generating bounding boxes or points and provides stable JSON outputs for coordinates and attributes. The model also supports structured outputs for data like scanned invoices, forms, and tables, benefiting sectors such as finance and commerce. Available in base and instruct versions across 3B, 7B, and 72B sizes, Qwen2.5-VL is accessible through platforms like Hugging Face and ModelScope. Starting Price: Free
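The structured JSON grounding described above can be exercised through any OpenAI-compatible deployment of the model, for example a self-hosted vLLM server. The sketch below is illustrative only: the base_url, model identifier, and the bbox_2d field requested in the prompt are placeholders and assumptions rather than guaranteed API behavior.

```python
# Illustrative only: requesting bounding boxes as JSON from a Qwen2.5-VL deployment
# exposed through an OpenAI-compatible endpoint. base_url and model are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed local server

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
            {"type": "text", "text": (
                "Locate every line-item total on this invoice. "
                "Return JSON only: a list of objects with 'label', 'value', and "
                "'bbox_2d' as [x1, y1, x2, y2] pixel coordinates."
            )},
        ],
    }],
)

# The prompt asks for bare JSON; in practice you may need to strip code fences first.
items = json.loads(response.choices[0].message.content)
for item in items:
    print(item["label"], item["value"], item["bbox_2d"])
```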
23
Olmo 3
Ai2
Olmo 3 is a fully open model family spanning 7 billion and 32 billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also full exposure of the model flow: raw training data, intermediate checkpoints, training code, long-context support (65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimization, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning. Starting Price: Free
24
Llama 4 Scout
Meta
Llama 4 Scout is a powerful 17 billion active parameter multimodal AI model that excels in both text and image processing. With an industry-leading context length of 10 million tokens, it outperforms its predecessors, including Llama 3, in tasks such as multi-document summarization and parsing large codebases. Llama 4 Scout is designed to handle complex reasoning tasks while maintaining high efficiency, making it perfect for use cases requiring long-context comprehension and image grounding. It offers cutting-edge performance in image-related tasks and is particularly well-suited for applications requiring both text and visual understanding.Starting Price: Free -
25
Qwen3-Coder
Qwen
Qwen3-Coder is an agentic code model available in multiple sizes, led by the 480B-parameter Mixture-of-Experts variant (35B active) that natively supports 256K-token contexts (extendable to 1M) and achieves state-of-the-art results comparable to Claude Sonnet 4. Pre-training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5-Coder optimized both coding proficiency and general abilities, while post-training employs large-scale, execution-driven reinforcement learning, scaling test-case generation for diverse coding challenges, and long-horizon RL across 20,000 parallel environments to excel on multi-turn software-engineering benchmarks like SWE-Bench Verified without test-time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini Code) unleashes Qwen3-Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables. Starting Price: Free
26
Open Computer Agent
Hugging Face
The Open Computer Agent is a browser-based AI assistant developed by Hugging Face that automates web interactions such as browsing, form-filling, and data retrieval. It leverages vision-language models like Qwen-VL to simulate mouse and keyboard actions, enabling tasks like booking tickets, checking store hours, and finding directions. Operating within a web browser, the agent can locate and interact with webpage elements using their image coordinates. As part of Hugging Face's smolagents project, it emphasizes flexibility and transparency, offering an open-source platform for developers to inspect, modify, and build upon for niche applications. While still in its early stages and facing challenges, the agent represents a new approach to AI as an active digital assistant, capable of performing online tasks without direct user input.Starting Price: Free -
27
Solar Pro 2
Upstage AI
Solar Pro 2 is Upstage’s latest frontier‑scale large language model, designed to power complex tasks and agent‑like workflows across domains such as finance, healthcare, and legal. Packaged in a compact 31 billion‑parameter architecture, it delivers top‑tier multilingual performance, especially in Korean, where it outperforms much larger models on benchmarks like Ko‑MMLU, Hae‑Rae, and Ko‑IFEval, while also excelling in English and Japanese. Beyond superior language understanding and generation, Solar Pro 2 offers next‑level intelligence through an advanced Reasoning Mode that significantly boosts multi‑step task accuracy on challenges ranging from general reasoning (MMLU, MMLU‑Pro, HumanEval) to complex mathematics (Math500, AIME) and software engineering (SWE‑Bench Agentless), achieving problem‑solving efficiency comparable to or exceeding that of models twice its size. Enhanced tool‑use capabilities enable the model to interact seamlessly with external APIs and data sources.Starting Price: $0.1 per 1M tokens -
28
Kimi K2
Moonshot AI
Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while offering a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines. Starting Price: Free
29
Devstral 2
Mistral AI
Devstral 2 is a next-generation, open source agentic AI model tailored for software engineering: it doesn’t just suggest code snippets, it understands and acts across entire codebases, enabling multi-file edits, bug fixes, refactoring, dependency resolution, and context-aware code generation. The Devstral 2 family includes a large 123-billion-parameter model as well as a smaller 24-billion-parameter variant (“Devstral Small 2”), giving teams flexibility; the larger model excels in heavy-duty coding tasks requiring deep context, while the smaller one can run on more modest hardware. With a vast context window of up to 256 K tokens, Devstral 2 can reason across extensive repositories, track project history, and maintain a consistent understanding of lengthy files, an advantage for complex, real-world projects. The CLI tracks project metadata, Git statuses, and directory structure to give the model context, making “vibe-coding” more powerful.Starting Price: Free -
30
Mistral Small
Mistral AI
On September 17, 2024, Mistral AI announced several key updates to enhance the accessibility and performance of their AI offerings. They introduced a free tier on "La Plateforme," their serverless platform for tuning and deploying Mistral models as API endpoints, enabling developers to experiment and prototype at no cost. Additionally, Mistral AI reduced prices across their entire model lineup, with significant cuts such as a 50% reduction for Mistral Nemo and an 80% decrease for Mistral Small and Codestral, making advanced AI more cost-effective for users. The company also unveiled Mistral Small v24.09, a 22-billion-parameter model offering a balance between performance and efficiency, suitable for tasks like translation, summarization, and sentiment analysis. Furthermore, they made Pixtral 12B, a vision-capable model with image understanding capabilities, freely available on "Le Chat," allowing users to analyze and caption images without compromising text-based performance.Starting Price: Free -
31
LLaMA-Factory
hoshi-hiyouga
LLaMA-Factory is an open source platform designed to streamline and enhance the fine-tuning process of over 100 Large Language Models (LLMs) and Vision-Language Models (VLMs). It supports various fine-tuning techniques, including Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), and Prefix-Tuning, allowing users to customize models efficiently. It has demonstrated significant performance improvements; for instance, its LoRA tuning offers up to 3.7 times faster training speeds with better Rouge scores on advertising text generation tasks compared to traditional methods. LLaMA-Factory's architecture is designed for flexibility, supporting a wide range of model architectures and configurations. Users can easily integrate their datasets and utilize the platform's tools to achieve optimized fine-tuning results. Detailed documentation and diverse examples are provided to assist users in navigating the fine-tuning process effectively. Starting Price: Free
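LLaMA-Factory itself is driven by its own configuration files and CLI, but the adapter-style fine-tuning it automates can be illustrated with a minimal, hedged LoRA sketch using the Hugging Face PEFT library. This is not LLaMA-Factory's interface, and the base model name is a placeholder.

```python
# Not LLaMA-Factory's own interface: a bare-bones LoRA setup with Hugging Face PEFT,
# shown only to illustrate the adapter-style fine-tuning the platform automates.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# From here, train with the usual transformers Trainer on your instruction dataset.
```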
32
K2 Think
Institute of Foundation Models
K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite only having 32 billion parameters, it delivers performance comparable to flagship models with many more parameters. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration, and its parameter efficiency makes it a breakthrough in compact architectures for advanced AI reasoning. Starting Price: Free
33
NVIDIA Isaac GR00T
NVIDIA
NVIDIA Isaac GR00T (Generalist Robot 00 Technology) is a research-driven platform for developing general-purpose humanoid robot foundation models and data pipelines. It includes models like Isaac GR00T-N, along with synthetic motion blueprints such as GR00T-Mimic for augmenting demonstrations and GR00T-Dreams for generating novel synthetic trajectories, to accelerate humanoid robotics development. Recently, the open source Isaac GR00T N1 foundation model debuted, featuring a dual-system cognitive architecture: a fast-reacting “System 1” action model and a deliberative, language-enabled “System 2” reasoning model. The updated GR00T N1.5 introduces enhancements such as improved vision-language grounding, better language command following, few-shot adaptability, and new robot embodiment support. Together with tools like Isaac Sim, Lab, and Omniverse, GR00T empowers developers to train, simulate, post-train, and deploy adaptable humanoid agents using both real and synthetic data. Starting Price: Free
34
Holo2
H Company
H Company’s Holo2 model family delivers cost-efficient, high-performance vision-language models tailored for computer-use agents that navigate, localize UI elements, and act across web, desktop, and mobile environments. The series, available in 4 B, 8 B, and 30 B-A3B sizes, builds on their earlier Holo1 and Holo1.5 models, retaining strong UI grounding while significantly enhancing navigation capabilities. Holo2 models use a mixture-of-experts (MoE) architecture, activating only necessary parameters, to optimize efficiency. Trained on curated localization and agent datasets, they can be deployed as drop-in replacements for their predecessors. They support seamless inference in frameworks compatible with Qwen3-VL models and can be integrated into agentic pipelines like Surfer 2. In benchmark testing, Holo2-30B-A3B achieved 66.1% accuracy on ScreenSpot-Pro and 76.1% on OSWorld-G, leading the UI localization category. -
35
LLaVA
LLaVA
LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single 8-A100 node, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively.Starting Price: Free -
36
Qwen3-Max
Alibaba
Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there, including the mix of thinker and non-thinker modes, a “thinking budget” mechanism, and support for dynamic mode switching based on complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and exhibits strong performance on coding, multi-step reasoning, and agent benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions. Starting Price: Free
37
Surfer H
H Company
Surfer H from H Company is an autonomous web-agent platform built to understand and navigate user interfaces like a human by combining three modular models: a policy model that plans tasks, a localizer model that identifies UI elements visually, and a validator model that checks outcomes. The agent works purely through the browser interface with no special API hooks, enabling it to scroll, click, type, and complete real-web tasks such as booking hotels, comparing product deals, or extracting structured information. When paired with H Company’s open-weight vision-language models, Surfer H achieved state-of-the-art performance on the WebVoyager benchmark (92.2% accuracy at around $0.13 per task) and supports deployment locally, via Docker, or on cloud infrastructure. Use cases span web automation, QA testing without brittle scripts, data harvesting, and intelligent workflow agents that interact with the web directly as a human would. Starting Price: $0.13 per task
38
GigaChat 3 Ultra
Sberbank
GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.Starting Price: Free -
39
Phi-4-mini-reasoning
Microsoft
Phi-4-mini-reasoning is a 3.8-billion parameter transformer-based language model optimized for mathematical reasoning and step-by-step problem solving in environments with constrained computing or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, it balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning multiple levels of difficulty from middle school to Ph.D. level, Phi-4-mini-reasoning outperforms its base model on long sentence generation across various evaluations and surpasses larger models like OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1. It features a 128K-token context window and supports function calling, enabling integration with external tools and APIs. Phi-4-mini-reasoning can be quantized using Microsoft Olive or Apple MLX Framework for deployment on edge devices such as IoT, laptops, and mobile devices. -
40
DeepScaleR
Agentica Project
DeepScaleR is a 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases its context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets like AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3 percentage point boost over the base model, and surpasses the performance of the proprietary O1-Preview model despite its much smaller size. It also posts strong results on a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.Starting Price: Free -
41
Llama 4 Maverick
Meta
Llama 4 Maverick is one of the most advanced multimodal AI models from Meta, featuring 17 billion active parameters and 128 experts. It surpasses its competitors like GPT-4o and Gemini 2.0 Flash in a broad range of benchmarks, especially in tasks related to coding, reasoning, and multilingual capabilities. Llama 4 Maverick combines image and text understanding, enabling it to deliver industry-leading results in image-grounding tasks and precise, high-quality output. With its efficient performance at a reduced parameter size, Maverick offers exceptional value, especially in general assistant and chat applications.Starting Price: Free -
42
Grok 4.1
xAI
Grok 4.1 is an advanced AI model developed by Elon Musk’s xAI, designed to push the limits of reasoning and natural language understanding. Built on the powerful Colossus supercomputer, it processes multimodal inputs including text and images, with upcoming support for video. The model delivers exceptional accuracy in scientific, technical, and linguistic tasks. Its architecture enables complex reasoning and nuanced response generation that rivals the best AI systems in the world. Enhanced moderation ensures more responsible and unbiased outputs than earlier versions. Grok 4.1 is a breakthrough in creating AI that can think, interpret, and respond more like a human. -
43
Qwen2.5-VL-32B
Alibaba
Qwen2.5-VL-32B is a state-of-the-art AI model designed for multimodal tasks, offering advanced capabilities in both text and image reasoning. It builds upon the earlier Qwen2.5-VL series, improving response quality with more human-like, formatted answers. The model excels in mathematical reasoning, fine-grained image understanding, and complex, multi-step reasoning tasks, such as those found in MathVista and MMMU benchmarks. Its superior performance has been demonstrated in comparison to other models, outperforming the larger Qwen2-VL-72B in certain areas. With improved image parsing and visual logic deduction, Qwen2.5-VL-32B provides a detailed, accurate analysis of images and can generate responses based on complex visual inputs. It has been optimized for both text and image tasks, making it ideal for applications requiring sophisticated reasoning and understanding across different media. -
44
Nomic Embed
Nomic
Nomic Embed is a suite of open source, high-performance embedding models designed for various applications, including multilingual text, multimodal content, and code. The ecosystem includes models like Nomic Embed Text v2, which utilizes a Mixture-of-Experts (MoE) architecture to support over 100 languages with efficient inference using 305M active parameters. Nomic Embed Text v1.5 offers variable embedding dimensions (64 to 768) through Matryoshka Representation Learning, enabling developers to balance performance and storage needs. For multimodal applications, Nomic Embed Vision v1.5 aligns with the text models to provide a unified latent space for text and image data, facilitating seamless multimodal search. Additionally, Nomic Embed Code delivers state-of-the-art performance on code embedding tasks across multiple programming languages.Starting Price: Free -
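The variable-dimension (Matryoshka) behavior mentioned above can be sketched with sentence-transformers, which exposes a truncate_dim option. The model id and the "search_document:"/"search_query:" task prefixes follow the Nomic model card as commonly described, so treat them as assumptions to verify.

```python
# Sketch of Matryoshka-style dimension truncation with sentence-transformers;
# model id and task prefixes are assumptions drawn from the Nomic model card.
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first 256 of the 768 dimensions; Matryoshka training
# concentrates most of the signal in the leading dimensions.
model = SentenceTransformer(
    "nomic-ai/nomic-embed-text-v1.5",
    trust_remote_code=True,
    truncate_dim=256,
)

docs = ["search_document: Nomic Embed supports variable embedding dimensions."]
query = ["search_query: which embedding models support Matryoshka truncation?"]

doc_emb = model.encode(docs)
query_emb = model.encode(query)
print(doc_emb.shape)  # expected (1, 256)
```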
45
Ministral 3
Mistral AI
Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, not only text, but also image understanding, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects.Starting Price: Free -
46
Phi-4-reasoning
Microsoft
Phi-4-reasoning is a 14-billion parameter transformer-based language model optimized for complex reasoning tasks, including math, coding, algorithmic problem solving, and planning. Trained via supervised fine-tuning of Phi-4 on carefully curated "teachable" prompts and reasoning demonstrations generated using o3-mini, it generates detailed reasoning chains that effectively leverage inference-time compute. Phi-4-reasoning incorporates outcome-based reinforcement learning to produce longer reasoning traces. It outperforms significantly larger open-weight models such as DeepSeek-R1-Distill-Llama-70B and approaches the performance levels of the full DeepSeek-R1 model across a wide range of reasoning tasks. Phi-4-reasoning is designed for environments with constrained computing or latency. Fine-tuned with synthetic data generated by DeepSeek-R1, it provides high-quality, step-by-step problem solving. -
47
Amazon Nova 2 Lite
Amazon
Nova 2 Lite is a lightweight, high-speed reasoning model designed to handle everyday AI workloads across text, images, and video. It can generate clear, context-aware responses and lets users fine-tune how much internal reasoning the model performs before producing an answer. This adjustable “thinking depth” gives teams the flexibility to choose faster replies or more detailed problem-solving depending on the task. It stands out for customer service bots, automated document handling, and general business workflow support. Nova 2 Lite delivers strong performance across standard evaluation tests. It performs on par with or better than comparable compact models in most benchmark categories, demonstrating reliable comprehension and response quality. Its strengths include interpreting complex documents, pulling accurate insights from video content, generating usable code, and delivering grounded answers based on provided information. -
48
Gemini 2.5 Pro Deep Think
Google
Gemini 2.5 Pro Deep Think is a cutting-edge AI model designed to enhance the reasoning capabilities of machine learning models, offering improved performance and accuracy. This advanced version of the Gemini 2.5 series incorporates a feature called "Deep Think," allowing the model to reason through its thoughts before responding. It excels in coding, handling complex prompts, and multimodal tasks, offering smarter, more efficient execution. Whether for coding tasks, visual reasoning, or handling long-context input, Gemini 2.5 Pro Deep Think provides unparalleled performance. It also introduces features like native audio for more expressive conversations and optimizations that make it faster and more accurate than previous versions. -
49
Hugging Face Transformers
Hugging Face
Transformers is a library of pretrained natural language processing, computer vision, audio, and multimodal models for inference and training. Use Transformers to train models on your data, build inference applications, and generate text with large language models. Explore the Hugging Face Hub today to find a model and use Transformers to help you get started right away. It provides a simple and optimized inference class (pipeline) for many machine learning tasks such as text generation, image segmentation, automatic speech recognition, and document question answering; a comprehensive Trainer that supports features such as mixed precision, torch.compile, FlashAttention, and distributed training for PyTorch models; and fast text generation with large language models and vision-language models. Every model is implemented from only three main classes (configuration, model, and preprocessor) and can be quickly used for inference or training. Starting Price: $9 per month
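For a concrete feel of the two main entry points named above, the short sketch below uses the documented pipeline() class for inference and notes the Trainer in comments; the specific model checkpoints are just common public examples, not recommendations.

```python
# Two of the entry points described above: the pipeline() inference class and,
# in comments, the Trainer used for fine-tuning PyTorch models.
from transformers import pipeline

# pipeline() wires up the preprocessor, model, and postprocessing for a task.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("https://example.com/photo.jpg"))

generator = pipeline("text-generation", model="distilgpt2")
print(generator("Vision-language models are", max_new_tokens=20)[0]["generated_text"])

# Training goes through the Trainer class, roughly:
# from transformers import Trainer, TrainingArguments
# trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"),
#                   train_dataset=train_ds)
# trainer.train()
```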
50
Kimi K2 Thinking
Moonshot AI
Kimi K2 Thinking is an advanced open source reasoning model developed by Moonshot AI, designed specifically for long-horizon, multi-step workflows where the system interleaves chain-of-thought processes with tool invocation across hundreds of sequential tasks. The model uses a mixture-of-experts architecture with a total of 1 trillion parameters, yet only about 32 billion parameters are activated per inference pass, optimizing efficiency while maintaining vast capacity. It supports a context window of up to 256,000 tokens, enabling the handling of extremely long inputs and reasoning chains without losing coherence. Native INT4 quantization is built in, which reduces inference latency and memory usage without performance degradation. Kimi K2 Thinking is explicitly built for agentic workflows; it can autonomously call external tools, manage sequential logic steps (typically 200 to 300 tool calls in a single chain), and maintain consistent reasoning. Starting Price: Free