Alternatives to Qwen3-Coder-Next

Compare Qwen3-Coder-Next alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Qwen3-Coder-Next in 2026. Compare features, ratings, user reviews, pricing, and more from Qwen3-Coder-Next competitors and alternatives in order to make an informed decision for your business.

  • 1
    MiniMax M2.5
    MiniMax M2.5 is a frontier AI model engineered for real-world productivity across coding, agentic workflows, search, and office tasks. Extensively trained with reinforcement learning in hundreds of thousands of real-world environments, it achieves state-of-the-art performance in benchmarks such as SWE-Bench Verified and BrowseComp. The model demonstrates strong architectural thinking, decomposing complex problems before generating code across more than ten programming languages. M2.5 operates at high throughput speeds of up to 100 tokens per second, enabling faster completion of multi-step tasks. It is optimized for efficient reasoning, reducing token usage and execution time compared to previous versions. With dramatically lower pricing than competing frontier models, it delivers powerful performance at minimal cost. Integrated into MiniMax Agent, M2.5 supports professional-grade office workflows, financial modeling, and autonomous task execution.
  • 2
    GPT-5.3-Codex
    GPT-5.3-Codex is OpenAI’s most advanced agentic coding model, designed to handle complex professional work on a computer. It combines frontier-level coding performance with advanced reasoning and real-world task execution. The model is faster than previous Codex versions and can manage long-running tasks involving research, tools, and deployment. GPT-5.3-Codex supports real-time interaction, allowing users to steer progress without losing context. It excels at software engineering, web development, and terminal-based workflows. Beyond code generation, it assists with debugging, documentation, testing, and analysis. GPT-5.3-Codex acts as an interactive collaborator rather than a single-turn coding tool.
  • 3
    GPT‑5.3‑Codex‑Spark
    GPT-5.3-Codex-Spark is an ultra-fast coding model designed for real-time collaboration inside Codex. Built as a smaller version of GPT-5.3-Codex, it delivers over 1000 tokens per second when served on low-latency Cerebras hardware. The model is optimized for interactive coding tasks, enabling developers to make targeted edits and see results almost instantly. With a 128k context window, Codex-Spark supports substantial project context while maintaining speed. It focuses on lightweight, precise edits and does not automatically run tests unless prompted. Infrastructure upgrades such as persistent WebSocket connections significantly reduce latency across the full request-response pipeline. Released as a research preview for ChatGPT Pro users, Codex-Spark marks the first milestone in OpenAI’s partnership with Cerebras.
  • 4
    Qwen3-Coder
    Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning, scaling test‑case generation for diverse coding challenges, and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) brings Qwen3‑Coder into agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables.
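    A minimal sketch of driving Qwen3-Coder through an OpenAI-compatible SDK with a function-calling tool, configured via environment variables; the base URL, model id, variable names, and the run_tests tool schema are illustrative assumptions rather than values confirmed by this listing:

        import os
        from openai import OpenAI

        # Point the standard OpenAI client at an OpenAI-compatible Qwen3-Coder endpoint.
        client = OpenAI(
            api_key=os.environ["QWEN_API_KEY"],  # assumed variable name
            base_url=os.environ.get(
                "QWEN_BASE_URL",
                "https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
            ),
        )

        # Advertise one tool so the model can exercise its function-calling protocol.
        tools = [{
            "type": "function",
            "function": {
                "name": "run_tests",  # hypothetical tool for the agent to execute
                "description": "Run the project's unit tests and return the output.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }]

        resp = client.chat.completions.create(
            model=os.environ.get("QWEN_MODEL", "qwen3-coder-plus"),  # assumed model id
            messages=[{"role": "user", "content": "The test suite fails on import; propose a fix."}],
            tools=tools,
        )
        msg = resp.choices[0].message
        # The model either answers directly or requests a tool call for the agent to execute.
        print(msg.tool_calls or msg.content)

    An agent loop would execute any requested tool and feed the result back as a tool message; the Qwen Code CLI automates that loop.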
  • 5
    Qwen Code
    Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results on Agentic Coding, Browser‑Use, and Tool‑Use tasks comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) brings Qwen3‑Coder into agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and more.
  • 6
    Qwen2

    Alibaba

    Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
  • 7
    DeepSeek-Coder-V2
    DeepSeek-Coder-V2 is an open source code language model designed to excel in programming and mathematical reasoning tasks. It features a Mixture-of-Experts (MoE) architecture with 236 billion total parameters and 21 billion activated parameters per token, enabling efficient processing and high performance. The model was trained on an extensive dataset of 6 trillion tokens, enhancing its capabilities in code generation and mathematical problem-solving. DeepSeek-Coder-V2 supports over 300 programming languages and has demonstrated superior performance on coding and math benchmarks, surpassing many other open models. It is available in multiple variants, including DeepSeek-Coder-V2-Instruct, optimized for instruction-based tasks; DeepSeek-Coder-V2-Base, suitable for general text generation; and lightweight versions like DeepSeek-Coder-V2-Lite-Base and DeepSeek-Coder-V2-Lite-Instruct, designed for environments with limited computational resources.
  • 8
    Qwen2.5-Coder
    Qwen2.5-Coder-32B-Instruct has become the current SOTA open source code model, matching the coding capabilities of GPT-4o. While demonstrating strong and comprehensive coding abilities, it also possesses good general and mathematical skills. Qwen2.5-Coder now covers six mainstream model sizes to meet the needs of different developers. We explore the practicality of Qwen2.5-Coder in two scenarios, code assistants and artifacts, with examples showcasing its potential applications in real-world settings. As the flagship model of this open source release, Qwen2.5-Coder-32B-Instruct has achieved the best performance among open source models on multiple popular code generation benchmarks and is competitive with GPT-4o. Code repair is an important programming skill, and Qwen2.5-Coder-32B-Instruct can help users fix errors in their code, making programming more efficient.
  • 9
    Qwen3-Max

    Alibaba

    Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there: mixed thinking and non-thinking modes, a “thinking budget” mechanism, and support for dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and performs strongly on coding, multi-step reasoning, and agent benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
  • 10
    Qwen2.5-Max
    Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model developed by the Qwen team, pretrained on over 20 trillion tokens and further refined through Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). In evaluations, it outperforms models like DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro. Qwen2.5-Max is accessible via API through Alibaba Cloud and can be explored interactively on Qwen Chat.
  • 11
    QwQ-32B

    Alibaba

    QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency comes from optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
  • 12
    DeepCoder

    Agentica Project

    DeepCoder is a fully open source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. It is fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, achieving 60.6% accuracy on LiveCodeBench (an 8% improvement over the base model), a performance level that matches proprietary models such as o3-mini (2025-01-31 Low) and o1 while using only 14 billion parameters. It was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources (including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions), each problem requiring a verifiable solution and at least five unit tests to ensure reliability for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
  • 13
    Kimi K2 Thinking

    Moonshot AI

    Kimi K2 Thinking is an advanced open source reasoning model developed by Moonshot AI, designed specifically for long-horizon, multi-step workflows where the system interleaves chain-of-thought processes with tool invocation across hundreds of sequential tasks. The model uses a mixture-of-experts architecture with a total of 1 trillion parameters, yet only about 32 billion parameters are activated per inference pass, optimizing efficiency while maintaining vast capacity. It supports a context window of up to 256,000 tokens, enabling the handling of extremely long inputs and reasoning chains without losing coherence. Native INT4 quantization is built in, which reduces inference latency and memory usage without performance degradation. Kimi K2 Thinking is explicitly built for agentic workflows; it can autonomously call external tools, manage sequential logic steps (typically 200 to 300 tool calls in a single chain), and maintain consistent reasoning.
  • 14
    GPT-5.1-Codex
    GPT-5.1-Codex is a specialized version of the GPT-5.1 model built for software engineering and agentic coding workflows. It is optimized for both interactive development sessions and long-horizon, autonomous execution of complex engineering tasks, such as building projects from scratch, developing features, debugging, performing large-scale refactoring, and code review. It supports tool-use, integrates naturally with developer environments, and adapts reasoning effort dynamically, moving quickly on simple tasks while spending more time on deep ones. The model is described as producing cleaner and higher-quality code outputs compared to general models, with closer adherence to developer instructions and fewer hallucinations. GPT-5.1-Codex is available via the Responses API route (rather than a standard chat API) and comes in variants including “mini” for cost-sensitive usage and “max” for the highest capability.
    Starting Price: $1.25 per 1M input tokens
  • 15
    MiMo-V2-Flash

    Xiaomi Technology

    MiMo-V2-Flash is an open weight large language model developed by Xiaomi based on a Mixture-of-Experts (MoE) architecture that blends high performance with inference efficiency. It has 309 billion total parameters but activates only 15 billion per inference pass, letting it balance reasoning quality and computational efficiency while supporting extremely long contexts for tasks like long-document understanding, code generation, and multi-step agent workflows. It incorporates a hybrid attention mechanism that interleaves sliding-window and global attention layers to reduce memory usage and maintain long-range comprehension, and it uses a Multi-Token Prediction (MTP) design that accelerates inference by predicting multiple tokens per step. MiMo-V2-Flash delivers very fast generation speeds (up to ~150 tokens/second) and is optimized for agentic applications requiring sustained reasoning and multi-turn interactions.
  • 16
    MiniMax-M2.1
    MiniMax-M2.1 is an open-source, agentic large language model designed for advanced coding, tool use, and long-horizon planning. It was released to the community to make high-performance AI agents more transparent, controllable, and accessible. The model is optimized for robustness in software engineering, instruction following, and complex multi-step workflows. MiniMax-M2.1 supports multilingual development and performs strongly across real-world coding scenarios. It is suitable for building autonomous applications that require reasoning, planning, and execution. The model weights are fully open, enabling local deployment and customization. MiniMax-M2.1 represents a major step toward democratizing top-tier agent capabilities.
  • 17
    GLM-4.5
    GLM‑4.5 is Z.ai’s latest flagship model in the GLM family, engineered with 355 billion total parameters (32 billion active) and a companion GLM‑4.5‑Air variant (106 billion total, 12 billion active) to unify advanced reasoning, coding, and agentic capabilities in one architecture. It operates in a “thinking” mode for complex, multi‑step reasoning and tool use, and a “non‑thinking” mode for instant responses, supporting up to a 128K-token context length and native function calling. Available via the Z.ai chat platform and API, with open weights on HuggingFace and ModelScope, GLM‑4.5 ingests diverse inputs to handle general problem solving, common‑sense reasoning, coding from scratch or within existing projects, and end‑to‑end agent workflows such as web browsing and slide generation. Built on a Mixture‑of‑Experts design with loss‑free balance routing, grouped‑query attention, and an MTP layer for speculative decoding, it delivers enterprise‑grade performance.
  • 18
    Qwen3-Max-Thinking
    Qwen3-Max-Thinking is Alibaba’s latest flagship reasoning-enhanced large language model, built as an extension of the Qwen3-Max family and designed to deliver state-of-the-art analytical performance and multi-step reasoning capabilities. It scales up from one of the largest parameter bases in the Qwen ecosystem and incorporates advanced reinforcement learning and adaptive tool integration so the model can leverage search, memory, and code interpreter functions dynamically during inference to address difficult multi-stage tasks with higher accuracy and contextual depth compared with standard generative responses. Qwen3-Max-Thinking introduces a unique Thinking Mode that exposes deliberate, step-by-step reasoning before final outputs, enabling transparency and traceability of logical chains, and can be tuned with configurable “thinking budgets” to balance performance quality with computational cost.
  • 19
    Kimi K2

    Moonshot AI

    Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while its 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines simplify deployment.
  • 20
    Tinker

    Thinking Machines Lab

    Tinker is a training API designed for researchers and developers that allows full control over model fine-tuning while abstracting away the infrastructure complexity. It exposes low-level training primitives that let users build custom training loops, supervision logic, and reinforcement learning flows. It currently supports LoRA fine-tuning on open-weight models across both the Llama and Qwen families, ranging from small models to large mixture-of-experts architectures. Users write Python code to handle data, loss functions, and algorithmic logic; Tinker handles scheduling, resource allocation, distributed training, and failure recovery behind the scenes. The service lets users download model weights at different checkpoints and doesn’t force them to manage the compute environment. Tinker is delivered as a managed offering; training jobs run on Thinking Machines’ internal GPU infrastructure, freeing users from cluster orchestration.
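    Tinker's own client is not reproduced here; as a rough local stand-in, the sketch below shows the kind of LoRA fine-tuning loop, written with Hugging Face transformers and peft over a toy in-memory dataset, that such an API lets users express in plain Python while it takes over scheduling, distribution, and recovery. The model id and hyperparameters are illustrative assumptions.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model

        # A small open-weight model keeps the sketch cheap to run locally.
        model_id = "Qwen/Qwen2.5-0.5B"
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)
        model = get_peft_model(
            model,
            LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
        )

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        examples = ["def add(a, b):\n    return a + b"]  # stand-in for a real dataset

        model.train()
        for text in examples:
            batch = tok(text, return_tensors="pt")
            out = model(**batch, labels=batch["input_ids"])  # causal-LM loss on the example
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

        model.save_pretrained("lora-checkpoint")  # analogous to pulling checkpoint weights from the service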
  • 21
    GLM-5

    Zhipu AI

    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
  • 22
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
  • 23
    Holo2

    H Company

    H Company’s Holo2 model family delivers cost-efficient, high-performance vision-language models tailored for computer-use agents that navigate, localize UI elements, and act across web, desktop, and mobile environments. The series, available in 4B, 8B, and 30B-A3B sizes, builds on their earlier Holo1 and Holo1.5 models, retaining strong UI grounding while significantly enhancing navigation capabilities. Holo2 models use a mixture-of-experts (MoE) architecture that activates only the parameters needed for a given input, optimizing efficiency. Trained on curated localization and agent datasets, they can be deployed as drop-in replacements for their predecessors. They support seamless inference in frameworks compatible with Qwen3-VL models and can be integrated into agentic pipelines like Surfer 2. In benchmark testing, Holo2-30B-A3B achieved 66.1% accuracy on ScreenSpot-Pro and 76.1% on OSWorld-G, leading the UI localization category.
  • 24
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, and code. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. Features of the Qwen-7B series include: high-quality pretraining data, with Qwen-7B pretrained on a self-constructed, large-scale dataset of over 2.2 trillion tokens spanning plain text and code across general and professional domains; and strong performance, outperforming competitors of similar size on a series of benchmark datasets that evaluate natural language understanding, mathematics, coding, and more.
  • 25
    QwQ-Max-Preview
    QwQ-Max-Preview is an advanced AI model built on the Qwen2.5-Max architecture, designed to excel in deep reasoning, mathematical problem-solving, coding, and agent-related tasks. This preview version offers a sneak peek at its capabilities, which include improved performance in a wide range of general-domain tasks and the ability to handle complex workflows. QwQ-Max-Preview is slated for an official open-source release under the Apache 2.0 license, offering further advancements and refinements in its full version. It also paves the way for a more accessible AI ecosystem, with the upcoming launch of the Qwen Chat app and smaller variants of the model like QwQ-32B, aimed at developers seeking local deployment options.
  • 26
    Devstral Small 2
    Devstral Small 2 is the compact, 24 billion-parameter variant of the new coding-focused model family from Mistral AI, released under the permissive Apache 2.0 license to enable both local deployment and API use. Alongside its larger sibling (Devstral 2), this model brings “agentic coding” capabilities to environments with modest compute: it supports a large 256K-token context window, enabling it to understand and make changes across entire codebases. On the standard code-generation benchmark (SWE-Bench Verified), Devstral Small 2 scores around 68.0%, placing it among open-weight models many times its size. Because of its reduced size and efficient design, Devstral Small 2 can run on a single GPU or even CPU-only setups, making it practical for developers, small teams, or hobbyists without access to data-center hardware. Despite its compact footprint, Devstral Small 2 retains key capabilities of larger models; it can reason across multiple files and track dependencies.
  • 27
    Mistral Large 3
    Mistral Large 3 is a next-generation, open multimodal AI model built with a powerful sparse Mixture-of-Experts architecture featuring 41B active parameters out of 675B total. Designed from scratch on NVIDIA H200 GPUs, it delivers frontier-level reasoning, multilingual performance, and advanced image understanding while remaining fully open-weight under the Apache 2.0 license. The model achieves top-tier results on modern instruction benchmarks, positioning it among the strongest permissively licensed foundation models available today. With native support across vLLM, TensorRT-LLM, and major cloud providers, Mistral Large 3 offers exceptional accessibility and performance efficiency. Its design enables enterprise-grade customization, letting teams fine-tune or adapt the model for domain-specific workflows and proprietary applications. Mistral Large 3 represents a major advancement in open AI, offering frontier intelligence without sacrificing transparency or control.
  • 28
    Qwen3

    Alibaba

    Qwen3, the latest iteration of the Qwen family of large language models, introduces groundbreaking features that enhance performance across coding, math, and general capabilities. With models like the Qwen3-235B-A22B and Qwen3-30B-A3B, Qwen3 achieves impressive results compared to top-tier models, thanks to its hybrid thinking modes that allow users to control the balance between deep reasoning and quick responses. The platform supports 119 languages and dialects, making it an ideal choice for global applications. Its pre-training process, which uses 36 trillion tokens, enables robust performance, and advanced reinforcement learning (RL) techniques continue to refine its capabilities. Available on platforms like Hugging Face and ModelScope, Qwen3 offers a powerful tool for developers and researchers working in diverse fields.
  • 29
    GPT-5.1-Codex-Max
    GPT-5.1-Codex-Max is the high-capability variant of the GPT-5.1-Codex series designed specifically for software engineering and agentic code workflows. It builds on the base GPT-5.1 architecture with a focus on long-horizon tasks such as full project generation, large-scale refactoring, and autonomous multi-step bug and test management. It introduces adaptive reasoning, meaning the system dynamically allocates more compute for complex problems and less for simpler ones, to improve efficiency and output quality. It also supports tool use (IDE-integrated workflows, version control, CI/CD pipelines) and offers higher fidelity in code review, debugging, and agentic behavior than general-purpose models. Alongside Max, there are lighter variants such as Codex-Mini for cost-sensitive or scale use-cases. The GPT-5.1-Codex family is available in developer previews, including via integrations like GitHub Copilot.
  • 30
    Qwen2.5-VL-32B
    Qwen2.5-VL-32B is a state-of-the-art AI model designed for multimodal tasks, offering advanced capabilities in both text and image reasoning. It builds upon the earlier Qwen2.5-VL series, improving response quality with more human-like, formatted answers. The model excels in mathematical reasoning, fine-grained image understanding, and complex, multi-step reasoning tasks, such as those found in MathVista and MMMU benchmarks. Its superior performance has been demonstrated in comparison to other models, outperforming the larger Qwen2-VL-72B in certain areas. With improved image parsing and visual logic deduction, Qwen2.5-VL-32B provides a detailed, accurate analysis of images and can generate responses based on complex visual inputs. It has been optimized for both text and image tasks, making it ideal for applications requiring sophisticated reasoning and understanding across different media.
  • 31
    EXAONE Deep
    EXAONE Deep is a series of reasoning-enhanced language models developed by LG AI Research, featuring parameter sizes of 2.4 billion, 7.8 billion, and 32 billion. These models demonstrate superior capabilities in various reasoning tasks, including math and coding benchmarks. Notably, EXAONE Deep 2.4B outperforms other models of comparable size, EXAONE Deep 7.8B surpasses both open-weight models of similar scale and the proprietary reasoning model OpenAI o1-mini, and EXAONE Deep 32B shows competitive performance against leading open-weight models. The repository provides comprehensive documentation covering performance evaluations, quickstart guides for using EXAONE Deep models with Transformers, explanations of quantized EXAONE Deep weights in AWQ and GGUF formats, and instructions for running EXAONE Deep models locally using frameworks like llama.cpp and Ollama.
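    As an example of the local-deployment path mentioned above, a quantized GGUF build can be loaded with llama-cpp-python; the file name and generation settings below are assumptions used to illustrate the pattern:

        from llama_cpp import Llama

        # Path to a downloaded EXAONE Deep GGUF file (assumed name).
        llm = Llama(model_path="EXAONE-Deep-7.8B-Q4_K_M.gguf", n_ctx=4096)

        out = llm(
            "Solve step by step: what is the sum of the first 50 positive integers?",
            max_tokens=512,
            temperature=0.6,
        )
        print(out["choices"][0]["text"])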
  • 32
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including source code in 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
  • 33
    Qwen2-VL

    Alibaba

    Qwen2-VL is the latest version of the vision-language models based on Qwen2 in the Qwen model family. Compared with Qwen-VL, Qwen2-VL offers: SoTA understanding of images of various resolutions and aspect ratios, achieving state-of-the-art performance on visual understanding benchmarks including MathVista, DocVQA, RealWorldQA, and MTVQA; understanding of videos of 20 minutes or longer for high-quality video-based question answering, dialog, and content creation; agent capabilities for operating mobile phones, robots, and other devices, using complex reasoning and decision making to act automatically based on the visual environment and text instructions; and multilingual support, so that besides English and Chinese, Qwen2-VL can understand text in many other languages inside images.
  • 34
    CodeQwen

    Alibaba

    CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based decoder-only language model pretrained on a large amount of code data, with strong code generation capabilities and competitive performance across a series of benchmarks. It supports long-context understanding and generation with a context length of 64K tokens, covers 92 coding languages, and provides excellent performance in text-to-SQL, bug fixing, and more. You can chat with CodeQwen in just a few lines of code with transformers: build the tokenizer and model from the pretrained checkpoint, then call the generate method, using the chat template provided by the tokenizer. We apply the ChatML template for chat models, following our previous practice. The model completes code snippets according to the given prompts, without any additional formatting.
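    A minimal sketch of that transformers workflow, assuming the Qwen/CodeQwen1.5-7B-Chat checkpoint (adjust the model id and generation settings to your environment):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed checkpoint
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        messages = [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a SQL query returning the top 5 customers by total order value."},
        ]
        # The tokenizer ships a ChatML template, so apply_chat_template builds the prompt.
        prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

        output = model.generate(**inputs, max_new_tokens=256)
        # Drop the prompt tokens so only the model's completion is decoded.
        print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))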
  • 35
    Sky-T1

    NovaSky

    Sky-T1-32B-Preview is an open source reasoning model developed by the NovaSky team at UC Berkeley's Sky Computing Lab. It matches the performance of proprietary models like o1-preview on reasoning and coding benchmarks, yet was trained for under $450, showcasing the feasibility of cost-effective, high-level reasoning capabilities. The model was fine-tuned from Qwen2.5-32B-Instruct using a curated dataset of 17,000 examples across diverse domains, including math and coding. The training was completed in 19 hours on eight H100 GPUs with DeepSpeed Zero-3 offloading. All aspects of the project, including data, code, and model weights, are fully open-source, empowering the academic and open-source communities to replicate and enhance the model's performance.
  • 36
    Step 3.5 Flash
    Step 3.5 Flash is an advanced open source foundation language model engineered for frontier reasoning and agentic capabilities with exceptional efficiency, built on a sparse Mixture of Experts (MoE) architecture that selectively activates only about 11 billion of its ~196 billion parameters per token to deliver high-density intelligence and real-time responsiveness. Its 3-way Multi-Token Prediction (MTP-3) enables generation throughput in the hundreds of tokens per second for complex multi-step reasoning chains and task execution, and it supports efficient long contexts with a hybrid sliding window attention approach that reduces computational overhead across large datasets or codebases. It demonstrates robust performance on benchmarks for reasoning, coding, and agentic tasks, rivaling or exceeding many larger proprietary models, and includes a scalable reinforcement learning framework for consistent self-improvement.
  • 37
    Phi-4-reasoning
    Phi-4-reasoning is a 14-billion parameter transformer-based language model optimized for complex reasoning tasks, including math, coding, algorithmic problem solving, and planning. Trained via supervised fine-tuning of Phi-4 on carefully curated "teachable" prompts and reasoning demonstrations generated using o3-mini, it generates detailed reasoning chains that effectively leverage inference-time compute. Phi-4-reasoning incorporates outcome-based reinforcement learning to produce longer reasoning traces. It outperforms significantly larger open-weight models such as DeepSeek-R1-Distill-Llama-70B and approaches the performance levels of the full DeepSeek-R1 model across a wide range of reasoning tasks. Phi-4-reasoning is designed for environments with constrained computing or latency. Fine-tuned with synthetic data generated by DeepSeek-R1, it provides high-quality, step-by-step problem solving.
  • 38
    Qwen

    Alibaba

    Qwen is a powerful, free AI assistant built on the advanced Qwen model series, designed to help anyone with creativity, research, problem-solving, and everyday tasks. While Qwen Chat is the main interface for most users, Qwen itself powers a broad range of intelligent capabilities including image generation, deep research, website creation, advanced reasoning, and context-aware search. Its multimodal intelligence enables Qwen to understand and process text, images, audio, and video simultaneously for richer insights. Qwen is available on web, desktop, and mobile, ensuring seamless access across all devices. For developers, the Qwen API provides OpenAI-compatible endpoints, making integration simple and allowing Qwen’s intelligence to power apps, services, and automation. Whether you're chatting through Qwen Chat or building with the Qwen API, Qwen delivers fast, flexible, and highly capable AI support.
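    Because the endpoints are OpenAI-compatible, an existing OpenAI client can simply be pointed at them; the base URL and model name below are assumptions used to illustrate the pattern, not confirmed values:

        import os
        from openai import OpenAI

        client = OpenAI(
            api_key=os.environ["QWEN_API_KEY"],  # assumed variable name
            base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
        )

        # Stream tokens as they arrive, exactly as with the OpenAI API.
        stream = client.chat.completions.create(
            model="qwen-plus",  # assumed model id
            messages=[{"role": "user", "content": "List three trade-offs between MoE and dense LLMs."}],
            stream=True,
        )
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                print(chunk.choices[0].delta.content, end="", flush=True)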
  • 39
    Grok 4.1 Fast
    Grok 4.1 Fast is the newest xAI model designed to deliver advanced tool-calling capabilities with a massive 2-million-token context window. It excels at complex real-world tasks such as customer support, finance, troubleshooting, and dynamic agent workflows. The model pairs seamlessly with the new Agent Tools API, which enables real-time web search, X search, file retrieval, and secure code execution. This combination gives developers the power to build fully autonomous, production-grade agents that plan, reason, and use tools effectively. Grok 4.1 Fast is trained with long-horizon reinforcement learning, ensuring stable multi-turn accuracy even across extremely long prompts. With its speed, cost-efficiency, and high benchmark scores, it sets a new standard for scalable enterprise-grade AI agents.
  • 40
    Xiaomi MiMo

    Xiaomi Technology

    The Xiaomi MiMo API open platform is a developer-oriented interface for accessing and integrating Xiaomi’s MiMo family of AI models, including reasoning and language models such as MiMo-V2-Flash, into applications and services through standardized APIs and cloud endpoints. It enables developers to build AI-powered features like conversational agents, reasoning workflows, code assistance, and search-augmented tasks without managing model infrastructure themselves. It offers REST-style API access with authentication, request signing, and structured responses, so software can send prompts and receive generated text or processed outputs programmatically, and it supports common operations like text generation, prompt handling, and inference over MiMo models. With documentation and onboarding tools, the open platform lets teams integrate Xiaomi’s latest open source large language models, which leverage Mixture-of-Experts (MoE) architectures.
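    A rough sketch of the kind of REST call such a platform accepts; the URL, payload shape, and header are placeholders (the real platform documents its own endpoints and request-signing scheme), so every name below is an assumption:

        import os
        import requests

        API_BASE = "https://api.example-mimo-platform.com/v1"  # placeholder, not the real base URL

        resp = requests.post(
            f"{API_BASE}/chat/completions",  # hypothetical endpoint path
            headers={"Authorization": f"Bearer {os.environ['MIMO_API_KEY']}"},  # assumed auth header
            json={
                "model": "MiMo-V2-Flash",
                "messages": [{"role": "user", "content": "Draft a regex that matches ISO-8601 dates."}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.json())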
  • 41
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 42
    DeepSWE

    Agentica Project

    DeepSWE is a fully open source, state-of-the-art coding agent built on top of the Qwen3-32B foundation model and trained exclusively via reinforcement learning (RL), without supervised finetuning or distillation from proprietary models. It is developed using rLLM, Agentica’s open source RL framework for language agents. DeepSWE operates as an agent; it interacts with a simulated development environment (via the R2E-Gym environment) using a suite of tools (file editor, search, shell-execution, submit/finish), enabling it to navigate codebases, edit multiple files, compile/run tests, and iteratively produce patches or complete engineering tasks. DeepSWE exhibits emergent behaviors beyond simple code generation; when presented with bugs or feature requests, the agent reasons about edge cases, seeks existing tests in the repository, proposes patches, writes extra tests for regressions, and dynamically adjusts its “thinking” effort.
  • 43
    Devstral 2

    Mistral AI

    Devstral 2 is a next-generation, open source agentic AI model tailored for software engineering: it doesn’t just suggest code snippets, it understands and acts across entire codebases, enabling multi-file edits, bug fixes, refactoring, dependency resolution, and context-aware code generation. The Devstral 2 family includes a large 123-billion-parameter model as well as a smaller 24-billion-parameter variant (“Devstral Small 2”), giving teams flexibility; the larger model excels in heavy-duty coding tasks requiring deep context, while the smaller one can run on more modest hardware. With a vast context window of up to 256 K tokens, Devstral 2 can reason across extensive repositories, track project history, and maintain a consistent understanding of lengthy files, an advantage for complex, real-world projects. The CLI tracks project metadata, Git statuses, and directory structure to give the model context, making “vibe-coding” more powerful.
  • 44
    Claude Sonnet 5
    Claude Sonnet 5 is Anthropic’s latest frontier AI model built for advanced coding, agentic workflows, and intensive computer-based tasks. It is designed to handle long-horizon work while maintaining strong safety and alignment guarantees. Sonnet 5 delivers state-of-the-art results on the SWE-bench Verified benchmark, demonstrating high-level software engineering capabilities. The model also leads performance on OSWorld, showcasing its strength in computer use and complex task execution. It can sustain focus for over 30 hours on multi-step, demanding workflows. Improvements in tool usage, memory handling, and context processing enable deeper reasoning and understanding across domains. These enhancements make Sonnet 5 especially effective for complex, real-world problem solving.
  • 45
    Qwen3-VL

    Alibaba

    Qwen3-VL is the newest vision-language model in the Qwen family (by Alibaba Cloud), designed to fuse powerful text understanding/generation with advanced visual and video comprehension into one unified multimodal model. It accepts inputs in mixed modalities, text, images, and video, and handles long, interleaved contexts natively (up to 256 K tokens, with extensibility beyond). Qwen3-VL delivers major advances in spatial reasoning, visual perception, and multimodal reasoning; the model architecture incorporates several innovations such as Interleaved-MRoPE (for robust spatio-temporal positional encoding), DeepStack (to leverage multi-level features from its Vision Transformer backbone for refined image-text alignment), and text–timestamp alignment (for precise reasoning over video content and temporal events). These upgrades enable Qwen3-VL to interpret complex scenes, follow dynamic video sequences, read and reason about visual layouts.
  • 46
    Qwen2.5-VL

    Alibaba

    Qwen2.5-VL is the latest vision-language model from the Qwen series, representing a significant advancement over its predecessor, Qwen2-VL. This model excels in visual understanding, capable of recognizing a wide array of objects, including text, charts, icons, graphics, and layouts within images. It functions as a visual agent, capable of reasoning and dynamically directing tools, enabling applications such as computer and phone usage. Qwen2.5-VL can comprehend videos exceeding one hour in length and can pinpoint relevant segments within them. Additionally, it accurately localizes objects in images by generating bounding boxes or points and provides stable JSON outputs for coordinates and attributes. The model also supports structured outputs for data like scanned invoices, forms, and tables, benefiting sectors such as finance and commerce. Available in base and instruct versions across 3B, 7B, and 72B sizes, Qwen2.5-VL is accessible through platforms like Hugging Face and ModelScope.
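    The grounding output described above is plain JSON, so downstream code can parse it directly; the keys in this sketch (bbox_2d, label) are an assumed shape modeled on typical grounding responses, not a guaranteed schema:

        import json

        # Example of the kind of detection list the model might return for an invoice image.
        raw = '[{"bbox_2d": [110, 245, 385, 420], "label": "total amount"}]'

        for det in json.loads(raw):
            x1, y1, x2, y2 = det["bbox_2d"]
            print(f"{det['label']}: box=({x1}, {y1})-({x2}, {y2})")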
  • 47
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
  • 48
    Ministral 3

    Mistral AI

    Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, not only text, but also image understanding, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects.
  • 49
    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's Model o1. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture of experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
  • 50
    Athene-V2

    Nexusflow

    Athene-V2 is Nexusflow's latest 72-billion-parameter model suite, fine-tuned from Qwen 2.5 72B, designed to compete with GPT-4o across key capabilities. This suite includes Athene-V2-Chat-72B, a state-of-the-art chat model that matches GPT-4o in multiple benchmarks, excelling in chat helpfulness (Arena-Hard), code completion (ranking #2 on bigcode-bench-hard), mathematics (MATH), and precise long log extraction. Additionally, Athene-V2-Agent-72B balances chat and agent functionalities, offering concise, directive responses and surpassing GPT-4o in Nexus-V2 function calling benchmarks focused on complex enterprise-level use cases. These advancements underscore the industry's shift from merely scaling model sizes to specialized customization, illustrating how targeted post-training processes can finely optimize models for distinct skills and applications.