Alternatives to Composer 1

Compare Composer 1 alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Composer 1 in 2026. Compare features, ratings, user reviews, pricing, and more from Composer 1 competitors and alternatives in order to make an informed decision for your business.

  • 1
    Composer 1.5
    Composer 1.5 is the latest agentic coding model from Cursor that balances speed and intelligence for everyday code tasks by scaling reinforcement learning approximately 20x more than its predecessor, enabling stronger performance on real-world programming challenges. It’s designed as a “thinking model” that generates internal reasoning tokens to analyze a user’s codebase and plan next steps, responding quickly to simple problems and engaging deeper reasoning on complex ones, while remaining interactive and fast for daily development workflows. To handle long-running tasks, Composer 1.5 introduces self-summarization, allowing the model to compress and carry forward context when it reaches context limits, which helps maintain accuracy across varying input lengths. Internal benchmarks show it surpasses Composer 1 in coding tasks, especially on more difficult issues, making it more capable for interactive use within Cursor’s environment.
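The self-summarization behavior described above can be sketched in a few lines. This is a hypothetical illustration of the general technique only, not Cursor's implementation: `count_tokens` and `summarize` are crude stand-ins (a real system would use the model's tokenizer and ask the model itself to produce the summary).

```python
# Hypothetical sketch of self-summarization: when a transcript exceeds
# a token budget, older turns are folded into one summary turn that is
# carried forward. All helpers here are illustrative stand-ins.

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: one token per whitespace-separated word.
    return len(text.split())

def summarize(turns: list[str]) -> str:
    # Stand-in summary: keep the first five words of each turn.
    return " | ".join(" ".join(t.split()[:5]) for t in turns)

def compact_context(history: list[str], budget: int) -> list[str]:
    """Fold older turns into one summary turn once `history` exceeds `budget` tokens."""
    if sum(count_tokens(t) for t in history) <= budget:
        return history
    recent, older = history[-2:], history[:-2]
    if not older:  # nothing old enough to fold away
        return history
    return ["[summary] " + summarize(older)] + recent
```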
  • 2
    DeepSeek-V4-Pro
    DeepSeek-V4-Pro is a large-scale Mixture-of-Experts (MoE) language model designed for advanced reasoning, coding, and long-context understanding. It features 1.6 trillion total parameters with 49 billion activated parameters, enabling high performance while maintaining efficiency. The model supports an exceptionally large context window of up to one million tokens, allowing it to process extensive documents and workflows. It uses a hybrid attention architecture to optimize long-context performance and reduce computational cost. DeepSeek-V4-Pro is trained on over 32 trillion tokens, improving its knowledge and reasoning capabilities. It also includes advanced optimization techniques for stability and faster convergence during training. The model supports multiple reasoning modes, allowing users to balance speed and accuracy based on their needs. Overall, it provides a powerful open-source solution for complex AI tasks and large-scale applications.
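The total-vs-activated parameter split quoted above is the defining MoE trade-off; a quick back-of-envelope calculation makes the efficiency claim concrete. Only the two headline numbers come from the description; everything else is derived.

```python
# Illustrative arithmetic for the figures quoted above: 1.6T total
# parameters with 49B activated per token.
total_params = 1.6e12
active_params = 49e9

active_fraction = active_params / total_params   # fraction of weights touched per token
sparsity_ratio = total_params / active_params    # total-to-active ratio

print(f"fraction of weights touched per token: {active_fraction:.1%}")  # 3.1%
print(f"total-to-active ratio: {sparsity_ratio:.0f}x")  # 33x
```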
  • 3
    Composer 2
    Composer 2 is an advanced AI coding model integrated into Cursor, designed to deliver high-level programming performance at a cost-efficient price. It is trained on long-horizon coding tasks, enabling it to solve complex problems that require multiple steps and actions. The model demonstrates strong improvements across key benchmarks, including Terminal-Bench and SWE-bench Multilingual. With enhanced intelligence and efficiency, it provides faster and more accurate code generation. Composer 2 combines strong performance with affordable pricing, making it accessible for developers and teams.
    Starting Price: $0.50/M input
  • 4
    Grok Code Fast 1
    Grok Code Fast 1 is a high-speed, economical reasoning model designed specifically for agentic coding workflows. Unlike traditional models that can feel slow in tool-based loops, it delivers near-instant responses, excelling in everyday software development tasks. Built from scratch with a programming-rich corpus and refined on real-world pull requests, it supports languages like TypeScript, Python, Java, Rust, C++, and Go. Developers can use it for everything from zero-to-one project building to precise bug fixes and codebase Q&A. With optimized inference and caching techniques, it achieves impressive responsiveness and a 90%+ cache hit rate when integrated with partners like GitHub Copilot, Cursor, and Cline. Offered at just $0.20 per million input tokens and $1.50 per million output tokens, Grok Code Fast 1 strikes a strong balance between speed, performance, and affordability.
    Starting Price: $0.20 per million input tokens
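At the listed rates, per-request cost is simple arithmetic. The request sizes below are made-up examples, not measured workloads; where a provider discounts cached input tokens, the quoted 90%+ cache hit rate would lower the effective input cost further, but no cached-token rate is listed here.

```python
# Quick cost check at the listed rates: $0.20 per million input tokens
# and $1.50 per million output tokens.
PRICE_IN = 0.20 / 1_000_000   # dollars per input token
PRICE_OUT = 1.50 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed per-token rates."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# e.g. an agentic turn that sends 200k tokens of context and gets
# back 5k tokens of edits:
print(f"${request_cost(200_000, 5_000):.4f}")  # $0.0475
```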
  • 5
    DeepSeek-V4-Flash
    DeepSeek-V4-Flash is a high-efficiency Mixture-of-Experts (MoE) language model designed for fast, scalable reasoning and text generation. It features 284 billion total parameters with 13 billion activated parameters, delivering strong performance while optimizing computational cost. The model supports an extensive context window of up to one million tokens, enabling it to process large documents and complex workflows with ease. Its hybrid attention architecture enhances long-context efficiency by reducing memory and compute requirements. Trained on over 32 trillion tokens, DeepSeek-V4-Flash demonstrates solid capabilities across knowledge, reasoning, and coding tasks. It is designed for scenarios where speed and efficiency are critical, offering a balance between performance and resource usage. The model also supports multiple reasoning modes, allowing users to adjust between faster outputs and deeper analysis.
  • 6
    Devstral 2
    Mistral AI
    Devstral 2 is a next-generation, open source agentic AI model tailored for software engineering: it doesn’t just suggest code snippets, it understands and acts across entire codebases, enabling multi-file edits, bug fixes, refactoring, dependency resolution, and context-aware code generation. The Devstral 2 family includes a large 123-billion-parameter model as well as a smaller 24-billion-parameter variant (“Devstral Small 2”), giving teams flexibility; the larger model excels in heavy-duty coding tasks requiring deep context, while the smaller one can run on more modest hardware. With a vast context window of up to 256K tokens, Devstral 2 can reason across extensive repositories, track project history, and maintain a consistent understanding of lengthy files, an advantage for complex, real-world projects. The CLI tracks project metadata, Git statuses, and directory structure to give the model context, making “vibe-coding” more powerful.
  • 7
    Qwen3.5-35B-A3B
    Qwen3.5-35B-A3B is part of the Qwen3.5 “Medium” model series, designed as a highly efficient, multimodal foundation model that balances strong reasoning ability with practical deployment requirements. It uses a Mixture-of-Experts (MoE) architecture with 35 billion total parameters but activates only about 3 billion per token, allowing it to deliver performance comparable to much larger models while significantly reducing computational cost. The model integrates a hybrid attention mechanism that combines linear attention with standard attention layers, enabling efficient long-context processing and improved scalability for complex tasks. As a native vision-language model, it can process both text and visual inputs, supporting use cases such as multimodal reasoning, coding, and agent-based workflows. It is designed to function as a general-purpose “AI agent,” capable of planning, tool use, and structured problem solving rather than just conversational responses.
  • 8
    Qwen3-Coder
    Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning, scaling test‑case generation for diverse coding challenges, and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini Code) unleashes Qwen3‑Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables.
  • 9
    Yi-Lightning
    Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It boasts a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. This model has excelled in various domains, achieving top rankings in categories like Chinese, math, coding, and hard prompts on Chatbot Arena, where it secured the 6th position overall and 9th in style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
  • 10
    GPT-4.1
    OpenAI
    GPT-4.1 is an advanced AI model from OpenAI, designed to enhance performance across key tasks such as coding, instruction following, and long-context comprehension. With a large context window of up to 1 million tokens, GPT-4.1 can process and understand extensive datasets, making it ideal for tasks like software development, document analysis, and AI agent workflows. Available through the API, GPT-4.1 offers significant improvements over previous models, excelling at real-world applications where efficiency and accuracy are crucial.
    Starting Price: $2 per 1M tokens (input)
  • 11
    Qwen3.5
    Alibaba
    Qwen3.5 is a next-generation open-weight multimodal large language model designed to power native vision-language agents. The flagship release, Qwen3.5-397B-A17B, combines a hybrid linear attention architecture with sparse mixture-of-experts, activating only 17 billion parameters per forward pass out of 397 billion total to maximize efficiency. It delivers strong benchmark performance across reasoning, coding, multilingual understanding, visual reasoning, and agent-based tasks. The model expands language support from 119 to 201 languages and dialects while introducing a 1M-token context window in its hosted version, Qwen3.5-Plus. Built for multimodal tasks, it processes text, images, and video with advanced spatial reasoning and tool integration. Qwen3.5 also incorporates scalable reinforcement learning environments to improve general agent capabilities. Designed for developers and enterprises, it enables efficient, tool-augmented, multimodal AI workflows.
  • 12
    PlayerZero
    PlayerZero is an AI-driven predictive quality platform designed to help engineering, QA, and support teams monitor, diagnose, and resolve software issues before they impact customers by deeply understanding complex codebases and simulating how code will behave in real-world conditions. It applies proprietary AI models and semantic graph analysis to integrate signals from source code, runtime telemetry, customer tickets, documentation, and historical data, giving users unified, context-rich insights into what their software does, why it’s broken, and how to fix or improve it. Its agentic debugging agents can autonomously triage, root cause analyze, and even suggest fixes for issues, reducing escalations and accelerating resolution times while preserving audit trails, governance, and approval workflows. PlayerZero also includes CodeSim, an agentic code simulation capability powered by the Sim-1 model that predicts the impact of changes.
  • 13
    MiMo-V2.5-Pro
    Xiaomi Technology
    Xiaomi MiMo-V2.5-Pro is an advanced open-source AI model designed to handle complex, long-horizon tasks with strong agentic capabilities. It features a Mixture-of-Experts architecture with over one trillion parameters and a large context window of up to one million tokens. The model is built to perform sophisticated reasoning, coding, and problem-solving across extended workflows. It demonstrates high performance on benchmark tests related to software engineering, reasoning, and general intelligence. MiMo-V2.5-Pro can autonomously complete complex projects, such as building full software systems or optimizing engineering designs. It uses hybrid attention mechanisms to balance efficiency and performance across long contexts. The model is also optimized for token efficiency, reducing computational cost while maintaining strong results. By combining scalability, efficiency, and advanced reasoning, MiMo-V2.5-Pro represents a major step forward in open-source AI models.
  • 14
    MiMo-V2.5
    Xiaomi Technology
    Xiaomi MiMo-V2.5 is an advanced open-source AI model designed to combine strong agentic capabilities with native multimodal understanding. It can process and reason across text, images, and audio within a single unified system. The model uses a sparse Mixture-of-Experts architecture with hundreds of billions of parameters for efficient performance. It supports an extended context window of up to one million tokens, enabling long and complex workflows. MiMo-V2.5 is built to handle tasks such as coding, reasoning, and multimodal analysis with high accuracy. It incorporates dedicated visual and audio encoders to enhance perception and cross-modal reasoning. The model demonstrates strong benchmark performance across coding, reasoning, and multimodal tasks. By combining multimodality, efficiency, and agentic intelligence, MiMo-V2.5 advances the capabilities of open-source AI systems.
  • 15
    Cursor
    Cursor is an advanced AI-powered IDE designed to make developers exponentially more productive. Built with deep codebase understanding and intelligent automation, it combines natural language interaction with precise, context-aware editing tools. Its Agent feature acts as a human-AI coding partner capable of planning and executing entire development workflows, while the Tab model delivers remarkably accurate autocompletion and targeted suggestions. Cursor seamlessly integrates across environments—from GitHub and Slack to the command line—ensuring AI assistance is available wherever you code. Supporting leading models like GPT-5, Claude Sonnet, Gemini Pro, and Grok Code, it gives developers full control over autonomy and model selection. Fast, intuitive, and built for serious builders, Cursor is redefining what an IDE can be.
    Starting Price: $20 per month
  • 16
    Qwen2.5-Max
    Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model developed by the Qwen team, pretrained on over 20 trillion tokens and further refined through Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). In evaluations, it outperforms models like DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro. Qwen2.5-Max is accessible via API through Alibaba Cloud and can be explored interactively on Qwen Chat.
  • 17
    GLM-5
    Zhipu AI
    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
  • 18
    SWE-1.5
    Cognition
    SWE-1.5 is the latest agent-model release by Cognition, purpose-built for software engineering and characterized by a “frontier-size” architecture comprising hundreds of billions of parameters and optimized end-to-end (model, inference engine, and agent harness) for both speed and intelligence. It achieves near-state-of-the-art coding performance and sets a new benchmark in latency, delivering inference speeds up to 950 tokens/second, roughly six times faster than Claude Haiku 4.5 and thirteen times faster than Claude Sonnet 4.5. The model was trained using extensive reinforcement learning in realistic coding-agent environments with multi-turn workflows, unit tests, quality rubrics, and browser-based agentic execution; it also benefits from tightly integrated software tooling and high-throughput hardware (including thousands of GB200 NVL72 chips and a custom hypervisor infrastructure).
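The speed multiples quoted above imply baseline speeds that are easy to back out. The derived figures below follow only from the numbers in the description, not from any published benchmark.

```python
# Back-of-envelope check on the quoted speed claims: ~950 tokens/second,
# roughly 6x Haiku 4.5 and 13x Sonnet 4.5. The baseline speeds are
# implied by those multiples, not independently published figures.
swe_15_tps = 950
implied_haiku_tps = swe_15_tps / 6    # ≈ 158 tokens/s
implied_sonnet_tps = swe_15_tps / 13  # ≈ 73 tokens/s

# Time to stream a 2,000-token diff at each speed:
for name, tps in [("SWE-1.5", swe_15_tps),
                  ("Haiku 4.5 (implied)", implied_haiku_tps),
                  ("Sonnet 4.5 (implied)", implied_sonnet_tps)]:
    print(f"{name}: {2000 / tps:.1f}s")
```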
  • 19
    Kimi K2.6
    Moonshot AI
    Kimi K2.6 is a next-generation agentic AI model developed by Moonshot AI, designed to push forward real-world execution, coding, and multi-step reasoning beyond earlier K2 and K2.5 versions. It builds on a Mixture-of-Experts architecture and the multimodal, agent-first foundation of the Kimi series, combining language understanding, coding, and tool use into a single system capable of planning and executing complex workflows. It introduces deeper reasoning capabilities and significantly improved agent planning, allowing it to break down tasks, coordinate tools, and handle multi-file or multi-step problems with greater accuracy and efficiency. It supports advanced tool calling with high reliability, enabling integration with external systems such as web search or APIs, and includes built-in validation mechanisms to ensure correct execution formats.
  • 20
    Qwen2
    Alibaba
    Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
  • 21
    GLM-5.1
    Zhipu AI
    GLM-5.1 is the latest iteration of Z.ai’s GLM series, designed as a frontier-level, agent-oriented AI model optimized for coding, reasoning, and long-horizon workflows. It builds on the GLM-5 architecture, which uses a Mixture-of-Experts (MoE) design to deliver high performance while keeping inference costs efficient, and is part of a broader push toward open-weight, developer-accessible models. A core focus of GLM-5.1 is enabling agentic behavior, meaning it can plan, execute, and iterate across multi-step tasks rather than simply responding to single prompts. It is specifically designed to handle complex workflows such as debugging code, navigating repositories, and executing chained operations with sustained context. Compared to earlier models, GLM-5.1 improves reliability in long interactions, maintaining coherence across extended sessions and reducing breakdowns in multi-step reasoning.
  • 22
    Qwen3-Coder-Next
    Qwen3-Coder-Next is an open-weight language model specifically designed for coding agents and local development that delivers advanced coding reasoning, complex tool usage, and robust performance on long-horizon programming tasks with high efficiency, using a mixture-of-experts architecture that balances powerful capabilities with resource-friendly operation. It provides enhanced agentic coding abilities that help software developers, AI system builders, and automated coding workflows generate, debug, and reason about code with deep contextual understanding while recovering from execution errors, making it well-suited for autonomous coding agents and development-oriented applications. By achieving strong performance comparable to much larger parameter models while requiring fewer active parameters, Qwen3-Coder-Next enables cost-effective deployment for dynamic and complex programming workloads in research and production environments.
  • 23
    Kimi K2
    Moonshot AI
    Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while offering a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines.
  • 24
    Claude Sonnet 4.6
    Claude Sonnet 4.6 is Anthropic’s most advanced Sonnet model to date, delivering significant upgrades across coding, computer use, long-context reasoning, agent planning, and knowledge work. It introduces a 1 million token context window in beta, allowing users to analyze entire codebases, lengthy contracts, or large research collections in a single session. The model demonstrates major improvements in instruction following, consistency, and reduced hallucinations compared to previous Sonnet versions. In developer testing, users strongly preferred Sonnet 4.6 over Sonnet 4.5 and even favored it over Opus 4.5 in many coding scenarios. Its enhanced computer-use capabilities enable it to interact with real software interfaces similarly to a human, improving automation for legacy systems without APIs. Sonnet 4.6 also performs strongly on major benchmarks, approaching Opus-level intelligence at a more accessible price point.
  • 25
    DeepCoder
    Agentica Project
    DeepCoder is a fully open source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. It is fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, achieving a 60.6% accuracy on LiveCodeBench (representing an 8% improvement over the base), a performance level that matches that of proprietary models such as o3-mini (2025-01-31 Low) and o1 while using only 14 billion parameters. It was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources (including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions), each problem requiring a verifiable solution and at least five unit tests to ensure reliability for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
  • 26
    Gemini 3 Pro
    Gemini 3 Pro is Google’s most advanced multimodal AI model, built for developers who want to bring ideas to life with intelligence, precision, and creativity. It delivers breakthrough performance across reasoning, coding, and multimodal understanding—surpassing Gemini 2.5 Pro in both speed and capability. The model excels in agentic workflows, enabling autonomous coding, debugging, and refactoring across entire projects with long-context awareness. With superior performance in image, video, and spatial reasoning, Gemini 3 Pro powers next-generation applications in development, robotics, XR, and document intelligence. Developers can access it through the Gemini API, Google AI Studio, or Gemini Enterprise Agent Platform, integrating seamlessly into existing tools and IDEs. Whether generating code, analyzing visuals, or building interactive apps from a single prompt, Gemini 3 Pro represents the future of intelligent, multimodal AI development.
    Starting Price: $19.99/month
  • 27
    GPT-5.5 Thinking
    GPT-5.5 Thinking is an advanced AI capability from OpenAI designed to handle complex, multi-step tasks with greater intelligence and autonomy. It enables users to provide high-level instructions while the model plans, executes, and refines tasks independently. The system excels in areas such as coding, research, data analysis, and document creation. It can navigate across tools, check its own work, and adapt to ambiguous or incomplete inputs. GPT-5.5 Thinking is optimized for both speed and efficiency, delivering high-quality outputs while using fewer computational resources. It also supports long-context understanding, allowing it to process large datasets and extended workflows. Strong safeguards are built in to ensure responsible and secure usage. Overall, it represents a shift toward more autonomous, agent-like AI that can complete real-world tasks end-to-end.
  • 28
    DeepSeek-V2
    DeepSeek
    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
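A 93.3% KV-cache reduction is easiest to appreciate at long context. The layer and head dimensions below are illustrative placeholders, not DeepSeek-V2's actual configuration; only the 93.3% figure comes from the description above.

```python
# What a 93.3% KV-cache reduction means at 128K context, using a
# hypothetical dense-attention configuration for the baseline.
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_val=2):
    # Keys + values cached for every layer, fp16 (2 bytes per value).
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_val

baseline = kv_cache_bytes(seq_len=128_000, n_layers=60, n_kv_heads=64, head_dim=128)
compressed = baseline * (1 - 0.933)

print(f"uncompressed: {baseline / 2**30:.1f} GiB")             # 234.4 GiB
print(f"after 93.3% reduction: {compressed / 2**30:.1f} GiB")  # 15.7 GiB
```

The point of the exercise: at 128K tokens a dense KV cache at these (assumed) dimensions would not fit on a single accelerator, while the compressed cache comfortably does.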
  • 29
    Qwen3.6
    Alibaba
    Qwen3.6 is a large language model developed by Alibaba as part of its Qwen AI model family, designed for real-world applications and advanced reasoning tasks. It focuses on improving stability, usability, and performance compared to earlier versions. The model supports multimodal capabilities, allowing it to process and reason across text, images, and other data types. Qwen3.6 is particularly strong in coding and developer workflows, offering improved accuracy for complex programming tasks. It uses a mixture-of-experts architecture, enabling efficient performance while maintaining large-scale model capabilities. The model is designed to be deployable in production environments, including enterprise and cloud-based systems. It can be integrated into applications or run locally using open-weight variants. Overall, Qwen3.6 delivers a powerful, efficient, and versatile AI solution for modern use cases.
  • 30
    MiniMax M2.7
    MiniMax M2.7 is an advanced AI model designed to enhance real-world productivity across coding, search, and office workflows. It is trained with reinforcement learning across numerous real-world environments, enabling it to handle complex, multi-step tasks effectively. The model excels in problem-solving by breaking down challenges before generating solutions across multiple programming languages. It delivers high-speed performance with rapid token generation, allowing tasks to be completed efficiently. With optimized reasoning and cost-effective pricing, it provides powerful capabilities while minimizing resource usage. It also achieves strong performance in software engineering benchmarks, reducing incident response time and improving development efficiency. Additionally, it supports advanced agentic workflows and professional-grade office tasks, making it highly versatile for modern work environments.
  • 31
    Mercury Edit 2
    Mercury Edit 2 is part of Inception Labs’ Mercury family of AI models, designed to perform high-speed reasoning, coding, and editing tasks using a fundamentally different architecture from traditional large language models. It builds on Mercury 2, a diffusion-based reasoning model that generates and refines entire outputs in parallel rather than producing text token by token, enabling significantly faster performance and more responsive editing workflows. Instead of acting like a sequential “typewriter,” the system behaves more like an editor, starting with a rough draft and iteratively improving it across multiple tokens at once, which allows for real-time interaction and rapid iteration in tasks such as code editing, content generation, and agent-based workflows. This architecture delivers throughput of up to around 1,000 tokens per second, making it several times faster than conventional models while maintaining competitive reasoning quality across benchmarks.
    Starting Price: $0.25 per 1M input tokens
  • 32
    DeepSeek-V4

    DeepSeek-V4

    DeepSeek

    DeepSeek-V4 is a next-generation open-source language model designed for high-performance reasoning, coding, and long-context intelligence. It introduces a powerful architecture with up to one million token context length, enabling seamless handling of large datasets and complex multi-step workflows. The model comes in two variants: DeepSeek-V4-Pro for maximum performance and DeepSeek-V4-Flash for efficiency and speed. DeepSeek-V4-Pro features 1.6 trillion total parameters with 49 billion activated, delivering near state-of-the-art performance comparable to leading closed-source models. It excels in agentic coding, mathematical reasoning, and world knowledge tasks. The model integrates advanced attention mechanisms, including token-wise compression and sparse attention, significantly reducing compute and memory costs. It is also optimized for AI agents, supporting tool use and multi-step workflows.
  • 33
    GLM-5V-Turbo
    GLM-5V-Turbo is a multimodal coding foundation model designed for vision-based coding tasks, capable of natively processing inputs such as images, video, text, and files while producing text outputs. It is optimized for agent workflows, enabling a full loop of understanding environments, planning actions, and executing tasks, and integrates seamlessly with agent frameworks like Claude Code and OpenClaw. It supports long-context interactions with a context length of 200K tokens and up to 128K output tokens, making it suitable for complex, long-horizon tasks. It offers multiple thinking modes for different scenarios, strong vision comprehension across images and video, real-time streaming output for improved interaction, and advanced function-calling capabilities for integrating external tools. It also includes context caching to enhance performance in extended conversations. In practical use, it can reconstruct frontend projects from design mockups.
  • 34
    Mixtral 8x22B
    Mistral AI
    Mixtral 8x22B is Mistral AI’s latest open model, setting a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish, and has strong mathematics and coding capabilities. It is natively capable of function calling; combined with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K-token context window allows precise information recall from large documents. Mistral positions its models as delivering the best performance-to-cost ratio among community models, and Mixtral 8x22B is a natural continuation of that open model family; its sparse activation patterns make it faster than any dense 70B model.
  • 35
    GPT-4.1 mini
    GPT-4.1 mini is a compact version of OpenAI’s powerful GPT-4.1 model, designed to provide high performance while significantly reducing latency and cost. With a smaller size and optimized architecture, GPT-4.1 mini still delivers impressive results in tasks such as coding, instruction following, and long-context processing. It supports up to 1 million tokens of context, making it an efficient solution for applications that require fast responses without sacrificing accuracy or depth.
    Starting Price: $0.40 per 1M tokens (input)
  • 36
    Qwen3.6-Plus
    Qwen3.6-Plus is an advanced AI model developed by Alibaba Cloud, designed to power real-world intelligent agents and complex workflows. It introduces significant improvements in agentic coding, enabling developers to handle everything from frontend development to large-scale codebase management. The model features a massive 1 million token context window, allowing it to process and reason over long and complex inputs. It integrates reasoning, memory, and execution capabilities to deliver highly accurate and reliable results. Qwen3.6-Plus also enhances multimodal capabilities, enabling it to understand and analyze images, videos, and documents. The platform is optimized for real-world applications, including automation, planning, and tool-based workflows. Overall, it provides a powerful foundation for building next-generation AI agents and intelligent systems.
  • 37
    GPT-5.2 Pro
    GPT-5.2 Pro is the highest-capability variant of OpenAI’s latest GPT-5.2 model family, built to deliver professional-grade reasoning, complex task performance, and enhanced accuracy for demanding knowledge work, creative problem-solving, and enterprise-level applications. It builds on the foundational improvements of GPT-5.2, including stronger general intelligence, superior long-context understanding, better factual grounding, and improved tool use. Using more compute and deeper processing, it produces more thoughtful, reliable, and context-rich responses for users with intricate, multi-step requirements. GPT-5.2 Pro is designed to handle challenging workflows such as advanced coding and debugging, deep data analysis, research synthesis, extensive document comprehension, and complex project planning with greater precision and fewer errors than lighter variants.
  • 38
    MiniMax M2.5
    MiniMax M2.5 is a frontier AI model engineered for real-world productivity across coding, agentic workflows, search, and office tasks. Extensively trained with reinforcement learning in hundreds of thousands of real-world environments, it achieves state-of-the-art performance in benchmarks such as SWE-Bench Verified and BrowseComp. The model demonstrates strong architectural thinking, decomposing complex problems before generating code across more than ten programming languages. M2.5 operates at high throughput speeds of up to 100 tokens per second, enabling faster completion of multi-step tasks. It is optimized for efficient reasoning, reducing token usage and execution time compared to previous versions. With dramatically lower pricing than competing frontier models, it delivers powerful performance at minimal cost. Integrated into MiniMax Agent, M2.5 supports professional-grade office workflows, financial modeling, and autonomous task execution.
  • 39
    DeepSeek-Coder-V2
    DeepSeek-Coder-V2 is an open source code language model designed to excel in programming and mathematical reasoning tasks. It features a Mixture-of-Experts (MoE) architecture with 236 billion total parameters and 21 billion activated parameters per token, enabling efficient processing and high performance. The model was trained on an extensive dataset of 6 trillion tokens, enhancing its capabilities in code generation and mathematical problem-solving. DeepSeek-Coder-V2 supports over 300 programming languages and has demonstrated superior performance on coding and mathematics benchmarks, surpassing comparable models. It is available in multiple variants, including DeepSeek-Coder-V2-Instruct, optimized for instruction-based tasks; DeepSeek-Coder-V2-Base, suitable for general text generation; and lightweight versions like DeepSeek-Coder-V2-Lite-Base and DeepSeek-Coder-V2-Lite-Instruct, designed for environments with limited computational resources.
  • 40
    Grok 4.1 Fast
    Grok 4.1 Fast is an xAI model designed to deliver advanced tool-calling capabilities with a massive 2-million-token context window. It excels at complex real-world tasks such as customer support, finance, troubleshooting, and dynamic agent workflows. The model pairs seamlessly with the new Agent Tools API, which enables real-time web search, X search, file retrieval, and secure code execution. This combination gives developers the power to build fully autonomous, production-grade agents that plan, reason, and use tools effectively. Grok 4.1 Fast is trained with long-horizon reinforcement learning, ensuring stable multi-turn accuracy even across extremely long prompts. With its speed, cost-efficiency, and high benchmark scores, it sets a new standard for scalable enterprise-grade AI agents.
  • 41
    Sarvam 105B
    Sarvam-105B is the flagship large language model in Sarvam’s open source model family, designed to deliver high-performance reasoning, multilingual understanding, and agent-based execution within a single scalable system. Built as a Mixture-of-Experts (MoE) model with approximately 105 billion total parameters, of which only a fraction are activated per token, it achieves strong computational efficiency while maintaining high capability across complex tasks. The model is optimized for advanced reasoning, coding, mathematics, and agentic workflows, making it suitable for tasks that require multi-step problem solving and structured outputs rather than simple conversational responses. Sarvam-105B supports long-context processing of up to around 128K tokens, enabling it to handle large documents, extended conversations, and deep analytical queries without losing coherence.
  • 42
    DBRX

    Databricks

    Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B in programming, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in training and inference performance. DBRX advances the state-of-the-art in efficiency among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. Inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in terms of both total and active parameter counts.
  • 43
    LTM-2-mini

    Magic AI

    LTM-2-mini is a 100M token context model; 100M tokens equals ~10 million lines of code or ~750 novels. For each decoded token, LTM-2-mini’s sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for a 100M token context window. The contrast in memory requirements is even larger – running Llama 3.1 405B with a 100M token context requires 638 H100s per user just to store a single 100M token KV cache. In contrast, LTM requires a small fraction of a single H100’s HBM per user for the same context.
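    The 638-H100 figure can be sanity-checked with back-of-the-envelope KV-cache arithmetic, assuming Llama 3.1 405B's published configuration (126 layers, 8 KV heads of dimension 128 under grouped-query attention, fp16 cache values) and 80 GB of HBM per H100:

```python
# Back-of-the-envelope KV-cache sizing for Llama 3.1 405B at 100M tokens.
layers = 126
kv_heads = 8                 # grouped-query attention: KV heads, not query heads
head_dim = 128
bytes_per_value = 2          # fp16
tokens = 100_000_000         # 100M token context

# Each token stores one K and one V vector per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
total_bytes = kv_bytes_per_token * tokens
h100s = total_bytes / 80e9   # 80 GB of HBM per H100

print(f"{kv_bytes_per_token} bytes/token, "
      f"{total_bytes / 1e12:.1f} TB, ~{h100s:.0f} H100s")
# → 516096 bytes/token, 51.6 TB, ~645 H100s
```

    That lands within a few percent of the 638-GPU figure quoted above; the small residual gap is plausibly just a GB-versus-GiB rounding convention.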
  • 44
    Qwen3.6-Max-Preview
    Qwen3.6-Max-Preview is a next-generation frontier language model designed to push the limits of intelligence, instruction following, and real-world agent capabilities within the Qwen ecosystem. Building on the Qwen3 series, this preview release introduces stronger world knowledge, sharper instruction alignment, and significant improvements in agentic coding performance, enabling the model to better handle complex, multi-step tasks and software engineering workflows. It is engineered for advanced reasoning and execution scenarios, where the model not only generates responses but also interacts with tools, processes long contexts, and supports structured problem-solving across domains such as coding, research, and enterprise workflows. The architecture continues the Qwen focus on large-scale, high-efficiency models capable of handling extensive context windows and delivering consistent performance across multilingual and knowledge-intensive tasks.
  • 45
    Xiaomi MiMo

    Xiaomi Technology

    The Xiaomi MiMo API open platform is a developer-oriented interface for accessing and integrating Xiaomi’s MiMo family of AI models, including reasoning and language models such as MiMo-V2-Flash, into applications and services through standardized APIs and cloud endpoints. It enables developers to build AI-enabled features like conversational agents, reasoning workflows, code assistance, and search-augmented tasks without managing model infrastructure themselves. It offers REST-style API access with authentication, request signing, and structured responses, so software can send prompts and receive generated text or processed outputs programmatically, and it supports common operations like text generation, prompt handling, and inference over MiMo models. By providing documentation and onboarding tools, the open platform lets teams integrate Xiaomi’s latest open source large language models, which leverage Mixture-of-Experts (MoE) architectures.
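    A REST-style prompt-in, text-out call of the kind described above typically looks like the sketch below. Note this is illustrative only: the endpoint URL and payload schema are placeholders/assumptions (only the MiMo-V2-Flash model name comes from the description), so consult the platform's own API reference for the real values.

```python
# Hypothetical sketch of an authenticated JSON POST to a chat-style
# text-generation endpoint; URL and schema are placeholders, not the
# documented MiMo API.
import json
import urllib.request

def build_generation_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an authenticated JSON POST carrying a single user prompt."""
    payload = {
        "model": "MiMo-V2-Flash",  # model name taken from the description above
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.example.invalid/v1/chat/completions",  # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

    The generated text would then be parsed out of the JSON response body, with the exact field names depending on the platform's documented response schema.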
  • 46
    DeepSWE

    Agentica Project

    DeepSWE is a fully open source, state-of-the-art coding agent built on top of the Qwen3-32B foundation model and trained exclusively via reinforcement learning (RL), without supervised finetuning or distillation from proprietary models. It is developed using rLLM, Agentica’s open source RL framework for language agents. DeepSWE operates as an agent; it interacts with a simulated development environment (via the R2E-Gym environment) using a suite of tools (file editor, search, shell-execution, submit/finish), enabling it to navigate codebases, edit multiple files, compile/run tests, and iteratively produce patches or complete engineering tasks. DeepSWE exhibits emergent behaviors beyond simple code generation; when presented with bugs or feature requests, the agent reasons about edge cases, seeks existing tests in the repository, proposes patches, writes extra tests for regressions, and dynamically adjusts its “thinking” effort.
  • 47
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 48
    Amazon Nova Micro
    Amazon Nova Micro is an AI model designed for high-speed, low-cost text processing and generation. It excels in language understanding, translation, code completion, and mathematical problem-solving, providing fast responses with a generation speed of over 200 tokens per second. The model supports fine-tuning for text input and is ideal for applications requiring real-time processing and efficiency. With support for 200+ languages and a maximum of 128k tokens, Nova Micro is perfect for interactive AI applications that prioritize speed and affordability.
  • 49
    Grok 4.20
    Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering highly precise and context-aware responses. Its architecture supports deep reasoning and sophisticated problem-solving capabilities. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
  • 50
    GPT-5.2-Codex
    GPT-5.2-Codex is OpenAI’s most advanced agentic coding model, built for complex, real-world software engineering and defensive cybersecurity work. It is a specialized version of GPT-5.2 optimized for long-horizon coding tasks such as large refactors, migrations, and feature development. The model maintains full context over extended sessions through native context compaction. GPT-5.2-Codex delivers state-of-the-art performance on benchmarks like SWE-Bench Pro and Terminal-Bench 2.0. It operates reliably across large repositories and native Windows environments. Stronger vision capabilities allow it to interpret screenshots, diagrams, and UI designs during development. GPT-5.2-Codex is designed to be a dependable partner for professional engineering workflows.