Compare the Top AI Models for Linux as of April 2026 - Page 4

  • 1
    Mistral Small 4
    Mistral Small 4 is an advanced open-source AI model developed by Mistral AI that combines reasoning, coding, and multimodal capabilities into a single system. It unifies the strengths of previous models such as Magistral for reasoning, Pixtral for multimodal processing, and Devstral for agentic coding tasks. The model can handle both text and image inputs, allowing it to perform tasks ranging from conversational chat to visual analysis and document understanding. Built with a mixture-of-experts architecture, Mistral Small 4 delivers efficient performance while scaling to complex workloads. It also features a configurable reasoning parameter that allows users to switch between fast responses and deeper analytical outputs. With a large context window and optimized inference performance, the model supports long-form interactions and complex workflows.
    Starting Price: Free
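The mixture-of-experts idea mentioned above can be illustrated with a minimal sketch (toy dimensions, a softmax router, and random "expert" matrices are all illustrative assumptions, not Mistral's actual implementation): a router scores every expert per token, but only the top-k experts run, so compute stays low while total capacity scales.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route a token to its top-k experts and mix their outputs.

    x: (d,) token activation; gate_w: (n_experts, d) router weights;
    experts: list of (d, d) matrices standing in for expert FFNs.
    """
    scores = gate_w @ x                      # router logits, one per expert
    top = np.argsort(scores)[-k:]            # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the chosen experts do any work; the rest are skipped entirely.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_layer(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With k=2 of 4 experts, only half the expert parameters touch each token, which is the efficiency-at-scale property the description refers to.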
  • 2
    Leanstral

    Mistral AI

    Leanstral is an open-source code agent developed by Mistral AI specifically designed to work with the Lean 4 proof assistant. The model focuses on generating code while also formally verifying its correctness against strict mathematical or software specifications. Unlike traditional coding assistants, Leanstral integrates directly with formal proof systems to ensure that generated code satisfies defined logical requirements. Its architecture is optimized for proof engineering tasks and operates efficiently with sparse model parameters. Leanstral is released under the Apache 2.0 license, making it freely accessible for developers, researchers, and organizations to use and customize. The model is designed to operate within real-world formal repositories rather than isolated problem environments. By combining code generation with formal verification, Leanstral aims to reduce the need for manual human review in complex software and mathematical development.
    Starting Price: Free
  • 3
    MiniMax M2.7
    MiniMax M2.7 is an advanced AI model designed to enhance real-world productivity across coding, search, and office workflows. It is trained with reinforcement learning across numerous real-world environments, enabling it to handle complex, multi-step tasks effectively. The model excels in problem-solving by breaking down challenges before generating solutions across multiple programming languages. It delivers high-speed performance with rapid token generation, allowing tasks to be completed efficiently. With optimized reasoning and cost-effective pricing, it provides powerful capabilities while minimizing resource usage. It also achieves strong performance in software engineering benchmarks, reducing incident response time and improving development efficiency. Additionally, it supports advanced agentic workflows and professional-grade office tasks, making it highly versatile for modern work environments.
    Starting Price: Free
  • 4
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including code from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
    Starting Price: Free
  • 5
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 6
    Code Llama
    Code Llama is a large language model (LLM) that can use text prompts to generate code. Code Llama is state-of-the-art for publicly available LLMs on code tasks, and has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts. Code Llama is free for research and commercial use. Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned for understanding natural language instructions.
    Starting Price: Free
  • 7
    CogVideoX

    CogVideoX

    CogVideoX is a text-to-video generation tool. Before running the model, refer to the guide on using the GLM-4 model to optimize the prompt; this is crucial because the model is trained with long prompts, and a good prompt directly affects the quality of the generated video. The release contains the inference code and fine-tuning code for the SAT weights, and it is recommended to build improvements on the CogVideoX model structure, so researchers can use this code for rapid prototyping and development. An example of the kind of long prompt the model expects: a detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea; the ship's hull is painted a rich brown, with tiny windows; the carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse; surrounding the ship are various other toys and children's items, hinting at a playful environment.
    Starting Price: Free
  • 8
    TinyLlama

    TinyLlama

    The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged directly into many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
    Starting Price: Free
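The stated training budget is easy to sanity-check with a few lines of arithmetic: 3 trillion tokens in 90 days on 16 GPUs implies a sustained per-GPU throughput of roughly 24k tokens per second.

```python
tokens = 3e12                     # 3 trillion training tokens
days = 90
gpus = 16                         # A100-40G
seconds = days * 24 * 60 * 60     # 90 days in seconds
per_gpu_tps = tokens / (seconds * gpus)
print(round(per_gpu_tps))         # ≈ 24113 tokens/s per GPU, sustained
```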
  • 9
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free
  • 10
    Yi-Large
    Yi-Large is a proprietary large language model developed by 01.AI, offering a 32k context length with both input and output costs at $2 per million tokens. It stands out with its advanced capabilities in natural language processing, common-sense reasoning, and multilingual support, performing on par with leading models like GPT-4 and Claude3 in various benchmarks. Yi-Large is designed for tasks requiring complex inference, prediction, and language understanding, making it suitable for applications like knowledge search, data classification, and creating human-like chatbots. Its architecture is based on a decoder-only transformer with enhancements such as pre-normalization and Group Query Attention, and it has been trained on a vast, high-quality multilingual dataset. This model's versatility and cost-efficiency make it a strong contender in the AI market, particularly for enterprises aiming to deploy AI solutions globally.
    Starting Price: $0.19 per 1M input tokens
  • 11
    DeepSeek R2

    DeepSeek

    DeepSeek R2 is the anticipated successor to DeepSeek R1, a groundbreaking AI reasoning model launched in January 2025 by the Chinese AI startup DeepSeek. Building on R1’s success, which disrupted the AI industry with its cost-effective performance rivaling top-tier models like OpenAI’s o1, R2 promises a quantum leap in capabilities. It is expected to deliver exceptional speed and human-like reasoning, excelling in complex tasks such as advanced coding and high-level mathematical problem-solving. Leveraging DeepSeek’s innovative Mixture-of-Experts architecture and efficient training methods, R2 aims to outperform its predecessor while maintaining a low computational footprint, potentially expanding its reasoning abilities to languages beyond English.
    Starting Price: Free
  • 12
    BitNet

    Microsoft

    The BitNet b1.58 2B4T is a cutting-edge 1-bit Large Language Model (LLM) developed by Microsoft, designed to enhance computational efficiency while maintaining high performance. This model, built with approximately 2 billion parameters and trained on 4 trillion tokens, uses innovative quantization techniques to optimize memory usage, energy consumption, and latency. The platform supports multiple modalities and is particularly valuable for applications in AI-powered text generation, offering substantial efficiency gains compared to full-precision models.
    Starting Price: Free
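The "b1.58" in the model's name refers to ternary weights: each weight takes one of three values {-1, 0, +1}, which costs log2(3) ≈ 1.58 bits. A minimal sketch of the absmean-style ternary quantizer described in the BitNet papers (simplified and illustrative, not Microsoft's implementation):

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Absmean quantization: scale by the mean |w|, then round to {-1, 0, +1}."""
    scale = np.abs(w).mean() + eps          # per-tensor scale factor
    q = np.clip(np.round(w / scale), -1, 1) # every entry becomes -1, 0, or +1
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))                 # toy full-precision weight matrix
q, scale = ternary_quantize(w)
print(sorted(set(q.flatten().tolist())))    # a subset of [-1.0, 0.0, 1.0]
```

A dequantized approximation is simply `q * scale`, which is where the memory and latency savings claimed above come from: the matrix itself stores only three distinct values.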
  • 13
    Gemma 3n

    Google DeepMind

    Gemma 3n is our state-of-the-art open multimodal model, engineered for on-device performance and efficiency. Made for responsive, low-footprint local inference, Gemma 3n empowers a new wave of intelligent, on-the-go applications. It analyzes and responds to combined images and text, with video and audio coming soon. Build intelligent, interactive features that put user privacy first and work reliably offline. Mobile-first architecture, with a significantly reduced memory footprint. Co-designed by Google's mobile hardware teams and industry leaders. 4B active memory footprint with the ability to create submodels for quality-latency tradeoffs. Gemma 3n is our first open model built on this groundbreaking, shared architecture, allowing developers to begin experimenting with this technology today in an early preview.
  • 14
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, multi-token prediction (MTP), and multi-head latent attention (MLA), aligning with open-source ecosystems and integrating easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
    Starting Price: Free
  • 15
    Composer 1.5
    Composer 1.5 is the latest agentic coding model from Cursor that balances speed and intelligence for everyday code tasks by scaling reinforcement learning approximately 20x more than its predecessor, enabling stronger performance on real-world programming challenges. It’s designed as a “thinking model” that generates internal reasoning tokens to analyze a user’s codebase and plan next steps, responding quickly to simple problems and engaging deeper reasoning on complex ones, while remaining interactive and fast for daily development workflows. To handle long-running tasks, Composer 1.5 introduces self-summarization, allowing the model to compress and carry forward context when it reaches context limits, which helps maintain accuracy across varying input lengths. Internal benchmarks show it surpasses Composer 1 in coding tasks, especially on more difficult issues, making it more capable for interactive use within Cursor’s environment.
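Self-summarization as described above can be sketched generically (the token budget, step strings, and `summarize` callable are all illustrative assumptions, not Cursor's implementation): whenever the accumulated context exceeds a limit, it is compressed into a summary that is carried forward in its place.

```python
def run_with_self_summarization(steps, limit, summarize):
    """Carry context across steps, compressing whenever it exceeds `limit` chars."""
    context = []
    for step in steps:
        context.append(step)
        if sum(len(s) for s in context) > limit:
            # Replace the accumulated context with a single compressed summary.
            context = [summarize(context)]
    return context

# Toy summarizer: keep the first 10 characters of the joined context.
steps = ["edit file A", "run tests", "fix failing test", "refactor module"]
out = run_with_self_summarization(steps, limit=25,
                                  summarize=lambda c: " ".join(c)[:10])
print(out)
```

The point of the pattern is that context length stays bounded no matter how long the task runs, at the cost of lossy compression of earlier steps.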
  • 16
    DeepSeek-V4

    DeepSeek

    DeepSeek V4 is an advanced AI model designed to push the boundaries of large-scale artificial intelligence with an estimated 1 trillion parameters. It utilizes a Mixture-of-Experts architecture, activating only a fraction of its parameters per task to improve efficiency. The model supports a massive context window of up to 1 million tokens, enabling it to process long documents and complex codebases. It is natively multimodal, allowing it to understand and generate text, images, audio, and video. DeepSeek V4 introduces innovations such as Engram memory, sparse attention mechanisms, and improved training stability techniques. It is expected to deliver high performance in areas like software engineering and reasoning while maintaining lower operational costs. Overall, DeepSeek V4 aims to combine scalability, efficiency, and affordability to compete with leading AI models.
    Starting Price: Free
  • 17
    Gemma 4

    Google

    Gemma 4 is an AI model introduced by Google and built on the Gemini architecture to deliver improved performance and flexibility. The model is designed to run efficiently on a single GPU or TPU, making it more accessible to developers and researchers. Gemma 4 enhances capabilities in natural language understanding and text generation, supporting a wide range of AI-driven applications. Its architecture allows it to handle complex tasks while maintaining efficient resource usage. Developers can use the model to build applications that rely on advanced language processing and automation. The design emphasizes scalability so that it can support both smaller projects and larger AI systems. By combining efficiency with powerful language capabilities, Gemma 4 helps advance the development of modern AI solutions.
    Starting Price: Free
  • 18
    PaLM

    Google

    PaLM API is an easy and safe way to build on top of our best language models. Today, we’re making an efficient model available, in terms of size and capabilities, and we’ll add other sizes soon. The API also comes with an intuitive tool called MakerSuite, which lets you quickly prototype ideas and, over time, will have features for prompt engineering, synthetic data generation and custom-model tuning — all supported by robust safety tools. Select developers can access the PaLM API and MakerSuite in Private Preview today, and stay tuned for our waitlist soon.
  • 19
    PaLM 2

    Google

    PaLM 2 is our next generation large language model that builds on Google’s legacy of breakthrough research in machine learning and responsible AI. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation better than our previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built – bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements. PaLM 2 is grounded in Google’s approach to building and deploying AI responsibly. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It’s being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.
  • 20
    Gemma

    Google

    Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models. Gemma models share technical and infrastructure components with Gemini, our largest and most capable AI model widely available today. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models. And Gemma models are capable of running directly on a developer laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
  • 21
    Gemma 2

    Google

    A family of state-of-the-art, lightweight open models created from the same research and technology used to create the Gemini models. These models incorporate comprehensive safety measures and help ensure responsible and reliable AI solutions through curated datasets and rigorous tuning. Gemma models achieve exceptional benchmark results at their 2B, 7B, 9B, and 27B sizes, even outperforming some larger open models. With Keras 3.0, they offer seamless compatibility with JAX, TensorFlow, and PyTorch, allowing you to effortlessly choose and switch frameworks based on the task. Redesigned to deliver outstanding performance and efficiency, Gemma 2 is optimized for fast inference on a variety of hardware. The Gemma family includes models optimized for specific use cases that adapt to your needs. Gemma models are lightweight decoder-only text-to-text language models trained on a large corpus of text, code, and mathematical content.
  • 22
    FLUX1.1 Pro

    Black Forest Labs

    The FLUX1.1 Pro from Black Forest Labs sets a new benchmark in AI-powered image generation, delivering remarkable improvements in both speed and quality. This next-gen model outperforms its predecessor, FLUX.1 Pro, by being six times faster while enhancing image fidelity, prompt accuracy, and creative diversity. Key innovations include ultra-high-resolution rendering up to 4K and a Raw Mode for more natural, organic visuals. Available via the BFL API and integrated with platforms like Replicate and Freepik, FLUX1.1 Pro is the ultimate solution for professionals seeking advanced, scalable AI-generated imagery.
    Starting Price: Free
  • 23
    HunyuanCustom
    HunyuanCustom is a multi-modal customized video generation framework that emphasizes subject consistency while supporting image, audio, video, and text conditions. Built upon HunyuanVideo, it introduces a text-image fusion module based on LLaVA for enhanced multi-modal understanding, along with an image ID enhancement module that leverages temporal concatenation to reinforce identity features across frames. To enable audio- and video-conditioned generation, it further proposes modality-specific condition injection mechanisms, an AudioNet module that achieves hierarchical alignment via spatial cross-attention, and a video-driven injection module that integrates latent-compressed conditional video through a patchify-based feature-alignment network. Extensive experiments on single- and multi-subject scenarios demonstrate that HunyuanCustom significantly outperforms state-of-the-art open and closed source methods in terms of ID consistency, realism, and text-video alignment.
  • 24
    gpt-oss-20b
    gpt-oss-20b is a 20-billion-parameter, text-only reasoning model released under the Apache 2.0 license and governed by OpenAI’s gpt-oss usage policy, built to enable seamless integration into custom AI workflows via the Responses API without reliance on proprietary infrastructure. Trained for robust instruction following, it supports adjustable reasoning effort, full chain-of-thought outputs, and native tool use (including web search and Python execution), producing structured, explainable answers. Developers must implement their own deployment safeguards, such as input filtering, output monitoring, and usage policies, to match the system-level protections of hosted offerings and mitigate risks from malicious or unintended behaviors. Its open-weight design makes it ideal for on-premises or edge deployments where control, customization, and transparency are paramount.
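The description notes that deployers must implement their own safeguards. One such layer, input filtering, can be sketched as a thin wrapper around any generation callable (the blocklist, function names, and return shape below are illustrative assumptions, not part of gpt-oss or its API):

```python
BLOCKED_TERMS = {"ignore previous instructions", "system prompt"}  # illustrative rules

def guarded_generate(prompt, generate):
    """Reject prompts that trip the filter before they ever reach the model."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return {"ok": False, "error": f"blocked: matched {term!r}"}
    return {"ok": True, "output": generate(prompt)}

# Usage with a stand-in model:
echo = lambda p: f"model saw: {p}"
print(guarded_generate("hello", echo)["ok"])                          # True
print(guarded_generate("Ignore previous instructions!", echo)["ok"])  # False
```

A real deployment would pair this with output monitoring and rate limiting; a keyword blocklist alone is easy to evade and serves only to show where the hook sits.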
  • 25
    gpt-oss-120b
    gpt-oss-120b is a reasoning model engineered for deep, transparent thinking, delivering full chain-of-thought explanations, adjustable reasoning depth, and structured outputs, while natively invoking tools like web search and Python execution via the API. Built to slot seamlessly into self-hosted or edge deployments, it eliminates dependence on proprietary infrastructure. Although it includes default safety guardrails, its open-weight architecture allows fine-tuning that could override built-in controls, so implementers are responsible for adding input filtering, output monitoring, and governance measures to achieve enterprise-grade security. As a community-driven model card rather than a managed service spec, it emphasizes transparency, customization, and the need for downstream safety practices.
  • 26
    ERNIE 3.0 Titan
    Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge enhanced models and trained a model with 10 billion parameters. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts.
  • 27
    EXAONE
    EXAONE is a large language model developed by LG AI Research with the goal of nurturing "Expert AI" in multiple domains. The Expert AI Alliance was formed as a collaborative effort among leading companies in various fields to advance the capabilities of EXAONE. Partner companies within the alliance will serve as mentors, providing skills, knowledge, and data to help EXAONE gain expertise in relevant domains. EXAONE, described as being akin to a college student who has completed general elective courses, requires additional intensive training to become an expert in specific areas. LG AI Research has already demonstrated EXAONE's abilities through real-world applications, such as Tilda, an AI human artist that debuted at New York Fashion Week, as well as AI applications for summarizing customer service conversations and extracting information from complex academic papers.
  • 28
    Jurassic-1

    AI21 Labs

    Jurassic-1 models come in two sizes, where the Jumbo version, at 178B parameters, is the largest and most sophisticated language model ever released for general use by developers. AI21 Studio is currently in open beta, allowing anyone to sign up and immediately start querying Jurassic-1 using our API and interactive web environment. Our mission at AI21 Labs is to fundamentally reimagine the way humans read and write by introducing machines as thought partners, and the only way we can achieve this is if we take on this challenge together. We’ve been researching language models since our Mesozoic Era (aka 2017 😉). Jurassic-1 builds on this research, and it is the first generation of models we’re making available for widespread use.
  • 29
    OmniHuman-1

    ByteDance

    OmniHuman-1 is a cutting-edge AI framework developed by ByteDance that generates realistic human videos from a single image and motion signals, such as audio or video. The platform utilizes multimodal motion conditioning to create lifelike avatars with accurate gestures, lip-syncing, and expressions that align with speech or music. OmniHuman-1 can work with a range of inputs, including portraits, half-body, and full-body images, and is capable of producing high-quality video content even from weak signals like audio-only input. The model's versatility extends beyond human figures, enabling the animation of cartoons, animals, and even objects, making it suitable for various creative applications like virtual influencers, education, and entertainment. OmniHuman-1 offers a revolutionary way to bring static images to life, with realistic results across different video formats and aspect ratios.
  • 30
    Hunyuan-TurboS
    Tencent's Hunyuan-TurboS is a next-generation AI model designed to offer rapid responses and outstanding performance in various domains such as knowledge, mathematics, and creative tasks. Unlike previous models that require "slow thinking," Hunyuan-TurboS enhances response speed, doubling word output speed and reducing first-word latency by 44%. Through innovative architecture, it provides superior performance while lowering deployment costs. This model combines fast thinking (intuition-based responses) with slow thinking (logical analysis), ensuring quicker, more accurate solutions across diverse scenarios. Hunyuan-TurboS excels in benchmarks, competing with leading models like GPT-4 and DeepSeek V3, making it a breakthrough in AI-driven performance.