Best Large Language Models - Page 9

Compare the Top Large Language Models as of May 2026 - Page 9

  • 1
    GPT-5.2 Thinking
    GPT-5.2 Thinking is the highest-capability configuration in OpenAI’s GPT-5.2 model family, engineered for deep, expert-level reasoning, complex task execution, and advanced problem solving across long contexts and professional domains. Built on the foundational GPT-5.2 architecture with improvements in grounding, stability, and reasoning quality, this variant applies more compute and reasoning effort to generate responses that are more accurate, structured, and contextually rich when handling highly intricate workflows, multi-step analysis, and domain-specific challenges. GPT-5.2 Thinking excels at tasks that require sustained logical coherence, such as detailed research synthesis, advanced coding and debugging, complex data interpretation, strategic planning, and sophisticated technical writing, and it outperforms lighter variants on benchmarks that test professional skills and deep comprehension.
  • 2
    GPT-5.2 Instant
    GPT-5.2 Instant is the fast, capable variant of OpenAI’s GPT-5.2 model family designed for everyday work and learning with clear improvements in information-seeking questions, how-tos and walkthroughs, technical writing, and translation compared to prior versions. It builds on the warmer conversational tone introduced in GPT-5.1 Instant and produces clearer explanations that surface key information upfront, making it easier for users to get concise, accurate answers quickly. GPT-5.2 Instant delivers speed and responsiveness for typical tasks like answering queries, generating summaries, assisting with research, and helping with writing and editing, while incorporating broader enhancements from the GPT-5.2 series in reasoning, long-context handling, and factual grounding. As part of the GPT-5.2 lineup, it shares the same foundational improvements that boost overall reliability and performance across a wide range of everyday activities.
  • 3
    GPT-5.2 Pro
    GPT-5.2 Pro is the highest-capability variant of OpenAI’s latest GPT-5.2 model family, built to deliver professional-grade reasoning, complex task performance, and enhanced accuracy for demanding knowledge work, creative problem-solving, and enterprise-level applications. It builds on the foundational improvements of GPT-5.2, including stronger general intelligence, superior long-context understanding, better factual grounding, and improved tool use, while using more compute and deeper processing to produce more thoughtful, reliable, and context-rich responses for users with intricate, multi-step requirements. GPT-5.2 Pro is designed to handle challenging workflows such as advanced coding and debugging, deep data analysis, research synthesis, extensive document comprehension, and complex project planning with greater precision and fewer errors than lighter variants.
  • 4
    Gemini 3 Flash
    Gemini 3 Flash is Google’s latest AI model built to deliver frontier intelligence with exceptional speed and efficiency. It combines Pro-level reasoning with Flash-level latency, making advanced AI more accessible and affordable. The model excels in complex reasoning, multimodal understanding, and agentic workflows while using fewer tokens for everyday tasks. Gemini 3 Flash is designed to scale across consumer apps, developer tools, and enterprise platforms. It supports rapid coding, data analysis, video understanding, and interactive application development. By balancing performance, cost, and speed, Gemini 3 Flash redefines what fast AI can achieve.
  • 5
    GLM-4.7-FlashX
    GLM-4.7 FlashX is a lightweight, high-speed version of the GLM-4.7 large language model created by Z.ai that balances efficiency and performance for real-time AI tasks across English and Chinese while offering the core capabilities of the broader GLM-4.7 family in a more resource-friendly package. It is positioned alongside GLM-4.7 and GLM-4.7 Flash, delivering optimized agentic coding and general language understanding with faster response times and lower resource needs, making it suitable for applications that require rapid inference without heavy infrastructure. As part of the GLM-4.7 model series, it inherits the model’s strengths in programming, multi-step reasoning, and robust conversational understanding, and it supports long contexts for complex tasks while remaining lightweight enough for deployment with constrained compute budgets.
    Starting Price: $0.07 per 1M tokens
  • 6
    Qwen3-Max-Thinking
    Qwen3-Max-Thinking is Alibaba’s latest flagship reasoning-enhanced large language model, built as an extension of the Qwen3-Max family and designed to deliver state-of-the-art analytical performance and multi-step reasoning capabilities. It scales up from one of the largest parameter bases in the Qwen ecosystem and incorporates advanced reinforcement learning and adaptive tool integration so the model can leverage search, memory, and code interpreter functions dynamically during inference to address difficult multi-stage tasks with higher accuracy and contextual depth compared with standard generative responses. Qwen3-Max-Thinking introduces a unique Thinking Mode that exposes deliberate, step-by-step reasoning before final outputs, enabling transparency and traceability of logical chains, and can be tuned with configurable “thinking budgets” to balance performance quality with computational cost.
  • 7
    Seed2.0 Lite (ByteDance)
    Seed2.0 Lite is part of ByteDance’s Seed2.0 family of general-purpose multimodal AI agent models designed to handle complex, real-world tasks with a balanced focus on performance and efficiency. It offers enhanced multimodal understanding and instruction-following capabilities compared with earlier Seed models, enabling it to process and reason about text, visual elements, and structured information reliably for production-grade applications. As a mid-sized model in the series, Lite is optimized to deliver good quality outputs with responsive performance at lower cost and faster inference than the Pro variant while surpassing the previous generation’s capabilities, making it suitable for workflows that require stable reasoning, long-context understanding, and multimodal task execution without needing the highest possible raw performance.
  • 8
    Seed2.0 Mini (ByteDance)
    Seed2.0 Mini is the smallest member of ByteDance’s Seed2.0 series of general-purpose multimodal agent models, designed for high-throughput inference and dense deployment while retaining the core strengths of its larger siblings in multimodal understanding and instruction following. Part of a family that also includes Pro and Lite, the Mini variant is optimized for high-concurrency and batch generation workloads, making it suitable for applications where efficient processing of many requests at scale matters as much as capability. Like other Seed2.0 models, it benefits from systematic enhancements in visual reasoning, motion perception, structured extraction from complex inputs like text and images, and reliable execution of multi-step instructions, but it trades some raw reasoning and output quality for faster, more cost-effective inference and better deployment efficiency.
  • 9
    Seed1.8 (ByteDance)
    Seed1.8 is ByteDance’s latest generalized agentic AI model designed to bridge understanding and real-world action by combining multimodal perception, agent-like task execution, and wide-ranging reasoning capabilities into a single foundation model that goes beyond simple language generation. It supports multimodal inputs, including text, images, and video, processes very large context windows (hundreds of thousands of tokens at once), and is optimized to handle complex workflows in real environments, such as information retrieval, code generation, GUI interaction, and multi-step decision logic, with efficient, accurate responses suitable for real-world applications. Seed1.8 unifies skills such as search, code understanding, visual context interpretation, and autonomous reasoning so developers and AI systems can build interactive agents and next-generation workflows capable of synthesizing evidence, following instructions deeply, and acting on tasks like automation.
  • 10
    Qwen3.5-Plus
    Qwen3.5-Plus is a high-performance native vision-language model designed for efficient text generation, deep reasoning, and multimodal understanding. Built on a hybrid architecture that combines linear attention with a sparse mixture-of-experts design, it delivers strong performance while optimizing inference efficiency. The model supports text, image, and video inputs and produces text outputs, making it suitable for complex multimodal workflows. With a massive 1 million token context window and up to 64K output tokens, Qwen3.5-Plus enables long-form reasoning and large-scale document analysis. It includes advanced capabilities such as structured outputs, function calling, web search, and tool integration via the Responses API. The model supports prefix continuation, caching, batch processing, and fine-tuning for flexible deployment. Designed for developers and enterprises, Qwen3.5-Plus provides scalable, high-throughput AI performance with OpenAI-compatible API access.
    Starting Price: $0.4 per 1M tokens
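Because Qwen3.5-Plus advertises OpenAI-compatible API access, existing OpenAI-style client code can target it by switching the base URL. The sketch below only builds a minimal chat-completions request body in that wire format; the model identifier `qwen3.5-plus` is an assumed value for illustration, so check the provider's documentation for the exact id and endpoint.

```python
import json

def build_chat_request(prompt: str, model: str = "qwen3.5-plus",
                       max_tokens: int = 1024) -> dict:
    """Build a chat-completions request body in the OpenAI wire format.

    With an OpenAI-compatible SDK, the same dict could be passed as
    keyword arguments, e.g. client.chat.completions.create(**req).
    """
    return {
        "model": model,  # assumed model id; confirm against the provider
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

req = build_chat_request("Summarize this 200-page report in five bullets.")
print(json.dumps(req, indent=2))
```

Features such as structured outputs, function calling, and tool integration are layered onto this same request shape via additional fields, per the provider's API reference.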
  • 11
    Gemma 4 (Google)
    Gemma 4 is an AI model introduced by Google and built on the Gemini architecture to deliver improved performance and flexibility. The model is designed to run efficiently on a single GPU or TPU, making it more accessible to developers and researchers. Gemma 4 enhances capabilities in natural language understanding and text generation, supporting a wide range of AI-driven applications. Its architecture allows it to handle complex tasks while maintaining efficient resource usage. Developers can use the model to build applications that rely on advanced language processing and automation. The design emphasizes scalability so that it can support both smaller projects and larger AI systems. By combining efficiency with powerful language capabilities, Gemma 4 helps advance the development of modern AI solutions.
    Starting Price: Free
  • 12
    Qwen3.5-Omni
    Qwen3.5-Omni is a next-generation, fully multimodal AI model developed by Alibaba that natively understands and generates text, images, audio, and video within a single unified system, enabling more natural and real-time human-AI interaction. Unlike traditional models that treat modalities separately, it is trained from the ground up on massive audiovisual datasets, allowing it to process complex inputs such as long audio streams, video, and spoken instructions simultaneously while maintaining strong performance across all formats. It supports long-context inputs of up to 256K tokens and can handle over 10 hours of audio or extended video sequences, making it suitable for demanding real-world applications. A key feature is its advanced voice interaction capabilities, including end-to-end speech dialogue, emotional tone control, and voice cloning, enabling highly natural conversational experiences that can whisper, shout, or adapt speaking style dynamically.
  • 13
    Qwen3.6 (Alibaba)
    Qwen3.6 is a large language model developed by Alibaba as part of its Qwen AI model family, designed for real-world applications and advanced reasoning tasks. It focuses on improving stability, usability, and performance compared to earlier versions. The model supports multimodal capabilities, allowing it to process and reason across text, images, and other data types. Qwen3.6 is particularly strong in coding and developer workflows, offering improved accuracy for complex programming tasks. It uses a mixture-of-experts architecture, enabling efficient performance while maintaining large-scale model capabilities. The model is designed to be deployable in production environments, including enterprise and cloud-based systems. It can be integrated into applications or run locally using open-weight variants. Overall, Qwen3.6 delivers a powerful, efficient, and versatile AI solution for modern use cases.
    Starting Price: Free
  • 14
    LUIS (Microsoft)
    Language Understanding (LUIS): A machine learning-based service to build natural language into apps, bots, and IoT devices. Quickly create enterprise-ready, custom models that continuously improve. Add natural language to your apps. Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills valuable information from sentences (entities), for a high quality, nuanced language model. LUIS integrates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot. Powerful developer tools are combined with customizable pre-built apps and entity dictionaries, such as Calendar, Music, and Devices, so you can build and deploy a solution more quickly. Dictionaries are mined from the collective knowledge of the web and supply billions of entries, helping your model to correctly identify valuable information from user conversations. Active learning is used to continuously improve the quality of the models.
  • 15
    Sparrow (DeepMind)
    Sparrow is a research model and proof of concept, designed with the goal of training dialogue agents to be more helpful, correct, and harmless. By learning these qualities in a general dialogue setting, Sparrow advances our understanding of how we can train agents to be safer and more useful – and ultimately, to help build safer and more useful artificial general intelligence (AGI). Sparrow is not yet available for public use. Training a conversational AI is an especially challenging problem because it’s difficult to pinpoint what makes a dialogue successful. To address this problem, we turn to a form of reinforcement learning (RL) based on people's feedback, using the study participants’ preference feedback to train a model of how useful an answer is. To get this data, we show our participants multiple model answers to the same question and ask them which answer they like the most.
  • 16
    NVIDIA NeMo
    NVIDIA NeMo LLM is a service that provides a fast path to customizing and using large language models trained on several frameworks. Developers can deploy enterprise AI applications using NeMo LLM on private and public clouds. Customize your choice of various NVIDIA or community-developed models that work best for your AI applications. Within minutes to hours, get better responses by providing context for specific use cases using prompt-learning techniques. Leverage the power of NVIDIA Megatron 530B, one of the largest language models, through the NeMo LLM Service or the cloud API. Take advantage of models for drug discovery in the cloud API and the NVIDIA BioNeMo framework.
  • 17
    ERNIE Bot
    ERNIE Bot is an AI-powered conversational assistant developed by Baidu, designed to facilitate seamless and natural interactions with users. Built on the ERNIE (Enhanced Representation through Knowledge Integration) model, ERNIE Bot excels at understanding complex queries and generating human-like responses across various domains. Its capabilities include processing text, generating images, and engaging in multimodal communication, making it suitable for a wide range of applications such as customer support, virtual assistants, and enterprise automation. With its advanced contextual understanding, ERNIE Bot offers an intuitive and efficient solution for businesses seeking to enhance their digital interactions and automate workflows.
    Starting Price: Free
  • 18
    PaLM (Google)
    PaLM API is an easy and safe way to build on top of our best language models. Today, we’re making an efficient model available, in terms of size and capabilities, and we’ll add other sizes soon. The API also comes with an intuitive tool called MakerSuite, which lets you quickly prototype ideas and, over time, will have features for prompt engineering, synthetic data generation and custom-model tuning — all supported by robust safety tools. Select developers can access the PaLM API and MakerSuite in Private Preview today, and stay tuned for our waitlist soon.
  • 19
    Med-PaLM 2 (Google Cloud)
    Healthcare breakthroughs change the world and bring hope to humanity through scientific rigor, human insight, and compassion. We believe AI can contribute to this, with thoughtful collaboration between researchers, healthcare organizations, and the broader ecosystem. Today, we're sharing exciting progress on these initiatives, with the announcement of limited access to Google’s medical large language model, or LLM, called Med-PaLM 2. It will be available in the coming weeks to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback as we investigate safe, responsible, and meaningful ways to use this technology. Med-PaLM 2 harnesses the power of Google’s LLMs, aligned to the medical domain to more accurately and safely answer medical questions. As a result, Med-PaLM 2 was the first LLM to perform at an “expert” test-taker level on the MedQA dataset of US Medical Licensing Examination (USMLE)-style questions.
  • 20
    Gopher (Google DeepMind)
    Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans. As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose.
  • 21
    PaLM 2 (Google)
    PaLM 2 is our next generation large language model that builds on Google’s legacy of breakthrough research in machine learning and responsible AI. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation, performing better than our previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built – bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements. PaLM 2 is grounded in Google’s approach to building and deploying AI responsibly. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It’s being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.
  • 22
    Hippocratic AI
    Hippocratic AI is a new state-of-the-art (SOTA) healthcare model, outperforming GPT-4 on 105 of 114 healthcare exams and certifications, by a margin of five percent or more on 74 of them and ten percent or more on 43. Most language models pre-train on the common crawl of the Internet, which may include incorrect and misleading information. Unlike these LLMs, Hippocratic AI is investing heavily in legally acquiring evidence-based healthcare content. We’re conducting a unique Reinforcement Learning with Human Feedback process using healthcare professionals to train and validate the model’s readiness for deployment. We call this RLHF-HP. Hippocratic AI will not release the model until a large number of these licensed professionals deem it safe.
  • 23
    YandexGPT
    Take advantage of the capabilities of generative language models to improve and optimize your applications and web services. Get an aggregated result of accumulated textual data, whether it be information from work chats, user reviews, or other types of data; YandexGPT will help both summarize and interpret the information. Speed up text creation while improving quality and style. Create template texts for newsletters, product descriptions for online stores, and other applications. Develop a chatbot for your support service: teach the bot to answer various user questions, both common and more complicated. Use the API to integrate the service with your applications and automate processes.
  • 24
    Ntropy
    Ship faster by integrating with our Python SDK or REST API in minutes, with no prior setup or data formatting. You can get going straight away as soon as you have incoming data and your first customers. We have built and fine-tuned custom language models to recognize entities, automatically crawl the web in real time and pick the best match, and assign labels with superhuman accuracy in a fraction of the time. Everybody has a data enrichment model that is trying to be good at one thing, US or Europe, business or consumer. These models are poor at generalizing and are not capable of human-level output. With us, you can leverage the power of the world's largest and most performant models embedded in your products, at a fraction of the cost and time.
  • 25
    Giga ML
    We just launched the X1 Large series of models. Giga ML's most powerful model is available for pre-training and fine-tuning with on-prem deployment. Since we are OpenAI-compatible, your existing integrations with LangChain, LlamaIndex, and all others work seamlessly. You can continue pre-training of LLMs with domain-specific data such as books, documents, or company docs. The world of large language models (LLMs) is rapidly expanding, offering unprecedented opportunities for natural language processing across various domains. However, some critical challenges have remained unaddressed. At Giga ML, we proudly introduce the X1 Large 32k model, a pioneering on-premise LLM solution that addresses these critical issues.
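For an on-prem, OpenAI-compatible deployment like the one Giga ML describes, any OpenAI-style HTTP client can talk to the server. The sketch below prepares (but does not send) such a request at the wire level; the host, port, and model id `x1-large-32k` are illustrative assumptions, not documented values.

```python
import json
import urllib.request

# Hypothetical on-prem endpoint; replace with your deployment's address.
BASE_URL = "http://localhost:8000/v1"

body = json.dumps({
    "model": "x1-large-32k",  # illustrative model id
    "messages": [{"role": "user", "content": "Explain our refund policy."}],
}).encode("utf-8")

# Prepare the request without sending it; urllib.request.urlopen(request)
# would execute it against the running server.
request = urllib.request.Request(
    url=f"{BASE_URL}/chat/completions",
    data=body,
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer YOUR_API_KEY"},
    method="POST",
)
print(request.get_method(), request.full_url)
```

Because the wire format matches OpenAI's, frameworks that speak that protocol (e.g. LangChain or LlamaIndex) typically only need the base URL pointed at the on-prem server.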
  • 26
    Martian
    By using the best-performing model for each request, we can achieve higher performance than any single model. Martian outperforms GPT-4 across OpenAI's evals (open/evals). We turn opaque black boxes into interpretable representations. Our router is the first tool built on top of our model mapping method. We are developing many other applications of model mapping including turning transformers from indecipherable matrices into human-readable programs. If a company experiences an outage or high latency period, automatically reroute to other providers so your customers never experience any issues. Determine how much you could save by using the Martian Model Router with our interactive cost calculator. Input your number of users, tokens per session, and sessions per month, and specify your cost/quality tradeoff.
  • 27
    Phi-2 (Microsoft)
    We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
  • 28
    Hyperplane
    Better audiences from the richness of transaction data. Create nuanced personas and effective marketing campaigns based on financial behaviors and consumer interests. Increase user limits without worrying about default, leveraging user income estimates that are precise and always up to date. The Hyperplane platform enables financial institutions to launch personalized consumer experiences through specialized foundation models (LLMs). Upgrade your feature sets with embeddings for credit, collections, and lookalike modeling. Segment users based on various criteria, enabling you to target specific audience groups for personalized marketing campaigns, content delivery, and user analysis. Segmentation is achieved through facets, key attributes or characteristics used to categorize users, and Hyperplane can enrich user segmentation by employing additional attributes to fine-tune the filtering of responses from certain audience segmentation endpoints.
  • 29
    Smaug-72B
    Smaug-72B is a powerful open-source large language model (LLM) known for several key features:
    - High performance: it currently holds the top spot on the Hugging Face Open LLM leaderboard, surpassing models like GPT-3.5 in various benchmarks, which means it excels at understanding, responding to, and generating human-like text.
    - Open source: unlike many other advanced LLMs, Smaug-72B is freely available for anyone to use and modify, fostering collaboration and innovation in the AI community.
    - Focus on reasoning and math: it specifically shines at reasoning and mathematical tasks, a strength attributed to unique fine-tuning techniques developed by Abacus AI, the creators of Smaug-72B.
    - Based on Qwen-72B: it is technically a fine-tuned version of another powerful LLM, Qwen-72B, released by Alibaba, further improving upon its capabilities.
    Overall, Smaug-72B represents a significant step forward in open-source AI.
    Starting Price: Free
  • 30
    Gemma (Google)
    Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models. Gemma models share technical and infrastructure components with Gemini, our largest and most capable AI model widely available today. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models. And Gemma models are capable of running directly on a developer laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.