Alternatives to LFM2
Compare LFM2 alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to LFM2 in 2025. Compare features, ratings, user reviews, pricing, and more from LFM2 competitors and alternatives in order to make an informed decision for your business.
1
Ministral 8B
Mistral AI
Mistral AI has introduced two advanced models for on-device computing and edge applications, named "les Ministraux": Ministral 3B and Ministral 8B. These models excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B parameter range. They support up to 128k context length and are designed for various applications, including on-device translation, offline smart assistants, local analytics, and autonomous robotics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can function as intermediaries in multi-step agentic workflows, handling tasks like input parsing, task routing, and API calls based on user intent with low latency and cost. Benchmark evaluations indicate that les Ministraux consistently outperform comparable models across multiple tasks. As of October 16, 2024, both models are available, with Ministral 8B priced at $0.1 per million tokens.
Starting Price: Free
2
Mistral Small 3.1
Mistral
Mistral Small 3.1 is a state-of-the-art, multimodal, and multilingual AI model released under the Apache 2.0 license. Building upon Mistral Small 3, this enhanced version offers improved text performance and advanced multimodal understanding, and supports an expanded context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, delivering inference speeds of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in tasks such as instruction following, conversational assistance, image understanding, and function calling, making it suitable for both enterprise and consumer-grade AI applications. Its lightweight architecture allows it to run efficiently on a single RTX 4090 or a Mac with 32GB RAM, facilitating on-device deployments. It is available for download on Hugging Face, accessible via Mistral AI's developer playground, and integrated into platforms like Google Cloud Vertex AI, with availability on NVIDIA NIM.
Starting Price: Free
3
Ministral 3B
Mistral AI
Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
Starting Price: Free
4
Ai2 OLMoE
The Allen Institute for Artificial Intelligence
Ai2 OLMoE is a fully open source mixture-of-experts language model that is capable of running completely on-device, allowing you to try our model privately and securely. Our app is intended to help researchers better explore how to make on-device intelligence better and to enable developers to quickly prototype new AI experiences, all with no cloud connectivity required. OLMoE is a highly efficient mixture-of-experts version of the Ai2 OLMo family of models. Experience the real-world tasks that state-of-the-art local models are capable of. Research how to improve small AI models. Test your own models locally using our open-source codebase. Integrate OLMoE into other iOS applications. The Ai2 OLMoE app provides privacy and security by operating completely on-device. Easily share the output of your conversations with friends or colleagues. The OLMoE model and the application code are fully open source.
Starting Price: Free
5
Gemma 2
Google
A family of state-of-the-art, lightweight open models created from the same research and technology used to build the Gemini models. These models incorporate comprehensive safety measures and help ensure responsible and reliable AI solutions through curated datasets and rigorous tuning. Gemma models achieve exceptional benchmark results at their 2B, 7B, 9B, and 27B sizes, even outperforming some larger open models. With Keras 3.0, enjoy seamless compatibility with JAX, TensorFlow, and PyTorch, allowing you to effortlessly choose and switch frameworks depending on the task. Redesigned to deliver outstanding performance and unmatched efficiency, Gemma 2 is optimized for incredibly fast inference on a wide range of hardware. The Gemma family offers different models optimized for specific use cases that adapt to your needs. Gemma models are lightweight, text-to-text, decoder-only large language models trained on a huge set of text data, code, and mathematical content.
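The framework switching mentioned above amounts to setting the Keras backend before import. A minimal sketch, assuming the keras_hub package and a "gemma2_2b_en" preset name (both should be verified against keras_hub's published preset list):

```python
# Sketch: load Gemma 2 through Keras 3 and pick the backend at startup.
# The preset name "gemma2_2b_en" is an assumption; check keras_hub's docs.
import os
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"; set before importing keras

import keras_hub

# Download weights and build the causal-LM wrapper from a named preset.
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma2_2b_en")

# Generate a short completion; the same code runs unchanged on any backend.
print(gemma_lm.generate("The fastest way to sort a list in Python is", max_length=64))
```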
6
Mistral 7B
Mistral AI
Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
Starting Price: Free
7
Qwen2
Alibaba
Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
Starting Price: Free
8
Alpaca
Stanford Center for Research on Foundation Models (CRFM)
Instruction-following models such as GPT-3.5 (text-davinci-003), ChatGPT, Claude, and Bing Chat have become increasingly powerful. Many users now interact with these models regularly and even use them for work. However, despite their widespread deployment, instruction-following models still have many deficiencies: they can generate false information, propagate social stereotypes, and produce toxic language. To make maximum progress on addressing these pressing problems, it is important for the academic community to engage. Unfortunately, doing research on instruction-following models in academia has been difficult, as there is no easily accessible model that comes close in capabilities to closed-source models such as OpenAI's text-davinci-003. We are releasing our findings about an instruction-following language model, dubbed Alpaca, which is fine-tuned from Meta's LLaMA 7B model.
9
Tülu 3
Ai2
Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
Starting Price: Free
10
Falcon-7B
Technology Innovation Institute (TII)
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions.
Starting Price: Free
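For illustration, a minimal text-generation sketch with the transformers library, assuming the public "tiiuae/falcon-7b" checkpoint on Hugging Face and a GPU with enough memory for bf16 weights:

```python
# Sketch: run Falcon-7B locally via the transformers text-generation pipeline.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b",   # public base checkpoint; instruct variants also exist
    torch_dtype=torch.bfloat16, # halve memory vs. fp32
    device_map="auto",          # place weights on available GPUs automatically
)

# The base model is a raw pretrained LM, so prompts work best as continuations.
print(generator("Large language models are", max_new_tokens=50)[0]["generated_text"])
```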
11
Dolly
Databricks
Dolly is a cheap-to-build LLM that exhibits a surprising degree of the instruction-following capabilities of ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high-quality instruction-following behavior, we find that even years-old open source models with much earlier architectures exhibit striking behaviors when fine-tuned on a small corpus of instruction training data. Dolly works by taking an existing open source 6-billion-parameter model from EleutherAI and modifying it ever so slightly to elicit instruction-following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.
Starting Price: Free
12
Mistral Large
Mistral AI
Mistral Large is Mistral AI's flagship language model, designed for advanced text generation and complex multilingual reasoning tasks, including text comprehension, transformation, and code generation. It supports English, French, Spanish, German, and Italian, offering a nuanced understanding of grammar and cultural contexts. With a 32,000-token context window, it can accurately recall information from extensive documents. The model's precise instruction-following and native function-calling capabilities facilitate application development and tech stack modernization. Mistral Large is accessible through Mistral's platform, Azure AI Studio, and Azure Machine Learning, and can be self-deployed for sensitive use cases. Benchmark evaluations indicate that Mistral Large achieves strong results, making it the world's second-ranked model generally available through an API, behind only GPT-4.
Starting Price: Free
13
Mu
Microsoft
Mu is a 330-million-parameter encoder–decoder language model designed to power the agent in Windows settings by mapping natural-language queries to Settings function calls, running fully on-device via NPUs at over 100 tokens per second while maintaining high accuracy. Drawing on Phi Silica optimizations, Mu’s encoder–decoder architecture reuses a fixed-length latent representation to cut computation and memory overhead, yielding 47 percent lower first-token latency and 4.7× higher decoding speed on Qualcomm Hexagon NPUs compared to similar decoder-only models. Hardware-aware tuning, including a 2/3–1/3 encoder–decoder parameter split, weight sharing between input and output embeddings, Dual LayerNorm, rotary positional embeddings, and grouped-query attention, enables fast inference at over 200 tokens per second on devices like Surface Laptop 7 and sub-500 ms response times for settings queries.
14
fullmoon
fullmoon
Fullmoon is a free, open source application that enables users to interact with large language models directly on their devices, ensuring privacy and offline accessibility. Optimized for Apple silicon, it operates seamlessly across iOS, iPadOS, macOS, and visionOS platforms. Users can personalize the app by adjusting themes, fonts, and system prompts, and it integrates with Apple's Shortcuts for enhanced functionality. Fullmoon supports models like Llama-3.2-1B-Instruct-4bit and Llama-3.2-3B-Instruct-4bit, facilitating efficient on-device AI interactions without the need for an internet connection.
Starting Price: Free
15
Reka Flash 3
Reka
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
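Those footprints follow from simple per-weight arithmetic. A back-of-the-envelope sketch covering raw weight storage only (KV cache, activations, and runtime overhead are excluded, which is why the raw numbers differ slightly from the quoted 39GB and 11GB figures):

```python
# Back-of-the-envelope weight-memory arithmetic for a 21B-parameter model.
params = 21e9

fp16_gb = params * 2 / 1e9    # 2 bytes per weight at fp16
q4_gb   = params * 0.5 / 1e9  # 0.5 bytes per weight at 4-bit

print(f"fp16:  ~{fp16_gb:.0f} GB")  # ~42 GB raw vs. ~39 GB quoted
print(f"4-bit: ~{q4_gb:.1f} GB")    # ~10.5 GB raw vs. ~11 GB quoted
```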
16
Mixtral 8x22B
Mistral AI
Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish. It has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K-token context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio within models provided by the community. Mixtral 8x22B is a natural continuation of our open model family. Its sparse activation patterns make it faster than any dense 70B model.
Starting Price: Free
17
Falcon-40B
Technology Innovation Institute (TII)
Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions. ⚠️ This is a raw, pretrained model, which should be further fine-tuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-40B-Instruct.
Starting Price: Free
18
Jurassic-2
AI21
Announcing the launch of Jurassic-2, the latest generation of AI21 Studio’s foundation models, a game-changer in the field of AI, with top-tier quality and new capabilities. And that's not all, we're also releasing our task-specific APIs, with plug-and-play reading and writing capabilities that outperform competitors. Our focus at AI21 Studio is to help developers and businesses leverage reading and writing AI to build real-world products with tangible value. Today marks two important milestones with the release of Jurassic-2 and Task-Specific APIs, empowering you to bring generative AI to production. Jurassic-2 (or J2, as we like to call it) is the next generation of our foundation models with significant improvements in quality and new capabilities including zero-shot instruction-following, reduced latency, and multi-language support. Task-specific APIs provide developers with industry-leading APIs that perform specialized reading and writing tasks out-of-the-box.
Starting Price: $29 per month
19
Llama 3.2
Meta
The open-source AI model you can fine-tune, distill and deploy anywhere is now available in more versions. Choose from 1B, 3B, 11B or 90B, or continue building with Llama 3.1. Llama 3.2 is a collection of large language models (LLMs) pretrained and fine-tuned in 1B and 3B sizes that are multilingual text only, and 11B and 90B sizes that take both text and image inputs and output text. Develop highly performant and efficient applications from our latest release. Use our 1B or 3B models for on-device applications such as summarizing a discussion from your phone or calling on-device tools like calendar. Use our 11B or 90B models for image use cases such as transforming an existing image into something new or getting more information from an image of your surroundings.
Starting Price: Free
20
LLaVA
LLaVA
LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single 8-A100 node, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively.
Starting Price: Free
21
MiniMax M2
MiniMax
MiniMax M2 is an open source foundation model built specifically for agentic applications and coding workflows, striking a new balance of performance, speed, and cost. It excels in end-to-end development scenarios, handling programming, tool-calling, and complex, long-chain workflows with capabilities such as Python integration, while delivering inference speeds of around 100 tokens per second and offering API pricing at just ~8% of the cost of comparable proprietary models. The model supports “Lightning Mode” for high-speed, lightweight agent tasks, and “Pro Mode” for in-depth full-stack development, report generation, and web-based tool orchestration; its weights are fully open source and available for local deployment with vLLM or SGLang. MiniMax M2 positions itself as a production-ready model that enables agents to complete independent tasks, such as data analysis, programming, tool orchestration, and large-scale multi-step logic at real organizational scale.
Starting Price: $0.30 per million input tokens
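A minimal local-deployment sketch with vLLM; the Hugging Face repo id "MiniMaxAI/MiniMax-M2" and the GPU count are assumptions, so check MiniMax's release notes for the published weights and recommended parallelism:

```python
# Sketch: serve MiniMax M2 weights locally with vLLM's offline API.
from vllm import LLM, SamplingParams

# Repo id assumed; MoE weights at this scale typically need several GPUs.
llm = LLM(model="MiniMaxAI/MiniMax-M2", tensor_parallel_size=8)

params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(
    ["Write a Python function that parses a CSV header row."],
    params,
)
print(outputs[0].outputs[0].text)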
22
OpenAI o1
OpenAI
OpenAI o1 represents a new series of AI models designed by OpenAI, focusing on enhanced reasoning capabilities. These models, including o1-preview and o1-mini, are trained using a novel reinforcement learning approach to spend more time "thinking" through problems before providing answers. This approach allows o1 to excel in complex problem-solving tasks in areas like coding, mathematics, and science, outperforming previous models like GPT-4o in certain benchmarks. The o1 series aims to tackle challenges that require deeper thought processes, marking a significant step towards AI systems that can reason more like humans, although it's still in the preview stage with ongoing improvements and evaluations.
23
Claude 3 Opus
Anthropic
Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence. All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French.
Starting Price: Free
24
Arcee-SuperNova
Arcee.ai
Our new flagship model is a Small Language Model (SLM) with all the power and performance of leading closed-source LLMs. Excels at generalized tasks, instruction-following, and human preferences. The best 70B model on the market. SuperNova can be utilized for any generalized task, much like OpenAI's GPT-4o, Claude 3.5 Sonnet, and Cohere's models. Trained with the most advanced learning and optimization techniques, SuperNova generates highly accurate responses in human-like text. It's the most flexible, secure, and cost-effective language model on the market, saving customers up to 95% on total deployment costs vs. traditional closed-source models. Use SuperNova to integrate AI into apps and products, for general chat purposes, and for diverse use cases. Regularly update your models with the latest open-source tech, ensuring you're never locked into any one solution. Protect your data with industry-leading privacy measures.
Starting Price: Free
25
Kimi K2
Moonshot AI
Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while offering a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines.
Starting Price: Free
26
Mistral NeMo
Mistral AI
Mistral NeMo is our new best small model, a state-of-the-art 12B model with a 128k context length, released under the Apache 2.0 license. Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications. It is trained on function calling and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
Starting Price: Free
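In practice, the drop-in claim means an existing Mistral 7B code path needs only a model-id swap. A hedged transformers sketch, with both repo ids assumed from Mistral's Hugging Face naming (verify before use):

```python
# Sketch: the same transformers code path serves either model; only the
# model id changes. Repo ids are assumptions based on Mistral's HF naming.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # was: "mistralai/Mistral-7B-Instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Summarize in one sentence: Mistral NeMo offers a 128k-token context window.",
    return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```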
27
Mixtral 8x7B
Mistral AI
Mixtral 8x7B is a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
Starting Price: Free
28
ByteDance Seed
ByteDance
Seed Diffusion Preview is a large-scale, code-focused language model that uses discrete-state diffusion to generate code non-sequentially, achieving dramatically faster inference without sacrificing quality by decoupling generation from the token-by-token bottleneck of autoregressive models. It combines a two-stage curriculum, mask-based corruption followed by edit-based augmentation, to robustly train a standard dense Transformer, striking a balance between speed and accuracy and avoiding shortcuts like carry-over unmasking to preserve principled density estimation. The model delivers an inference speed of 2,146 tokens/sec on H20 GPUs, outperforming contemporary diffusion baselines while matching or exceeding their accuracy on standard code benchmarks, including editing tasks, thereby establishing a new speed-quality Pareto frontier and demonstrating discrete diffusion’s practical viability for real-world code generation.
Starting Price: Free
29
LFM-3B
Liquid AI
LFM-3B delivers incredible performance for its size. It ranks first among 3B-parameter transformers, hybrids, and RNN models, and also outperforms the previous generation of 7B and 13B models. It is on par with Phi-3.5-mini on multiple benchmarks, while being 18.4% smaller. LFM-3B is the ideal choice for mobile and other edge text-based applications.
30
GPT-3
OpenAI
Our GPT-3 models can understand and generate natural language. We offer four main models with different levels of power suitable for different tasks. Davinci is the most capable model, and Ada is the fastest. The main GPT-3 models are meant to be used with the text completion endpoint. We also offer models that are specifically meant to be used with other endpoints. Davinci is the most capable model family and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models.
Starting Price: $0.0200 per 1000 tokens
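A sketch of the text completion endpoint described above, using the current openai Python SDK. The original Davinci/Ada model names have since been retired, so "davinci-002" is an assumed present-day stand-in:

```python
# Sketch: call the legacy text completion endpoint with the openai SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.completions.create(
    model="davinci-002",  # assumed stand-in for the retired Davinci models
    prompt="Summarize for a general audience: GPT-3 is a language model that",
    max_tokens=60,
    temperature=0.7,
)
print(resp.choices[0].text)
```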
31
GPT-4o
OpenAI
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
Starting Price: $5.00 / 1M tokens
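A minimal sketch of mixing text and image inputs in a single request through the chat completions API (the image URL is a placeholder):

```python
# Sketch: one GPT-4o user turn combining text and an image URL.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```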
32
GPT-3.5
OpenAI
GPT-3.5 is the next evolution of OpenAI's GPT-3 large language model. GPT-3.5 models can understand and generate natural language. We offer four main models with different levels of power suitable for different tasks. The main GPT-3.5 models are meant to be used with the text completion endpoint. We also offer models that are specifically meant to be used with other endpoints. Davinci is the most capable model family and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models.
Starting Price: $0.0200 per 1000 tokens
33
CodeQwen
Alibaba
CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based decoder-only language model pretrained on a large corpus of code data, with strong code generation capabilities and competitive performance across a series of benchmarks. It supports long-context understanding and generation with a context length of 64K tokens. CodeQwen supports 92 coding languages and provides excellent performance in text-to-SQL, bug fixes, etc. You can write just a few lines of code with transformers to chat with CodeQwen, as sketched below. Essentially, we build the tokenizer and the model using the from_pretrained methods, and we use the generate method to perform chatting with the help of the chat template provided by the tokenizer. We apply the ChatML template for chat models following our previous practice. The model completes the code snippets according to the given prompts, without any additional formatting.
Starting Price: Free
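A hedged version of that flow, assuming the "Qwen/CodeQwen1.5-7B-Chat" repo id from Qwen's Hugging Face naming: the tokenizer's chat template applies ChatML, and generate completes the turn.

```python
# Sketch: chat with CodeQwen via transformers, as the description outlines.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"  # repo id assumed; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a SQL query for the top 5 customers by revenue."}]

# The tokenizer's chat template renders the messages in ChatML format.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```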
34
DeepSeek-V2
DeepSeek
DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
Starting Price: Free
35
OpenAI o3-pro
OpenAI
OpenAI’s o3-pro is a high-performance reasoning model designed for tasks that require deep analysis and precision. It is available exclusively to ChatGPT Pro and Team subscribers, succeeding the earlier o1-pro model. The model excels in complex fields like mathematics, science, and coding by employing detailed step-by-step reasoning. It integrates advanced tools such as real-time web search, file analysis, Python execution, and visual input processing. While powerful, o3-pro has slower response times and lacks support for features like image generation and temporary chats. Despite these trade-offs, o3-pro demonstrates superior clarity, accuracy, and adherence to instructions compared to its predecessor.
Starting Price: $20 per 1 million tokens
36
Hunyuan-TurboS
Tencent
Tencent's Hunyuan-TurboS is a next-generation AI model designed to offer rapid responses and outstanding performance in various domains such as knowledge, mathematics, and creative tasks. Unlike previous models that require "slow thinking," Hunyuan-TurboS enhances response speed, doubling word output speed and reducing first-word latency by 44%. Through innovative architecture, it provides superior performance while lowering deployment costs. This model combines fast thinking (intuition-based responses) with slow thinking (logical analysis), ensuring quicker, more accurate solutions across diverse scenarios. Hunyuan-TurboS excels in benchmarks, competing with leading models like GPT-4 and DeepSeek V3, making it a breakthrough in AI-driven performance.
37
Galactica
Meta
Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. Galactica is a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 (68.2% versus 49.0%). Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU (41.3% versus 35.7%) and PaLM 540B on MATH (20.4% versus 8.8%).
38
Mistral Medium 3
Mistral AI
Mistral Medium 3 is a powerful AI model designed to deliver state-of-the-art performance at a fraction of the cost compared to other models. It offers simpler deployment options, allowing for hybrid or on-premises configurations. Mistral Medium 3 excels in professional applications like coding and multimodal understanding, making it ideal for enterprise use. Its low-cost structure makes it highly accessible while maintaining top-tier performance, outperforming many larger models in specific domains.
Starting Price: Free
39
GigaChat 3 Ultra
Sberbank
GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original, giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
Starting Price: Free
40
Qwen-7B
Alibaba
Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. The features of the Qwen-7B series include: Trained with high-quality pretraining data. We have pretrained Qwen-7B on a self-constructed large-scale high-quality dataset of over 2.2 trillion tokens. The dataset includes plain texts and code, and it covers a wide range of domains, including general domain data and professional domain data. Strong performance. In comparison with models of similar size, Qwen-7B outperforms competitors on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more.
Starting Price: Free
41
QwQ-32B
Alibaba
QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency is achieved through optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
Starting Price: Free
42
GPT-5.1 Instant
OpenAI
GPT-5.1 Instant is a high-performance AI model designed for everyday users that combines speed, responsiveness, and improved conversational warmth. The model uses adaptive reasoning to instantly select how much computation is required for a task, allowing it to deliver fast answers without sacrificing understanding. It emphasizes stronger instruction-following, enabling users to give precise directions and expect consistent compliance. The model also introduces richer personality controls so chat tone can be set to Default, Friendly, Professional, Candid, Quirky, or Efficient, with experiments in deeper voice modulation. Its core value is to make interactions feel more natural and less robotic while preserving high intelligence across writing, coding, analysis, and reasoning. GPT-5.1 Instant routes user requests automatically from the base interface, with the system choosing whether this variant or the deeper “Thinking” model is applied.
43
Sparrow
DeepMind
Sparrow is a research model and proof of concept, designed with the goal of training dialogue agents to be more helpful, correct, and harmless. By learning these qualities in a general dialogue setting, Sparrow advances our understanding of how we can train agents to be safer and more useful – and ultimately, to help build safer and more useful artificial general intelligence (AGI). Sparrow is not yet available for public use. Training a conversational AI is an especially challenging problem because it’s difficult to pinpoint what makes a dialogue successful. To address this problem, we turn to a form of reinforcement learning (RL) based on people's feedback, using the study participants’ preference feedback to train a model of how useful an answer is. To get this data, we show our participants multiple model answers to the same question and ask them which answer they like the most.
44
Selene 1
atla
Atla's Selene 1 API offers state-of-the-art AI evaluation models, enabling developers to define custom evaluation criteria and obtain precise judgments on their AI applications' performance. Selene outperforms frontier models on commonly used evaluation benchmarks, ensuring accurate and reliable assessments. Users can customize evaluations to their specific use cases through the Alignment Platform, allowing for fine-grained analysis and tailored scoring formats. The API provides actionable critiques alongside accurate evaluation scores, facilitating seamless integration into existing workflows. Pre-built metrics, such as relevance, correctness, helpfulness, faithfulness, logical coherence, and conciseness, are available to address common evaluation scenarios, including detecting hallucinations in retrieval-augmented generation applications or comparing outputs to ground truth data.
45
Stable Beluga
Stability AI
Stability AI and its CarperAI lab proudly announce Stable Beluga 1 and its successor Stable Beluga 2 (formerly codenamed FreeWilly), two powerful new, open access, Large Language Models (LLMs). Both models demonstrate exceptional reasoning ability across varied benchmarks. Stable Beluga 1 leverages the original LLaMA 65B foundation model and was carefully fine-tuned with a new synthetically-generated dataset using Supervised Fine-Tuning (SFT) in standard Alpaca format. Similarly, Stable Beluga 2 leverages the LLaMA 2 70B foundation model to achieve industry-leading performance.
Starting Price: Free
46
K2 Think
Institute of Foundation Models
K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite only having 32 billion parameters, it delivers performance comparable to flagship models with many more parameters. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration, and with its parameter efficiency it represents a breakthrough in compact architectures for advanced AI reasoning.
Starting Price: Free
47
Llama 4 Behemoth
Meta
Llama 4 Behemoth is Meta's most powerful AI model to date, featuring a massive 288 billion active parameters. It excels in multimodal tasks, outperforming previous models like GPT-4.5 and Gemini 2.0 Pro across multiple STEM-focused benchmarks such as MATH-500 and GPQA Diamond. As the teacher model for the Llama 4 series, Behemoth sets the foundation for models like Llama 4 Maverick and Llama 4 Scout. While still in training, Llama 4 Behemoth demonstrates unmatched intelligence, pushing the boundaries of AI in fields like math, multilinguality, and image understanding.
Starting Price: Free
48
Grok 4
xAI
Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
49
OPT
Meta
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
50
MiniMax M1
MiniMax
MiniMax‑M1 is a large‑scale hybrid‑attention reasoning model released by MiniMax AI under the Apache 2.0 license. It supports an unprecedented 1 million‑token context window and up to 80,000-token outputs, enabling extended reasoning across long documents. Trained using large‑scale reinforcement learning with a novel CISPO algorithm, MiniMax‑M1 completed full training on 512 H800 GPUs in about three weeks. It achieves state‑of‑the‑art performance on benchmarks in mathematics, coding, software engineering, tool usage, and long‑context understanding, matching or outperforming leading models. Two model variants are available (40K and 80K thinking budgets), with weights and deployment scripts provided via GitHub and Hugging Face.