Mistral Saba
Mistral Saba is a 24-billion-parameter model trained on meticulously curated datasets from across the Middle East and South Asia. It provides more accurate and relevant responses than models over five times its size, while being significantly faster and cheaper to run. It can also serve as a strong base for training highly specific regional adaptations. Mistral Saba is available as an API and can also be deployed locally within a customer's own security premises. Like the recently released Mistral Small 3, the model is lightweight and can be deployed on single-GPU systems, responding at speeds of over 150 tokens per second. In keeping with the rich cultural cross-pollination between the Middle East and South Asia, Mistral Saba supports Arabic and many Indian-origin languages, and is particularly strong in South Indian languages such as Tamil. This versatility makes it well suited to multinational use across these interconnected regions.
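As a rough illustration of the API access mentioned above, the sketch below sends a single chat request to la Plateforme using Python's requests library. The model identifier `mistral-saba-latest` and the exact response shape are assumptions made for this example, not details taken from this page.

```python
import os
import requests

# Minimal chat request to la Plateforme.
# NOTE: the model identifier is an assumption; check the current
# model catalogue for the exact name of Mistral Saba.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-saba-latest",  # assumed identifier for Mistral Saba
        "messages": [
            # Tamil prompt: "How is the weather in Chennai today?"
            {"role": "user", "content": "சென்னையில் இன்று வானிலை எப்படி இருக்கிறது?"}
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```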
Learn more
Command A
Command A, introduced by Cohere, is a high-performance AI model designed to deliver maximum capability from minimal computational resources. It matches or outperforms other top-tier models such as GPT-4 and DeepSeek-V3 on agentic enterprise tasks while significantly reducing compute costs. It is tailored to applications that require fast, efficient AI-driven solutions, enabling businesses to run advanced tasks across a range of domains while keeping performance high and computational demands low.
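The snippet below is a minimal sketch of driving Command A programmatically. It assumes Cohere's v2 chat endpoint and a model identifier along the lines of `command-a-03-2025`; both are assumptions and may differ from the live API, so the raw JSON is printed rather than assuming a response schema.

```python
import os
import requests

# Hypothetical chat request against Cohere's v2 chat endpoint.
# NOTE: the endpoint path and model identifier are assumptions,
# not details confirmed by this page.
resp = requests.post(
    "https://api.cohere.com/v2/chat",
    headers={"Authorization": f"Bearer {os.environ['CO_API_KEY']}"},
    json={
        "model": "command-a-03-2025",  # assumed identifier for Command A
        "messages": [
            {"role": "user", "content": "Draft a three-step plan to triage inbound support tickets."}
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # inspect the reply structure rather than assuming a schema
```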
Learn more
Mixtral 8x22B
Mixtral 8x22B is Mistral AI's latest open model and sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish, and has strong mathematics and coding capabilities. It is natively capable of function calling; together with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K-token context window allows precise information recall from large documents. Mistral builds models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio among openly available models. Mixtral 8x22B is a natural continuation of this open model family, and its sparse activation pattern makes it faster than any dense 70B model.
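Because function calling is a native capability, a request can declare tools that the model then invokes in structured form instead of replying with free text. The sketch below illustrates this against la Plateforme's chat completions endpoint; the model identifier `open-mixtral-8x22b` and the `get_order_status` tool are assumptions used only for illustration.

```python
import os
import requests

# Hypothetical tool definition the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",  # illustrative tool, not a real service
            "description": "Look up the shipping status of a customer order.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }
]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x22b",  # assumed identifier for Mixtral 8x22B
        "messages": [{"role": "user", "content": "Where is order 8472?"}],
        "tools": tools,
    },
    timeout=30,
)
resp.raise_for_status()
# The assistant message is expected to contain a structured tool call
# (function name plus JSON arguments) rather than plain prose.
print(resp.json()["choices"][0]["message"])
```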
Learn more
Gemini
Gemini is Google’s advanced AI assistant designed to help users think, create, learn, and complete tasks with a new level of intelligence. Powered by Google’s most capable models, including Gemini 3, it enables users to ask complex questions, generate content, analyze information, and explore ideas through natural conversation. Gemini can create images, videos, summaries, study plans, and first drafts while also providing feedback on uploaded files and written work. The platform is grounded in Google Search, allowing it to deliver accurate, up-to-date information and support deep follow-up questions. Gemini connects seamlessly with Google apps like Gmail, Docs, Calendar, Maps, YouTube, and Photos to help users complete tasks without switching tools. Features such as Gemini Live, Deep Research, and Gems enhance brainstorming, research, and personalized workflows. Available through flexible free and paid plans, Gemini supports everyday users, students, and professionals across devices.
Learn more