8 Integrations with Mirai
View a list of Mirai integrations and software that integrates with Mirai below. Compare the best Mirai integrations as well as features, ratings, user reviews, and pricing of software that integrates with Mirai. Here are the current Mirai integrations in 2026:
1. DeepSeek R1 (DeepSeek)
DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels at complex tasks such as mathematics and coding, demonstrating strong performance on benchmarks such as the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning. The model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
Starting Price: Free
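As a back-of-envelope illustration of the sparsity those MoE figures imply (a sketch using only the totals quoted above, not taken from the model card):

```python
# Fraction of DeepSeek-R1's parameters active for any single token,
# using the figures quoted above: 671B total, 37B activated per token.
total_params_b = 671   # billions of parameters in the full MoE model
active_params_b = 37   # billions activated per token

active_fraction = active_params_b / total_params_b
print(f"Active per token: {active_fraction:.1%}")  # about 5.5%
```

In other words, only roughly one parameter in eighteen participates in any given forward step, which is what lets an MoE model of this size serve requests with the compute cost of a much smaller dense model.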
2. Polaris (n2y LLC)
When n2y subscribers began telling us everything they wanted in a comprehensive IEP software solution, we listened and got right to work. Today, Polaris is setting a new standard for effortless, effective collaboration throughout every stage of the IEP process: planning, writing, executing, and reporting. Creating accurate, data-driven roadmaps has never been easier, saving educators weeks of time per student and clearly pointing the way to compliance. As an essential stand-alone solution or a crucial part of the n2y total solution, Polaris is mapping better outcomes and brighter futures. Polaris ensures everyone on the IEP team works in sync to provide a consistent, customized learning journey with students at its heart. Use accurate, up-to-date, highly visible student performance data to inform decisions. All of the included assessments, rubrics, and profiles produce reliable data for determining levels of academic performance.
Starting Price: $499.99 per year
3. SmolLM2 (Hugging Face)
SmolLM2 is a collection of state-of-the-art, compact language models developed for on-device applications. The models in this collection range from 1.7B parameters down to smaller 360M and 135M versions, designed to perform efficiently even on less powerful hardware. These models excel at text generation tasks and are optimized for real-time, low-latency applications, providing high-quality results across various use cases, including content creation, coding assistance, and natural language processing. SmolLM2's flexibility makes it a suitable choice for developers looking to integrate powerful AI into mobile devices, edge computing, and other resource-constrained environments.
Starting Price: Free
4. Qwen3 (Alibaba)
Qwen3, the latest iteration of the Qwen family of large language models, introduces features that enhance performance across coding, math, and general capabilities. With models like Qwen3-235B-A22B and Qwen3-30B-A3B, Qwen3 achieves results competitive with top-tier models, thanks to hybrid thinking modes that let users control the balance between deep reasoning and quick responses. The platform supports 119 languages and dialects, making it an ideal choice for global applications. Its pre-training process, which uses 36 trillion tokens, enables robust performance, and advanced reinforcement learning (RL) techniques continue to refine its capabilities. Available on platforms like Hugging Face and ModelScope, Qwen3 offers a powerful tool for developers and researchers working in diverse fields.
Starting Price: Free
5. Gemma 3 (Google)
Gemma 3, introduced by Google, is a new AI model built on the Gemini 2.0 architecture, designed to offer enhanced performance and versatility. The model is capable of running efficiently on a single GPU or TPU, making it accessible to a wide range of developers and researchers. Gemma 3 focuses on improving natural language understanding, generation, and other AI-driven tasks. By offering scalable, powerful AI capabilities, Gemma 3 aims to advance the development of AI systems across various industries and use cases.
Starting Price: Free
6. LFM-3B (Liquid AI)
LFM-3B delivers remarkable performance for its size. It not only ranks first among 3B-parameter transformers, hybrids, and RNN models, but also outperforms the previous generation of 7B and 13B models. It is on par with Phi-3.5-mini on multiple benchmarks while being 18.4% smaller. LFM-3B is an ideal choice for mobile and other edge text-based applications.
7. gpt-oss-120b (OpenAI)
gpt-oss-120b is a reasoning model engineered for deep, transparent thinking, delivering full chain-of-thought explanations, adjustable reasoning depth, and structured outputs, while natively invoking tools like web search and Python execution via the API. Built to slot seamlessly into self-hosted or edge deployments, it eliminates dependence on proprietary infrastructure. Although it includes default safety guardrails, its open-weight architecture allows fine-tuning that could override built-in controls, so implementers are responsible for adding input filtering, output monitoring, and governance measures to achieve enterprise-grade security. As a community-driven model card rather than a managed-service spec, it emphasizes transparency, customization, and the need for downstream safety practices.
8. Llama (Meta)
Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don't have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and fewer resources to test new approaches, validate others' work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model in keeping with our approach to Responsible AI practices.