Qwen3-Max
Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there: mixed thinking and non-thinking modes, a “thinking budget” mechanism, and dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and performs strongly on coding, multi-step reasoning, and agentic benchmarks (e.g., Tau2-Bench). While the initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
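For illustration, the sketch below shows how such an OpenAI-compatible interface is typically called from Python. The endpoint URL, model identifier, and the `enable_thinking` flag are assumptions made for the example, not confirmed values from Alibaba’s documentation.

```python
# Minimal sketch of calling Qwen3-Max through an OpenAI-compatible endpoint.
# The base_url, model ID, and the `enable_thinking` flag are assumptions
# illustrating the OpenAI-style interface described above, not confirmed values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",  # placeholder credential
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-max",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of MoE architectures."}
    ],
    # Hypothetical knob for the thinking/non-thinking mode switch;
    # the actual parameter name may differ.
    extra_body={"enable_thinking": False},
)
print(response.choices[0].message.content)
```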
Sarvam 105B
Sarvam-105B is the flagship large language model in Sarvam’s open-source model family, designed to deliver high-performance reasoning, multilingual understanding, and agent-based execution within a single scalable system. Built as a Mixture-of-Experts (MoE) model with approximately 105 billion total parameters, of which only a fraction are activated per token, it achieves strong computational efficiency while maintaining high capability across complex tasks. The model is optimized for advanced reasoning, coding, mathematics, and agentic workflows, making it suitable for tasks that require multi-step problem solving and structured outputs rather than simple conversational responses. Sarvam-105B supports long-context processing of up to approximately 128K tokens, enabling it to handle large documents, extended conversations, and deep analytical queries without losing coherence.
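To make the efficiency claim concrete, the following minimal sketch shows generic top-k expert routing, the mechanism by which an MoE layer activates only a few experts per token. The expert count, k, and dimensions here are purely illustrative and are not Sarvam-105B’s actual configuration.

```python
# Conceptual sketch of top-k Mixture-of-Experts routing: each token is sent
# to only k of E experts, so only a fraction of total parameters is active.
# Expert count, k, and dimensions are illustrative, not Sarvam-105B's config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts)  # router
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.gate(x)                       # router logits per token
        weights, idx = scores.topk(self.k, dim=-1)  # choose k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(4, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 512]); only 2 of 8 experts ran per token
```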
Trinity-Large-Thinking
Trinity Large Thinking is a frontier open-source reasoning model developed by Arcee AI, designed specifically for complex, multi-step problem solving and autonomous agent workflows that require long-horizon planning and tool use. Built on a sparse Mixture-of-Experts architecture with roughly 400 billion total parameters but only about 13 billion active per token, the model achieves high efficiency while maintaining strong reasoning performance across tasks such as mathematical problem solving, code generation, and multi-step analysis. It introduces extended chain-of-thought reasoning capabilities, allowing the model to generate intermediate “thinking traces” before producing final answers, which improves accuracy and reliability in complex scenarios. Trinity Large Thinking supports a very large context window of up to 262K tokens, enabling it to process long documents, maintain state across extended interactions, and operate effectively in continuous agent loops.
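As a rough illustration of consuming such traces, the sketch below splits a completion into its thinking trace and final answer, assuming the common `<think>...</think>` delimiter convention; Trinity Large Thinking’s actual output format may differ.

```python
# Sketch of separating an intermediate "thinking trace" from the final answer.
# The <think>...</think> delimiter is a widespread convention assumed here for
# illustration; the model's actual output format may differ.
import re

def split_thinking(raw: str) -> tuple[str, str]:
    """Return (thinking_trace, final_answer) from a raw completion."""
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()              # no trace emitted
    trace = match.group(1).strip()
    answer = raw[match.end():].strip()      # everything after the trace
    return trace, answer

raw = "<think>2 apples + 3 apples = 5 apples.</think>There are 5 apples."
trace, answer = split_thinking(raw)
print("trace:", trace)
print("answer:", answer)
```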
SWE-1.6
SWE-1.6 is an engineering-focused AI model developed by Cognition and integrated into the Windsurf environment, designed to optimize both raw intelligence and what the company calls “model UX,” the overall feel and efficiency of interacting with an AI agent. It represents a new iteration in the SWE model family, improving performance on benchmarks such as SWE-Bench Pro by over 10% compared to SWE-1.5 while maintaining similar underlying capabilities. It was trained from scratch to jointly improve reasoning quality and user experience, addressing issues observed in earlier versions such as overthinking simple problems, taking too many steps, looping in repetitive reasoning, and relying excessively on terminal commands instead of specialized tools. SWE-1.6 introduces behavioral improvements such as more frequent parallel tool usage, faster context retrieval, and reduced need for user input, resulting in smoother and more efficient workflows.
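The sketch below illustrates the parallel-tool-usage pattern this implies in an agent loop, executing OpenAI-style tool calls concurrently instead of one at a time. The tool names and dispatch table are hypothetical examples, not SWE-1.6’s actual tooling.

```python
# Sketch of executing several OpenAI-style tool calls in parallel rather than
# sequentially — the agent-loop pattern that "parallel tool usage" implies.
# The tool names and dispatch table are hypothetical examples.
import json
from concurrent.futures import ThreadPoolExecutor

def read_file(path: str) -> str:            # hypothetical tool
    return f"<contents of {path}>"

def grep(pattern: str, path: str) -> str:   # hypothetical tool
    return f"<matches for {pattern!r} in {path}>"

TOOLS = {"read_file": read_file, "grep": grep}

def run_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Execute all tool calls concurrently; return role='tool' messages."""
    def run_one(call: dict) -> dict:
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        return {"role": "tool", "tool_call_id": call["id"], "content": fn(**args)}

    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))

calls = [
    {"id": "1", "function": {"name": "read_file", "arguments": '{"path": "main.py"}'}},
    {"id": "2", "function": {"name": "grep", "arguments": '{"pattern": "TODO", "path": "src/"}'}},
]
for msg in run_tool_calls(calls):
    print(msg)
```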