Showing 27 open source projects for "total"

  • 1
Kimi K2

    Kimi K2 is the large language model series developed by Moonshot AI

    Kimi K2 is Moonshot AI’s advanced open-source large language model built on a scalable Mixture-of-Experts (MoE) architecture that combines a trillion total parameters with a subset of ~32 billion active parameters to deliver powerful and efficient performance on diverse tasks. It was trained on an enormous corpus of over 15.5 trillion tokens to push frontier capabilities in coding, reasoning, and general agentic tasks while addressing training stability through novel optimizer and architecture design strategies. ...
    Downloads: 42 This Week
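The total-versus-active parameter split quoted above (one trillion total, ~32B active) is the defining trait of the MoE models in this list: only a small routed subset of experts runs per token. A minimal sketch of that ratio, using figures taken from the entries on this page:

```python
# Total vs. active parameters for some MoE models listed on this page.
# Only the "active" subset participates in each token's forward pass,
# which is why a trillion-parameter model can run at roughly
# 32B-parameter per-token cost.
models = {
    "Kimi K2": (1_000e9, 32e9),
    "DeepSeek-R1": (671e9, 37e9),
    "GLM-4.5": (355e9, 32e9),
}

for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

For Kimi K2 this works out to about 3.2% of the weights touched per token; compute cost scales with the active count, while capacity scales with the total.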
  • 2
DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel in complex reasoning tasks across domains such as mathematics, coding, and language. DeepSeek R1 offers unrestricted access for both commercial and academic use. The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens. DeepSeek-R1's training regimen uniquely integrates large-scale reinforcement learning (RL) without relying on supervised fine-tuning, enabling the model to develop advanced reasoning capabilities. This approach has resulted in performance comparable to leading models like OpenAI's o1, while maintaining cost-efficiency. ...
    Downloads: 96 This Week
  • 3
GLM-4.5

    GLM-4.5: Open-source LLM for intelligent agents by Z.ai

    GLM-4.5 is a cutting-edge open-source large language model designed by Z.ai for intelligent agent applications. The flagship GLM-4.5 model has 355 billion total parameters with 32 billion active parameters, while the compact GLM-4.5-Air version offers 106 billion total parameters and 12 billion active parameters. Both models unify reasoning, coding, and intelligent agent capabilities, providing two modes: a thinking mode for complex reasoning and tool usage, and a non-thinking mode for immediate responses. ...
    Downloads: 45 This Week
  • 4
DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    DeepSeek-V3 is a robust Mixture-of-Experts (MoE) language model developed by DeepSeek, featuring a total of 671 billion parameters, with 37 billion activated per token. It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. ...
    Downloads: 109 This Week
  • 5
Wan2.2

    Wan2.2: Open and Advanced Large-Scale Video Generative Model

    Wan2.2 is a major upgrade to the Wan series of open and advanced large-scale video generative models, incorporating cutting-edge innovations to boost video generation quality and efficiency. It introduces a Mixture-of-Experts (MoE) architecture that splits the denoising process across specialized expert models, increasing total model capacity without raising computational costs. Wan2.2 integrates meticulously curated cinematic aesthetic data, enabling precise control over lighting, composition, color tone, and more, for high-quality, customizable video styles. The model is trained on significantly larger datasets than its predecessor, greatly enhancing motion complexity, semantic understanding, and aesthetic diversity. ...
    Downloads: 103 This Week
  • 6
MiMo-V2-Flash

    MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation

    MiMo-V2-Flash is a large Mixture-of-Experts language model designed to deliver strong reasoning, coding, and agentic-task performance while keeping inference fast and cost-efficient. It uses an MoE setup where a very large total parameter count is available, but only a smaller subset is activated per token, which helps balance capability with runtime efficiency. The project positions the model for workflows that require tool use, multi-step planning, and higher throughput, rather than only single-turn chat. Architecturally, it highlights attention and prediction choices aimed at accelerating generation while preserving instruction-following quality in complex prompts. ...
    Downloads: 22 This Week
  • 7
MiniMax-M2

    MiniMax-M2, a model built for Max coding & agentic workflows

    MiniMax-M2 is an open-weight large language model designed specifically for high-end coding and agentic workflows while staying compact and efficient. It uses a Mixture-of-Experts (MoE) architecture with 230 billion total parameters but only 10 billion activated per token, giving it the behavior of a very large model at a fraction of the runtime cost. The model is tuned for end-to-end developer flows such as multi-file edits, compile–run–fix loops, and test-validated repairs across real repositories and diverse programming languages. It is also optimized for multi-step agent tasks, planning and executing long toolchains that span shell commands, browsers, retrieval systems, and code runners. ...
    Downloads: 1 This Week
  • 8
MiniMax-01

    Large-language-model & vision-language-model based on Linear Attention

    ...MiniMax-Text-01 uses a hybrid attention architecture that blends Lightning Attention, standard softmax attention, and Mixture-of-Experts (MoE) routing to achieve both high throughput and long-context reasoning. It has 456 billion total parameters with 45.9 billion activated per token and is trained with advanced parallel strategies such as LASP+, varlen ring attention, and Expert Tensor Parallelism, enabling a training context of 1 million tokens and up to 4 million tokens at inference. MiniMax-VL-01 extends this core by adding a 303M-parameter Vision Transformer and a two-layer MLP projector in a ViT–MLP–LLM framework, allowing the model to process images at dynamic resolutions up to 2016×2016.
    Downloads: 1 This Week
  • 9
Step 3.5 Flash

    Fast, Sharp & Reliable Agentic Intelligence

    ...Unlike dense models that activate all their parameters for every token, Step 3.5 Flash uses a sparse Mixture-of-Experts (MoE) architecture that selectively engages only about 11 billion of its roughly 196 billion total parameters per token, delivering high-quality reasoning and interaction at far lower compute cost and latency than traditional large models. Its design targets deep reasoning, long-context handling, coding, and real-time responsiveness, making it suitable for building autonomous agents, advanced assistants, and long-chain cognitive workflows without sacrificing performance.
    Downloads: 3 This Week
  • 10
VisualGLM-6B

    Chinese and English multimodal conversational language model

    VisualGLM-6B is an open-source multimodal conversational language model developed by ZhipuAI that supports both images and text in Chinese and English. It builds on the ChatGLM-6B backbone, with 6.2 billion language parameters, and incorporates a BLIP2-Qformer visual module to connect vision and language. In total, the model has 7.8 billion parameters. Trained on a large bilingual dataset — including 30 million high-quality Chinese image-text pairs from CogView and 300 million English pairs — VisualGLM-6B is designed for image understanding, description, and question answering. Fine-tuning on long visual QA datasets further aligns the model’s responses with human preferences. ...
    Downloads: 0 This Week
  • 11
StudioOllamaUI

    StudioOllamaUI is a local, portable interface for Ollama

    ...It doesn't clutter your registry or leave traces on your disk. AI for Everyone: No expensive GPU? No problem. Optimized to run smoothly on your CPU and RAM. Total Privacy: Everything stays on your machine. No data leaves for the cloud, and no hidden files are left on your system.
    Downloads: 7 This Week
  • 12
DeepSeek MoE

Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    ...For example, their MoE variant with 16.4B parameters claims comparable or better performance to standard dense models like DeepSeek 7B or LLaMA2 7B using about 40% of the total compute. The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The licensing is MIT for code, with a “Model License” applied to the models.
    Downloads: 0 This Week
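The expert routing behind entries like this one can be sketched in a few lines. The following is an illustrative top-2 softmax gate, not DeepSeekMoE's actual implementation (which uses fine-grained and shared experts):

```python
import math

def top_k_gate(scores, k=2):
    """Illustrative MoE router: pick the k highest-scoring experts
    for a token and renormalize their softmax weights to sum to 1."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = {i: math.exp(scores[i]) for i in top}
    z = sum(exps.values())
    return {i: exps[i] / z for i in top}  # expert index -> routing weight

# A token whose router scores favor experts 1 and 3:
weights = top_k_gate([0.1, 2.0, -1.0, 1.5], k=2)
print(weights)  # only 2 of the 4 experts process this token
```

Each token's output is the weighted sum of its selected experts' outputs, which is how per-token compute stays proportional to the active parameter count rather than the total.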
  • 13
Qwen3.6-35B-A3B

    Open multimodal model for coding, agents, and long-context tasks

    ...A notable addition is thinking preservation, which allows the model to retain reasoning context from earlier messages, improving iterative work and reducing redundant computation. Architecturally, it uses a Mixture-of-Experts design with 35B total parameters and 3B active, supports a native 262K-token context window, and can be extended to about 1M tokens with YaRN. It also performs strongly across coding, agent, vision, reasoning, and document-understanding benchmarks.
    Downloads: 0 This Week
  • 14
GLM-4.5-Air

    Compact hybrid reasoning language model for intelligent responses

    GLM-4.5-Air is a multilingual large language model with 106 billion total parameters and 12 billion active parameters, designed for conversational AI and intelligent agents. It is part of the GLM-4.5 family developed by Zhipu AI, offering hybrid reasoning capabilities via two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model is optimized for efficiency and deployment, delivering strong results across 12 industry benchmarks, with a composite score of 59.8. ...
    Downloads: 0 This Week
  • 15
MiMo-V2.5

    Omnimodal AI model for agents, coding, and long-context tasks

    MiMo-V2.5 is a native omnimodal large language model developed by Xiaomi, designed for advanced agentic workflows, multimodal reasoning, and long-context processing. Built on a Mixture-of-Experts architecture with approximately 309B total parameters and around 15B activated per inference, it balances high capability with efficient execution. The model natively processes text, images, video, and audio within a unified system, enabling cross-modal understanding and complex task execution in a single pipeline. With a context window of up to 1 million tokens, it can handle large documents, extended conversations, and multi-step workflows without fragmentation. ...
    Downloads: 0 This Week
  • 16
DeepSeek-V4-Flash

    Efficient MoE model for million-token reasoning and coding

    DeepSeek-V4-Flash is a preview Mixture-of-Experts language model built for efficient million-token context intelligence. It has 284B total parameters with 13B activated and supports a 1M-token context window, making it suitable for long-document reasoning, complex coding, agentic workflows, and large-scale information processing. The model uses a hybrid attention architecture that combines Compressed Sparse Attention and Heavily Compressed Attention to improve long-context efficiency, while Manifold-Constrained Hyper-Connections strengthen signal stability across layers. ...
    Downloads: 0 This Week
  • 17
Qwen3.6-35B-A3B-FP8

    FP8 Qwen model for efficient multimodal coding and agent tasks

    ...A key capability is thinking preservation, which allows the model to retain reasoning traces from earlier messages, helping reduce repeated computation and improving consistency in iterative tasks. The model uses a Mixture-of-Experts design with 35B total parameters and 3B active, supports a native context window of 262,144 tokens, and can be extended to about 1,010,000 tokens with YaRN. It is compatible with major inference frameworks such as Transformers, vLLM, SGLang, and KTransformers, making it a practical high-performance option.
    Downloads: 0 This Week
  • 18
GigaChat 3 Ultra

    High-performance MoE model with MLA, MTP, and multilingual reasoning

    GigaChat 3 Ultra is a flagship instruct-model built on a custom Mixture-of-Experts architecture with 702B total and 36B active parameters. It leverages Multi-head Latent Attention to compress the KV cache into latent vectors, dramatically reducing memory demand and improving inference speed at scale. The model also employs Multi-Token Prediction, enabling multi-step token generation in a single pass for up to 40% faster output through speculative and parallel decoding techniques. ...
    Downloads: 0 This Week
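The KV-cache compression that Multi-head Latent Attention provides (mentioned in this entry) is easy to see with back-of-envelope arithmetic. All dimensions below are illustrative assumptions, not GigaChat 3 Ultra's real configuration:

```python
# Rough illustration of why caching a compressed latent vector
# (as in Multi-head Latent Attention) shrinks the KV cache.
# Layer count and dimensions are illustrative assumptions only.
def cache_gb(layers, tokens, values_per_token, bytes_per_value=2):
    # Cached values per token per layer, stored at 16-bit precision.
    return layers * tokens * values_per_token * bytes_per_value / 1e9

# Standard attention caches full K and V vectors per token per layer:
standard = cache_gb(layers=60, tokens=128_000, values_per_token=2 * 8192)
# MLA instead caches one compressed latent per token per layer:
latent = cache_gb(layers=60, tokens=128_000, values_per_token=512)

print(f"standard KV cache: {standard:.1f} GB")
print(f"latent KV cache:   {latent:.1f} GB ({standard / latent:.0f}x smaller)")
```

Under these assumed dimensions the latent cache is 32x smaller, which is the kind of saving that makes long contexts and faster batched inference practical at this scale.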
  • 19
Hunyuan-A13B-Instruct

    Efficient 13B MoE language model with long context and reasoning modes

    Hunyuan-A13B-Instruct is a powerful instruction-tuned large language model developed by Tencent using a fine-grained Mixture-of-Experts (MoE) architecture. While the total model includes 80 billion parameters, only 13 billion are active per forward pass, making it highly efficient while maintaining strong performance across benchmarks. It supports up to 256K context tokens, advanced reasoning (CoT) abilities, and agent-based workflows with tool parsing. The model offers both fast and slow thinking modes, letting users trade off speed for deeper reasoning. ...
    Downloads: 0 This Week
  • 20
Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    Mistral Large 3 675B Base 2512 is the foundational, pre-trained version of the Mistral Large 3 family, built as a frontier-scale multimodal Mixture-of-Experts model with 41B active parameters and a total size of 675B. It is trained from scratch using 3000 H200 GPUs, making it one of the most advanced and compute-intensive open-weight models available. As the base version, it is not fine-tuned for instruction following or reasoning, making it ideal for teams planning their own domain-specific finetuning or custom training pipelines. ...
    Downloads: 0 This Week
  • 21
Mistral Large 3 675B Instruct 2512 NVFP4

    Quantized 675B multimodal instruct model optimized for NVFP4

    Mistral Large 3 675B Instruct 2512 NVFP4 is a frontier-scale multimodal Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. This NVFP4 checkpoint is a post-training-activation quantized version of the original instruct model, created through a collaboration between Mistral AI, vLLM, and Red Hat using llm-compressor. It retains the same instruction-tuned behavior as the FP8 model, making it ideal for production assistants, agentic workflows, scientific tasks, and long-context enterprise systems. ...
    Downloads: 0 This Week
  • 22
Mistral Large 3 675B Instruct 2512

Frontier-scale 675B multimodal instruct MoE model for enterprise AI

    Mistral Large 3 675B Instruct 2512 is a state-of-the-art multimodal granular Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. As the instruct-tuned FP8 variant, it is optimized for reliable instruction following, agentic workflows, production-grade assistants, and long-context enterprise tasks. It incorporates a massive 673B-parameter language MoE backbone and a 2.5B-parameter vision encoder, enabling rich multimodal understanding across text and images. ...
    Downloads: 0 This Week
  • 23
gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

    GPT-OSS-20B is OpenAI’s smaller, open-weight language model optimized for low-latency, agentic tasks, and local deployment. With 21B total parameters and 3.6B active parameters (MoE), it fits within 16GB of memory thanks to native MXFP4 quantization. Designed for high-performance reasoning, it supports Harmony response format, function calling, web browsing, and code execution. Like its larger sibling (gpt-oss-120b), it offers adjustable reasoning depth and full chain-of-thought visibility for better interpretability. ...
    Downloads: 0 This Week
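The claim above, that a 21B-parameter model fits within 16 GB thanks to MXFP4, checks out with simple arithmetic. MXFP4 stores 4-bit values with a shared scale per block, so the effective cost is slightly more than 4 bits per weight; ~4.25 bits is an assumption used for this estimate:

```python
# Back-of-envelope weight footprint for gpt-oss-20b under MXFP4.
# 4-bit values plus shared per-block scale overhead ~= 4.25 bits/weight
# (assumed figure for this estimate).
params = 21e9
bits_per_weight = 4.25
weight_gib = params * bits_per_weight / 8 / 2**30

print(f"~{weight_gib:.1f} GiB of weights")  # leaves headroom for KV cache in 16 GB
```

At roughly 10-11 GiB of weights, a 16 GB device still has room for activations and the KV cache, which is consistent with the entry's local-deployment pitch.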
  • 24
gpt-oss-120b

    OpenAI’s open-weight 120B model optimized for reasoning and tooling

    GPT-OSS-120B is a powerful open-weight language model by OpenAI, optimized for high-level reasoning, tool use, and agentic tasks. With 117B total parameters and 5.1B active parameters, it’s designed to fit on a single H100 GPU using native MXFP4 quantization. The model supports fine-tuning, chain-of-thought reasoning, and structured outputs, making it ideal for complex workflows. It operates in OpenAI’s Harmony response format and can be deployed via Transformers, vLLM, Ollama, LM Studio, and PyTorch. ...
    Downloads: 0 This Week
  • 25
MiMo-V2.5-Pro

    Flagship MoE model for long-context agents and complex coding

    MiMo-V2.5-Pro is Xiaomi’s flagship Mixture-of-Experts (MoE) model built for the most demanding agentic, software engineering, and long-horizon reasoning tasks. It features approximately 1.02 trillion total parameters with 42B activated per inference, balancing extreme capability with efficient execution. The model supports a 1 million token context window, enabling it to maintain coherence across long workflows involving thousands of tool calls and multi-step reasoning chains. Architecturally, it uses a hybrid attention system combining Sliding Window Attention and Global Attention to significantly reduce memory usage while preserving long-context performance. ...
    Downloads: 0 This Week