Showing 146 open source projects for "mixture"

  • 1
    Nemotron 3 Super

    Open language model developed by NVIDIA as part of Nemotron-3 family

    NVIDIA-Nemotron-3-Super-120B-A12B-FP8 is a large-scale open language model developed by NVIDIA as part of the Nemotron-3 family of generative AI systems designed for advanced reasoning, conversational interaction, and agent-based workflows. The model contains approximately 120 billion parameters, but employs a Mixture-of-Experts architecture that activates only a smaller subset of parameters during inference, improving computational efficiency while maintaining high capability. Its architecture combines Transformer attention layers with Mamba state-space components to balance long-context reasoning, memory efficiency, and high-quality language generation. ...
    Downloads: 0 This Week
    Last Update:
    See Project
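Several models in this list, including Nemotron 3 Super above, use Mixture-of-Experts layers that route each token to only a few expert MLPs, which is why far fewer parameters are active per token than the headline total. The sketch below shows minimal top-k expert routing; the hidden size, expert count, and k are invented for illustration and are not Nemotron's configuration.

```python
# Minimal top-k Mixture-of-Experts layer (illustrative only; hidden size,
# expert count, and k are assumptions, not Nemotron's configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)    # per-token routing scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                              # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)      # (tokens, n_experts)
        gate, idx = probs.topk(self.k, dim=-1)         # keep k experts per token
        gate = gate / gate.sum(dim=-1, keepdim=True)   # renormalise gate weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += gate[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(5, 64)).shape)                   # torch.Size([5, 64])
```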
  • 2
    Nemotron 3

    Large language model developed and released by NVIDIA

    ...It is the post-trained and FP8-quantized variant of the Nemotron 3 Nano model, meaning its weights and activations are represented in 8-bit floating point (FP8) to dramatically reduce memory usage and computational cost while retaining high accuracy. The base Nano architecture uses a hybrid Mamba-Transformer Mixture-of-Experts (MoE) design, allowing the model to activate only a small fraction of its 31.6 billion parameters per token, which improves speed and efficiency without sacrificing quality on complex queries. This configuration supports a massive context length of up to 1 million tokens, making it suitable for long-context reasoning, agentic tasks, extended dialogues, and applications like code generation or document summarization.
    Downloads: 0 This Week
    Last Update:
    See Project
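The Nemotron 3 entry above attributes its reduced memory footprint to FP8 weights and activations. A back-of-the-envelope estimate of weight memory at different precisions follows; it ignores activations, the KV cache, and quantization scale overhead.

```python
# Rough weight-memory estimate for the ~31.6B-parameter model quoted above at
# different numeric precisions. Activations, KV cache, and quantisation scale
# factors are ignored.
params = 31.6e9
for name, bytes_per_param in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{name}: {params * bytes_per_param / 1e9:6.1f} GB of weights")
# FP32: 126.4 GB, BF16: 63.2 GB, FP8: 31.6 GB -> FP8 halves BF16 weight memory
```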
  • 3
    Leanstral

    Open-source code agent designed for Lean 4

    ...By focusing on theorem proving and formal reasoning, Leanstral represents a specialized direction within large language models, targeting domains that require strict correctness and logical rigor rather than general conversational tasks. It leverages modern large-scale architectures, likely incorporating mixture-of-experts techniques, to balance efficiency and capability while handling structured symbolic reasoning tasks. The model can assist in writing proofs, exploring mathematical structures, and validating logical properties in code.
    Downloads: 0 This Week
    Last Update:
    See Project
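Leanstral targets Lean 4 theorem proving rather than general chat. For readers unfamiliar with the domain, here is a tiny Lean 4 proof of the kind such a code agent might be asked to complete; it is purely illustrative and unrelated to Leanstral itself.

```lean
-- A small Lean 4 theorem closed with a lemma from the core library.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```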
  • 4
    Kimi K2.6

    Multimodal agent model for coding, orchestration, and autonomy

    ...One of its most distinctive capabilities is horizontal agent scaling, supporting up to 300 sub-agents and 4,000 coordinated steps in a single run, which enables parallel task decomposition and end-to-end completion of outputs such as documents, websites, and spreadsheets. Architecturally, it uses a 1T-parameter Mixture-of-Experts design with 32B activated parameters, a MoonViT vision encoder, and a 256K context window.
    Downloads: 0 This Week
    Last Update:
    See Project
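The Kimi K2.6 entry above highlights horizontal agent scaling: decomposing a task across many sub-agents that run in parallel. The toy asyncio fan-out below illustrates the pattern only; it is not Kimi's orchestration framework, and `run_subagent` is a stand-in for real model and tool calls.

```python
# Purely illustrative fan-out of parallel sub-agent tasks with asyncio; not
# Kimi's orchestration framework. run_subagent stands in for real model/tool
# calls that would each pursue one piece of the decomposed task.
import asyncio

async def run_subagent(task_id: int) -> str:
    await asyncio.sleep(0.01)                        # placeholder for real work
    return f"sub-agent {task_id}: done"

async def orchestrate(n_subagents: int = 8) -> list[str]:
    # run all sub-agents concurrently and collect their results
    return await asyncio.gather(*(run_subagent(i) for i in range(n_subagents)))

print(asyncio.run(orchestrate()))
```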
  • 5
    Qwen3.6-35B-A3B

    Open multimodal model for coding, agents, and long-context tasks

    ...A notable addition is thinking preservation, which allows the model to retain reasoning context from earlier messages, improving iterative work and reducing redundant computation. Architecturally, it uses a Mixture-of-Experts design with 35B total parameters and 3B active, supports a native 262K-token context window, and can be extended to about 1M tokens with YaRN. It also performs strongly across coding, agent, vision, reasoning, and document-understanding benchmarks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    A mixture of Zelda II and Dungeons & Dragons. Huge world to play.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    DeepSeek-V4-Pro

    Flagship MoE model for advanced reasoning, coding, and agents

    DeepSeek-V4-Pro is a flagship open-weight Mixture-of-Experts language model designed for high-performance reasoning, coding, and agent-based workflows at scale. It features approximately 1.6 trillion total parameters with around 49B activated during inference, enabling strong efficiency while maintaining frontier-level capability. The model supports an ultra-long context window of up to 1 million tokens, making it highly suitable for long-document reasoning, large codebases, and complex multi-step tasks. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    DeepSeek-V4-Flash

    Efficient MoE model for million-token reasoning and coding

    DeepSeek-V4-Flash is a preview Mixture-of-Experts language model built for efficient million-token context intelligence. It has 284B total parameters with 13B activated and supports a 1M-token context window, making it suitable for long-document reasoning, complex coding, agentic workflows, and large-scale information processing. The model uses a hybrid attention architecture that combines Compressed Sparse Attention and Heavily Compressed Attention to improve long-context efficiency, while Manifold-Constrained Hyper-Connections strengthen signal stability across layers. ...
    Downloads: 0 This Week
    Last Update:
    See Project
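Both DeepSeek-V4 entries above contrast total parameters with the much smaller number activated per token. Using the rough rule that decode FLOPs per token scale with about twice the active parameter count, the sketch below compares each configuration against a hypothetical dense model of the same total size; attention and router costs are ignored.

```python
# Rough per-token decode cost (~2 FLOPs per active parameter) for the two
# DeepSeek-V4 configurations quoted above, versus a hypothetical dense model
# of the same total size. Attention and router costs are ignored.
models = {
    "DeepSeek-V4-Pro   (1.6T total, 49B active)": (1.6e12, 49e9),
    "DeepSeek-V4-Flash (284B total, 13B active)": (284e9, 13e9),
}
for name, (total, active) in models.items():
    dense_flops = 2 * total   # if every parameter were used for every token
    moe_flops = 2 * active    # only the routed experts actually run
    print(f"{name}: ~{moe_flops / dense_flops:.1%} of the dense per-token compute")
```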
  • 9
    Qwen3.6-35B-A3B-FP8

    FP8 Qwen model for efficient multimodal coding and agent tasks

    ...A key capability is thinking preservation, which allows the model to retain reasoning traces from earlier messages, helping reduce repeated computation and improving consistency in iterative tasks. The model uses a Mixture-of-Experts design with 35B total parameters and 3B active, supports a native context window of 262,144 tokens, and can be extended to about 1,010,000 tokens with YaRN. It is compatible with major inference frameworks such as Transformers, vLLM, SGLang, and KTransformers, making it a practical high-performance option.
    Downloads: 0 This Week
    Last Update:
    See Project
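The Qwen3.6-35B-A3B-FP8 entry above lists compatibility with standard inference stacks such as Transformers and vLLM. Below is a minimal Hugging Face Transformers sketch, assuming the checkpoint is published under a repo id like Qwen/Qwen3.6-35B-A3B-FP8; the id, the required Transformers version, and whether the weights fit your hardware are all assumptions.

```python
# Hedged sketch: loading an FP8 MoE checkpoint with Hugging Face Transformers.
# The repository id below is an assumption based on the listing; adjust it and
# expect the checkpoint to need a recent Transformers release and ample memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Qwen/Qwen3.6-35B-A3B-FP8"                     # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user",
             "content": "Summarise mixture-of-experts routing in one sentence."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```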
  • 10
    GigaChat 3 Ultra

    High-performance MoE model with MLA, MTP, and multilingual reasoning

    GigaChat 3 Ultra is a flagship instruct-model built on a custom Mixture-of-Experts architecture with 702B total and 36B active parameters. It leverages Multi-head Latent Attention to compress the KV cache into latent vectors, dramatically reducing memory demand and improving inference speed at scale. The model also employs Multi-Token Prediction, enabling multi-step token generation in a single pass for up to 40% faster output through speculative and parallel decoding techniques. ...
    Downloads: 0 This Week
    Last Update:
    See Project
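The GigaChat 3 Ultra entry credits Multi-head Latent Attention with compressing the KV cache into latent vectors. The toy comparison below contrasts a standard per-head KV cache with a single latent vector per token; every dimension is invented for illustration and is not GigaChat's actual configuration.

```python
# Toy KV-cache comparison: standard per-head K/V storage vs. one latent vector
# per token (MLA-style). All dimensions below are invented for illustration.
layers, heads, head_dim, latent_dim = 60, 64, 128, 512
tokens, bytes_per_value = 128_000, 2                   # 128K-token context, BF16

std_cache = layers * tokens * 2 * heads * head_dim * bytes_per_value  # K and V
mla_cache = layers * tokens * latent_dim * bytes_per_value            # one latent per token
print(f"standard KV cache: {std_cache / 1e9:.1f} GB")
print(f"latent KV cache:   {mla_cache / 1e9:.1f} GB "
      f"({std_cache / mla_cache:.0f}x smaller)")
```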
  • 11
    Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    Qwen3-Next-80B-A3B-Instruct is the flagship release in the Qwen3-Next series, designed as a next-generation foundation model for ultra-long context and efficient reasoning. With 80B total parameters and 3B activated at a time, it leverages hybrid attention (Gated DeltaNet + Gated Attention) and a high-sparsity Mixture-of-Experts architecture to achieve exceptional efficiency. The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE scaling (YaRN), making it highly capable for processing large documents and extended conversations. Multi-Token Prediction (MTP) boosts both training and inference, while stability optimizations such as weight-decayed and zero-centered layernorm ensure robustness. ...
    Downloads: 0 This Week
    Last Update:
    See Project
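The Qwen3-Next entry mentions extending the native 262K context toward 1 million tokens with RoPE scaling (YaRN). The sketch below shows the kind of rope_scaling override this typically involves in a Transformers-style config; the exact field names and scaling factor for this model are assumptions and should be checked against its own model card.

```python
# Hedged sketch: a YaRN rope_scaling override to stretch a 262,144-token native
# context toward ~1M tokens. Key names follow the common Transformers
# convention; confirm the exact fields against the model's own config/README.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen3-Next-80B-A3B-Instruct")  # assumed repo id
cfg.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,                                    # 262,144 x 4 ~= 1M positions
    "original_max_position_embeddings": 262144,
}
cfg.max_position_embeddings = 1_010_000               # extended window
# pass `config=cfg` to AutoModelForCausalLM.from_pretrained(...) to apply it
```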
  • 12
    Hunyuan-A13B-Instruct

    Efficient 13B MoE language model with long context and reasoning modes

    Hunyuan-A13B-Instruct is a powerful instruction-tuned large language model developed by Tencent using a fine-grained Mixture-of-Experts (MoE) architecture. While the total model includes 80 billion parameters, only 13 billion are active per forward pass, making it highly efficient while maintaining strong performance across benchmarks. It supports up to 256K context tokens, advanced reasoning (CoT) abilities, and agent-based workflows with tool parsing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    H.A.Z.A.R.D (short for Hacking Aliens, Zombies and Raging Demons) is a Hack'n'Slay RPG. The setting is a mixture of Sci-Fi and Fantasy.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Mistral Large 3 675B Base 2512

    Frontier-scale 675B multimodal base model for custom AI training

    Mistral Large 3 675B Base 2512 is the foundational, pre-trained version of the Mistral Large 3 family, built as a frontier-scale multimodal Mixture-of-Experts model with 41B active parameters and a total size of 675B. It is trained from scratch using 3000 H200 GPUs, making it one of the most advanced and compute-intensive open-weight models available. As the base version, it is not fine-tuned for instruction following or reasoning, making it ideal for teams planning their own domain-specific finetuning or custom training pipelines. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Mistral Large 3 675B Instruct 2512 Eagle

    Speculative-decoding accelerator for the 675B Mistral Large 3

    ...It works alongside the primary 675B instruct model, enabling faster response times by predicting several tokens ahead using Mistral’s Eagle speculative method. Built on the same frontier-scale multimodal Mixture-of-Experts architecture, it complements a system featuring 41B active parameters and a 2.5B-parameter vision encoder. The Eagle variant is specialized rather than standalone, serving as a performance accelerator for production-grade assistants, agentic workflows, long-context applications, and retrieval-augmented reasoning pipelines. ...
    Downloads: 0 This Week
    Last Update:
    See Project
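The Eagle entry above describes a speculative-decoding accelerator: a small draft model proposes several tokens ahead and the large instruct model verifies them. The schematic greedy accept/reject loop below conveys the idea only; the draft and target functions are stand-ins, and this is not Mistral's Eagle implementation.

```python
# Schematic greedy speculative decoding (not Mistral's Eagle implementation):
# a cheap draft model proposes k tokens, the expensive target model verifies
# them and keeps the longest matching prefix. Both "models" are stand-ins.
import random

VOCAB = list("abcdef")

def draft_next(ctx: str) -> str:             # cheap, sometimes wrong proposal
    return random.choice(VOCAB)

def target_next(ctx: str) -> str:            # expensive "ground truth" next token
    return VOCAB[len(ctx) % len(VOCAB)]

def speculative_step(ctx: str, k: int = 4) -> str:
    # 1) draft k tokens autoregressively with the cheap model
    drafted, c = [], ctx
    for _ in range(k):
        t = draft_next(c)
        drafted.append(t)
        c += t
    # 2) the target verifies them (in a real system, all k in one forward pass)
    c = ctx
    for t in drafted:
        expected = target_next(c)
        if t == expected:
            c += t                            # accepted: a "free" token
        else:
            c += expected                     # first mismatch: take target's token
            break
    else:
        c += target_next(c)                   # all accepted: one bonus target token
    return c

print(speculative_step("seed"))
```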
  • 16
    A video game with a mixture of 2D and 3D graphics, but mostly 2D. Currently in the planning phase. Database-oriented, with multiplayer gameplay intended, and with an underlying translation framework for Hindi and Japanese.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    This project is about rendering very large, very detailed terrain. A mixture of chunked LOD, geomipmapping, and fractals is used to handle the large terrain and keep it highly detailed.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Mistral Large 3 675B Instruct 2512

    Frontier-scale 675B multimodal instruct MoE model for enterprise AI

    Mistral Large 3 675B Instruct 2512 is a state-of-the-art multimodal granular Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. As the instruct-tuned FP8 variant, it is optimized for reliable instruction following, agentic workflows, production-grade assistants, and long-context enterprise tasks. It incorporates a massive 673B-parameter language MoE backbone and a 2.5B-parameter vision encoder, enabling rich multimodal understanding across text and images. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    GLM-4.5-Air

    Compact hybrid reasoning language model for intelligent responses

    GLM-4.5-Air is a multilingual large language model with 106 billion total parameters and 12 billion active parameters, designed for conversational AI and intelligent agents. It is part of the GLM-4.5 family developed by Zhipu AI, offering hybrid reasoning capabilities via two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model is optimized for efficiency and deployment, delivering strong results across 12 industry benchmarks,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Mistral Large 3 675B Instruct 2512 NVFP4

    Quantized 675B multimodal instruct model optimized for NVFP4

    Mistral Large 3 675B Instruct 2512 NVFP4 is a frontier-scale multimodal Mixture-of-Experts model featuring 675B total parameters and 41B active parameters, trained from scratch on 3,000 H200 GPUs. This NVFP4 checkpoint is a post-training-activation quantized version of the original instruct model, created through a collaboration between Mistral AI, vLLM, and Red Hat using llm-compressor. It retains the same instruction-tuned behavior as the FP8 model, making it ideal for production assistants, agentic workflows, scientific tasks, and long-context enterprise systems. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21

    Advanced Traffic Simulation

    A mixture of game and simulation for public transport.

    This project combines a game and a simulation that work together. The focus is on public transport: manage your line network in your home area, upgrade your vehicles, build new roads, deal with accidents, or plan replacement buses during railroad construction work. You can also drive the vehicles yourself, or simply walk around the world and enjoy your line network. Vehicles include trains, trams, subways, buses, and monorails. An addon structure makes it possible to customize the...
    Downloads: 0 This Week
    Last Update:
    See Project