Fast, flexible and easy to use probabilistic modelling in Python
A Powerful Native Multimodal Model for Image Generation
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Qwen3-Coder is the code version of Qwen3
Open-source, high-performance AI model with advanced reasoning
From nobody to large language model (LLM) hero
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI
A Multi-Modal World Model for Reconstruction, Generation, and Simulation
kaldi-asr/kaldi is the official location of the Kaldi project
Fully automatic censorship removal for language models
Powerful AI language model (MoE) optimized for efficiency/performance
Ling is a MoE LLM provided and open-sourced by InclusionAI
Mixture-of-Experts Vision-Language Models for Advanced Multimodal
Open-weight, large-scale hybrid-attention reasoning model
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Pretrained (Language) Models for Probabilistic Time Series Forecasting
Large language model & vision-language model based on linear attention
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Open-source large language model family from Tencent Hunyuan
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world
Qwen3-omni is a natively end-to-end, omni-modal LLM
Open-source, high-performance Mixture-of-Experts large language model
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
Run Mixtral-8x7B models in Colab or on consumer desktops