New family of code-focused large language models (LLMs)
Open-weight, large-scale hybrid-attention reasoning model
Qwen2.5-Coder, the code-specialized version of the Qwen2.5 large language model
Lightweight 24B agentic coding model with vision and long context
Agentic 123B coding model optimized for large-scale engineering
Instruction-tuned 7B language model for chat and complex tasks
Powerful 14B LLM with strong instruction and long-text handling
High-performance MoE model with MLA, MTP, and multilingual reasoning
Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens
Efficient 8B multimodal model tuned for advanced reasoning tasks
High-precision 14B multimodal model built for advanced reasoning tasks
Compact 3B-param multimodal model for efficient on-device reasoning
Compact hybrid reasoning language model for intelligent responses
JetBrains’ 4B-parameter code model for completions
Efficient 13B MoE language model with long context and reasoning modes