Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens
Jan-v1-edge: efficient 1.7B reasoning model optimized for edge devices
Efficient 13B MoE language model with long context and reasoning modes
Small 3B-base multimodal model ideal for custom AI on edge hardware
Frontier-scale 675B multimodal instruct MoE model for enterprise AI
Compact 3B-param multimodal model for efficient on-device reasoning
Versatile 8B-base multimodal LLM, flexible foundation for custom AI
Powerful 14B-base multimodal model, flexible base for fine-tuning