Efficient 14B multimodal instruct model with FP8 quantization for edge deployment
Frontier-scale 675B multimodal instruct MoE model for enterprise AI
Compact 3B-param multimodal model for efficient on-device reasoning
Versatile 8B-base multimodal LLM, flexible foundation for custom AI
Powerful 14B-base multimodal model, flexible base for fine-tuning
Open multimodal model for coding, agents, and long-context tasks
Flexible text-to-text transformer model for multilingual NLP tasks
Summarization model fine-tuned on CNN/DailyMail articles
FP8 Qwen model for efficient multimodal coding and agent tasks
CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B