Qwen3-Omni is a natively end-to-end, omni-modal LLM
Open-weight, large-scale hybrid-attention reasoning model
tiktoken is a fast BPE tokeniser for use with OpenAI's models
GLM-4 series: Open Multilingual Multimodal Chat LMs
Multi-modal large language model designed for audio understanding
State-of-the-art (SoTA) text-to-video pre-trained model
Open-source framework for intelligent speech interaction
This repository contains the official implementation of FastVLM
FAIR Sequence Modeling Toolkit 2
Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Pushing the Limits of Mathematical Reasoning in Open Language Models
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline
High-resolution models for human tasks
ChatGPT interface with better UI
Unified Multimodal Understanding and Generation Models
Language modeling in a sentence representation space
Dataset of GPT-2 outputs for research in detection, biases, and more
ICLR2024 Spotlight: curation/training code, metadata, distribution
Diffusion Transformer with Fine-Grained Chinese Understanding
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
The ChatGPT Retrieval Plugin lets you easily find personal documents
Large Multimodal Models for Video Understanding and Editing
LLM-based reinforcement learning model for audio editing
Qwen2.5-Coder is the code version of Qwen2.5, the large language model series developed by Alibaba Cloud