Diffusion Transformer with Fine-Grained Chinese Understanding
Qwen3-Omni is a natively end-to-end, omni-modal LLM
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training
High-resolution models for human tasks
This repository contains the official implementation of FastVLM
FAIR Sequence Modeling Toolkit 2
ICLR 2024 Spotlight: curation/training code, metadata, distribution
Towards Ultimate Expert Specialization in Mixture-of-Experts Language
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Unified Multimodal Understanding and Generation Models
Language modeling in a sentence representation space
The ChatGPT Retrieval Plugin lets you easily find personal documents
Open-source, high-performance Mixture-of-Experts large language model
Open-Source Financial Large Language Models!
Qwen2.5-Coder is the code version of Qwen2.5, the large language model
An Open-Source Bilingual Conversational LLM
Open Multilingual Multimodal Chat LMs
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Chinese LLaMA & Alpaca large language model + local CPU/GPU training
Repo for external large-scale work
Official PyTorch Implementation of "Scalable Diffusion Models"
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)
LLaMA: Open and Efficient Foundation Language Models
Implementation of model parallel autoregressive transformers on GPUs
A minimal PyTorch re-implementation of the OpenAI GPT