A coding-free framework built on PyTorch
A very simple framework for state-of-the-art NLP
BitNet: Scaling 1-bit Transformers for Large Language Models
Accelerate local LLM inference and finetuning
Hackable and optimized Transformers building blocks
Build AI-powered semantic search applications
Hunyuan Translation Model Version 1.5
Training Large Language Models to Reason in a Continuous Latent Space
End-to-end speech processing toolkit
A series of math-specific large language models in our Qwen2 family
An MLOps framework to package, deploy, monitor and manage models
Implementation for MatMul-free LM
GLM-4 series: Open Multilingual Multimodal Chat LMs
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Unifying 3D Mesh Generation with Language Models
New family of code large language models (LLMs)
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
Flexible Photo Recrafting While Preserving Your Identity
An easy-to-use LLM quantization package with user-friendly APIs
Implementation of Recurrent Interface Network (RIN)
Transformers4Rec is a flexible and efficient library for sequential and session-based recommendation
Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Text-to-Image generation. The repo for a NeurIPS 2021 paper
Inference code and configs for the ReplitLM model family
Official PyTorch Implementation of "Scalable Diffusion Models"