Qwen2.5-VL is a multimodal large language model series
Tool for exploring and debugging transformer model behaviors
A state-of-the-art open visual language model
GPT-4V-level open-source multimodal model based on Llama3-8B
Qwen3-Coder is the code-specialized version of Qwen3
GLM-4-Voice | End-to-End Chinese-English Conversational Model
A series of math-specific large language models built on Qwen2
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Qwen-Image is a powerful image generation foundation model
Ring is a reasoning MoE LLM open-sourced by InclusionAI
Chat and pretrained large audio-language model proposed by Alibaba Cloud
Renderer for the harmony response format to be used with gpt-oss
Capable of understanding text, audio, vision, and video
CodeGeeX2: A More Powerful Multilingual Code Generation Model
CLIP: predict the most relevant text snippet given an image
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Inference framework for 1-bit LLMs
Pushing the Limits of Mathematical Reasoning in Open Language Models
GLM-4 series: Open Multilingual Multimodal Chat LMs
tiktoken is a fast BPE tokeniser for use with OpenAI's models
An AI-powered security review GitHub Action using Claude
Designed for text embedding and ranking tasks
Chinese LLaMA-2 & Alpaca-2 large models (Phase II project)
Dataset of GPT-2 outputs for research in detection, biases, and more
ChatGPT interface with better UI