GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
A series of math-specific large language models built on Qwen2
Multilingual sentence & image embeddings with BERT
GLM-4-Voice | End-to-End Chinese-English Conversational Model
BISHENG is an open LLM DevOps platform for next-generation applications
A state-of-the-art open visual language model
Repository for Qwen2-Audio, a chat and pretrained large audio-language model
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Framework and no-code GUI for fine-tuning LLMs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Integrating LLMs into structured NLP pipelines
Database interaction powered by private, locally deployed LLMs
Qwen3-Omni is a natively end-to-end, omni-modal LLM
PyTorch library of curated Transformer models and their components
Seamlessly integrate LLMs into scikit-learn (see the estimator sketch after this list)
State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list)
Chat and pretrained large audio-language model proposed by Alibaba Cloud
Run Llama 2 inference in one file of pure C
Chinese LLaMA-2 & Alpaca-2 large language models (Phase 2 project)
Training Large Language Model to Reason in a Continuous Latent Space
Ling is a MoE LLM open-sourced by InclusionAI
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible
AI agent that streamlines the entire process of data analysis
Gorilla: An API store for LLMs
Low-code framework for building custom LLMs, neural networks, and other AI models
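For the scikit-learn integration entry above, the sketch below shows the general pattern rather than that project's actual API: an LLM exposed as a scikit-learn estimator so it can slot into pipelines and cross-validation. The `ZeroShotLLMClassifier` class, the `stub_llm` stand-in, and the prompt format are all hypothetical.

```python
# A hypothetical sketch of wrapping an LLM as a scikit-learn estimator.
# `stub_llm` is a stand-in for a real chat-completion client.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


def stub_llm(prompt: str) -> str:
    # Stand-in "LLM": scans the text portion of the prompt for a label word.
    body = prompt.split("Text:", 1)[-1].lower()
    for label in ("positive", "negative"):
        if label in body:
            return label
    return "positive"


class ZeroShotLLMClassifier(BaseEstimator, ClassifierMixin):
    """Zero-shot text classifier that delegates each prediction to an LLM."""

    def __init__(self, llm=stub_llm, labels=None):
        self.llm = llm
        self.labels = labels

    def fit(self, X, y=None):
        # Zero-shot: nothing is trained; we only record the label set.
        labels = self.labels if self.labels is not None else sorted(set(y))
        self.classes_ = np.array(labels)
        return self

    def predict(self, X):
        preds = []
        for text in X:
            prompt = (f"Choose one label from {list(self.classes_)} "
                      f"for the text below.\nText: {text}\nLabel:")
            answer = self.llm(prompt).strip().lower()
            preds.append(answer if answer in self.classes_ else self.classes_[0])
        return np.array(preds)


clf = ZeroShotLLMClassifier(labels=["positive", "negative"]).fit(X=[], y=None)
print(clf.predict(["A dull, frustrating, thoroughly negative experience."]))
```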
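For the parameter-efficient fine-tuning entry above, here is a minimal LoRA sketch, assuming the entry refers to the Hugging Face `peft` library; the base model ("gpt2") and all hyperparameters are illustrative placeholders, not recommendations.

```python
# Minimal LoRA setup with Hugging Face peft; values are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# `model` can now go through an ordinary training loop or transformers.Trainer;
# afterwards, model.save_pretrained(...) stores just the adapter weights.
```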