A series of math-specific large language models built on Qwen2
Capable of understanding text, audio, vision, and video
Ling is an MoE LLM provided and open-sourced by InclusionAI
Multilingual sentence & image embeddings with BERT
Repo of Qwen2-Audio, a chat & pretrained large audio language model
An LLM-powered knowledge curation system that researches topics
The unofficial Python package that returns responses from Google Bard
Chat & pretrained large audio language model proposed by Alibaba Cloud
Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project
Leveraging BERT and c-TF-IDF to create easily interpretable topics
A state-of-the-art open visual language model
⚡ Building applications with LLMs through composability ⚡
Chat & pretrained large vision-language model
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Operating LLMs in production
Open-weight, large-scale hybrid-attention reasoning model
Qwen3-Omni is a natively end-to-end, omni-modal LLM
GPT4V-level open-source multi-modal model based on Llama3-8B
Chinese and English multimodal conversational language model
Tensor search for humans
Qwen2.5-VL is a multimodal large language model series
Large language model & vision-language model based on linear attention
GLM-4 series: Open Multilingual Multimodal Chat LMs
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT-style Training Pipeline
Code for the paper "Evaluating Large Language Models Trained on Code"