Chat & pretrained large audio language model proposed by Alibaba Cloud
The Multi-Agent Framework
Large Audio Language Model built for natural interactions
GPT4V-level open-source multi-modal model based on Llama3-8B
Designed for text embedding and ranking tasks
Set of tools to assess and improve LLM security
Qwen3-Omni is a natively end-to-end, omni-modal LLM
GLM-4-Voice | End-to-End Chinese-English Conversational Model
Get started with building fullstack agents using Gemini 2.5 & LangGraph
Chat & pretrained large vision language model
From Paper to Presentation in One Click
Gemma open-weight LLM library, from Google DeepMind
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
95% token savings. 155x faster queries. 16 languages
Central interface to connect your LLMs with external data
950-line, minimal, extensible LLM inference engine built from scratch
Qwen2.5-VL is a multimodal large language model series
User toolkit for analyzing and interfacing with Large Language Models
Repo of Qwen2-Audio chat & pretrained large audio language model
Inference code for CodeLlama models
Ring is a reasoning MoE LLM open-sourced by InclusionAI
Code for the "Language models can explain neurons in language models" paper
Capable of understanding text, audio, vision, video
Open-source, high-performance Mixture-of-Experts large language model
An interpretable and efficient predictor using pre-trained models