Qwen-Image is a powerful image generation foundation model
Chat & pretrained large vision-language model
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
Guiding Instruction-based Image Editing via Multimodal Large Language Models
Capable of understanding text, audio, vision, and video
GPT-4V-level open-source multimodal model based on Llama3-8B
Chinese and English multimodal conversational language model
Tensor search for humans
Phi-3.5 for Mac: Locally-run Vision and Language Models
Multilingual sentence & image embeddings with BERT (see the sketch after this list)
Inference code for CodeLlama models
The unofficial Python package that returns responses from Google Bard
The Multi-Agent Framework
Open-source libraries and APIs to build custom preprocessing pipelines
A state-of-the-art open visual language model
Gemma open-weight LLM library, from Google DeepMind
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Refer and Ground Anything Anywhere at Any Granularity
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Data Lake for Deep Learning. Build, manage, and query datasets
An open-source framework for training large multimodal models
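
For the sentence & image embeddings entry above, a minimal sketch of the sentence-transformers API, assuming the package's published `clip-ViT-B-32` checkpoint; the image filename and caption strings are hypothetical placeholders:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP checkpoint that encodes both images and text into one vector space
model = SentenceTransformer("clip-ViT-B-32")

# Encode one image and two candidate captions (hypothetical inputs)
img_emb = model.encode(Image.open("two_dogs_in_snow.jpg"))
text_emb = model.encode(["Two dogs playing in the snow", "A cat on a sofa"])

# Cosine similarity between the image and each caption; the matching
# caption should score noticeably higher
print(util.cos_sim(img_emb, text_emb))
```

For multilingual text, the library also publishes `clip-ViT-B-32-multilingual-v1`, which maps text in 50+ languages into the same embedding space as the image encoder above.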