Qwen-Image is a powerful image generation foundation model
Chat & pretrained large vision-language models
Guiding Instruction-based Image Editing via Multimodal Large Language Models
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
GPT-4V-level open-source multimodal model based on Llama3-8B
Chinese and English multimodal conversational language model
Capable of understanding text, audio, vision, and video
Tensor search for humans
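This tagline matches the Marqo project; a minimal sketch of its tensor-search workflow with the Marqo Python client, assuming a Marqo server running locally on its default port. The index name, document, and query are illustrative:

```python
import marqo  # pip install marqo; assumes a Marqo server running locally via Docker

mq = marqo.Client(url="http://localhost:8882")  # Marqo's default local endpoint

# Index name, document, and query below are illustrative.
mq.create_index("movies")
mq.index("movies").add_documents(
    [{"Title": "The Travels of Marco Polo", "Description": "A 13th-century travelogue."}],
    tensor_fields=["Description"],  # fields embedded for tensor (vector) search
)

results = mq.index("movies").search(q="journeys across Asia")
print(results["hits"][0]["Title"])
```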
Multilingual sentence & image embeddings with BERT
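A minimal sketch using the sentence-transformers package this tagline describes, assuming the paraphrase-multilingual-MiniLM-L12-v2 checkpoint (one of the library's published multilingual models; the sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

# Load a published multilingual checkpoint (model choice is illustrative).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Encode the same sentence in English and German into a shared vector space.
embeddings = model.encode([
    "A cat sits on the mat.",
    "Eine Katze sitzt auf der Matte.",
])

print(embeddings.shape)                            # (2, 384) for this model
print(util.cos_sim(embeddings[0], embeddings[1]))  # high cross-lingual similarity
```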
Phi-3.5 for Mac: Locally-run Vision and Language Models
The unofficial Python package that returns responses from Google Bard
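A minimal sketch of the package's documented usage, assuming the token is the `__Secure-1PSID` browser cookie as its README describes (the value below is a placeholder); note that Bard has since been discontinued in favor of Gemini, so this is historical:

```python
from bardapi import Bard  # pip install bardapi

# Token is the __Secure-1PSID cookie from a logged-in browser session (placeholder).
bard = Bard(token="YOUR___Secure-1PSID_COOKIE")
answer = bard.get_answer("What is the capital of France?")
print(answer["content"])
```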
Inference code for CodeLlama models
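The repository ships its own inference scripts; as an alternative sketch, the same checkpoints can also be loaded through Hugging Face transformers (the codellama/CodeLlama-7b-hf conversion), which is a swapped-in approach rather than the repo's own code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers accelerate

model_id = "codellama/CodeLlama-7b-hf"  # official Hugging Face conversion of the weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Code completion from a bare prompt; the base model continues the function body.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```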
The Multi-Agent Framework
Gemma open-weight LLM library, from Google DeepMind
Open source libraries and APIs to build custom preprocessing pipelines
A state-of-the-art open visual language model
Qwen3-Omni is a natively end-to-end, omni-modal LLM
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Refer and Ground Anything Anywhere at Any Granularity
Data Lake for Deep Learning. Build, manage, and query datasets
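A minimal sketch of loading a hosted dataset with the Deep Lake Python API, assuming the v3-style deeplake package and a public Activeloop dataset path taken as an example:

```python
import deeplake  # pip install deeplake

# Load a public dataset hosted on Activeloop (path is an example).
ds = deeplake.load("hub://activeloop/mnist-train")

# Samples are fetched lazily; materialize one image/label pair as NumPy arrays.
image = ds.images[0].numpy()
label = ds.labels[0].numpy()
print(image.shape, label)
```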
An open-source framework for training large multimodal models