Repo of Qwen2-Audio, a chat & pretrained large audio language model
Tongyi Deep Research, the leading open-source deep research agent
GLM-4 series: Open Multilingual Multimodal Chat LMs
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)
BISHENG is an open LLM DevOps platform for next-generation apps
Central interface to connect your LLMs with external data (see the sketch after this list)
Open source libraries and APIs to build custom preprocessing pipelines
Database system for building simpler and faster AI-powered applications
Phi-3.5 for Mac: Locally-run Vision and Language Models
Revolutionizing Database Interactions with Private LLM Technology
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Leveraging BERT and c-TF-IDF to create easily interpretable topics (see the sketch after this list)
Capable of understanding text, audio, vision, and video
PyTorch library of curated Transformer models and their components
Seamlessly integrate LLMs into scikit-learn
Swirl queries any number of data sources with APIs
Chat & pretrained large vision language model
Qwen2.5-VL is a multimodal large language model series
Training Large Language Model to Reason in a Continuous Latent Space
Ling is a MoE LLM provided and open-sourced by InclusionAI
AI agent that streamlines the entire process of data analysis
Gorilla: An API store for LLMs
Guiding Instruction-based Image Editing via Multimodal Large Language Models
Refer and Ground Anything Anywhere at Any Granularity
The official Meta Llama 3 GitHub site
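
The "central interface to connect your LLMs with external data" entry matches LlamaIndex's tagline. A minimal retrieval-augmented query sketch, assuming the `llama-index` package (0.10+ import paths) and a local `data/` folder of documents; both are assumptions, not details from the list above.

```python
# Minimal sketch, assuming the llama-index package (0.10+ import paths)
# and a local "data/" directory of files; the default settings expect an
# OpenAI API key in the environment for embeddings and generation.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load external data (files on disk) into document objects.
documents = SimpleDirectoryReader("data").load_data()

# Embed and index the documents so an LLM can query them.
index = VectorStoreIndex.from_documents(documents)

# Ask a question grounded in the indexed data.
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents cover?"))
```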
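The BERT + c-TF-IDF entry is BERTopic's tagline. A minimal sketch, assuming the `bertopic` package; the 20 Newsgroups corpus is used only as illustrative stand-in data.

```python
# Minimal sketch, assuming the bertopic package; the sample corpus
# (scikit-learn's 20 Newsgroups) is illustrative only.
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

docs = fetch_20newsgroups(
    subset="train", remove=("headers", "footers", "quotes")
).data[:1000]

# BERTopic embeds documents with a BERT-style sentence encoder,
# clusters the embeddings, and describes each cluster with c-TF-IDF
# keywords, which is what makes the resulting topics interpretable.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Top c-TF-IDF keywords for the largest discovered topic.
print(topic_model.get_topic(0))
```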
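For the Meta Llama 3 entry, a minimal text-generation sketch via Hugging Face `transformers`; the checkpoint id below is an assumption based on Meta's published naming, and loading the gated weights requires accepting Meta's license on the Hub (plus `accelerate` for `device_map="auto"`).

```python
# Minimal sketch, assuming gated access to meta-llama weights on the
# Hugging Face Hub; the checkpoint id is an assumed example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Briefly explain what a vision-language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```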