File parser optimised for LLM ingestion with no loss
Framework to easily create LLM powered bots over any dataset
GPT4V-level open-source multi-modal model based on Llama3-8B
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible
Technical principles related to large models
LLM abstractions that aren't obstructions
Repo of Qwen2-Audio chat & pretrained large audio language model
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Open source libraries and APIs to build custom preprocessing pipelines
GLM-4 series: Open Multilingual Multimodal Chat LMs
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
BISHENG is an open LLM DevOps platform for next-generation apps
Central interface to connect your LLMs with external data
Database system for building simpler and faster AI-powered applications
Phi-3.5 for Mac: Locally-run Vision and Language Models
Revolutionizing Database Interactions with Private LLM Technology
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Leveraging BERT and c-TF-IDF to create easily interpretable topics
PyTorch library of curated Transformer models and their components
Seamlessly integrate LLMs into scikit-learn
Capable of understanding text, audio, vision, video
Swirl queries any number of data sources with APIs
Chat & pretrained large vision language model
Qwen2.5-VL is the multimodal large language model series
Training Large Language Model to Reason in a Continuous Latent Space