Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Ling is an MoE LLM developed and open-sourced by InclusionAI
Revolutionizing Database Interactions with Private LLM Technology
Advanced techniques for RAG systems
Utilities intended for use with Llama models
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project
Train a 26M-parameter GPT from scratch in just 2h
ChatGLM2-6B: An Open Bilingual Chat LLM
Tongyi Deep Research, the Leading Open-source Deep Research Agent
ChatGLM3 series: Open Bilingual Chat LLMs
Training Large Language Models to Reason in a Continuous Latent Space
A state-of-the-art open visual language model
Repo of Qwen2-Audio chat & pretrained large audio language model
Central interface to connect your LLMs with external data
GLM-4 series: Open Multilingual Multimodal Chat LMs
Database system for building simpler and faster AI-powered applications
Refer and Ground Anything Anywhere at Any Granularity
LLM powered fuzzing via OSS-Fuzz
A series of math-specific large language models built on Qwen2
Inference code for Llama models
Chinese LLaMA & Alpaca large language models + local CPU/GPU training
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
AI R&D efficiency research: train your own LoRA
Code for the paper "Fine-Tuning Language Models from Human Preferences"