Code for the paper "Language models can explain neurons in language models"
Evals is a framework for evaluating LLMs and LLM systems
Designed for text embedding and ranking tasks
A series of math-specific large language models built on the Qwen2 series
Replace OpenAI GPT with another LLM in your app
LLM training code for MosaicML foundation models
Leveraging BERT and c-TF-IDF to create easily interpretable topics
Diversity-driven optimization and large-model reasoning ability
This repository provides an advanced RAG
Get started with building fullstack agents using Gemini 2.5 & LangGraph
Repo of Qwen2-Audio chat & pretrained large audio language model
Open-weight, large-scale hybrid-attention reasoning model
Capable of understanding text, audio, vision, and video
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline
Visual Instruction Tuning: Large Language-and-Vision Assistant
AI agent that streamlines the entire process of data analysis
Guiding Instruction-based Image Editing via Multimodal Large Language Models
Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project
Ray Aviary - evaluate multiple LLMs easily
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
Open-source, high-performance Mixture-of-Experts large language model
Infinite Craft, but in PySide6 and Python with a local LLM
Qwen2.5-Coder is the code version of Qwen2.5, the large language model series
A framework dedicated to making neural data processing pipelines simple and fast
Database system for building simpler and faster AI-powered applications