Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
Training Large Language Model to Reason in a Continuous Latent Space
MobileLLM: Optimizing Sub-billion Parameter Language Models
Visual Instruction Tuning: Large Language-and-Vision Assistant
Code for the "Language models can explain neurons in language models" paper
A state-of-the-art open visual language model
Chat and pretrained large vision-language models
LLM powered fuzzing via OSS-Fuzz
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
A series of math-specific large language models built on Qwen2
PyTorch library of curated Transformer models and their components
Adding guardrails to large language models
Seamlessly integrate LLMs into scikit-learn
State-of-the-art Parameter-Efficient Fine-Tuning
OpenDAN is an open source Personal AI OS
Inference code for CodeLlama models
Set of tools to assess and improve LLM security
LLM training in simple, raw C/CUDA
LLM training code for MosaicML foundation models
LLM-based data scientist, an AI-native data application
Simple, Pythonic building blocks to evaluate LLM applications
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
The official Meta Llama 3 GitHub site
Train a 26M-parameter GPT from scratch in just 2h
Open-source observability for your LLM application