LLMs-from-scratch is an educational codebase that walks through implementing modern large-language-model components step by step. It emphasizes the building blocks (tokenization, embeddings, attention, feed-forward layers, normalization, and training loops) so learners understand not just how to use a model but how it works internally. The repository favors clear Python implementations in NumPy and PyTorch that can be run and modified without heavyweight frameworks obscuring the logic. Chapters and notebooks progress from tiny toy models to more capable transformer stacks, including sampling strategies and evaluation hooks. The focus is on readability, correctness, and experimentation, making the material well suited to students and practitioners moving from theory to working systems. By the end, you have a grounded sense of how data pipelines, optimization, and inference interact to produce fluent text.
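As a taste of that style, here is a minimal sketch of scaled dot-product attention, the core operation behind the attention chapters. It is an illustrative stand-in rather than the repository's exact code; the function name and tensor shapes are assumptions.

```python
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k) tensors; shapes are illustrative
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k**0.5  # pairwise similarities, scaled
    weights = torch.softmax(scores, dim=-1)      # each row sums to 1
    return weights @ v                           # weighted average of the values

q = k = v = torch.randn(1, 4, 8)                 # batch of 1, 4 tokens, dim 8
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 4, 8])
```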
Features
- Stepwise implementations of tokenizer, attention, and transformer blocks (a toy tokenizer is sketched after this list)
- Clear Python notebooks and scripts designed for learning and tinkering
- Training and sampling loops that expose the full data and compute flow (see the training-loop and sampling sketches after this list)
- Explorations of scaling choices, regularization, and evaluation metrics
- Minimal dependencies to keep the math and code transparent
- A foundation for extending to larger models and custom datasets
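To make the first bullet concrete, here is a hypothetical character-level tokenizer showing the encode/decode round-trip the early chapters build toward. It is a deliberately simplified stand-in, not the repository's tokenizer.

```python
class CharTokenizer:
    """Maps each character in a reference text to an integer id and back."""

    def __init__(self, text):
        chars = sorted(set(text))
        self.stoi = {ch: i for i, ch in enumerate(chars)}   # string -> id
        self.itos = {i: ch for ch, i in self.stoi.items()}  # id -> string

    def encode(self, s):
        return [self.stoi[ch] for ch in s]

    def decode(self, ids):
        return "".join(self.itos[i] for i in ids)

tok = CharTokenizer("hello world")
ids = tok.encode("hello")
assert tok.decode(ids) == "hello"  # round-trip is lossless
```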
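The training loop can likewise be reduced to a few transparent lines. The sketch below assumes a toy next-token setup (the model, hyperparameters, and data are placeholders, not the repository's) to show the shift-by-one targets and the standard optimization step.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the transformer stack built in the chapters.
vocab_size, d_model = 50, 16
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (2, 9))    # fake token ids (batch, seq)
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: predict the next token

for step in range(3):
    logits = model(inputs)                                   # (batch, seq-1, vocab)
    loss = loss_fn(logits.flatten(0, 1), targets.flatten())  # flatten for cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```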
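Sampling strategies such as temperature scaling and top-k filtering likewise boil down to transforming logits before drawing a token. The function below is an assumed interface for illustration, not the repository's API.

```python
import torch

def sample_next_token(logits, temperature=1.0, top_k=None):
    # logits: (vocab,) scores for the next token; interface is illustrative
    logits = logits / temperature                  # <1 sharpens, >1 flattens
    if top_k is not None:
        topk = torch.topk(logits, top_k)
        mask = torch.full_like(logits, float("-inf"))
        logits = mask.scatter(0, topk.indices, topk.values)  # keep only top-k logits
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()    # draw one token id

next_id = sample_next_token(torch.randn(50), temperature=0.8, top_k=10)
```

Because filtering happens on logits (with masked entries set to negative infinity before the softmax), the remaining probabilities renormalize automatically.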