AirLLM is an open-source Python library that runs extremely large language models on consumer hardware with very limited GPU memory. It addresses one of the main barriers to local LLM experimentation with a memory-efficient inference technique: instead of holding the entire model in GPU memory, it loads model layers sequentially during computation. This layer-wise approach lets models with tens of billions of parameters run on devices with only a few gigabytes of VRAM. AirLLM preprocesses model weights so that each transformer layer can be loaded independently, shrinking the memory footprint while still performing full inference. As a result, developers can experiment with models that previously required specialized high-end GPUs.
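The loading pattern described above can be sketched in plain Python. This is a toy illustration, not AirLLM's actual implementation: the "model" is a stack of small weight matrices, the `shard_model` and `run_inference` names are hypothetical, and the preprocessing step simply writes each layer to its own file so inference can hold only one layer in memory at a time.

```python
import pickle
import tempfile
from pathlib import Path

# Toy stand-in for a transformer: each "layer" is a small square weight
# matrix stored as nested lists. Real AirLLM shards transformer layers,
# but the load-apply-free pattern is the same idea.
NUM_LAYERS = 4
DIM = 3

def shard_model(out_dir: Path) -> None:
    """Preprocess: save each layer's weights to its own file so layers
    can later be loaded independently."""
    for i in range(NUM_LAYERS):
        # Diagonal weights that double the input, for a predictable result.
        weights = [[2.0 if r == c else 0.0 for c in range(DIM)]
                   for r in range(DIM)]
        with open(out_dir / f"layer_{i}.pkl", "wb") as f:
            pickle.dump(weights, f)

def matvec(w, x):
    return [sum(w[r][c] * x[c] for c in range(len(x))) for r in range(len(w))]

def run_inference(out_dir: Path, x):
    """Layer-wise inference: only one layer's weights are resident at any
    time; each layer is released before the next is loaded."""
    for i in range(NUM_LAYERS):
        with open(out_dir / f"layer_{i}.pkl", "rb") as f:
            weights = pickle.load(f)  # load just this layer
        x = matvec(weights, x)        # apply it
        del weights                   # free it before loading the next
    return x

with tempfile.TemporaryDirectory() as d:
    shard_model(Path(d))
    result = run_inference(Path(d), [1.0, 1.0, 1.0])
# Each of the 4 layers doubles the activations: 1.0 * 2**4 = 16.0.
```

Peak memory here is one layer plus the activations, regardless of how many layers the model has, which is the core trade: far less memory in exchange for repeated load latency.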
Features
- Memory-optimized inference for very large language models
- Layer-by-layer loading to minimize GPU memory usage
- Ability to run 70B-parameter models on small GPUs
- Compatibility with Hugging Face model weights
- Simple Python API for running local inference
- Support for consumer-grade hardware environments