gpu_poor is an open-source tool that helps developers determine whether their hardware can run a specific large language model and estimate the performance they can expect from it. The project focuses on calculating GPU memory requirements and predicted inference speed for different models, hardware configurations, and quantization strategies. By analyzing factors such as model size, context length, batch size, and GPU specifications, the system estimates how much VRAM will be required and how fast tokens can be generated during inference.

The tool also provides a detailed breakdown of where GPU memory is allocated, including model weights, KV cache, activations, and other runtime overhead. This breakdown lets developers evaluate trade-offs between quantization methods such as GGML, bitsandbytes, and QLoRA before attempting to deploy a model. gpu_poor is particularly useful for researchers and hobbyists who want to verify that a model will actually fit on their hardware before downloading weights or provisioning GPUs.
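As a rough illustration of the kind of estimate described above, the sketch below sums the two dominant inference-time allocations, model weights and KV cache, plus a fixed overhead term. This is not gpu_poor's actual formula; the function name, the per-parameter byte counts, and the 1 GB overhead constant are all assumptions for illustration.

```python
def estimate_inference_vram_gb(
    n_params_b: float,            # model size in billions of parameters
    n_layers: int,                # number of transformer layers
    hidden_size: int,             # model hidden dimension
    context_len: int,             # tokens held in the KV cache
    batch_size: int = 1,
    bytes_per_param: float = 2.0, # fp16 weights; ~0.5 for 4-bit quantization (assumed)
    overhead_gb: float = 1.0,     # CUDA context, buffers, fragmentation (assumed)
) -> float:
    """Rough VRAM estimate: weights + KV cache + fixed overhead."""
    weights_bytes = n_params_b * 1e9 * bytes_per_param
    # KV cache holds one key and one value vector per layer per token,
    # stored here in fp16 (2 bytes per element) regardless of weight precision
    kv_cache_bytes = 2 * n_layers * context_len * batch_size * hidden_size * 2
    return (weights_bytes + kv_cache_bytes) / 1e9 + overhead_gb

# Example: a 7B model with a Llama-2-7B-like shape (32 layers, hidden 4096)
# served in fp16 at a 4096-token context
print(round(estimate_inference_vram_gb(7.0, 32, 4096, 4096), 1))
```

Real tools also account for activation memory and framework-specific buffers, which is why estimates from a back-of-the-envelope formula like this tend to run low.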
## Features
- GPU memory requirement estimation for running large language models
- Token generation speed prediction based on model and hardware configuration
- Support for quantization approaches including GGML, bitsandbytes, and QLoRA
- Breakdown of memory usage across model weights, activations, and KV cache
- Estimation of training iteration time for fine-tuning workflows
- Hardware compatibility evaluation for GPUs and CPU-based inference