LLM-Pruner is an open-source framework designed to compress large language models through structured pruning while maintaining their general capabilities. Large language models often require enormous computational resources, making them expensive to deploy and inefficient for many practical applications. LLM-Pruner addresses this by identifying and removing non-essential components within transformer architectures, such as redundant attention heads or feed-forward structures. The framework relies on gradient-based analysis to determine which parameters contribute least to model performance, enabling targeted structural pruning rather than unstructured removal of individual weights. After pruning, the framework applies lightweight fine-tuning methods such as LoRA to recover performance using relatively small datasets and short training times.
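The gradient-based idea can be sketched as a first-order Taylor importance score: parameters whose weight-gradient products are small contribute little to the loss, so whole structural groups (e.g. attention heads) with low aggregate scores can be removed. This is a minimal NumPy illustration of the concept only; the function names and the per-head grouping are hypothetical and do not reflect LLM-Pruner's actual API.

```python
import numpy as np

def importance_scores(weights, grads):
    """First-order Taylor importance |w * dL/dw|, summed per structural group.

    weights, grads: arrays of shape (num_heads, head_dim),
    one row per attention head (hypothetical grouping).
    """
    return np.abs(weights * grads).sum(axis=1)

def prune_groups(weights, grads, ratio=0.5):
    """Drop the lowest-scoring fraction of groups, keeping row order."""
    scores = importance_scores(weights, grads)
    keep = np.argsort(scores)[int(len(scores) * ratio):]
    return weights[np.sort(keep)]

# Toy example: 4 "heads" of dimension 3, pruned by half.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))
g = rng.normal(size=(4, 3))
pruned = prune_groups(w, g, ratio=0.5)
print(pruned.shape)  # (2, 3): two of the four heads removed
```

Scoring whole rows rather than individual entries is what makes the pruning "structured": the resulting matrix is genuinely smaller, so memory and inference savings are realized without sparse-kernel support.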

Features

  • Structured pruning of transformer components such as layers and attention heads
  • Gradient-based importance scoring for identifying removable parameters
  • Compatibility with multiple LLM architectures including LLaMA and Vicuna
  • Lightweight performance recovery using LoRA fine-tuning
  • Automated scripts for pruning and model compression workflows
  • Reduced memory usage and faster inference for large language models
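The LoRA-based recovery mentioned above works by freezing the pruned base weights and training only a small low-rank update. The sketch below shows the core mechanism under stated assumptions; the class name, initialization, and scaling follow the common LoRA formulation (W + (alpha/r) * B @ A) and are not LLM-Pruner's actual implementation.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A.

    Hypothetical sketch: during recovery fine-tuning, only A and B
    (rank r, far smaller than W) would receive gradient updates.
    """
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen pruned weight, shape (out, in)
        self.A = rng.normal(scale=0.01, size=(r, W.shape[1]))
        self.B = np.zeros((W.shape[0], r))           # zero init: update starts at zero
        self.scale = alpha / r

    def __call__(self, x):
        # Effective weight is the frozen base plus the scaled low-rank term.
        return x @ (self.W + self.scale * self.B @ self.A).T

layer = LoRALinear(np.ones((6, 8)))
y = layer(np.ones((2, 8)))
print(y.shape)  # (2, 6)
```

Because B starts at zero, the adapted layer initially reproduces the pruned model exactly; training then nudges the low-rank term to close the quality gap, which is why recovery needs only small datasets and short training runs.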

License

Apache License 2.0


Additional Project Details

Programming Language

Python

Related Categories

Python, Large Language Models (LLM)
