Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all of a model's parameters. Fine-tuning large-scale PLMs is often prohibitively costly; PEFT methods instead fine-tune only a small number of (extra) model parameters, greatly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to full fine-tuning.
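A minimal sketch of how a PEFT method (here, LoRA) is applied in practice, following the library's own quickstart pattern; the model checkpoint and hyperparameter values are illustrative:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a base model; its original weights stay frozen and only
# small low-rank adapter matrices will be trained.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=32,    # scaling factor for the update
    lora_dropout=0.1,
)
model = get_peft_model(model, config)

# Reports trainable vs. total parameters; with LoRA this is
# typically well under 1% of the full model.
model.print_trainable_parameters()
```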
Features
- Accelerate integration for large-scale models, leveraging DeepSpeed and Big Model Inference
- Performance comparable to full fine-tuning when adapting LLMs to downstream tasks on consumer hardware
- Reduced GPU memory requirements when adapting LLMs on few-shot datasets
- Parameter-efficient tuning of diffusion models (see the sketch after this list)
- GPU memory requirements compared across different tuning settings
- Parameter-efficient tuning of LLMs for RLHF components such as the ranker and policy
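The diffusion-model support mentioned above follows the same adapter pattern. A hedged sketch, assuming the `diffusers` library is installed and adapting only the UNet's attention projections; the checkpoint name and `target_modules` list are assumptions based on common diffusers layer naming:

```python
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Inject low-rank adapters into the UNet's attention projections only;
# the rest of the pipeline (VAE, text encoder) stays frozen.
config = LoraConfig(
    r=4,
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed layer names
    lora_dropout=0.0,
)
pipe.unet = get_peft_model(pipe.unet, config)
pipe.unet.print_trainable_parameters()
```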
License
Apache License V2.0