Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. Because fine-tuning large-scale PLMs is often prohibitively costly, PEFT methods train only a small number of (extra) parameters, greatly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
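
To make the idea concrete, here is a minimal sketch of wrapping a pretrained model with a LoRA adapter via the peft library; the checkpoint name and the hyperparameters (r, lora_alpha, lora_dropout) are illustrative choices, not prescribed values.

    # Minimal LoRA sketch using the Hugging Face `peft` and `transformers`
    # packages; the model checkpoint and hyperparameters are illustrative.
    from transformers import AutoModelForSeq2SeqLM
    from peft import LoraConfig, TaskType, get_peft_model

    model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

    # LoRA injects small trainable low-rank matrices into the model while
    # the original base weights stay frozen.
    peft_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        inference_mode=False,
        r=8,
        lora_alpha=32,
        lora_dropout=0.1,
    )
    model = get_peft_model(model, peft_config)

    # Reports how few parameters are actually trained, typically well
    # under 1% of the full model.
    model.print_trainable_parameters()

Only the injected low-rank matrices receive gradients, which is where the computational and storage savings come from.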

Features

  • Accelerate integration for large-scale models, leveraging DeepSpeed and Big Model Inference
  • Comparable performance to full fine-tuning when adapting LLMs to downstream tasks on consumer hardware
  • Greatly reduced GPU memory requirements for adapting LLMs, including on few-shot datasets (see the adapter-loading sketch after this list)
  • Parameter-efficient tuning of diffusion models
  • Parameter-efficient tuning of LLMs for RLHF components such as the ranker and policy
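
Because an adapter checkpoint contains only the small set of trained weights, a fine-tuned model is cheap to store, share, and reload. The sketch below, again assuming the peft and transformers packages, attaches a saved LoRA adapter to its frozen base model for inference; the checkpoint name and the adapter path are placeholders, not real artifacts.

    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PeftModel

    # Load the frozen base model and tokenizer (checkpoint name is illustrative).
    base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
    tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")

    # Attach the trained LoRA weights; "path/to/adapter" is a placeholder for a
    # local directory or Hub repo produced by model.save_pretrained().
    model = PeftModel.from_pretrained(base_model, "path/to/adapter")
    model.eval()

    inputs = tokenizer("Tweet: my order never arrived. Label:", return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))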


License

Apache License 2.0

Additional Project Details

Programming Language: Python
Related Categories: Python Large Language Models (LLM), Python LLM Inference Tool
Registered: 2023-04-10