PEFT v0.17.0: SHiRA, MiSS, LoRA for MoE, and more

Highlights

New Methods

SHiRA

@kkb-code contributed Sparse High Rank Adapters (SHiRA, paper), which promise a potential performance gain over LoRA, in particular reduced concept loss when multiple adapters are used. Since the adapters train only 1-2% of the weights and are inherently sparse, switching between adapters may be cheaper than with LoRA. (#2584)
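
To try SHiRA, the usual PEFT workflow applies. The sketch below is illustrative: it assumes ShiraConfig exposes an r setting and the standard target_modules argument, and the model id is chosen purely as a small example; see the SHiRA documentation for the exact options and defaults.

:::python
from transformers import AutoModelForCausalLM
from peft import ShiraConfig, get_peft_model  # ShiraConfig is new in v0.17.0

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Minimal sketch: `r` and `target_modules` are assumed to follow the usual
# PEFT config pattern; check the SHiRA docs for the full set of options.
config = ShiraConfig(
    r=32,
    target_modules=["q_proj", "v_proj"],
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()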

MiSS

@JL-er added a new PEFT method, MiSS (Matrix Shard Sharing), in #2604. This method is an evolution of Bone, which, according to our PEFT method comparison benchmark, gives excellent results in terms of performance and memory efficiency. If you haven't tried it yet, you should do so now.
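
A minimal usage sketch, assuming MissConfig takes an r argument analogous to Bone's plus the standard target_modules setting (the model id is just an example); consult the MiSS documentation for the full set of options.

:::python
from transformers import AutoModelForCausalLM
from peft import MissConfig, get_peft_model  # MissConfig is new in v0.17.0

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Sketch only: `r` is assumed to control the shard size, analogous to Bone;
# see the MiSS docs for the exact parameters and defaults.
config = MissConfig(r=64, target_modules=["q_proj", "v_proj"])
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()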

At the same time, Bone will be deprecated in favor of MiSS and will be removed in PEFT v0.19.0. If you already have a Bone checkpoint, you can use scripts/convert-bone-to-miss.py to convert it into a MiSS checkpoint and proceed with training using MiSS.

Enhancements

LoRA for nn.Parameter

LoRA is now able to target nn.Parameter directly (#2638, #2665)! Ever had a complicated nn.Module with promising parameters inside, but it was too custom to be supported by your favorite fine-tuning library? No worries: you can now target nn.Parameters directly using the target_parameters config attribute, which works similarly to target_modules.

This option can be especially useful for models with Mixture of Experts (MoE) layers, as those often use nn.Parameters directly and cannot be targeted with target_modules. For example, for the Llama4 family of models, use the following config to target the MoE weights:

:::python
from peft import LoraConfig

config = LoraConfig(
    ...,  # other LoRA settings as usual
    target_modules=[],  # <= prevent targeting any modules
    target_parameters=["feed_forward.experts.down_proj", "feed_forward.experts.gate_up_proj"],
)

Note that this feature is still experimental, as it comes with a few caveats, and might therefore change in the future. Also, MoE weights with many experts can be quite large, so expect higher memory usage compared to targeting normal nn.Linear layers.
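
For completeness, here is a sketch of how such a config could be applied end to end with get_peft_model. The model id is illustrative (a Llama4 checkpoint matching the parameter paths above) and loading it requires substantial memory; adapt the id and the LoRA settings to your setup.

:::python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative Llama4 checkpoint whose MoE experts store weights as raw
# nn.Parameters; any model with such parameters is the intended use case.
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)

config = LoraConfig(
    r=16,
    target_modules=[],  # <= prevent targeting any modules
    target_parameters=["feed_forward.experts.down_proj", "feed_forward.experts.gate_up_proj"],
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()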

Injecting adapters based on a state_dict

Sometimes there is a PEFT adapter checkpoint but the corresponding PEFT config is not known for whatever reason. To inject the PEFT layers for this checkpoint, you would usually have to reverse-engineer the corresponding PEFT config, most notably the target_modules argument, based on the state_dict from the checkpoint. This can be cumbersome and error-prone. To avoid this, it is now also possible to call inject_adapter_in_model and pass the loaded state_dict as an argument:

:::python
from safetensors.torch import load_file
from peft import LoraConfig, inject_adapter_in_model

model = ...  # the base model the adapter was trained for
state_dict = load_file(<path-to-safetensors-file>)  # the adapter checkpoint weights
lora_config = LoraConfig()  # <= no need to specify further
model = inject_adapter_in_model(lora_config, model, state_dict=state_dict)

Find more on state_dict-based injection in the docs.

Changes

Compatibility

A bug in prompt learning methods caused modules_to_save to be ignored. Classification tasks are especially affected, since they usually add the classification/score layer to modules_to_save. As a consequence, these layers were neither trained nor stored after training. This has now been corrected. (#2646)
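
To see the fix in action, the sketch below sets up a prompt learning method for sequence classification; the model id and hyperparameters are illustrative, and the final check simply confirms that the classifier head is among the trainable parameters.

:::python
from transformers import AutoModelForSequenceClassification
from peft import PromptTuningConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
peft_model = get_peft_model(model, config)

# With the fix, the classification head (kept via modules_to_save for SEQ_CLS tasks)
# shows up among the trainable parameters and is included when saving.
peft_model.print_trainable_parameters()
print([n for n, p in peft_model.named_parameters() if p.requires_grad and "classifier" in n])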

All Changes

New Contributors

Full Changelog: https://github.com/huggingface/peft/compare/v0.16.0...v0.17.0
