| Name | Modified | Size |
|---|---|---|
| Adapters v1.0.0 source code.tar.gz | 2024-08-10 | 15.1 MB |
| Adapters v1.0.0 source code.zip | 2024-08-10 | 15.3 MB |
| README.md | 2024-08-10 | 1.3 kB |
| Totals: 3 items | | 30.4 MB |
Blog post: https://adapterhub.ml/blog/2024/08/adapters-update-reft-qlora-merging-models
This version is built for Hugging Face Transformers v4.43.x.
## New Adapter Methods & Model Support
- Add Representation Fine-Tuning (ReFT) implementation (LoReFT, NoReFT, DiReFT) (@calpt via [#705]) (usage sketch below)
- Add LoRA weight merging with task arithmetic (@lenglaender via [#698]) (merging sketch below)
- Add Whisper model support + notebook (@TimoImhof via [#693]; @julian-fong via [#717])
- Add Mistral model support (@KorventennFR via [#609])
- Add PLBart model support (@FahadEbrahim via [#709])
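For the ReFT methods, here is a minimal usage sketch. It assumes the three variants are exposed as `LoReftConfig`, `NoReftConfig`, and `DiReftConfig` and uses the newly supported Mistral architecture with a placeholder checkpoint name; see [#705] and the documentation for the authoritative API.

```python
# Minimal sketch: train a LoReFT adapter on a Mistral model (newly supported in this release).
# Assumptions: the ReFT variants are exposed as LoReftConfig / NoReftConfig / DiReftConfig,
# and "mistralai/Mistral-7B-v0.1" stands in for whatever checkpoint you actually adapt.
import adapters
from adapters import LoReftConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
adapters.init(model)  # attach adapter support to a plain Transformers model

model.add_adapter("loreft_demo", config=LoReftConfig())
model.train_adapter("loreft_demo")  # freeze base weights, train only the ReFT parameters
```

Swapping `LoReftConfig` for `NoReftConfig` or `DiReftConfig` would select one of the other two variants.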
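For LoRA weight merging via task arithmetic, a hedged sketch follows. It assumes merging is exposed through the existing average_adapter() method with a `combine_strategy` argument and that `"lora_linear_only_negate_b"` is one of the accepted strategy names; see [#698] and the linked blog post for the definitive interface.

```python
# Hypothetical sketch: merge two trained LoRA adapters with weighted task arithmetic.
# Assumptions: "task_a" / "task_b" are LoRA adapters already added to `model`, and
# average_adapter() accepts a combine_strategy argument with the name used below.
import adapters
from adapters import LoRAConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
adapters.init(model)
model.add_adapter("task_a", config=LoRAConfig())
model.add_adapter("task_b", config=LoRAConfig())

model.average_adapter(
    "merged",                  # name of the new, merged adapter
    ["task_a", "task_b"],      # adapters to combine
    weights=[0.7, 0.3],        # per-adapter weights for the arithmetic
    combine_strategy="lora_linear_only_negate_b",  # assumed strategy name
)
model.set_active_adapters("merged")
```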
## Breaking Changes & Deprecations
- Remove support for loading from the archived Hub repository (@calpt via [#724])
- Remove deprecated add_fusion() & train_fusion() methods (@calpt via [#714])
- Remove deprecated arguments in push_adapter_to_hub() method (@calpt via [#724])
- Deprecate support for passing Python lists to adapter activation (@calpt via [#714]) (see the sketch after this list)
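Because plain Python lists are deprecated for adapter activation, composition blocks from `adapters.composition` are the forward-compatible way to activate several adapters at once. A small sketch with placeholder adapter names and base model:

```python
# Sketch: activate multiple adapters through an explicit composition block
# instead of a deprecated plain Python list. Names and model are placeholders.
import adapters
import adapters.composition as ac
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
adapters.init(model)
model.add_adapter("adapter_a")
model.add_adapter("adapter_b")

# Previously: model.set_active_adapters(["adapter_a", "adapter_b"])  # now deprecated
model.set_active_adapters(ac.Stack("adapter_a", "adapter_b"))
```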
## Minor Fixes & Changes
- Upgrade supported Transformers version (@calpt & @lenglaender via [#712], [#719], [#727])
- Fix SDPA / Flash Attention support for Llama (@calpt via [#722])
- Fix gradient checkpointing for Llama and for Bottleneck adapters (@calpt via [#730]) (sketch covering both Llama fixes below)
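To illustrate the two Llama-related fixes, here is a hedged sketch that loads a Llama checkpoint with SDPA attention and trains a bottleneck adapter with gradient checkpointing enabled. The checkpoint name is a placeholder and `SeqBnConfig` is used as a representative bottleneck configuration.

```python
# Sketch: exercise the fixed code paths, i.e. SDPA attention plus gradient
# checkpointing with a bottleneck adapter on Llama. Checkpoint name is a placeholder.
import adapters
from adapters import SeqBnConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    attn_implementation="sdpa",  # scaled dot-product attention
)
adapters.init(model)

model.add_adapter("bottleneck_demo", config=SeqBnConfig())
model.train_adapter("bottleneck_demo")
model.gradient_checkpointing_enable()  # trade compute for memory during training
```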