MMPreTrain Release v1.0.0: Backbones, Self-Supervised Learning and Multi-Modality
- Support inference of more multi-modal algorithms, such as LLaVA, MiniGPT-4, Otter, etc.
- Support around 10 multi-modal datasets!
- Add iTPN, SparK self-supervised learning algorithms.
- Provide examples of New Config and DeepSpeed/FSDP
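The highlights mention the New Config style. As a rough sketch of what such a pure-Python config looks like, assuming MMEngine's `read_base` mechanism (the base module paths and the override field below are illustrative, not actual repo paths):

```python
# Hypothetical "New Config" (pure-Python) file; base paths are illustrative.
# The new style replaces string-keyed _base_ inheritance with real imports.
from mmengine.config import read_base

with read_base():
    # Inherit model/dataset/schedule settings from base config modules.
    from .._base_.models.resnet18 import *
    from .._base_.datasets.imagenet_bs32 import *
    from .._base_.schedules.imagenet_bs256 import *
    from .._base_.default_runtime import *

# Override inherited fields with plain Python assignments.
model.update(head=dict(num_classes=100))
```

Because config entries are ordinary Python objects, IDEs can jump to definitions and catch typos that string-based `_base_` inheritance would only surface at runtime.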
New Features
- Transfer shape-bias tool from mmselfsup (#1658)
- Download dataset by using MIM&OpenDataLab (#1630)
- Support New Configs (#1639, #1647, #1665)
- Support Flickr30k Retrieval dataset (#1625)
- Support SparK (#1531)
- Support LLaVA (#1652)
- Support Otter (#1651)
- Support MiniGPT-4 (#1642)
- Add support for VizWiz dataset (#1636)
- Add support for VSR dataset (#1634)
- Add InternImage Classification project (#1569)
- Support OCR-VQA dataset (#1621)
- Support OK-VQA dataset (#1615)
- Support TextVQA dataset (#1569)
- Support iTPN and HiViT (#1584)
- Add retrieval mAP metric (#1552)
- Support NoCaps dataset based on BLIP (#1582)
- Add GQA dataset (#1585)
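Among the new features is a retrieval mAP metric (#1552). As a minimal illustration of what such a metric computes (a sketch of the standard definition, not MMPreTrain's actual implementation), mAP averages per-query average precision, where AP is the mean of precision@k taken at each relevant hit in the ranked result list:

```python
def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k at each relevant hit.

    `ranked_relevance` is a list of 0/1 flags in retrieval rank order.
    """
    hits = 0
    precisions = []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)  # precision@k at this hit
    return sum(precisions) / hits if hits else 0.0


def mean_average_precision(all_rankings):
    """mAP: average of per-query AP over all queries."""
    return sum(average_precision(r) for r in all_rankings) / len(all_rankings)
```

For example, a ranking with hits at positions 1 and 3 gives AP = (1/1 + 2/3) / 2 = 5/6; MMPreTrain's metric additionally handles batching and top-k truncation, which this sketch omits.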
Improvements
- Update fsdp vit-huge and vit-large config (#1675)
- Support deepspeed with flexible runner (#1673)
- Update Otter and LLaVA docs and config (#1653)
- Add image_only param to ScienceQA (#1613)
- Support using "split" to specify the training/validation set (#1535)
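The DeepSpeed support above builds on MMEngine's FlexibleRunner. A sketch of the kind of config fragment involved, with field names that are illustrative rather than definitive (consult the repo's DeepSpeed example configs for the exact schema):

```python
# Hypothetical config fragment for DeepSpeed training via FlexibleRunner.
# Exact strategy fields depend on the MMEngine/DeepSpeed versions in use.
runner_type = 'FlexibleRunner'
strategy = dict(
    type='DeepSpeedStrategy',
    fp16=dict(enabled=True, initial_scale_power=16),
    zero_optimization=dict(
        stage=3,            # ZeRO stage-3 parameter sharding
        overlap_comm=True,  # overlap communication with computation
    ),
)
```

Swapping `DeepSpeedStrategy` for an FSDP strategy follows the same pattern, which is what makes the flexible-runner approach attractive: the training loop stays unchanged while the parallelism backend is selected by config.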
Bug Fixes
- Refactor _prepare_pos_embed in ViT (#1656, #1679)
- Freeze pre-norm in Vision Transformer (#1672)
- Fix a bug when loading the IN1k dataset (#1641)
- Fix SAM bug (#1633)
- Fix circular import error for new transform (#1609)
- Update torchvision transform wrapper (#1595)
- Set default out_type in CAM visualization (#1586)
Docs Update
New Contributors
- @alexwangxiang made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1555
- @InvincibleWyq made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1615
- @yyk-wew made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1634
- @fanqiNO1 made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1673
- @Ben-Louis made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1679
- @Lamply made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1671
- @minato-ellie made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1644
- @liweiwp made their first contribution in https://github.com/open-mmlab/mmpretrain/pull/1629