
MMPreTrain v1.0.0rc7 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Bug Fixes
  • Docs Update

Highlights

We are excited to announce that MMClassification and MMSelfSup have been merged into ONE codebase, named MMPreTrain, which has the following highlights:

  • Integrated self-supervised learning algorithms from MMSelfSup, such as MAE, BEiT, etc. These now live in the mmpretrain/models directory, where a new selfsup folder supports 18 recent self-supervised learning algorithms:

| Contrastive learning | Masked image modeling |
| -------------------- | --------------------- |
| MoCo series          | BEiT series           |
| SimCLR               | MAE                   |
| BYOL                 | SimMIM                |
| SwAV                 | MaskFeat              |
| DenseCL              | CAE                   |
| SimSiam              | MILAN                 |
| BarlowTwins          | EVA                   |
| DenseCL              | MixMIM                |
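
The masked image modeling methods in the right-hand column share one core idea: hide a random subset of image patches and train the model to reconstruct them. Below is a minimal, framework-free sketch of that masking step; the function name is illustrative, and the 75% mask ratio is MAE's published default, not something taken from MMPreTrain's code:

```python
import random

def random_patch_mask(num_patches, mask_ratio, seed=None):
    """Return a boolean mask over image patches; True = patch is hidden.

    MAE/SimMIM-style methods hide a fixed fraction of patches and train
    the model to reconstruct the hidden content.
    """
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    mask = [True] * num_masked + [False] * (num_patches - num_masked)
    rng.shuffle(mask)
    return mask

# A 14x14 ViT patch grid (196 patches) with a 75% mask ratio.
mask = random_patch_mask(196, 0.75, seed=0)
print(sum(mask))  # 147 patches masked
```

In the actual algorithms this mask decides which patch embeddings are dropped (MAE) or replaced by a learnable mask token (SimMIM) before the encoder runs.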
  • Support RIFormer, which is a way to keep a vision backbone effective while removing token mixers in its basic building blocks. Equipped with our proposed optimization strategy, we are able to build an extremely simple vision backbone with encouraging performance, while enjoying high efficiency during inference.
  • Support the LeViT, XCiT, ViG, and ConvNeXt-V2 backbones, bringing the total to 68 supported backbones or algorithms and 472 checkpoints.

  • Add t-SNE visualization, so users can analyze the representation ability of a backbone. Example visualization: the left is from MoCoV2_ResNet50 and the right is from MAE_ViT-base.

  • Refactor dataset pipeline visualization; we can now also visualize the pipelines of masked image modeling methods such as BEiT.

New Features

  • Support RIFormer. (#1453)
  • Support XCiT Backbone. (#1305)
  • Support calculating and plotting the confusion matrix. (#1287)
  • Support the RetrieverRecall metric and add an ArcFace config. (#1316)
  • Add ImageClassificationInferencer. (#1261)
  • Support InShop Dataset (Image Retrieval). (#1019)
  • Support LeViT backbone. (#1238)
  • Support VIG Backbone. (#1304)
  • Support ConvNeXt-V2 backbone. (#1294)
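
The confusion-matrix feature (#1287) is computed inside MMPreTrain's own evaluation tooling; for intuition, the underlying computation is just a class-by-class tally of ground truth against predictions. A minimal stdlib-only sketch (names are illustrative, not MMPreTrain's API):

```python
def confusion_matrix(y_true, y_pred, num_classes):
    """Rows index the ground-truth class, columns the predicted class."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        matrix[t][p] += 1
    return matrix

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
m = confusion_matrix(y_true, y_pred, 3)
print(m)  # [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
```

Off-diagonal entries show exactly which classes are confused with which, which is what the plotting side of the feature visualizes.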

Improvements

  • Use PyTorch official scaled_dot_product_attention to accelerate MultiheadAttention. (#1434)
  • Add LayerNorm to the ViT avg_featmap output. (#1447)
  • Update analysis tools and documentations. (#1359)
  • Unify the --out and --dump in tools/test.py. (#1307)
  • Enable toggling whether GeM pooling is trainable. (#1246)
  • Update registries of mmcls. (#1306)
  • Add metafile fill and validation tools. (#1297)
  • Remove unused EfficientNetV2 config files. (#1300)
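
The speedup in #1434 comes from PyTorch's fused scaled_dot_product_attention kernel; the math it evaluates is the standard attention formula, softmax(QKᵀ/√d)V. A pure-Python sketch of that computation for a single head (this illustrates the math only, not the PyTorch API or the fused kernel):

```python
import math

def scaled_dot_product_attention(q, k, v):
    """softmax(q @ k^T / sqrt(d)) @ v for one head, as plain lists."""
    d = len(q[0])
    # Attention scores: q @ k^T, scaled by 1/sqrt(d).
    scores = [[sum(qi * ki for qi, ki in zip(qrow, krow)) / math.sqrt(d)
               for krow in k] for qrow in q]
    # Numerically stable row-wise softmax.
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights.append([e / total for e in exps])
    # Weighted sum of the value rows.
    return [[sum(w * vrow[j] for w, vrow in zip(wrow, v))
             for j in range(len(v[0]))] for wrow in weights]

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = scaled_dot_product_attention(q, k, v)
```

The fused PyTorch op produces the same result but in one kernel launch, avoiding the intermediate score and weight tensors, which is where the MultiheadAttention acceleration comes from.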

Bug Fixes

  • Fix the precise BN hook. (#1466)
  • Fix a multi-GPU bug in retrieval. (#1319)
  • Fix an incorrect repvgg-deploy base config path. (#1357)
  • Fix a bug in the test tools. (#1309)

Docs Update

  • Translate some tools tutorials to Chinese. (#1321)
  • Add Chinese translation for runtime.md. (#1313)

Contributors

A total of 13 developers contributed to this release. Thanks to @techmonsterwang, @qingtian5, @mzr1996, @okotaku, @zzc98, @aso538, @szwlh-c, @fangyixiao18, @yukkyo, @Ezra-Yu, @csatsurnh, @2546025323, @GhaSiKey.

Full Changelog: https://github.com/open-mmlab/mmpretrain/compare/v1.0.0rc5...v1.0.0rc7

Source: README.md, updated 2023-04-07