| Name | Modified | Size |
| --- | --- | --- |
| pytorch_toolbelt_mit_b5_imagenet.pth | 2023-10-25 | 326.1 MB |
| pytorch_toolbelt_mit_b4_imagenet.pth | 2023-10-25 | 243.6 MB |
| pytorch_toolbelt_mit_b3_imagenet.pth | 2023-10-25 | 176.5 MB |
| pytorch_toolbelt_mit_b2_imagenet.pth | 2023-10-25 | 96.9 MB |
| pytorch_toolbelt_mit_b1_imagenet.pth | 2023-10-25 | 52.7 MB |
| pytorch_toolbelt_mit_b0_imagenet.pth | 2023-10-25 | 13.3 MB |
| Pytorch Toolbelt 0.7.0 source code.tar.gz | 2023-05-04 | 503.5 kB |
| Pytorch Toolbelt 0.7.0 source code.zip | 2023-05-04 | 559.5 kB |
| README.md | 2023-05-04 | 3.1 kB |

Totals: 9 items, 910.2 MB

New stuff

  • All encoders, decoders & heads now inherit from the HasOutputFeaturesSpecification interface, so the number of output channels and the strides a module outputs can be queried.
  • New loss class QualityFocalLoss from https://arxiv.org/abs/2006.04388
  • New function pad_tensor_to_size: a generic padding function for N-dimensional tensors of [B, C, ...] shape.
  • Added DropPath layer (aka DropConnect)
  • Pretrained weights for SegFormer backbones
  • first_class_background_init for initializing the last output convolution/linear block with zero weights and the bias set to [logit(bg_prob), logit(1 - bg_prob), ...]
  • New function instantiate_normalization_block to create a normalization layer by name. It is used in some decoder layers / heads.
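
The bias values in the first_class_background_init item above come straight from the inverse sigmoid; a minimal sketch of just the bias computation (the helper below is illustrative, not the library's actual code):

```python
import math

def logit(p: float) -> float:
    # Inverse of the sigmoid: log(p / (1 - p))
    return math.log(p / (1.0 - p))

def background_bias(bg_prob: float, num_classes: int) -> list:
    # Bias vector [logit(bg_prob), logit(1 - bg_prob), ...]: with zero weights,
    # sigmoid(bias) puts ~bg_prob on the background logit and ~(1 - bg_prob)
    # on each foreground logit at the very first training step.
    return [logit(bg_prob)] + [logit(1.0 - bg_prob)] * (num_classes - 1)
```

This kind of prior-aware initialization keeps the initial loss from being dominated by a flood of confident false positives on the (usually overwhelming) background class.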

Improvements

  • Improved numeric accuracy of the focal_loss_with_logits function by explicitly disabling AMP autocast inside it and casting predictions & targets to float32.

  • MultiscaleTTA now allows setting the interpolation mode and align_corners for resizing inputs and predictions.
  • BinaryFocalLoss now has a __repr__ method.
  • name_for_stride now accepts None for the stride argument; in that case the function is a no-op and returns the name argument unchanged.
  • RandomSubsetDataset now takes an optional weights argument to draw samples with the given probabilities.
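
The weighted sampling added to RandomSubsetDataset is conceptually the same as drawing indices with random.choices; a rough stand-in (the class name and exact behavior here are illustrative, assuming weights act as relative sampling probabilities):

```python
import random

class WeightedSubsetSketch:
    """Illustrative stand-in: pre-draws `subset_size` indices, each chosen
    with probability proportional to its entry in `weights`."""

    def __init__(self, items, subset_size, weights=None, seed=None):
        rng = random.Random(seed)
        self.items = items
        # weights=None falls back to uniform sampling, like the old behavior
        self.indices = rng.choices(range(len(items)), weights=weights, k=subset_size)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.items[self.indices[i]]
```

Such a weights argument is handy for oversampling rare classes without duplicating data on disk.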

Bugfixes

  • Fixed get_collate_fn for RandomSubsetDataset: it returned the underlying dataset's get_collate_fn method itself instead of calling it to obtain the collate function.
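
This is the classic missing-call mistake: the wrapper hands back the bound method instead of the collate function that method produces. A generic illustration (the names below are made up, not the library's code):

```python
class Wrapped:
    def get_collate_fn(self):
        # Returns the actual collate function
        return lambda batch: list(batch)

class SubsetBuggy:
    def __init__(self, dataset):
        self.dataset = dataset

    def get_collate_fn(self):
        # Bug: returns the method object itself, not the collate function
        return self.dataset.get_collate_fn

class SubsetFixed:
    def __init__(self, dataset):
        self.dataset = dataset

    def get_collate_fn(self):
        # Fix: call the method so the wrapped collate function is returned
        return self.dataset.get_collate_fn()
```

The bug is easy to miss because both versions return a callable; the broken one just needs an extra call before it behaves like a collate function.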

Breaking Changes

  • Decoder signatures have changed: the first argument, input_spec, must now be of type FeatureMapsSpecification.
  • Rewritten the BiFPN decoder to support an arbitrary number of input feature maps and user-defined normalization, activation & BiFPN blocks.
  • Rewritten UNetDecoder to allow setting the upsample block by name (string).

  • The WeightedLoss and JointLoss classes have been removed. If your code used these classes, here they are: copy-paste them into your project and live happily. That said, I strongly suggest using modern deep learning frameworks that support defining losses from configuration files.

    :::python
    from torch import nn
    from torch.nn.modules.loss import _Loss


    class WeightedLoss(_Loss):
        """Wrapper around a loss function that applies a fixed weighting factor.

        This class helps to balance multiple losses if they have different scales.
        """

        def __init__(self, loss, weight=1.0):
            super().__init__()
            self.loss = loss
            self.weight = weight

        def forward(self, *input):
            return self.loss(*input) * self.weight


    class JointLoss(_Loss):
        """Wrap two loss functions into one.

        This class computes a weighted sum of two losses.
        """

        def __init__(self, first: nn.Module, second: nn.Module, first_weight=1.0, second_weight=1.0):
            super().__init__()
            self.first = WeightedLoss(first, first_weight)
            self.second = WeightedLoss(second, second_weight)

        def forward(self, *input):
            return self.first(*input) + self.second(*input)
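
The arithmetic these classes perform is just a weighted sum of two loss values, which is easy to keep inline if you drop the wrappers. A torch-free illustration with plain callables standing in for real losses:

```python
def mse(pred, target):
    # Mean squared error over plain Python lists (stand-in for nn.MSELoss)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mae(pred, target):
    # Mean absolute error (stand-in for nn.L1Loss)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def joint_loss(pred, target, first_weight=1.0, second_weight=0.5):
    # Same arithmetic as JointLoss(mse, mae, first_weight, second_weight)
    return first_weight * mse(pred, target) + second_weight * mae(pred, target)
```

Config-driven frameworks express exactly this weighted sum declaratively, which is why the wrapper classes were dropped.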
Source: README.md, updated 2023-05-04