
TensorLy 0.7.0 is out!

In this new version of TensorLy, the whole team has been working hard to bring you lots of improvements, from new decompositions and new functions to faster code and better documentation.

Major improvements and new features

New decompositions

We added some great new tensor decompositions, including:

  • Coupled Matrix-Tensor Factorisation via CMTF-ALS [#293], thanks to @IsabellLehmann and @aarmey
  • Tensor-Ring decomposition (i.e. MPS with periodic boundary conditions) [#229], thanks to @merajhashemi
  • Non-negative Tucker decomposition via Hierarchical ALS (NN-HALS) [#254], thanks to @caglayantuna and @cohenjer
  • A new CP decomposition that supports various constraints on each mode, including monotonicity, non-negativity, l1/l2 regularization, smoothness, sparsity, etc.! [#284], thanks to @caglayantuna and @cohenjer
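To make the tensor-ring format concrete, here is a minimal NumPy sketch (an illustration of the format, not TensorLy's implementation) that reconstructs a full tensor from tensor-ring cores; the wrap-around bond and the final trace are what distinguish it from a plain tensor-train:

```python
import numpy as np

def tr_to_tensor(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    Each core has shape (r_k, n_k, r_{k+1}); the last bond wraps
    around to the first (periodic boundary conditions), hence the
    final trace over the two remaining bond axes.
    """
    # Start with the first core: shape (r_0, n_0, r_1).
    result = cores[0]
    for core in cores[1:]:
        # Contract the trailing bond of `result` with the leading
        # bond of the next core: (..., r_k) x (r_k, n_k, r_{k+1}).
        result = np.tensordot(result, core, axes=([-1], [0]))
    # `result` now has shape (r_0, n_0, ..., n_{d-1}, r_0):
    # close the ring by tracing over the first and last axes.
    return np.trace(result, axis1=0, axis2=-1)

# Tiny usage example with rank-2 bonds.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, 3, 2)),
         rng.standard_normal((2, 4, 2)),
         rng.standard_normal((2, 5, 2))]
full = tr_to_tensor(cores)
print(full.shape)  # (3, 4, 5)
```

Element-wise, this computes full[i, j, k] = trace(G0[:, i, :] @ G1[:, j, :] @ G2[:, k, :]); with all bond dimensions equal to 1 it reduces to the ordinary tensor-train reconstruction.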

Brand new features

  • A brand new tensordot that supports batching, along with an API simplification [#309]
  • Normalization for Tucker factors [#283], thanks to @caglayantuna and @cohenjer
  • A convenient function to compute the gradient of the difference norm between a CP and a dense tensor [#294], thanks to @aarmey
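As an illustration of what a batched tensor contraction means (a sketch of the idea only, not TensorLy's actual batched tensordot signature), contracting one mode of two tensors that share a leading batch dimension can be written with einsum:

```python
import numpy as np

def batched_tensordot(a, b):
    """Contract the last mode of `a` with the first non-batch mode
    of `b`, keeping the shared leading batch dimension `b`.

    a: shape (batch, m, k), b: shape (batch, k, n) -> (batch, m, n)
    """
    return np.einsum('bmk,bkn->bmn', a, b)

a = np.random.rand(8, 3, 4)
b = np.random.rand(8, 4, 5)
out = batched_tensordot(a, b)
print(out.shape)  # (8, 3, 5)
```

The key point is that the batch axis is carried through unchanged rather than contracted; for this particular shape pattern the result coincides with a batched matrix multiply.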

Backend refactoring

In an effort to make the TensorLy backend even more flexible and fast, we refactored the main backend as well as the tensor algebra backend, and made lots of small quality-of-life improvements in the process! In particular, reconstructing a TT-matrix is now a lot more efficient. (Backend refactoring: use a BackendManager class directly as tensorly.backend's module class [#330], thanks to @JeanKossaifi)
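The "class as module" pattern mentioned here can be sketched in a few lines: Python lets you register an instance of a types.ModuleType subclass in sys.modules, so attribute access on the module goes through the class and can be dispatched dynamically. The names below (BackendManager, mybackend, the _backends dict) are purely illustrative, not TensorLy's actual internals:

```python
import sys
import types

class BackendManager(types.ModuleType):
    """A module whose attributes are resolved dynamically.

    Registering an instance of this class in sys.modules lets
    `import mybackend; mybackend.name` route through __getattr__,
    so the active backend can be swapped at runtime without
    re-importing anything.
    """
    _backends = {'numpy': {'name': 'numpy'}}
    _current = 'numpy'

    def __getattr__(self, key):
        # Called only when normal lookup fails: delegate to the
        # currently active backend's attribute table.
        backend = self._backends[self._current]
        try:
            return backend[key]
        except KeyError:
            raise AttributeError(key)

# Register the instance as an importable module.
mod = BackendManager('mybackend')
sys.modules['mybackend'] = mod

import mybackend
print(mybackend.name)  # prints: numpy
```

The benefit of dispatching at the module level, rather than copying functions into a module's namespace, is that switching backends takes effect immediately for every caller holding a reference to the module.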

Enhancements

  • Improvements to PARAFAC2 (convergence criteria, etc.) [#267], thanks to @marieRoald
  • HALS convergence fix [#271], thanks to @marieRoald and @IsabellLehmann
  • Ensured consistency between the object-oriented API and the functional one [#268], thanks to @yngvem
  • Added lstsq to the backend [#305], thanks to @merajhashemi
  • Fixed documentation for case-insensitive clashes between functions and classes: https://github.com/tensorly/tensorly/issues/219
  • Added a random seed for TT-cross [#304], thanks to @yngvem
  • Fixed SVD sign indeterminacy [#216], thanks to @merajhashemi
  • Rewrote vonneumann_entropy to handle multidimensional tensors [#270], thanks to @taylorpatti
  • Added a check for the case where all modes are fixed: if so, the initialization is returned directly [#325], thanks to @ParvaH
  • We now provide tensorly.utils.prod, a prod function that works like math.prod, for users on Python < 3.8
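Since math.prod only exists from Python 3.8 onward, such a portable prod helper can be written with functools.reduce; this is a sketch of the idea (see tensorly.utils.prod for the actual helper):

```python
from functools import reduce
import operator

def prod(iterable, start=1):
    """Product of an iterable, like math.prod (Python >= 3.8)."""
    return reduce(operator.mul, iterable, start)

print(prod([3, 4, 5]))  # 60
print(prod([]))         # 1 (the empty product, matching math.prod)
```

A typical use in tensor code is computing the number of elements of a shape tuple, e.g. prod(tensor.shape).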

New backend functions

All backends now support matmul and tensordot [#306], as well as sin, cos, flip, argsort, count_nonzero, cumsum, any, lstsq and trace.
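For example, lstsq solves the least-squares problem min_x ||Ax - b||; in the NumPy backend this corresponds to np.linalg.lstsq (a sketch of the semantics, not TensorLy's exact wrapper):

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([6.0, 9.0, 12.0])  # exactly b = 3 + 3*t for t = 1, 2, 3

# rcond=None uses machine-precision-based rank truncation.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)  # ≈ [3., 3.]
```

Least-squares solves like this sit at the heart of ALS-type decomposition algorithms, which is why exposing lstsq uniformly across backends is useful.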

Bug Fixes

  • Fixed the NN-Tucker HALS sparsity coefficient issue [#295], thanks to @caglayantuna
  • Fixed svd for PyTorch < 1.8 [#312], thanks to @merajhashemi
  • Fixed dot and matmul in PyTorch and TensorFlow [#313], thanks to @merajhashemi
  • Fixed tl.partial_unfold [#315], thanks to @merajhashemi
  • Fixed the behaviour of diag for the TensorFlow backend
  • Fixed tl.partial_svd: NaN values are now checked for explicitly [#318], thanks to @merajhashemi
  • Fixed the diag function for the TensorFlow and PyTorch backends [#321], thanks to @caglayantuna
  • Fixed singular vectors to be orthonormal [#320], thanks to @merajhashemi
  • Fixed active-set and HALS tests [#323], thanks to @caglayantuna
  • Added a test for matmul [#322], thanks to @merajhashemi
  • Sparse backend usage fix [#280], thanks to @caglayantuna
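On the sign-indeterminacy fix: an SVD is only unique up to the sign of each singular-vector pair, since (U, V) and (-U, -V) give the same reconstruction. A common way to make the output deterministic (similar in spirit to the fix in [#216]; the exact convention used in TensorLy may differ) is to flip each pair so the largest-magnitude entry of each left singular vector is positive:

```python
import numpy as np

def svd_flip(U, V):
    """Resolve SVD sign ambiguity deterministically.

    Flips the sign of each (column of U, row of V) pair so that
    the entry of largest magnitude in each column of U is positive.
    """
    # Row index of the max-|.| entry in each column of U.
    max_rows = np.argmax(np.abs(U), axis=0)
    signs = np.sign(U[max_rows, np.arange(U.shape[1])])
    # Flipping both sides by the same sign leaves U @ diag(s) @ V unchanged.
    return U * signs, V * signs[:, None]

A = np.random.default_rng(1).standard_normal((5, 4))
U, s, V = np.linalg.svd(A, full_matrices=False)
U, V = svd_flip(U, V)
print(np.allclose((U * s) @ V, A))  # True: reconstruction is unchanged
```

Without such a convention, two runs on different backends (or different LAPACK builds) can return factors that differ by a sign, which breaks tests that compare factors directly.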

Source: README.md, updated 2021-11-08