Name | Modified | Size | Downloads / Week |
---|---|---|---|
0.1.4 source code.tar.gz | 2023-01-19 | 5.0 MB | |
0.1.4 source code.zip | 2023-01-19 | 5.2 MB | |
README.md | 2023-01-19 | 3.8 kB | |
Totals: 3 Items | | 10.2 MB | 0 |
This code release is also associated with the arXiv v3 paper release.
## Major Updates
- Add sparse solver support to LM adaptive damping by @luisenp in https://github.com/facebookresearch/theseus/pull/360
- Add a differentiable sparse matrix-vector product on top of our ops by @luisenp in https://github.com/facebookresearch/theseus/pull/392 (see the sketch after this list)
- Add the Dogleg optimizer by @luisenp in https://github.com/facebookresearch/theseus/pull/371
- Add support for masking jacobians of zero weights in the batch by @luisenp in https://github.com/facebookresearch/theseus/pull/398
- Add a `labs` package for experimental stuff by @luisenp in https://github.com/facebookresearch/theseus/pull/424
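
The differentiable sparse matrix-vector product added in #392 is built on Theseus's own sparse ops. The snippet below is not that implementation; it is a minimal, self-contained sketch of the same idea for a batched CSR matrix, written as a custom `torch.autograd.Function` so gradients flow to both the stored nonzero values and the dense vector.

```python
import torch


class SparseMatVec(torch.autograd.Function):
    # Batched CSR matrix-vector product y = A v, differentiable w.r.t. both
    # the stored nonzero values of A and the dense vector v.
    # Illustrative sketch only -- not the Theseus implementation.

    @staticmethod
    def forward(ctx, crow_indices, col_indices, values, v):
        # crow_indices: (num_rows + 1,)   col_indices: (nnz,)
        # values: (batch, nnz)            v: (batch, num_cols)
        num_rows = crow_indices.numel() - 1
        # Row index of every stored nonzero (CSR -> COO row expansion).
        row_indices = torch.repeat_interleave(
            torch.arange(num_rows, device=values.device), crow_indices.diff()
        )
        y = values.new_zeros(values.shape[0], num_rows)
        y.index_add_(1, row_indices, values * v[:, col_indices])
        ctx.save_for_backward(row_indices, col_indices, values, v)
        return y

    @staticmethod
    def backward(ctx, grad_y):
        row_indices, col_indices, values, v = ctx.saved_tensors
        # dy_i / dA_ij = v_j  ->  gather the output gradient per nonzero.
        grad_values = grad_y[:, row_indices] * v[:, col_indices]
        # dy_i / dv_j = A_ij  ->  scatter-add back into the dense vector.
        grad_v = torch.zeros_like(v)
        grad_v.index_add_(1, col_indices, grad_y[:, row_indices] * values)
        return None, None, grad_values, grad_v


# Example: a 2x3 matrix with nonzeros at (0, 0), (0, 2), (1, 1).
crow = torch.tensor([0, 2, 3])
col = torch.tensor([0, 2, 1])
vals = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
v = torch.randn(1, 3, requires_grad=True)
SparseMatVec.apply(crow, col, vals, v).sum().backward()
```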
## Other Changes
- Minor vmap fix in SO2 by @luisenp in https://github.com/facebookresearch/theseus/pull/362
- Change Objective.error() so it can be vectorized w/o changing vectorized cache by @luisenp in https://github.com/facebookresearch/theseus/pull/363
- Added missing square in `test_theseus_layer` loss by @luisenp in https://github.com/facebookresearch/theseus/pull/372
- Add proper accept/reject logic for LM optimizer by @luisenp in https://github.com/facebookresearch/theseus/pull/364 (see the usage sketch after this list)
- Cleaned up sparse solvers code by @luisenp in https://github.com/facebookresearch/theseus/pull/386
- Remove softmax from end to end test and do some clean up by @luisenp in https://github.com/facebookresearch/theseus/pull/389
- Add a workaround for NonlinearOptimizer rejecting all batch steps by @luisenp in https://github.com/facebookresearch/theseus/pull/388
- Add code to automatically pick number of LUCuda contexts by @luisenp in https://github.com/facebookresearch/theseus/pull/390
- Bug fix in NonlinearOptimizer.reset() when using LUCudaSparseSolver by @luisenp in https://github.com/facebookresearch/theseus/pull/396
- Allow float data to be used with our sparse solver extensions by @luisenp in https://github.com/facebookresearch/theseus/pull/391
- Add diagonal scaling method and re-enable LM adaptive + ellipsoidal by @luisenp in https://github.com/facebookresearch/theseus/pull/393
- Add a timer util that adapts to a torch.device. by @luisenp in https://github.com/facebookresearch/theseus/pull/399
- Made torch's CUDA_GCC_VERSION check optional in setup.py. by @luisenp in https://github.com/facebookresearch/theseus/pull/387
- Fix black and wheel script errors by @luisenp in https://github.com/facebookresearch/theseus/pull/421
- Fix bug in Vectorization of autodiff costs. by @luisenp in https://github.com/facebookresearch/theseus/pull/400
- Objective.error() no longer updates vectorization if also_update=False. by @luisenp in https://github.com/facebookresearch/theseus/pull/401
- update flake8 to github by @bamos in https://github.com/facebookresearch/theseus/pull/428
- Use IMPLICIT for test_theseus_layer and fix related bugs by @luisenp in https://github.com/facebookresearch/theseus/pull/431
- Remove nox dependency by @luisenp in https://github.com/facebookresearch/theseus/pull/436
- Bumped version to 0.1.4. by @luisenp in https://github.com/facebookresearch/theseus/pull/435
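
Several of the items above (the LM accept/reject logic, the workaround for batches where every step is rejected, diagonal scaling, and the sparse solver fixes) live inside the nonlinear optimizer loop. For orientation, here is a hedged, minimal sketch of how that loop is typically driven through a `TheseusLayer`. The `linear_solver_cls` choice and the `optimizer_kwargs` names (`damping`, `adaptive_damping`, `track_best_solution`) are assumptions about the option names at this version and should be checked against the documentation.

```python
import torch
import theseus as th

# One optimization variable and one auxiliary (data) variable.
x = th.Vector(2, name="x")
target = th.Variable(torch.zeros(1, 2), name="target")


def residual_fn(optim_vars, aux_vars):
    # Simple quadratic residual: x - target.
    (x_var,), (target_var,) = optim_vars, aux_vars
    return x_var.tensor - target_var.tensor


objective = th.Objective()
objective.add(
    th.AutoDiffCostFunction([x], residual_fn, 2, aux_vars=[target], name="fit")
)

# LM with a dense Cholesky solver; one of the sparse solver classes mentioned
# above (CHOLMOD / LUCuda backends) could be passed instead -- the exact class
# and option names are assumptions here, not a definitive reference.
optimizer = th.LevenbergMarquardt(
    objective,
    linear_solver_cls=th.CholeskyDenseSolver,
    max_iterations=20,
)
layer = th.TheseusLayer(optimizer)

sol, info = layer.forward(
    {"x": torch.ones(1, 2), "target": torch.tensor([[1.0, -2.0]])},
    optimizer_kwargs={
        "track_best_solution": True,
        "damping": 0.1,            # assumed name for the LM damping option
        "adaptive_damping": True,  # assumed name for the adaptive scheme
    },
)
print(sol["x"], info.best_solution)
```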
## Development in the `labs` package
- Add theseus.geometry.functional.so3 module with exp() and jexp() implementations by @fantaosha in https://github.com/facebookresearch/theseus/pull/365 (the underlying math is sketched after this list)
- Add theseus.geometry.functional.so3.adjoint() by @fantaosha in https://github.com/facebookresearch/theseus/pull/373
- Add theseus.geometry.functional.so3.inverse() by @fantaosha in https://github.com/facebookresearch/theseus/pull/374
- Add hat() and vee() operators to theseus.geometry.so3 by @fantaosha in https://github.com/facebookresearch/theseus/pull/378
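
For reference, the math these functional ops compute is small enough to sketch directly: hat() maps an axis-angle vector to a skew-symmetric matrix, vee() inverts it, and exp() is Rodrigues' formula. The plain-PyTorch snippet below illustrates that math only; it is not the module's actual code or API.

```python
import torch


def hat(w: torch.Tensor) -> torch.Tensor:
    # Map w in R^3 to the skew-symmetric matrix W with W @ v = w x v.
    wx, wy, wz = w.unbind(dim=-1)
    zero = torch.zeros_like(wx)
    return torch.stack(
        (
            torch.stack((zero, -wz, wy), dim=-1),
            torch.stack((wz, zero, -wx), dim=-1),
            torch.stack((-wy, wx, zero), dim=-1),
        ),
        dim=-2,
    )


def vee(W: torch.Tensor) -> torch.Tensor:
    # Inverse of hat(): recover w from a skew-symmetric matrix.
    return torch.stack((W[..., 2, 1], W[..., 0, 2], W[..., 1, 0]), dim=-1)


def so3_exp(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Rodrigues' formula: exp(hat(w)) = I + (sin t / t) W + ((1 - cos t) / t^2) W^2,
    # with t = ||w||. Small-angle handling is kept minimal for brevity.
    theta = w.norm(dim=-1, keepdim=True).clamp_min(eps)
    W = hat(w)
    a = (torch.sin(theta) / theta)[..., None]
    b = ((1.0 - torch.cos(theta)) / theta**2)[..., None]
    eye = torch.eye(3, dtype=w.dtype, device=w.device).expand(W.shape)
    return eye + a * W + b * (W @ W)


# Round trip: vee(hat(w)) == w, and so3_exp(w) returns rotation matrices.
w = torch.randn(4, 3, dtype=torch.double)
R = so3_exp(w)
assert torch.allclose(vee(hat(w)), w)
assert torch.allclose(R @ R.transpose(-1, -2), torch.eye(3, dtype=torch.double), atol=1e-9)
```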
Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.3...0.1.4