
Cloud TPUs now support the PyTorch 2.2 release, via PyTorch/XLA integration. On top of the underlying improvements and bug fixes in the PyTorch 2.2 release, this release introduces several new features and PyTorch/XLA-specific bug fixes.

To install the PyTorch and PyTorch/XLA 2.2.0 wheels:

```shell
pip install torch~=2.2.0 torch_xla[tpu]~=2.2.0 -f https://storage.googleapis.com/libtpu-releases/index.html
```

Please note that you might have to reinstall libtpu on your TPU VM, depending on your previous installation:

```shell
pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/index.html
```
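The `~=2.2.0` specifier in the commands above is PEP 440 "compatible release" notation: it accepts any 2.2.x patch release but excludes 2.3 and later. As a minimal, hand-rolled sketch of that matching rule (illustrative only; it is not pip's actual resolver):

```python
def compatible(version, spec="2.2.0"):
    """Approximate PEP 440 compatible-release (~=) semantics for an
    X.Y.Z spec: the version must be >= the spec and share its X.Y prefix."""
    v = tuple(int(p) for p in version.split("."))
    s = tuple(int(p) for p in spec.split("."))
    return v >= s and v[:2] == s[:2]

print(compatible("2.2.1"))  # True: still in the 2.2 series
print(compatible("2.3.0"))  # False: outside the pinned series
```

This is why the pin tracks the 2.2.x series of torch and torch_xla without pulling in a future minor release.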

## Stable Features

### PJRT

## Beta Features

### GSPMD

  • Support DTensor API integration and move GSPMD out of experimental (#5776).
  • Enable the debug visualization function visualize_tensor_sharding (#5742) and add documentation.
  • Support mark_shard scalar tensors (#6158).
  • Add apply_backward_optimization_barrier (#6157).

### Export

#### CoreAtenOpSet

  • PyTorch/XLA aims to support all PyTorch core ATen ops in the 2.3 release. This work is in progress; the remaining issues to be closed are tracked in the issue list.

### Benchmark

  • Add support for automated benchmark runs and metric report analysis on both TPU and GPU.

## Experimental Features

### FSDP via SPMD

  • Introduce FSDP via SPMD, or FSDPv2 (#6187). The RFC can be found at #6379.
  • Add FSDPv2 user guide (#6386).
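FSDP's core idea is that each rank stores only a shard of the model's parameters and gathers the full set on demand for computation. A conceptual, pure-Python sketch of that shard-then-gather pattern (this is not the FSDPv2 API; the function names here are hypothetical, and the real implementation shards flattened parameter tensors and all-gathers them per layer during forward and backward):

```python
def shard_params(params, world_size, rank):
    """FSDP-style sharding (conceptual): each rank keeps an equal
    contiguous slice of the flattened parameter list."""
    per_rank = -(-len(params) // world_size)  # ceil division; last shard may be short
    return params[rank * per_rank:(rank + 1) * per_rank]

def all_gather(shards):
    """Reassemble the full parameter list from every rank's shard,
    as a forward pass would before using a layer's weights."""
    return [p for shard in shards for p in shard]

params = list(range(10))
shards = [shard_params(params, world_size=4, rank=r) for r in range(4)]
print(all_gather(shards) == params)  # True: gathering the shards restores the model
```

The memory win is that, between gathers, each rank holds only `len(params) / world_size` parameters instead of the full set.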

### Distributed Op

### Persistent Compilation

### Checkpointing

### Usability

### Quantization

### GPU

## Bug Fixes and Improvements

### Lowering

Source: README.md, updated 2024-01-30