PhysicsNeMo General Release v1.2.0

Added

  • Diffusion Transformer (DiT) model. The DiT model can be accessed at physicsnemo.experimental.models.dit.DiT. ⚠️ Warning: experimental feature, subject to future API changes.
  • Improved documentation for diffusion models and diffusion utils.
  • Safe API to override the __init__ arguments saved in a checkpoint file with Module.from_checkpoint("chkpt.mdlus", override_args=set(...)).
  • PyTorch Geometric MeshGraphNet backend.
  • Functionality in DoMINO to take an arbitrary number of scalar or vector global parameters and encode them using the ParameterModel class.
  • TopoDiff model and example.
  • Added ability for DoMINO model to return volume neighbors.
  • Added functionality in DoMINO recipe to introduce physics residual losses.
  • Diffusion models, metrics, and utils: implementation of the Student-t distribution for EDM-based diffusion models (t-EDM), adapted from the paper Heavy-Tailed Diffusion Models (Pandey et al.). This includes a new EDM preconditioner (tEDMPrecondSuperRes), a loss function (tEDMResidualLoss), and a new option in the corrdiff diffusion_step. ⚠️ This is an experimental feature that can be accessed through the physicsnemo.experimental module and may be subject to API changes without notice.
  • Bumped Ruff version from 0.0.290 to 0.12.5. Replaced Black with ruff-format.
  • DoMINO improvements with a UNet attention module and user configs.
  • Hybrid MeshGraphNet for modeling structural deformation.
  • Enabled TransformerEngine backend in the transolver model.
  • Inference code for the x-meshgraphnet external aerodynamics example.
  • Added a new external_aerodynamics example: training Transolver on irregular mesh data for DrivAerML surface data.
  • Added a new external aerodynamics example for fine-tuning pretrained models.
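The two access paths named above can be exercised roughly as follows. This is an illustrative sketch only: the `from physicsnemo import Module` import path, the checkpoint filename, and the contents of `override_args` are assumptions, and anything under physicsnemo.experimental may change without notice.

```python
# Illustrative sketch only; not verified against the v1.2.0 API.
# The experimental DiT model lives at the path named in this changelog.
from physicsnemo.experimental.models.dit import DiT  # experimental, may change

from physicsnemo import Module  # assumed import path for the Module base class

# Reload a saved model, overriding selected __init__ arguments stored in the
# .mdlus checkpoint file. The override payload here is a placeholder; consult
# the Module.from_checkpoint documentation for its exact form.
model = Module.from_checkpoint("chkpt.mdlus", override_args=set(...))
```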

Changed

  • Diffusion utils: physicsnemo.utils.generative renamed to physicsnemo.utils.diffusion.
  • Diffusion models: in the CorrDiff model wrappers (EDMPrecondSuperResolution and UNet), the arguments profile_mode and amp_mode cannot be overridden by from_checkpoint. They are now properties that can be changed dynamically after model instantiation, for example model.amp_mode = True and model.profile_mode = False.
  • Updated healpix data module to use correct DistributedSampler target for test data loader
  • Existing DGL-based vortex shedding example has been renamed to vortex_shedding_mgn_dgl. Added new vortex_shedding_mgn example that uses PyTorch Geometric instead.
  • HEALPixLayer can now use earth2grid HEALPix padding ops, if desired.
  • Migrated Vortex Shedding Reduced Mesh example to PyTorch Geometric.
  • CorrDiff example: fixed bugs when training regression UNet.
  • Diffusion models: fixed bugs related to gradient checkpointing on non-square images.
  • Diffusion models: created a separate class Attention for clarity and modularity. Updated UNetBlock accordingly to use the Attention class instead of custom attention logic. This will update the model architecture for SongUNet-based diffusion models. Changes are not BC-breaking and are transparent to the user.
  • ⚠️ BC-breaking: refactored the automatic mixed precision (AMP) API in layers and models defined in physicsnemo/models/diffusion/ for improved usability. Note: it is now required, not merely possible, to explicitly set model.amp_mode = True in order to use the model in a torch.autocast context. This applies to all SongUNet-based models.
  • Diffusion models: fixed and improved the API for enabling an fp16 forward pass in the UNet and EDMPrecondSuperResolution model wrappers; the fp16 forward pass can now be toggled on or off by setting model.use_fp16 = True or False.
  • Diffusion models: improved API for Apex group norm. SongUNet-based models will automatically perform conversion of the input tensors to torch.channels_last memory format when model.use_apex_gn is True. New warnings are raised when attempting to use Apex group norm on CPU.
  • Diffusion utils: systematic compilation of patching operations in stochastic_sampler for improved performance.
  • CorrDiff example: added option for Student-t EDM (t-EDM) in train.py and generate.py. When training a CorrDiff diffusion model, this feature can be enabled with the hydra overrides ++training.hp.distribution=student_t and ++training.hp.nu_student_t=<nu_value>. For generation, this feature can be enabled with similar overrides: ++generation.distribution=student_t and ++generation.nu_student_t=<nu_value>.
  • CorrDiff example: the parameters P_mean and P_std (used to compute the noise level sigma) are now configurable. They can be set with the hydra overrides ++training.hp.P_mean=<P_mean_value> and ++training.hp.P_std=<P_std_value> for training (and similar ones with training.hp replaced by generation for generation).
  • Diffusion utils: patch-based inference and lead time support with deterministic sampler.
  • Existing DGL-based XAeroNet example has been renamed to xaeronet_dgl. Added new xaeronet example that uses PyTorch Geometric instead.
  • Updated the deforming plate example to use the Hybrid MeshGraphNet model.
  • ⚠️ BC-breaking: refactored the Transolver model to improve readability and performance and to extend it to more use cases.
  • Diffusion models: improved lead time support for SongUNetPosLtEmbd and EDMLoss. Lead-time embeddings can now be used with/without positional embeddings.
  • Diffusion models: consolidate ApexGroupNorm and GroupNorm in models/diffusion/layers.py with a factory get_group_norm that can be used to instantiate either one of them. get_group_norm is now the recommended way to instantiate a GroupNorm layer in SongUNet-based and other diffusion models.
  • PhysicsNeMo models: improved the checkpoint loading API in Module.from_checkpoint, which now exposes a strict parameter to raise an error on missing/unexpected keys, similar to torch.nn.Module.load_state_dict.
  • Migrated Hybrid MGN and deforming plate example to PyTorch Geometric.
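The new explicit AMP opt-in described above can be sketched as follows. This is illustrative only: `model` stands for any SongUNet-based model from physicsnemo/models/diffusion/, and the input `x` and the forward-call signature are placeholders.

```python
import torch

# model: any SongUNet-based diffusion model instance (placeholder).
model.amp_mode = True   # required opt-in before autocast (BC-breaking change)
model.use_fp16 = True   # optional: fp16 forward pass in the model wrappers

with torch.autocast("cuda", dtype=torch.float16):
    out = model(x)      # placeholder inputs; the real forward takes more arguments
```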
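The t-EDM and noise-level options above combine on the command line roughly as follows. The override strings are the ones documented in this changelog; the script invocations are illustrative, and `<nu_value>`, `<P_mean_value>`, and `<P_std_value>` are user-chosen placeholders.

```shell
# Training a CorrDiff diffusion model with Student-t EDM and explicit
# noise-level parameters (illustrative invocation).
python train.py \
    ++training.hp.distribution=student_t \
    ++training.hp.nu_student_t=<nu_value> \
    ++training.hp.P_mean=<P_mean_value> \
    ++training.hp.P_std=<P_std_value>

# Generation with the matching overrides.
python generate.py \
    ++generation.distribution=student_t \
    ++generation.nu_student_t=<nu_value>
```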

Fixed

  • Bug fixes in the DoMINO model for sphere sampling and tensor reshaping.
  • Bug fixes in DoMINO utils for random sampling and in test.py.
  • Optimized DoMINO config params based on DrivAerML.
Source: README.md, updated 2025-08-22