Neural Network Intelligence Release 3.0 - 21_8_2023

Web Portal

  • New look and feel

Neural Architecture Search

  • Breaking change: nni.retiarii is no longer maintained or tested. Please migrate to nni.nas:
      • Inherit nni.nas.nn.pytorch.ModelSpace rather than using @model_wrapper.
      • Use nni.choice rather than nni.nas.nn.pytorch.ValueChoice.
      • Use nni.nas.experiment.NasExperiment and NasExperimentConfig rather than RetiariiExperiment.
      • Use nni.nas.model_context rather than nni.nas.fixed_arch.
      • Please refer to the quickstart for more changes.
  • A refreshed experience of constructing model spaces:
      • Enhanced debuggability via the freeze() and simplify() APIs.
      • Enhanced expressiveness with nni.choice, nni.uniform, nni.normal, etc.
      • Enhanced customization experience with MutableModule, ModelSpace and ParametrizedModule.
      • Search spaces with constraints are now supported.
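To make the freeze()/simplify() idea above concrete, here is a toy sketch in plain Python. The class and method names (ToyModelSpace, Choice) are hypothetical stand-ins for illustration only, not the actual nni.nas API:

```python
class Choice:
    """Hypothetical stand-in for a mutable hyperparameter (not nni.choice)."""
    def __init__(self, label, values):
        self.label, self.values = label, values

class ToyModelSpace:
    """Toy model space: declares mutables, then freezes to a concrete config."""
    def __init__(self):
        self.mutables = [
            Choice("kernel_size", [3, 5, 7]),
            Choice("width", [16, 32, 64]),
        ]

    def simplify(self):
        # Flatten the space into {label: candidate values}, as a debugging aid.
        return {m.label: m.values for m in self.mutables}

    def freeze(self, sample):
        # Bind every mutable to one concrete value, yielding a fixed model config.
        return {m.label: sample[m.label] for m in self.mutables}

space = ToyModelSpace()
print(space.simplify())  # {'kernel_size': [3, 5, 7], 'width': [16, 32, 64]}
fixed = space.freeze({"kernel_size": 5, "width": 32})
print(fixed)             # {'kernel_size': 5, 'width': 32}
```

In this sketch, simplify() exposes the full space for inspection while freeze() produces one concrete architecture, mirroring the debuggability story of the bullets above.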
  • Improved robustness and stability of strategies:
      • Supported search space types are now enriched for PolicyBasedRL, ENAS and Proxyless.
      • Each step of a one-shot strategy can be executed alone: model mutation, evaluator mutation and training.
      • Most multi-trial strategies now support specifying a seed for reproducibility.
      • The performance of strategies has been verified on a set of benchmarks.
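The seed-for-reproducibility point can be illustrated with a toy multi-trial random search in plain Python (the function below is a hypothetical sketch, not an NNI strategy):

```python
import random

def random_search(space, n_trials, seed=None):
    """Toy multi-trial random search; a fixed seed makes sampling reproducible."""
    rng = random.Random(seed)  # private RNG so the seed controls all sampling
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(n_trials)]

space = {"kernel_size": [3, 5, 7], "width": [16, 32, 64]}
run_a = random_search(space, 4, seed=42)
run_b = random_search(space, 4, seed=42)
assert run_a == run_b  # same seed, identical trial sequence
```

The design point is that the strategy owns its random state, so re-running an experiment with the same seed re-submits the same architectures.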
  • Strategy/engine middleware:
      • Filter, replicate, deduplicate or retry models submitted by any strategy.
      • Merge or transform models before execution (e.g., CGO).
      • Chains of middleware can be arbitrarily long.
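A minimal sketch of the middleware idea, with hypothetical Deduplicate and Replicate classes sitting between a strategy and a stand-in execution engine (plain Python, not NNI's middleware API):

```python
class Middleware:
    """Hypothetical middleware base: wraps a downstream 'submit' callable."""
    def __init__(self, downstream):
        self.downstream = downstream
    def submit(self, model):
        self.downstream(model)

class Deduplicate(Middleware):
    """Drops models that were already submitted (keyed by their contents)."""
    def __init__(self, downstream):
        super().__init__(downstream)
        self.seen = set()
    def submit(self, model):
        key = repr(sorted(model.items()))
        if key not in self.seen:
            self.seen.add(key)
            self.downstream(model)

class Replicate(Middleware):
    """Submits each model n times, e.g. to average out training noise."""
    def __init__(self, downstream, n=2):
        super().__init__(downstream)
        self.n = n
    def submit(self, model):
        for _ in range(self.n):
            self.downstream(model)

executed = []
engine = executed.append                          # stand-in execution engine
chain = Deduplicate(Replicate(engine, n=2).submit)  # middleware chains compose
chain.submit({"width": 32})
chain.submit({"width": 32})   # duplicate, filtered out by Deduplicate
chain.submit({"width": 64})
print(len(executed))  # 4: two unique models, each replicated twice
```

Because each middleware only sees a submit callable, any number of them can be stacked in front of the engine.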
  • New execution engine:
      • Improved debuggability via SequentialExecutionEngine: trials run in a single process, so breakpoints take effect.
      • The old execution engine is now decomposed into an execution engine and a model format.
      • Enhanced extensibility of execution engines.
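The single-process/extensibility points can be sketched in a few lines of plain Python (toy classes, not NNI's engine interface):

```python
class SequentialEngine:
    """Toy stand-in for a sequential engine: runs each trial in the current
    process, so a debugger breakpoint set inside a trial function is hit
    directly instead of firing in a detached subprocess."""
    def run(self, trials):
        return [trial() for trial in trials]

class LoggingEngine(SequentialEngine):
    """Sketch of engine extensibility: subclass to layer on extra behavior."""
    def __init__(self):
        self.log = []
    def run(self, trials):
        results = super().run(trials)
        self.log.extend(results)
        return results

engine = LoggingEngine()
results = engine.run([lambda: 0.91, lambda: 0.87])  # trials are plain callables
print(results)  # [0.91, 0.87]
```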
  • NAS profiler and hardware-aware NAS:
      • New profilers profile a model space and quickly compute a profiling result for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
      • Profilers can be assembled with arbitrary strategies, both multi-trial and one-shot.
      • Profilers are extensible: strategies can be assembled with arbitrary customized profilers.
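As a toy illustration of the kind of quantity a parameter-count profiler computes (plain Python arithmetic, not the NumParamsProfiler API):

```python
def num_params(arch):
    """Toy parameter count for a chain of dense layers.

    arch is a sampled architecture given as a list of layer widths,
    e.g. [784, 128, 10]. Each dense layer contributes weights (in*out)
    plus biases (out), roughly what a parameter-count profiler tallies.
    """
    return sum(w_in * w_out + w_out for w_in, w_out in zip(arch, arch[1:]))

sampled = [784, 128, 10]   # one concrete architecture sampled from a space
print(num_params(sampled))  # 784*128 + 128 + 128*10 + 10 = 101770
```

A space-level profiler would evaluate this kind of formula symbolically over the mutables, so the result for any sample (or an expectation over a distribution) comes cheaply.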

Model Compression

  • The compression framework has been refactored; the new import path is nni.contrib.compression.
  • Config keys have been refactored to support more detailed compression configurations. view doc
  • Support fusing multiple compression methods.
  • Support distillation as a basic compression component.
  • Support more compression targets, such as input, output and any registered parameters.
  • Support compressing any module type by customizing module settings.
  • Model compression is supported in DeepSpeed mode.
  • Fixed example bugs.
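The "fusing multiple compression methods" idea can be sketched as composing simple weight transforms (toy functions, not the nni.contrib.compression API):

```python
def prune(weights, threshold=0.1):
    """Toy magnitude pruning: zero out weights below a threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, step=0.25):
    """Toy uniform quantization: snap each weight to a fixed grid."""
    return [round(w / step) * step for w in weights]

def fuse(*methods):
    """Sketch of fusion: apply several compression methods in sequence."""
    def fused(weights):
        for method in methods:
            weights = method(weights)
        return weights
    return fused

compress = fuse(prune, quantize)  # pruning then quantization, as one step
print(compress([0.05, 0.3, -0.8, 0.12]))  # [0.0, 0.25, -0.75, 0.0]
```

Real fusion also has to coordinate masks and quantization ranges during training, but the composition of methods into a single compression pass is the core idea.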
  • Pruning:
      • Pruner interfaces have been fine-tuned for ease of use. view doc
      • Support configuring granularity in pruners. view doc
      • Support different masking modes: multiply by zero, or add a large negative value.
      • Support manually setting dependency groups and global groups. view doc
      • A new, more powerful pruning speedup has been released; its applicability and robustness are greatly improved. view doc
      • The end-to-end transformer compression tutorial has been updated, achieving stronger compression results. view doc
      • Fixed the config lists in the examples.
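The two masking modes mentioned above can be shown side by side in plain Python (toy functions for illustration, not pruner internals):

```python
def apply_mask_mul(values, mask):
    """'Multiply by zero' masking: masked entries become 0.
    Typical for weights, where a zero simply removes the connection."""
    return [v * m for v, m in zip(values, mask)]

def apply_mask_add(values, mask, neg=-1e9):
    """'Add a large negative value' masking: masked entries go to ~ -infinity,
    so a following softmax assigns them near-zero probability.
    Typical for attention scores, where 0 is still a valid logit."""
    return [v if m else v + neg for v, m in zip(values, mask)]

scores = [2.0, 1.0, 3.0]
mask = [1, 0, 1]
print(apply_mask_mul(scores, mask))  # [2.0, 0.0, 3.0]
print(apply_mask_add(scores, mask))  # [2.0, -999999999.0, 3.0]
```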
  • Quantization:
      • Support using an Evaluator to handle training/inference.
      • Support more module fusion combinations. view doc
      • Support configuring granularity in quantizers. view doc
      • Bias correction is supported in the post-training quantization algorithm.
      • The LSQ+ quantization algorithm is supported.
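A toy sketch of the bias-correction idea in post-training quantization: the mean output shift that weight quantization introduces (measured over calibration inputs) is folded back into the bias. All names here are hypothetical; this is not NNI's quantizer code:

```python
def quantize(w, step=0.5):
    """Toy uniform quantizer: snap a weight to a coarse grid."""
    return round(w / step) * step

def bias_correction(weights, bias, calib_inputs):
    """Quantize the weights of one toy linear unit, then correct its bias by
    the average output error observed on calibration inputs."""
    qweights = [quantize(w) for w in weights]
    def out(ws, b, xs):
        return sum(w * x for w, x in zip(ws, xs)) + b
    shift = sum(out(qweights, bias, xs) - out(weights, bias, xs)
                for xs in calib_inputs) / len(calib_inputs)
    return qweights, bias - shift  # subtract the mean shift from the bias

weights, bias = [0.4, 0.7], 0.1
calib = [[1.0, 1.0], [2.0, 0.0]]
qw, qb = bias_correction(weights, bias, calib)
print(qw)  # [0.5, 0.5]
print(qb)  # ~ 0.05: bias moved to cancel the mean quantization error
```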
  • Distillation:
      • DynamicLayerwiseDistiller and Adaptive1dLayerwiseDistiller are supported.
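In the spirit of a layerwise distiller, a minimal sketch of the objective: a weighted sum of per-layer errors between student and teacher intermediate outputs (toy plain-Python code, not either distiller's implementation):

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def layerwise_distill_loss(student_layers, teacher_layers, weights=None):
    """Toy layerwise distillation loss: weighted sum of per-layer MSEs
    between matched student and teacher intermediate outputs."""
    weights = weights or [1.0] * len(student_layers)
    return sum(w * mse(s, t)
               for w, s, t in zip(weights, student_layers, teacher_layers))

student = [[0.0, 1.0], [2.0, 2.0]]   # student outputs at two matched layers
teacher = [[0.0, 0.0], [2.0, 4.0]]   # teacher outputs at the same layers
loss = layerwise_distill_loss(student, teacher)
print(loss)  # 0.5 + 2.0 = 2.5
```

The "dynamic"/"adaptive" variants named above additionally adjust how layers are matched and weighted during training.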
  • Compression documentation has been updated for the new framework; for the previous version, please view the v2.10 docs.
  • New compression examples are under nni/examples/compression:
      • Create an evaluator: nni/examples/compression/evaluator
      • Prune a model: nni/examples/compression/pruning
      • Quantize a model: nni/examples/compression/quantization
      • Fusion compression: nni/examples/compression/fusion

Training Services

  • Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x
  • Local training service:
      • Reduced latency of creating trials
      • Fixed "GPU metric not found"
      • Fixed bugs in resuming trials
  • Remote training service:
      • reuse_mode now defaults to False; setting it to True falls back to the v2.x remote training service
      • Reduced latency of creating trials
      • Fixed "GPU metric not found"
      • Fixed bugs in resuming trials
      • Supported viewing trial logs on the web portal
      • Supported automatic recovery after temporary server failures (network fluctuations, out of memory, etc.)
  • Removed IoC and the unused training services.
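The reuse_mode change above might look roughly like this in an experiment configuration. This is a hedged sketch as a plain dict; the exact field names and nesting should be checked against the NNI configuration docs:

```python
# Hypothetical sketch of a v3.0 remote training service configuration.
# Field names loosely follow the NNI config schema; verify before use.
config = {
    "training_service": {
        "platform": "remote",
        "reuse_mode": False,  # v3.0 default; True falls back to the v2.x service
    }
}
print(config["training_service"]["reuse_mode"])  # False
```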
Source: README.md, updated 2023-09-13