
Version 1.5.0

We are happy to announce the AutoGluon 1.5.0 release!

AutoGluon 1.5.0 introduces new features and major improvements to both tabular and time series modules.

This release contains 131 commits from 17 contributors! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/1.4.0...1.5.0

This release supports Python versions 3.10, 3.11, 3.12 and 3.13. Support for Python 3.13 is currently experimental, and some features might not be available when running Python 3.13 on Windows. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.5.0.


Spotlight

Chronos-2

AutoGluon v1.5 adds support for Chronos-2, our latest generation of foundation models for time series forecasting. Chronos-2 natively handles all types of dynamic covariates and performs cross-learning across items in a batch. It produces multi-step quantile forecasts and is designed for strong out-of-the-box performance on new datasets.

Chronos-2 achieves state-of-the-art zero-shot accuracy among public models on major benchmarks such as fev-bench and GIFT-Eval, making it a strong default choice when little or no task-specific training data is available.

In AutoGluon, Chronos-2 can be used in zero-shot mode or fine-tuned on custom data. Both LoRA fine-tuning and full fine-tuning are supported. Chronos-2 integrates into the standard TimeSeriesPredictor workflow, making it easy to backtest, compare against classical and deep learning models, and combine with other models in ensembles.

:::python
from autogluon.timeseries import TimeSeriesPredictor

# prediction_length is illustrative; set it to your forecast horizon
predictor = TimeSeriesPredictor(prediction_length=48)
predictor.fit(train_data, presets="chronos2")  # zero-shot mode

More details on zero-shot usage, fine-tuning and ensembling are available in the updated tutorial.
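
As a rough sketch, fine-tuning goes through the same fit call via model-level options. The "Chronos2" hyperparameter key and the fine_tune flag below are assumptions by analogy with the Chronos-Bolt fine-tuning interface from earlier releases; consult the tutorial for the confirmed option names.

:::python
# Hypothetical sketch: fine-tune Chronos-2 on the training data.
# "Chronos2" and fine_tune are assumed names, mirroring Chronos-Bolt.
predictor = TimeSeriesPredictor(prediction_length=48)
predictor.fit(
    train_data,
    hyperparameters={"Chronos2": {"fine_tune": True}},
)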

Tabular

TBA


General

Dependencies

  • Update torch to >=2.6,<2.10 @FANGAreNotGnu @shchur (#5270) (#5425)
  • Update seaborn to >=0.12.0,<0.14 @Innixma (#5378)
  • Update onnx to >=1.13.0,<1.21.0 @shchur (#5439)
  • Update ray to >=2.43.0,<2.53 @shchur @prateekdesai04 (#5442) (#5312)
  • Update transformers to >=4.51.0,<4.58 @shchur (#5439)
  • Update lightning to >=2.5.1,<2.6 @canerturkmen (#5432)
  • Update psutil to >=5.7.3,<7.2.0 @Innixma (#5434)
  • Update xgboost to >=2.0,<3.2 @Innixma (#5434)
  • Update pytabkit to >=1.7.2,<1.8 @Innixma (#5434)
  • Update tabpfn to >=6.1.0,<6.1.1 @Innixma (#5434)
  • Update tabicl to >=0.1.4,<0.2 @Innixma (#5434)
  • Update scikit-learn-intelex to >=2025.0,<2025.10 @Innixma (#5434)
  • Add experimental support for Python 3.13. @shchur @shou10152208 (#5073) (#5423)

Fixes and Improvements

  • Minor typing fixes. @canerturkmen (#5292)
  • Fix conda install instructions for ray version. @Innixma (#5323)
  • Use standalone uv in full_install.sh. @Innixma (#5328)
  • Cleanup load_pd and save_pd. @Innixma (#5359)
  • Remove LICENSE and NOTICE files from common. @prateekdesai04 (#5396)
  • Fix Python package upload. @prateekdesai04 (#5397)
  • Change build order. @prateekdesai04 (#5398)
  • Decouple and enable module-wise installation. @prateekdesai04 (#5399)
  • Fix get_smallest_valid_dtype_int for negative values. @Innixma (#5421)

Tabular

AutoGluon-Tabular v1.5 introduces several improvements focused on accuracy, robustness, and usability. The release adds new foundation models, updates the feature preprocessing pipeline, and improves GPU stability and memory estimation. New model portfolios are provided for both CPU and GPU workloads.
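
The new models and portfolios plug into the unchanged TabularPredictor workflow. A minimal sketch: the "class" label column and the datasets are illustrative, and best_quality is a long-standing preset (the new v1.5 presets from #5505 slot into the same argument).

:::python
from autogluon.tabular import TabularPredictor

# Minimal sketch: quality presets select a model portfolio automatically,
# including the new foundation models where hardware and data size allow.
# "class" is an illustrative label column name.
predictor = TabularPredictor(label="class").fit(train_data, presets="best_quality")
predictions = predictor.predict(test_data)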

Highlights

  • New foundation models: RealTabPFN-2, RealTabPFN-2.5, and TabDPT are now available in AutoGluon-Tabular.
  • Updated preprocessing pipeline with more consistent feature handling across models.
  • Improved GPU stability and more reliable memory estimation during training.
  • New CPU and GPU portfolios tuned for better performance across a wide range of datasets.
  • Stronger benchmark results: with the new presets, AutoGluon-Tabular v1.5 achieves an 85% win rate over AutoGluon v1.4 Extreme on the 51 TabArena datasets, with a 3% reduction in mean relative error.

New Features

  • New model: Explainable Boosting Machine. @paulbkoch (#4480)
  • New preprocessors for tabular data. @atschalz (#5441)
  • Add LightGBMPrep. @atschalz @Innixma (#5490)
  • New models: TabPFN-2.5, TabDPT @Innixma (#5434)
  • Add v1.5.0 presets. @Innixma (#5505)

Fixes and Improvements

  • Fix bug if pred is inf and weight is 0 in weighted ensemble. @Innixma (#5317)
  • Make dry_run=False the default for TabularPredictor.delete_models. @Innixma (#5260)
  • Remove redundant TabPFNv2 CPU log. @Innixma (#5259)
  • Add einops in mitra install. @xiyuanzh (#5266)
  • Support different random seeds per fold. @LennartPurucker (#5267)
  • Change the default output directory's base path. @LennartPurucker (#5285)
  • Add Mitra download_default_weights. @Innixma (#5271)
  • Ensure compatibility of flash attention unpad_input. @xiyuanzh (#5298)
  • Refactor of validation technique selection. @LennartPurucker (#4585)
  • Mitra HF Args. @xiyuanzh (#5272)
  • Gracefully handle ray exceptions. @Innixma (#5327)
  • Add logs for LightGBM CUDA device. @Innixma (#5325)
  • Add Load/Save to TabularDataset. @Innixma (#5357)
  • Fix model random state. @Innixma (#5369)
  • Add AbstractModel type hints. @Innixma (#5358)
  • Make OneFeatureGenerator pass the check_is_fitted test. @betatim (#5386)
  • Enable CPU loading of models trained on GPU. @Innixma (#5403) (#5434)
  • Remove unused variable val_improve_epoch in TabularNeuralNetTorchModel. @celestinoxp (#5466)
  • Fix memory estimation for RF/XT in parallel mode. @celestinoxp (#5467)
  • Pass label cleaner to model for semantic encodings. @LennartPurucker (#5482)
  • Fix time_epoch_average calculation in TabularNeuralNetTorch. @celestinoxp (#5484)
  • GPU optimization, scheduling for parallel_local fitting strategy. @prateekdesai04 (#5388)
  • Fix XGBoost crashing on eval metric name in HPs. @LennartPurucker (#5493)

TimeSeries

AutoGluon v1.5 introduces substantial improvements to the time series module, with clear gains in both accuracy and usability. Across our benchmarks, v1.5 achieves up to an 80% win rate compared to v1.4. The release adds new models, more flexible ensembling options, and numerous bug fixes and quality-of-life improvements.

Highlights

  • Chronos-2 is now available in AutoGluon, with support for zero-shot inference as well as full and LoRA fine-tuning (tutorial).
  • Customizable ensembling logic: Adds item-level ensembling, multi-layer stack ensembles, and other advanced forecast combination methods (documentation).
  • New presets leading to major gains in accuracy & efficiency. AG-TS v1.5 achieves up to an 80% win rate over v1.4 on point and probabilistic forecasting tasks. With just a 10-minute time limit, v1.5 outperforms v1.4 running for 2 hours.
  • Usability improvements: Automatically determine an appropriate backtesting configuration by setting num_val_windows="auto" and refit_every_n_windows="auto". Easily access the validation predictions and perform rolling evaluation on custom data with the new predictor methods backtest_predictions and backtest_targets, as sketched below.
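
A minimal sketch of these options; the fit arguments and method names come from this release, while the exact signatures of the new backtesting methods are assumptions (see the documentation).

:::python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=24)  # horizon is illustrative
predictor.fit(
    train_data,
    num_val_windows="auto",          # backtesting configuration chosen automatically
    refit_every_n_windows="auto",
)
# New in v1.5: inspect validation forecasts; accepting custom data for
# rolling evaluation is assumed here based on the description above.
val_predictions = predictor.backtest_predictions(train_data)
val_targets = predictor.backtest_targets(train_data)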

New Features

  • Add multi-layer stack ensembling support. @canerturkmen (#5459) (#5472) (#5463) (#5456) (#5436) (#5422) (#5391)
  • Add new advanced ensembling methods. @canerturkmen @shchur (#5465) (#5420) (#5401) (#5389) (#5376)
  • Add Chronos-2 model. @abdulfatir @canerturkmen (#5427) (#5447) (#5448) (#5449) (#5454) (#5455) (#5450) (#5458) (#5492) (#5495) (#5487) (#5486)
  • Update Chronos-2 tutorial. @abdulfatir (#5481)
  • Add Toto model. @canerturkmen (#5321) (#5390) (#5475)
  • Fine-tune Chronos-Bolt on user-provided quantile_levels. @shchur (#5315)
  • Add backtesting methods for the TimeSeriesPredictor. @shchur (#5356)
  • Update predictor presets. @shchur (#5480)

API Changes and Deprecations

  • Remove outdated presets related to the original Chronos model: chronos, chronos_large, chronos_base, chronos_small, chronos_mini, chronos_tiny, chronos_ensemble. We recommend using the new presets chronos2, chronos2_small and chronos2_ensemble instead, as shown below.
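
Migration is a one-line change; the preset names come from this release, and the rest of the call is illustrative:

:::python
# Before (preset removed in v1.5): predictor.fit(train_data, presets="chronos_ensemble")
predictor.fit(train_data, presets="chronos2_ensemble")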

Fixes and Improvements

  • Replace inf values with NaN inside _check_and_prepare_data_frame. @shchur (#5240)
  • Add model registry and fix presets typing. @canerturkmen (#5100)
  • Fix broken unittests for time series. @shchur (#5361)
  • Move ITEMID and TIMESTAMP to dataset namespace. @canerturkmen (#5363)
  • Remove deprecated arguments and classes. @shchur (#5354)
  • Replace Chronos code with a dependency on chronos-forecasting. @canerturkmen (#5380) (#5383)
  • Avoid errors if date_feature clashes with known_covariates. @shchur (#5414)
  • Make ray an optional dependency for autogluon.timeseries. @shchur (#5430)
  • Sort feature importance df. @shchur (#5468)
  • Make NPTS model deterministic. @shchur (#5471)
  • Store cardinality inside CovariateMetadata. @shchur (#5476)
  • Minor fixes and improvements. @shchur @abdulfatir @canerturkmen (#5489) (#5452) (#5444) (#5416) (#5413) (#5410) (#5406)

Code Quality

  • Refactor trainable model set build logic. @canerturkmen (#5297)
  • Typing improvements to multiwindow model. @canerturkmen (#5308)
  • Move prediction cache out of trainer. @canerturkmen (#5313)
  • Refactor trainer methods with ensemble logic. @canerturkmen (#5375)
  • Use builtin generics for typing, remove types in internal docstrings. @canerturkmen (#5300)
  • Reorganize ensembles, add base class for array-based ensemble learning. @canerturkmen (#5332)
  • Separate ensemble training logic from trainer. @canerturkmen (#5384)
  • Clean up typing and documentation for Chronos. @canerturkmen (#5392)
  • Add timer utility, fix time limit in ensemble regressors, clean up tests. @canerturkmen (#5393)
  • Upgrade type annotations to Python 3.10. @canerturkmen (#5431)

Multimodal

Fixes and Improvements

  • Fix bugs and update AutoMM tutorials. @FANGAreNotGnu (#5167)
  • Fix Focal Loss. @FANGAreNotGnu (#5496)
  • Fix false positive document detection for images with incidental text. @FANGAreNotGnu (#5469)

Documentation and CI

  • [doc] Clarify tuning_data documentation. @Innixma (#5296)
  • [Test] Fix CI + Upgrade Ray. @prateekdesai04 (#5306)
  • Fix notebook build failures. @prateekdesai04 (#5348)
  • ci: scope down GitHub Token permissions. @AdnaneKhan (#5351)
  • Fix CodeQL GitHub action. @shchur (#5367)
  • [CI] Fix docker build. @prateekdesai04 (#5402)
  • [docs] Reorder modules in docs. @shchur (#5404)
  • Remove ROADMAP.md. @canerturkmen (#5405)
  • [docs] Add citations for Chronos-2 and multi-layer stacking for TS. @shchur (#5412)
  • Fix permissions for platform_tests action. @shchur (#5418)
  • Revert "Fix permissions for platform_tests action". @shchur (#5419)
  • Fix torch<2.10 issues in the CI. @shchur (#5435)

Contributors

Full Contributor List (ordered by # of commits):

@shchur @canerturkmen @Innixma @prateekdesai04 @abdulfatir @LennartPurucker @celestinoxp @FANGAreNotGnu @xiyuanzh @nathanaelbosch @betatim @AdnaneKhan @paulbkoch @shou10152208 @ryuichi-ichinose @atschalz @colesussmeier

New Contributors

  • @betatim made their first contribution in (#5386)
  • @AdnaneKhan made their first contribution in (#5351)
  • @paulbkoch made their first contribution in (#4480)
  • @shou10152208 made their first contribution in (#5073)
  • @ryuichi-ichinose made their first contribution in (#5458)
  • @atschalz made their first contribution in (#5441)
  • @colesussmeier made their first contribution in (#5452)