Name | Modified | Size
---|---|---
AdaNet v0.7.0.tar.gz | 2019-06-26 | 902.7 kB
AdaNet v0.7.0.zip | 2019-06-26 | 988.7 kB
README.md | 2019-06-26 | 892 Bytes
- Add embeddings support on TPU via `TPUEmbedding`.
- Train the current iteration forever when `max_iteration_steps=None`.
- Introduce `adanet.AutoEnsembleSubestimator` for training subestimators on different training data partitions, enabling ensemble methods like bootstrap aggregating (a.k.a. bagging).
- Fix a bug when using Gradient Boosted Decision Tree Estimators with `AutoEnsembleEstimator` during distributed training.
- Allow `AutoEnsembleEstimator`'s `candidate_pool` argument to be a `lambda` in order to create `Estimator`s lazily.
- Remove `adanet.subnetwork.Builder#prune_previous_ensemble` from the abstract class. This behavior is now specified using `adanet.ensemble.Strategy` subclasses.
- BREAKING CHANGE: Only support TensorFlow >= 1.14 to better support TensorFlow 2.0. Drop support for versions < 1.14.
- Correct eval metric computations on CPU and GPU.
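The bagging that `adanet.AutoEnsembleSubestimator` enables boils down to training each subestimator on its own bootstrap resample of the data, then combining predictions by vote. As a minimal, framework-free sketch of that idea (the `bootstrap_sample`, `train_mode_classifier`, and `bagged_predict` helpers are illustrative, not part of the AdaNet API):

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Draw len(data) examples with replacement: one bootstrap replicate."""
    return [rng.choice(data) for _ in data]

def train_mode_classifier(sample):
    """Toy 'subestimator': always predicts the majority label of its sample."""
    labels = [label for _, label in sample]
    majority = Counter(labels).most_common(1)[0][0]
    return lambda x: majority

def bagged_predict(models, x):
    """Ensemble the subestimators by majority vote."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
data = [(i, 0) for i in range(6)] + [(i, 1) for i in range(4)]
models = [train_mode_classifier(bootstrap_sample(data, rng)) for _ in range(5)]
prediction = bagged_predict(models, None)
```

In AdaNet the per-replicate training and the vote are handled by the subestimator and ensembler machinery; the resampling-plus-vote structure is the same.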
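The point of accepting a `lambda` for `candidate_pool` is lazy construction: candidates are only built when the callable is invoked, rather than all up front. AdaNet's real pool holds `tf.estimator.Estimator` objects; this pure-Python sketch (with a hypothetical `make_estimator` stand-in) just shows why the `lambda` form defers the expensive work:

```python
build_count = {"n": 0}

def make_estimator(name):
    """Hypothetical stand-in for an expensive Estimator constructor."""
    build_count["n"] += 1
    return {"name": name}

# Passing a plain dict builds every candidate immediately:
eager_pool = {"linear": make_estimator("linear")}

# Wrapping the pool in a lambda defers construction until the
# ensemble actually asks for the candidates:
lazy_pool = lambda: {
    "linear": make_estimator("linear"),
    "dnn": make_estimator("dnn"),
}
```

Defining `lazy_pool` builds nothing; only calling `lazy_pool()` constructs the two candidates.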