You might want to consider drifting the momentum and learning rate during auto-tune. This adds compute time, but it becomes necessary when automated workflows require minimizing MSE. A rough sketch of what I mean is below.
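For illustration, here is a minimal sketch of a drifting schedule where the learning rate decays while the momentum ramps up over epochs. The schedule shapes and constants are my own assumptions, not anything in the repository:

```cpp
#include <cmath>
#include <cstdio>

// Drifting hyperparameters: the learning rate decays exponentially
// toward a floor while momentum ramps linearly toward a cap as
// training progresses. All constants here are assumptions.
struct DriftSchedule {
    double lr0 = 0.1;      // initial learning rate (assumed)
    double lrFloor = 1e-4; // never decay below this (assumed)
    double mom0 = 0.5;     // initial momentum (assumed)
    double momCap = 0.95;  // final momentum (assumed)
    int epochs = 100;

    double learningRate(int epoch) const {
        double lr = lr0 * std::pow(0.95, epoch);
        return lr < lrFloor ? lrFloor : lr;
    }

    double momentum(int epoch) const {
        double t = double(epoch) / double(epochs);
        double m = mom0 + (momCap - mom0) * t;
        return m > momCap ? momCap : m;
    }
};

int main() {
    DriftSchedule s;
    for (int e = 0; e < s.epochs; e += 10)
        std::printf("epoch %3d: lr=%.5f momentum=%.3f\n",
                    e, s.learningRate(e), s.momentum(e));
}
```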
Agreed. However, I have somewhat lost my motivation to work on auto-tune. I thought it was a good idea several years ago, but it ended up being so slow with neural networks that I never really used it. It might be more tolerable if it gave better intermediate feedback and let the user provide optional hints about which parameters to sweep. To do it right, I think it should detect the presence of a GPGPU and utilize it, and employ heuristics based on the size of the training data to pick suitable layer types and sizes.
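To sketch what I mean by optional hints (all of these names are hypothetical; nothing here is the project's actual API), the sweep would only spend time on the parameters the user asks about, and would report progress as it goes:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: candidate values per hyperparameter name.
using Candidates = std::map<std::string, std::vector<double>>;

// Placeholder for a cross-validation run returning MSE; a real
// implementation would train and score the network here. Printing as
// it goes stands in for the "better intermediate feedback" idea.
double evaluate(const std::string& name, double value) {
    std::printf("  trying %s = %g\n", name.c_str(), value);
    return value; // dummy score
}

// If hints are given, sweep only those parameters; otherwise sweep all.
void sweep(const Candidates& all, const std::vector<std::string>& hints) {
    for (const auto& [name, values] : all) {
        bool wanted = hints.empty();
        for (const auto& h : hints)
            if (h == name) wanted = true;
        if (!wanted)
            continue;
        std::printf("sweeping %s (%zu candidates)\n", name.c_str(), values.size());
        for (double v : values)
            evaluate(name, v);
    }
}

int main() {
    Candidates all = {
        {"hiddenUnits", {8, 32, 128}},
        {"learningRate", {0.1, 0.01, 0.001}},
        {"momentum", {0.0, 0.5, 0.9}},
    };
    sweep(all, {"learningRate"}); // user hint: only tune the learning rate
}
```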
The first cross validation in the NN Autotune impl needs some default layers set before it runs; otherwise the cross validation will error out with layer enforcement.
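In rough terms, the shape of the fix looks like this (hypothetical names, not the project's actual classes):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

struct Layer { std::size_t units; };

struct NeuralNet {
    std::vector<Layer> layers;

    void crossValidate() {
        // The "layer enforcement" check that was erroring out.
        if (layers.empty())
            throw std::runtime_error("cross validation requires at least one layer");
        // ... fold the data, train, and score here ...
    }
};

void autoTune(NeuralNet& nn, std::size_t inputDims) {
    // The fix: seed a sensible default topology before the first
    // cross-validation pass, so layer enforcement is satisfied.
    if (nn.layers.empty())
        nn.layers.push_back(Layer{inputDims}); // one default hidden layer
    nn.crossValidate();
    // ... then continue sweeping alternative topologies ...
}

int main() {
    NeuralNet nn;
    autoTune(nn, 16); // 16 input dims, purely as an example
}
```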
Have faith in chaos,
Todd
Thank you! I have pushed this fix into the repository.