Hi all NN lovers!

I see Neuroph as a framework with great potential for my project, which involves function approximation / forecasting. I am especially interested in momentum backpropagation learning. However, I wasn't able to find a stopping criterion based on the change of the epoch's average error. That is, when the absolute rate of change in the average squared error per epoch becomes sufficiently small, the learning is stopped. As I understand it, this stopping criterion is widely used.

Maybe it could be implemented relatively easily in SupervisedLearning.doLearningEpoch(), in the same condition where totalNetworkError is checked.
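Just to illustrate what I mean, here is a minimal, self-contained sketch of such a "minimum error change" criterion. It is not actual Neuroph code; the names (epochsUntilStop, minErrorChange) are my own, and the error values are made up:

```java
// Sketch of a "minimum error change" stopping criterion.
// Not Neuroph API -- names and values here are illustrative only.
public class MinErrorChangeStop {

    /**
     * Returns the number of epochs run before the absolute change in the
     * average squared error between consecutive epochs drops below
     * minErrorChange. If the criterion is never met, all epochs are run.
     */
    public static int epochsUntilStop(double[] epochErrors, double minErrorChange) {
        double previous = Double.MAX_VALUE;
        for (int epoch = 0; epoch < epochErrors.length; epoch++) {
            double current = epochErrors[epoch];
            if (Math.abs(previous - current) < minErrorChange) {
                return epoch + 1; // stop after this epoch
            }
            previous = current;
        }
        return epochErrors.length; // criterion never met
    }

    public static void main(String[] args) {
        // Hypothetical per-epoch average squared errors that flatten out
        double[] errors = {0.5, 0.1, 0.02, 0.0101, 0.0100};
        // Change from epoch 4 to 5 is 1e-4 < 1e-3, so learning stops at epoch 5
        System.out.println(epochsUntilStop(errors, 1e-3)); // prints 5
    }
}
```

In a learning loop this would amount to keeping the previous epoch's error in a field and comparing it against the fresh totalNetworkError right next to the existing max-error check.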

What do you think? Could this be implemented, or did I just miss the part of the code where it already exists?

Anyway, thank you to the developers of the current framework! It is great!

Riksi