I've been using Neuroph for several years, and I just noticed that MultiLayerPerceptron in 2.9.4 doesn't appear to work properly.
I created a very basic test with 2 inputs and 1 output (each row below is input1, input2, output):
1, -1, 1
-1, -1, -1
1, 1, 1
-1, 1, 1
MultiLayerPerceptron: 2 inputs, 2 hidden, 1 output, TANH, resilient backpropagation, bias node
Learning rate 0.3, max error 0.0001, up to 200,000 iterations (a bit overkill)
This should learn very easily and very well.
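For reference, here is a minimal sketch of how that training set can be built with Neuroph's DataSet API (the variable name dataSet matches the learn() call in the code further down):

import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;

// 2 inputs, 1 output; the four rows from above in -1/1 encoding
DataSet dataSet = new DataSet(2, 1);
dataSet.add(new DataSetRow(new double[]{ 1, -1}, new double[]{ 1}));
dataSet.add(new DataSetRow(new double[]{-1, -1}, new double[]{-1}));
dataSet.add(new DataSetRow(new double[]{ 1,  1}, new double[]{ 1}));
dataSet.add(new DataSetRow(new double[]{-1,  1}, new double[]{ 1}));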
If I substitute the 2.8 Neuroph core jar and run this test 20 times, I get the following error rates:
5.2811515815879365E-6
2.394940280787124E-5
8.377997848402625E-5
2.069499829098618E-5
4.0399068598974E-5
6.881952446540644E-5
5.8032967758322944E-5
2.973112913702599E-5
1.822446720816441E-5
6.242313818334985E-5
6.473502969547782E-5
7.0031299996109E-5
6.320901221700987E-5
1.161293181333649E-5
5.648644118347408E-5
4.230006971303609E-5
1.2457240508890997E-5
4.9388207960116435E-5
2.4095926739979982E-5
1.4899191832377127E-5
Very low error rates on all 20 tests.
If I only replace the jar (leaving the code exactly as-is) and rerun the 20 tests with the 2.9.4 core jar, I get this:
0.8134890706192368
0.4607474928916899
0.35123225620689147
1.0161151228514778
0.9433927841549042
0.9184233213004372
0.40513380369824786
0.4731655054136824
0.3819684998224395
0.8784872180125608
0.7613366752681061
0.3868048697065001
0.437074869649533
0.914768010596594
0.7940754974695807
0.9959826682889845
0.33421409521009704
0.2892115845458796
0.6359396491127787
0.32118777491607053
You can see these are extremely high error rates.
The code looks like this:
import org.neuroph.core.NeuralNetwork;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.ResilientPropagation;
import org.neuroph.util.TransferFunctionType;

// 2 inputs, 2 hidden neurons, 1 output, TANH transfer function
NeuralNetwork multiPerceptron2 = new MultiLayerPerceptron(TransferFunctionType.TANH, numberOfInputs2, numberOfInputs2, 1);

// Resilient backprop: learning rate 0.3, stop at max error 0.0001 or 200,000 iterations
ResilientPropagation resilientPropagation2 = new ResilientPropagation();
resilientPropagation2.setMaxIterations(200000);
resilientPropagation2.setLearningRate(0.3);
resilientPropagation2.setMaxError(0.0001);
multiPerceptron2.setLearningRule(resilientPropagation2);

addNeuralNetwork(multiPerceptron2);
multiPerceptronTest = matchModel.getNeuralNetworkInstance();
multiPerceptronTest.getLearningRule().addListener(this);
multiPerceptronTest.learn(dataSet);
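To reproduce the 20-run comparison, a driver loop along these lines should work (the loop itself is my own sketch; randomizeWeights() and getTotalNetworkError() are standard Neuroph calls):

// Hypothetical driver for the 20-run comparison: re-randomize, retrain, print final error
for (int run = 0; run < 20; run++) {
    multiPerceptron2.randomizeWeights();  // fresh random weights each run
    multiPerceptron2.learn(dataSet);      // trains until maxError or maxIterations
    System.out.println(resilientPropagation2.getTotalNetworkError());
}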
I would like to move up to 2.9.4 to be able to use the cross-validation support.
Thanks
Ken
Thanks for the note, Ken. We made some changes in 2.9.4 to make it easier to follow the mathematical models of more backprop variations, but it seems resilient propagation is now broken. It will be fixed in the next release.
Just uncomment line 141 of "org.neuroph.nnet.learning.ResilientPropagation"; that will correct the problem:
//weight.value += weightChange; -- this must be done simultaneously
=>
weight.value += weightChange; //-- this must be done simultaneously
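For context, that commented-out line is the step that actually applies the computed weight change. In textbook RPROP (this is a sketch of the generic algorithm, not Neuroph's exact code), each weight keeps its own adaptive step size and moves by the sign of the gradient:

// Sketch of the standard per-weight RPROP update rule, not Neuroph's implementation.
// etaPlus/etaMinus are the usual 1.2 and 0.5; stepSize is clamped to [minStep, maxStep].
static double rpropWeightChange(double gradient, double prevGradient, double[] stepSize,
                                double etaPlus, double etaMinus, double minStep, double maxStep) {
    if (gradient * prevGradient > 0) {
        stepSize[0] = Math.min(stepSize[0] * etaPlus, maxStep);   // same sign: accelerate
    } else if (gradient * prevGradient < 0) {
        stepSize[0] = Math.max(stepSize[0] * etaMinus, minStep);  // sign flip: back off
        gradient = 0;                                             // skip the move after a flip
    }
    return -Math.signum(gradient) * stepSize[0];                  // the weightChange to apply
}

Everything up to that point only adjusts step sizes; if the final weight.value += weightChange is commented out, the weights never actually move, which would explain why 2.9.4 stalls at the high error rates shown above.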
2.9.2 works as well, so it would appear the problem was introduced after 2.9.2.