I have gotten the ffnet code to work properly, and I have verified that I can run the generated Fortran code after executing the Python script.
I am now trying to modify the code to use a different transfer function (tanh instead of sigmoid).
I have updated the Fortran code in ffnet.f that resides in the /fortran folder. However, I am having trouble getting my changes to take effect. I am wondering if I have to re-compile the source code to include these changes. For example, I see that compile.py is used to compile the Fortran files… but I can't execute compile.py without getting an error (it seems there is a problem locating a file named pythonw.exe).
Any help is greatly appreciated.
The command in the system shell:
f2py -m _ffnet -c ffnet.f
should do the trick if you have properly installed Fortran and C compilers. What operating system do you use? Could you send the errors generated by compile.py?
Ahh, you're on Windows, I see. Then read this post about using f2py on Windows:
1) I was using Windows, but I thought it might be useful to try it on Linux. So, I just installed openSUSE on an old computer and got the ffnet code up and running again.
2) In a terminal window I used the command you sent in your first reply…
3) My changes were then included next time I ran the python script!
4) However, the code did not run correctly when I tried to replace all of your sigmoid functions with tanh. I also made sure to replace all derivative calculations with the derivative of tanh… for example,
f = 1 - tanh(f)**2
5) In summary, each of these lines was changed to use tanh or the derivative of tanh (see the sketch after this list):
line 44, 59, 133, 146, 154, 232, 397
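For reference, here is a minimal sketch of what such a replacement could look like in fixed-form Fortran. This is not the actual ffnet.f code, and the function names are hypothetical:

c     Minimal sketch (hypothetical names, not the actual ffnet.f code):
c     a tanh transfer function and its derivative.
      double precision function transf(x)
      double precision x
      transf = tanh(x)
      return
      end

      double precision function dtransf(x)
      double precision x
c     d/dx tanh(x) = 1 - tanh(x)**2; note that ** (not ^) is the
c     Fortran power operator
      dtransf = 1.0d0 - tanh(x)**2
      return
      end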
Any idea why I would get an error? Could it be a normalization problem, since tanh outputs values in (-1, 1) but sigmoid only outputs values in (0, 1)?
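For instance (just my assumption about how the data would need to be prepared, not anything from the ffnet sources): if the targets are normalized into the sigmoid's $(0, 1)$ range, the linear remap

$t' = 2t - 1$

would place them in tanh's $(-1, 1)$ range.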
Normalization would be a reason for worse training results with tanh. What errors exactly do you get, and for what training problem?
The code won't even run; I get a segmentation violation.
I have resorted to using the code as-is with the sigmoid activation functions. I may re-visit trying to use tanh, but I don't have the time for it right now.
On a separate topic, do you know what cost function is minimized when using the optimization routines (bfgs, momentum, tnc)?
The cost function is calculated as the sum of the squared errors at all outputs and for all training samples - for *normalized* training data.
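In symbols (my notation, not taken from the ffnet sources), with $P$ training samples, $K$ outputs, network outputs $y$ and normalized targets $t$:

$$E = \sum_{p=1}^{P} \sum_{k=1}^{K} \left( y_k^{(p)} - t_k^{(p)} \right)^2$$

possibly up to a constant factor such as $\frac{1}{2}$, depending on the implementation.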