Good morning everybody,
I have successfully implemented a Kohonen network and run a basic OCR test to confirm that it works. A Kohonen neural network forces a single winning neuron. My question is: can I squeeze some additional information out of the network by looking at the losing neurons as well?
What would be cool is: winning neuron X wins with 90% likelihood, and neuron Y loses with the remaining 10%. That is a lot more information than just "X wins" (especially when the result is close to 50:50).
Am I totally wrong, or could such an approach be useful?
You can use the Euclidean distance calculated by the KohonenSynapse. Of course this isn't a true likelihood measure, but you can treat it as an approximation.
Thank you for this valuable input; it definitely pointed us in the right direction.
In the meantime we coded a working Kohonen network in C, with a lot of optimization of the internal memory structure and execution times. I can proudly say that we cannot squeeze any more training speed out of a single CPU.
The next goal of the project is to scale training out to a) more CPU cores and b) more machines.
While a) seems to be a matter of straightforward work, we are not sure how to scale the thing to many different boxes efficiently without being killed by network traffic. My goal is to scale to, say, 128 CPUs on 32 boxes or so.
The question is: I would like to split the training data into parts and run every part on a different node. Is there a mathematically sound way to merge the output data (i.e. the weights) back together? I don't have to be 100% precise; 99.5% is most likely enough. I just need a way to increase the size of the network by throwing hardware (many boxes) and clever algorithms at the problem.
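One candidate approach (a sketch under my own assumptions, not a guaranteed answer) is parameter averaging: each node trains on its shard and periodically sends back only its weight matrix; a master merges the matrices element-wise, weighting each node by the number of samples it processed, and redistributes the result. Only weights cross the network, so traffic per sync scales with the map size, not the training-set size. The helper `merge_weights` below is hypothetical:

```c
#include <stddef.h>

/* Hypothetical sketch of sample-weighted parameter averaging.
 * node_w[i] is the flattened weight matrix of node i (len values);
 * samples[i] is the number of training samples node i processed.
 * The merged matrix is the per-element weighted mean over nodes. */
static void merge_weights(double *merged,
                          const double *const *node_w,
                          const long *samples,
                          int nodes, size_t len)
{
    long total = 0;
    for (int i = 0; i < nodes; i++)
        total += samples[i];

    for (size_t j = 0; j < len; j++) {
        double sum = 0.0;
        for (int i = 0; i < nodes; i++)
            sum += node_w[i][j] * (double)samples[i];
        merged[j] = sum / (double)total;   /* weighted mean */
    }
}
```

With 32 boxes and a map of a few thousand weights, one sync exchanges only a few hundred kilobytes; how often you sync (every epoch vs. every few batches) trades convergence quality against traffic. Literature keywords worth searching: "batch SOM" and "parallel self-organizing map".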
Is there any scientific work available on this problem?