BNNS is a research tool for interactive training of artificial neural networks, based on the Response Function Plots (RFP) visualization method. It enables users to simulate, visualize, and intervene in the learning process of a Multi-Layer Perceptron (MLP) on tasks with a 2D character, such as the famous two-spirals task or the classification of satellite image data.
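To make the two-spirals task concrete, here is a minimal sketch of the benchmark's data generation, following the commonly used parameterization (97 points per spiral, with the second spiral the point-wise mirror of the first); the function name and defaults are illustrative, not part of BNNS:

```python
import math

def two_spirals(n_points=97):
    """Generate the two-spirals benchmark as (x, y, label) tuples.

    Each point of the first spiral lies at angle i*pi/16 with a radius
    that shrinks linearly; the second spiral is its point-wise negation.
    """
    data = []
    for i in range(n_points):
        phi = i * math.pi / 16.0            # angle grows with the index
        r = 6.5 * (104 - i) / 104.0         # radius shrinks linearly
        x, y = r * math.cos(phi), r * math.sin(phi)
        data.append((x, y, 0))              # first spiral, class 0
        data.append((-x, -y, 1))            # mirrored spiral, class 1
    return data
```

The resulting 194 points are not linearly separable, which is why the task is a classic stress test for MLP training.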
The selected visualization methods are incorporated into a human-machine interface (HMI) that provides visual information about the learning process of an MLP-type neural network. The proposed HMI should enable the user to comprehend the learning process and thus incorporate their own ideas and knowledge into it.
A description of the embedded visualization and interaction methods can be found in [1], which also contains a survey of neural network visualization methods.
Visualizing even a moderately sized MLP presents an overwhelming amount of visual information to the user. Our method for reducing user fatigue is based on clustering the RFPs of hidden neurons; the user is then presented only with representatives of the clusters. A range of clustering algorithms is available for this task; a survey of the performance of the Kohonen network, Growing Neural Gas (GNG) and GNG with Utility factor (GNG-U) can be found in [2].
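The clustering idea above can be sketched as follows. This is a simplified stand-in, not the BNNS implementation: each hidden neuron's response function is sampled over a 2D grid and flattened into a vector, and a plain k-means (in place of the Kohonen/GNG variants surveyed in [2]) groups similar RFPs so only one representative per cluster needs to be displayed:

```python
import random

def rfp_vector(activation, grid=8):
    """Sample a neuron's response over a [0,1]x[0,1] grid and flatten it.

    `activation` is any callable (x, y) -> response; the flattened grid
    acts as the feature vector describing that neuron's RFP.
    """
    return [activation(x / (grid - 1), y / (grid - 1))
            for x in range(grid) for y in range(grid)]

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over RFP vectors.

    Returns the clusters and their centroids; in an HMI, one member
    per cluster would be shown to the user instead of every neuron.
    """
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # assign each RFP vector to its nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centroids[c])))
            clusters[j].append(v)
        # recompute centroids; keep the old one if a cluster emptied
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return clusters, centroids
```

With, say, 40 hidden neurons clustered into 5 groups, the interface would render 5 representative plots rather than 40, which is the fatigue reduction described above.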
1.
Matus Uzak and Rudolf Jaksa, "Framework for the Interactive Learning of Artificial Neural Networks", Artificial Neural Networks - ICANN 2006, ser. Lecture Notes in Computer Science, vol. 4131/2006, Springer Berlin/Heidelberg, 2006, pp. 103-112. [pdf]
2.
Matus Uzak, Rudolf Jaksa and Peter Sincak, "Reduction of Visual Information in Neural Network Learning Visualization", Artificial Neural Networks - ICANN 2008, ser. Lecture Notes in Computer Science, vol. 5164/2008, Springer Berlin/Heidelberg, 2008, pp. 690-699. [pdf]