Matus Uzak

BNNS is a research tool for the interactive training of artificial neural networks based on the Response Function Plots (RFP) visualization method. It enables users to simulate, visualize, and interact with the learning process of a Multi-Layer Perceptron (MLP) on tasks of a 2D character, such as the well-known two-spirals task or the classification of satellite image data.
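
For readers unfamiliar with the two-spirals benchmark, the following minimal Python sketch shows how such a 2D dataset can be generated. The point count, noise level and number of turns are illustrative assumptions, and this is not the pattern format used by BNNS.

    import numpy as np

    def two_spirals(n_points=194, noise=0.0, turns=3.0):
        """Generate two interleaved spirals in the plane, one per class."""
        n = n_points // 2
        theta = np.sqrt(np.random.rand(n)) * turns * 2.0 * np.pi   # angle along the spiral
        r = theta                                                   # radius grows with the angle
        spiral_a = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
        spiral_b = -spiral_a                                        # second spiral: point reflection
        X = np.vstack([spiral_a, spiral_b]) + noise * np.random.randn(2 * n, 2)
        y = np.hstack([np.zeros(n), np.ones(n)])                    # class labels 0 and 1
        return X, y

    X, y = two_spirals()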

Screenshots

  • Spirals task: MLP visualization and extrapolation view.
  • Boston remote sensing testbed: MLP performance analysis.
  • Boston remote sensing testbed: error-energy analysis.

Features

  • Reset of the weights connected to a neuron
  • Freeze/unfreeze of the weights connected to a neuron
  • Basic support for scaling of RFPs; preview of an RFP in its native size
  • Preview of conflicts between output-layer neurons
  • Preview of the error energy on the output layer
  • Preview of training/testing patterns
  • Sigmoidal/softmax activation on the output layer (illustrated in the sketch after this list)
  • Logging of the MSE and of the sum of output-layer responses
  • Perl script to prepare bnns patterns from PGM data
  • Perl script to prepare bnns patterns from Boston Remote Sensing Testbed data
  • Maintained user manual
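
The output-layer activations and logged quantities from the list above can be illustrated with a short Python sketch. This is only a conceptual illustration, not the BNNS implementation; the array shapes and targets are assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))   # shift for numerical stability
        return e / e.sum(axis=1, keepdims=True)

    z = np.random.randn(4, 3)              # hypothetical pre-activations: 4 patterns, 3 output neurons
    targets = np.eye(3)[[0, 1, 2, 0]]      # one-hot targets (illustrative)

    out = softmax(z)                       # or sigmoid(z) for sigmoidal outputs
    mse = np.mean((out - targets) ** 2)    # the logged MSE
    response_sum = out.sum(axis=1)         # sum of output-layer responses per pattern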

Research

Visualization and interaction

Selected visualization methods are incorporated into a human-machine interface (HMI) that provides visual information about the learning process of an MLP-type neural network. The HMI should enable the user to comprehend the learning process and thus incorporate their own ideas and knowledge into it.

A description of the embedded visualization and interaction methods can be found in [1], which also contains a survey of neural-network visualization methods.
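
As a rough illustration of the idea behind a Response Function Plot, the Python sketch below evaluates the responses of the hidden neurons of a small, randomly initialized MLP over a grid covering the 2D input domain; each hidden neuron then yields one image over the input plane. The network size, input range and grid resolution are arbitrary assumptions, not the implementation described in [1].

    import numpy as np

    def hidden_responses(points, W1, b1):
        """Responses of every hidden neuron for every 2D grid point (sigmoid units assumed)."""
        return 1.0 / (1.0 + np.exp(-(points @ W1 + b1)))

    xs = np.linspace(-1.0, 1.0, 50)                        # assumed input range [-1, 1] x [-1, 1]
    grid = np.array([(x, y) for y in xs for x in xs])      # 50x50 grid, shape (2500, 2)

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 8))                           # 2 inputs -> 8 hidden neurons (illustrative)
    b1 = rng.normal(size=8)

    H = hidden_responses(grid, W1, b1)                     # shape (2500, 8)
    rfp_of_neuron_3 = H[:, 3].reshape(50, 50)              # one RFP: an image over the input plane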

Reduction of visual information

Visualizing even a moderately sized MLP presents an overwhelming amount of visual information to the user. Our method for reducing user fatigue is based on clustering the RFPs of the hidden neurons, after which the user is presented only with the representatives of the clusters. A range of clustering algorithms is available for this task; a survey of the performance of the Kohonen network, Growing Neural Gas (GNG) and GNG with Utility factor (GNG-U) can be found in [2].
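
The reduction step can be sketched as follows. Note that [2] studies the Kohonen network, GNG and GNG-U for this purpose; the snippet below substitutes scikit-learn's k-means purely as a stand-in, to show how flattened RFP images can be grouped and reduced to one representative per cluster.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_rfps(rfps, n_clusters=5):
        """Cluster flattened RFP images; return one representative RFP index per cluster.

        rfps has shape (n_hidden_neurons, height, width). K-means stands in for the
        Kohonen/GNG/GNG-U variants compared in [2]."""
        flat = rfps.reshape(len(rfps), -1)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
        representatives = []
        for c in range(n_clusters):
            members = np.where(km.labels_ == c)[0]
            # Representative = cluster member closest to the centroid.
            d = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
            representatives.append(int(members[np.argmin(d)]))
        return representatives

    rfps = np.random.rand(40, 50, 50)       # e.g. 40 hidden neurons, each with a 50x50 RFP (placeholders)
    shown_to_user = cluster_rfps(rfps)      # indices of the RFPs presented as cluster representatives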

Publications

1. Matus Uzak and Rudolf Jaksa, "Framework for the Interactive Learning of Artificial Neural Networks", Artificial Neural Networks - ICANN 2006, ser. Lecture Notes in Computer Science, vol. 4131/2006, Springer Berlin/Heidelberg, 2006, pp. 103-112. [pdf]

2. Matus Uzak, Rudolf Jaksa and Peter Sincak, "Reduction of Visual Information in Neural Network Learning Visualization", Artificial Neural Networks - ICANN 2008, ser. Lecture Notes in Computer Science, vol. 5164/2008, Springer Berlin/Heidelberg, 2008, pp. 690-699. [pdf]

Project Members:


Related

Wiki: ExtrapolationPerformance
Wiki: Manual