-------------------------------------------------------------------------------
nunn - https://sourceforge.net/projects/nunn/
(c) Antonino Calderone <antonino.calderone@gmail.com> - 2015 - 2017
-------------------------------------------------------------------------------

This is a machine learning library implemented in C++.

Binaries for Windows were built with Microsoft Visual C++ 2015,
so you may need to install the Visual C++ 2015 Redistributable Packages.
To do this, search for "Visual C++ Redistributable Packages for Visual Studio 2015" 
or use the link https://www.microsoft.com/en-us/download/details.aspx?id=48145


-------------------------------------------------------------------------------
Features
-------------------------------------------------------------------------------
- Implements Perceptron, MLP, RMLP and Hopfield neural nets, plus the Q-Learning algorithm
- Supports fully connected multi-layer neural networks
- Easy to use and understand
- Easy to save and load entire objects
- Multi-platform
- Exports network topologies that can be drawn with Graphviz dot (http://www.graphviz.org/)
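
For instance, the Q-Learning feature follows the classic tabular update
Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)). A minimal
sketch of that rule on a toy 4-state chain (illustrative only, NOT the nunn
API; the state space, rewards and sweep schedule are invented for the example):

```python
# Minimal tabular Q-learning sketch (illustrative only; NOT the nunn API).
# A 4-state chain: moving "right" from state 0 eventually reaches the goal
# at state 3, which pays reward 1. For determinism we sweep every
# state/action pair instead of sampling episodes.
ALPHA, GAMMA = 0.5, 0.9
N_STATES, ACTIONS = 4, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic transition; reward 1.0 on entering the goal state."""
    s2 = min(s + 1, N_STATES - 1) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(100):                  # repeated sweeps converge to Q*
    for s in range(N_STATES - 1):     # the goal state is terminal
        for a in ACTIONS:
            s2, r = step(s, a)
            target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])  # Q-learning update

# "right" ends up preferred in every non-goal state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```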


The library package includes the following samples/tools:


-------------------------------------------------------------------------------
Nunn Topology -> Graphviz format converter (nunn_topo)
-------------------------------------------------------------------------------

Using this tool you can export neural network topologies and draw them 
using Graphviz dot.
dot draws directed graphs. It reads attributed graph text files and writes drawings,
either as graph files or in a graphics format such as GIF, PNG, SVG or PostScript
(which can be converted to PDF).


nunn_topo Usage:

nunn_topo
        [--version|-v]
        [--help|-h]
        [--save|-s <dot file name>]
        [--load|-l <net_description_file_name>]

Where:
--version or -v
        shows the program version
--help or -h
        generates just this 'Usage' text
--save or -s
        save dot file
--load or -l
        load net data from file
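

The dot format itself is plain text. The exact output of nunn_topo is not
shown here, but a sketch that emits a plausible dot description of a small
fully connected 2-3-1 net (node names and styling are invented for the
example) looks like this:

```python
# Sketch: emit a Graphviz dot description of a small fully connected 2-3-1
# net. The exact dot produced by nunn_topo may differ; this only illustrates
# the kind of attributed graph text that dot consumes.
def mlp_to_dot(layer_sizes):
    lines = ["digraph nn {", "  rankdir=LR;"]
    # name nodes layer by layer: L0_0, L0_1, ...
    for l, size in enumerate(layer_sizes):
        for i in range(size):
            lines.append(f"  L{l}_{i} [shape=circle];")
    # fully connect each pair of consecutive layers
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                lines.append(f"  L{l}_{i} -> L{l + 1}_{j};")
    lines.append("}")
    return "\n".join(lines)

print(mlp_to_dot([2, 3, 1]))  # pipe into: dot -Tpng -o net.png
```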


-------------------------------------------------------------------------------
MNIST Test Demo (mnist_test)
-------------------------------------------------------------------------------

This demo trains and tests an (R)MLP neural network on the MNIST data set,
which contains 60,000 + 10,000 scanned images of handwritten digits with
their correct classifications.
The images are greyscale and 28 by 28 pixels in size.
The first part of 60,000 images is used as training data.
The second part of 10,000 images is used as test data.
The test data was taken from a different set of people than the original training data.
Each training input is treated as a 28×28 = 784-dimensional vector.
Each entry in the vector represents the grey value of a single pixel in the image.
The corresponding desired output is a 10-dimensional vector.

You may obtain more information about MNIST at http://yann.lecun.com/exdb/mnist/
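
The idx files named below follow the IDX format documented on that page: a
big-endian integer header (magic number 2051 for images, 2049 for labels)
followed by the raw grey-value bytes. A minimal parsing sketch, independent
of the nunn code, using a tiny synthetic in-memory file in place of a real
28x28 set:

```python
import struct

# Sketch: parse the MNIST idx3-ubyte image format (big-endian header, then
# raw grey-value bytes), flattening each image into one vector, as the
# 784-dimensional training input described above.
def read_idx_images(data: bytes):
    magic, count, rows, cols = struct.unpack(">IIII", data[:16])
    assert magic == 2051, "not an idx3-ubyte image file"
    pixels = data[16:]
    size = rows * cols
    # each image becomes one rows*cols-dimensional vector of grey values 0..255
    return [list(pixels[i * size:(i + 1) * size]) for i in range(count)]

# Tiny synthetic file: one 2x2 "image" standing in for a real 28x28 one.
fake = struct.pack(">IIII", 2051, 1, 2, 2) + bytes([0, 128, 255, 64])
print(read_idx_images(fake))  # [[0, 128, 255, 64]]
```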

-------------------------------------------------------------------------------
mnist_test - Usage:

mnist_test
    [--version|-v]
    [--help|-h]
    [--training_files_path|-p <path>]
    [--training_imgsfn|-tri <filename>] (default train-images.idx3-ubyte)
    [--training_lblsfn|-trl <filename>] (default train-labels.idx1-ubyte)
    [--test_imgsfn|-ti <filename>] (default t10k-images.idx3-ubyte)
    [--test_lblsfn|-tl <filename>] (default t10k-labels.idx1-ubyte)
    [--save|-s <net_description_file_name>]
    [--load|-l <net_description_file_name>]
    [--skip_training|-n]
    [--use_cross_entropy|-c]
    [--learning_rate|-r <rate>]
    [--momentum|-m <value>]
    [--epoch_cnt|-e <count>]
    [[--hidden_layer|-hl <size> [--hidden_layer|-hl <size>] ... ]]

Where:
--version or -v
        shows the program version
--help or -h
        generates just this 'Usage' text
--training_files_path or -p
        set the path of the training/test data files
--training_imgsfn or -tri
        set training images file name
--training_lblsfn or -trl
        set training labels file name
--test_imgsfn or -ti
        set test images file name
--test_lblsfn or -tl
        set test labels file name
--save or -s
        save net data to file
--load or -l
        load net data from file
--skip_training or -n
        skip net training
--use_cross_entropy or -c
        use the cross entropy cost function instead of MSE
--learning_rate or -r
        set learning rate (default 0.10)
--epoch_cnt or -e
        set epoch count (default 10)	 
--momentum or -m
        set momentum (default 0.5)
--hidden_layer or -hl
        set hidden layer size (n. of neurons, default 300)
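
The --use_cross_entropy flag swaps the MSE cost for cross entropy. As a
rough sketch of the two cost functions (independent of the nunn
implementation), compared on a 10-dimensional output against a one-hot
target, as used for the 10 MNIST digit classes:

```python
import math

# Sketch of the two selectable cost functions: quadratic (MSE) versus
# cross-entropy, evaluated on a 10-dimensional output vector against a
# one-hot target. Numbers are made up for illustration.
def mse(target, output):
    return 0.5 * sum((t - o) ** 2 for t, o in zip(target, output))

def cross_entropy(target, output):
    return -sum(t * math.log(o) + (1 - t) * math.log(1 - o)
                for t, o in zip(target, output))

target = [0.0] * 10
target[3] = 1.0                  # the digit "3"
output = [0.1] * 10
output[3] = 0.9                  # a fairly confident, mostly correct net

print(round(mse(target, output), 4))            # 0.05
print(round(cross_entropy(target, output), 4))  # 1.0536
```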

-------------------------------------------------------------------------------
Example:

# mnist_test -e 60 -r 0.40 -hl 135 -s nn_135hl_040lr.net

-------------------------------------------------------------------------------
Output:

NN hidden neurons L1       : 135
Net Learning rate          : 0.4
Training labels : train-labels.idx1-ubyte
Training images : train-images.idx3-ubyte
Test labels file: t10k-labels.idx1-ubyte
Test images file: t10k-images.idx3-ubyte
Learning epoch 1 of 60
Completed 100%
Error rate   : 6.65%
Success rate : 93.35%
BER          : 6.65%
Epoch BER    : 1
Learning epoch 2 of 60
Completed 22.8%

...

BER          : 2.31%



-------------------------------------------------------------------------------
Handwritten Digit OCR Demo (ocr_test)
-------------------------------------------------------------------------------
This is an interactive demo which uses an MNIST-trained neural network
created with the nunn library.
The nunn status files (.net) it loads are created by the mnist_test application.



-------------------------------------------------------------------------------
TicTacToe Demo (tictactoe)
-------------------------------------------------------------------------------
A basic Tic Tac Toe game that uses neural networks.

tictactoe - Usage:
    [--version|-v]
    [--help|-h]
    [--save|-s <net_description_file_name>]
    [--load|-l <net_description_file_name>]
    [--skip_training|-n]
    [--learning_rate|-r <rate>]
    [--epoch_cnt|-e <count>]
    [--stop_on_err_tr|-x <error rate>]
    [[--hidden_layer|-hl <size> [--hidden_layer|-hl <size>] ... ]]

Where:
--version or -v
        shows the program version
--help or -h
        generates just this 'Usage' text
--save or -s
        save net data to file
--load or -l
        load net data from file
--skip_training or -n
        skip net training
--learning_rate or -r
        set learning rate (default 0.3)
--epoch_cnt or -e
        set epoch count (default 100000)
--stop_on_err_tr or -x
        set error rate threshold (default 0.1)
--hidden_layer or -hl
        set hidden layer size (n. of neurons, default 30 )


-------------------------------------------------------------------------------
Example: Running the program, loading a trained NN from the file
         tictactoe.net and skipping the training stage (-n)

>tictactoe.exe -l tictactoe.net -n
Inputs                     : 10
NN hidden neurons L1       : 50
NN hidden neurons L2       : 30
NN hidden neurons L3       : 20
Outputs                    : 9
Net Learning rate  ( LR )  : 0.3
Net Momentum       ( M )   : 0.5
MSE Threshold      ( T )   : 0.1
Neuron 2 -> 0.1%
Neuron 8 -> 0.1%
Neuron 5 -> 99.9%
-------------
|   |   |   |
|   |   |   |
|---|---|---|
|   |   |   |
|   | O |   |
|---|---|---|
|   |   |   |
|   |   |   |
-------------

-------------
|   |   |   |
| 1 | 2 | 3 |
|---|---|---|
|   |   |   |
| 4 | O | 6 |
|---|---|---|
|   |   |   |
| 7 | 8 | 9 |
-------------

Please, give me a number within the range [1..9]: 1

...



-------------------------------------------------------------------------------
TicTacToe Demo for Windows (winttt)
-------------------------------------------------------------------------------

Winttt is an interactive Tic Tac Toe version for Windows which may be
dynamically trained or may use pre-trained neural networks, including nets
created with the tictactoe program.



-------------------------------------------------------------------------------
XOR Problem sample (xor_test)
-------------------------------------------------------------------------------
A typical example of a non-linearly separable function is XOR.
Implementing the XOR function is a classic problem in neural networks.
This function takes two input arguments with values in [0,1]
and returns one output in [0,1], as specified in the following table:

  x1 | x2 |  y
 ----+----+----
   0 |  0 |  0
   0 |  1 |  1
   1 |  0 |  1
   1 |  1 |  0
 
XOR computes the logical exclusive-or, which yields 1 if and
only if the two inputs have different values.
This classification cannot be solved by linear separation,
but it is very easy for a neural network to find a non-linear solution.
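
One way to see why a hidden layer settles the problem is a hand-wired 2-2-1
network with step activations (an illustrative sketch, not the solution the
xor_test sample learns; the weights are chosen by hand):

```python
# Sketch: a hand-wired 2-2-1 network with step activations that computes
# XOR, showing why one hidden layer suffices for this non-linearly
# separable function. Hidden unit h1 fires for OR, h2 for AND; the output
# fires for "OR but not AND".
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)          # AND(x1, x2)
    return step(h1 - 2 * h2 - 0.5)    # OR and not AND

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_net(x1, x2))  # matches the truth table above
```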
 


-------------------------------------------------------------------------------
Perceptron AND sample (and_test)
-------------------------------------------------------------------------------

The AND function implemented using a perceptron.
A typical example of a linearly separable function is AND; this type
of function can be learned by a single-perceptron neural net.

AND takes two input arguments with values in [0,1]
and returns one output in [0,1], as specified in the following table:

  x1 | x2 |  y
 ----+----+----
   0 |  0 |  0
   0 |  1 |  0
   1 |  0 |  0
   1 |  1 |  1

AND computes the logical AND, which yields 1 if and
only if both inputs are 1.
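
As an illustrative sketch (not the and_test implementation), the classic
perceptron learning rule picks up this table in a handful of epochs, since
AND is linearly separable:

```python
# Sketch of the perceptron learning rule on the AND truth table above.
# Starting from zero weights with learning rate 1, the rule converges in a
# few epochs because AND is linearly separable.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for epoch in range(20):
    errors = 0
    for (x1, x2), target in DATA:
        err = target - predict(x1, x2)
        if err:
            # perceptron update: w <- w + lr * err * x  (lr = 1)
            w1 += err * x1
            w2 += err * x2
            b += err
            errors += 1
    if errors == 0:       # stop once a full epoch makes no mistakes
        break

print([predict(x1, x2) for (x1, x2), _ in DATA])  # [0, 0, 0, 1]
```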


-------------------------------------------------------------------------------
Hopfield Test (hopfield_test)
-------------------------------------------------------------------------------

Hopfield networks may be used to solve the recall problem: matching cues
from an input pattern to an associated pre-learned pattern.
They are a form of recurrent artificial neural network that serves as a
content-addressable memory system with binary threshold nodes.

This test shows a use case of a Hopfield net as an auto-associative memory.
In this example we recognize a 100-pixel picture with a 100-neuron neural
network.

A star '*' pixel is represented by 1 and an empty ' ' pixel by -1.
In this way we obtain test vectors that have 100 components.

When the test vector presented to the network's input is identical to one
of the pattern vectors, the net does not change its state.
It also recognizes test vectors that are similar to the stored patterns.

Sometimes we can see another feature of the Hopfield network:
it remembers relations between neighboring pixels rather than their values.
As a result of the network activity we then get a picture that is the
reversed pattern.
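
As a sketch of the storage and recall just described (independent of the
nunn implementation), a tiny Hopfield net with Hebbian outer-product
weights, using a 10-component vector in place of the 100-pixel images:

```python
# Sketch: Hebbian storage and recall in a tiny Hopfield net. Pixels are +1
# ('*') and -1 (' ') as above; a 10-component vector stands in for the
# 100-pixel images. One stored pattern, one corrupted "pixel", and a single
# synchronous update step recovers the original.
def sign(z):
    return 1 if z >= 0 else -1

pattern = [1, 1, -1, -1, 1, -1, 1, 1, -1, 1]
n = len(pattern)

# Hebbian outer-product weights with zero diagonal: w_ij = p_i * p_j
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

probe = list(pattern)
probe[4] = -probe[4]              # flip one "pixel" to corrupt the cue

recalled = [sign(sum(W[i][j] * probe[j] for j in range(n))) for i in range(n)]
print(recalled == pattern)  # True
```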

The net learns the following images:
+----------+ +----------+ +----------+ +----------+ +----------+
|   ***    | |**********| |*****     | |**********| |**********|
|  ****    | |**********| |*****     | |**********| |*        *|
| *****    | |**********| |*****     | |**      **| |* ****** *|
|   ***    | |**********| |*****     | |**      **| |* *    * *|
|   ***    | |**********| |*****     | |**      **| |* * ** * *|
|   ***    | |          | |     *****| |**********| |* * ** * *|
|   ***    | |          | |     *****| |**********| |* *    * *|
|   ***    | |          | |     *****| |**      **| |* ****** *|
| *******  | |          | |     *****| |**      **| |*        *|
| *******  | |          | |     *****| |**      **| |**********|
+----------+ +----------+ +----------+ +----------+ +----------+

Then for each test-image it recalls the most similar of those learnt, 
as shown in the following example:

 THIS IMAGE      RECALLS
+----------+  +----------+
|   ***    |  |   ***    |
|   ***    |  |  ****    |
|   ***    |  | *****    |
|   ***    |  |   ***    |
|   ***    |  |   ***    |
|   ***    |  |   ***    |
|   ***    |  |   ***    |
|   ***    |  |   ***    |
|   ***    |  | *******  |
|   ***    |  | *******  |
+----------+  +----------+


 THIS IMAGE      RECALLS
+----------+  +----------+
|**********|  |**********|
|**********|  |**********|
|          |  |**********|
|          |  |**********|
|          |  |**********|
|          |  |          |
|          |  |          |
|          |  |          |
|          |  |          |
|          |  |          |
+----------+  +----------+


 THIS IMAGE      RECALLS
+----------+  +----------+
|          |  |*****     |
|          |  |*****     |
|*****     |  |*****     |
|*****     |  |*****     |
|*****     |  |*****     |
|     *****|  |     *****|
|     *****|  |     *****|
|     *****|  |     *****|
|          |  |     *****|
|          |  |     *****|
+----------+  +----------+


 THIS IMAGE      RECALLS
+----------+  +----------+
|**********|  |**********|
|*        *|  |**********|
|*        *|  |**      **|
|*        *|  |**      **|
|*        *|  |**      **|
|**********|  |**********|
|**********|  |**********|
|*        *|  |**      **|
|*        *|  |**      **|
|*        *|  |**      **|
+----------+  +----------+


 THIS IMAGE      RECALLS
+----------+  +----------+
|**********|  |**********|
|*        *|  |*        *|
|* ****** *|  |* ****** *|
|* *    * *|  |* *    * *|
|* *    * *|  |* * ** * *|
|* *    * *|  |* * ** * *|
|* *    * *|  |* *    * *|
|* ****** *|  |* ****** *|
|*        *|  |*        *|
|**********|  |**********|
+----------+  +----------+
Source: README, updated 2017-01-28