Can't get my first own neural net to run

Help
Anonymous
2016-01-28
  • Anonymous

    Anonymous - 2016-01-28

    I managed to get the library compiled and ran the hello_ml example successfully :) I tried to get one step further by modifying the code so it fits my problem better. However, I wasn't able to run it without crashing. Basically, I only removed everything other than the neural network (that still compiled and ran fine). Then I changed the data fields (no continuous ones, only specified ones; only one label). I proceeded to fill the feat and lab variables (all set to 0 for starters). I changed test_features to 10 entries and predicted_labels to 1.

    However, when I try to run it I get this:
    Floating point exception (core dumped)

    Hooking into it with gdb and getting a backtrace results in this:

    Program received signal SIGFPE, Arithmetic exception.
    0x0000000000498c3e in GClasses::GNeuralNet::trainWithValidation (this=0x845c80, trainFeatures=..., trainLabels=..., validateFeatures=..., validateLabels=...) at GNeuralNet.cpp:573
    573             if(1.0 - dSumSquaredError / dBestError >= m_minImprovement) // This condition is designed such that if dSumSquaredError is NAN, it will break out of the loop
    (gdb) bt
    #0  0x0000000000498c3e in GClasses::GNeuralNet::trainWithValidation (this=0x845c80, trainFeatures=..., trainLabels=..., validateFeatures=..., validateLabels=...) at GNeuralNet.cpp:573
    #1  0x0000000000498356 in GClasses::GNeuralNet::trainInner (this=0x845c80, features=..., labels=...) at GNeuralNet.cpp:503
    #2  0x0000000000430d14 in GClasses::GSupervisedLearner::train (this=0x845c80, features=..., labels=...) at GLearner.cpp:484
    #3  0x0000000000435ed4 in GClasses::GLabelFilter::trainInner (this=0x8463a0, features=..., labels=...) at GLearner.cpp:1287
    #4  0x0000000000430d14 in GClasses::GSupervisedLearner::train (this=0x8463a0, features=..., labels=...) at GLearner.cpp:484
    #5  0x0000000000435701 in GClasses::GFeatureFilter::trainInner (this=0x846460, features=..., labels=...) at GLearner.cpp:1192
    #6  0x0000000000430d14 in GClasses::GSupervisedLearner::train (this=0x846460, features=..., labels=...) at GLearner.cpp:484
    #7  0x0000000000437676 in GClasses::GAutoFilter::trainInner (this=0x7fffffffdca0, features=..., labels=...) at GLearner.cpp:1573
    #8  0x0000000000430d14 in GClasses::GSupervisedLearner::train (this=0x7fffffffdca0, features=..., labels=...) at GLearner.cpp:484
    #9  0x000000000040517f in do_neural_network (features=..., labels=..., test_features=..., predicted_labels=...) at main.cpp:41
    #10 0x0000000000406824 in doit () at main.cpp:89
    #11 0x000000000040698e in main (argc=1, argv=0x7fffffffdf08) at main.cpp:101
    

    It seems that something is wrong with the values in feat and lab, which are passed to do_neural_network() and subsequently to af.train(features, labels).

    This is the furthest I can get with my programming skills, could someone please help me or at least give me a hint?

    Thanks a lot and best regards,
    Chris

    Please find my changed code below for reference:

    void doit()
    {
        // Define the feature attributes (or columns)
        vector<size_t> feature_values;
        for(int i=0; i<10; i++)
        {
            //add one data field for every port scanned
            feature_values.push_back(4); // state = { notscanned=0, open=1, closed=2, filtered=3 }
        }
    
        // Define the label attributes (or columns)
        vector<size_t> label_values;
        label_values.push_back(2); // alert = { no=0, yes=1 }
    
        // Make some contrived hard-coded training data
        GMatrix feat(feature_values);
        feat.newRows(12);
        GMatrix lab(label_values);
        lab.newRows(12);
        //For each port the state is set according to the types defined above
        //(all twelve training rows are all-zero for now)
        for(size_t i = 0; i < 12; i++)
        {
            for(size_t j = 0; j < 10; j++)
                feat[i][j] = 0;
            lab[i][0] = 0;
        }
    
        // Make a test vector
        GVec test_features(10);
        GVec predicted_labels(1);
        cout << "This demo trains a neural net with several fake port scan results for 10 ports and a decision if alert is raised or not.\n\n";
        test_features[0] = 0; test_features[1] = 2; test_features[2] = 0; test_features[3] = 0; test_features[4] = 0;
        test_features[5] = 0; test_features[6] = 0; test_features[7] = 0; test_features[8] = 0; test_features[9] = 0;
        cout << "Predicting alert for a given portscan vector.\n\n";
    
        // Use several models to make predictions
        cout.precision(4);
    
        do_neural_network(feat, lab, test_features, predicted_labels);
        cout << "The neural network predicts alert is " << (predicted_labels[0] == 0 ? "not raised" : "raised") <<  ".\n";
    }
    
  • Mike Gashler

    Mike Gashler - 2016-01-28

    It looks like there is a bug that occurs when all of the labels are homogeneous. Thanks for finding it--I will fix it. For now, a simple work-around is to change at least one of your labels to a non-zero value.
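    For what it's worth, here is a small standalone C++ sketch (not the Waffles source; the names dBestError, dSumSquaredError, and the 0.002 threshold are only illustrative assumptions) of why homogeneous labels can zero out the "best error" that the relative-improvement test at GNeuralNet.cpp:573 divides by:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main()
    {
        // All twelve labels are identical, as in the post above.
        std::vector<double> labels(12, 0.0);

        // A constant predictor that always outputs that single label value
        // fits perfectly, so the initial sum-squared validation error is 0.
        double dBestError = 0.0;
        for (double y : labels)
            dBestError += (y - 0.0) * (y - 0.0); // residual is always 0

        // The next epoch is still perfect, so the improvement ratio is 0/0.
        double dSumSquaredError = 0.0;
        double ratio = dSumSquaredError / dBestError;

        // Under default IEEE 754 semantics this yields NaN; in an environment
        // that traps floating-point exceptions, the same division raises
        // SIGFPE instead, which would match the crash reported above.
        assert(std::isnan(ratio));

        // NaN fails every comparison, so a stopping test of this shape can
        // never evaluate to true with homogeneous labels.
        const double minImprovement = 0.002; // illustrative threshold
        assert(!(1.0 - ratio >= minImprovement));

        std::cout << "homogeneous labels -> 0/0 in the stopping test\n";
        return 0;
    }
    ```

    That is why changing even one label to a non-zero value avoids the problem: the constant predictor no longer achieves zero error, so the divisor stays positive.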

  • Anonymous

    Anonymous - 2016-01-28

    Thanks a lot for your quick answer! I really should create an account. Thanks also for all your work on waffles; it's fun to work with :)
