Surya Yehezki - 2017-04-21

Hello,

Currently, I am using MEKA to classify a multi-label dataset with the SMO algorithm. To choose the best gamma and penalty (C), I use GridSearch. Below are the options that I used.

Options [-W, weka.classifiers.meta.GridSearch, --, -E, ACC, -y-property, kernel.gamma, -y-min, -3.0, -y-max, 4.0, -y-step, 1.0, -y-base, 10.0, -y-expression, pow(BASE,I), -x-property, C, -x-min, -3.0, -x-max, 4.0, -x-step, 1.0, -x-base, 10.0, -x-expression, pow(BASE,I), -sample-size, 100.0, -traversal, ROW-WISE, -log-file, C:\Users\Dell\Desktop\meka-release-1.9.1-SNAPSHOT, -num-slots, 1, -S, 1, -W, weka.classifiers.functions.SMO, --, -C, 10000.0, -L, 0.001, -P, 1.0E-12, -N, 0, -V, -1, -W, 1, -K, weka.classifiers.functions.supportVector.RBFKernel -G 0.1 -C 250007, -calibrator, weka.classifiers.functions.Logistic -R 1.0E-8 -M -1 -num-decimal-places 4]
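For clarity, the -x/-y expressions above (pow(BASE, I) with BASE = 10 and I running from -3.0 to 4.0 in steps of 1.0) expand each axis to the same eight values, so ROW-WISE traversal evaluates 64 (C, gamma) pairs. A quick stand-alone sketch of that expansion:

```java
public class GridPoints {
    public static void main(String[] args) {
        // pow(BASE, I) with BASE = 10.0 and I from -3.0 to 4.0 step 1.0,
        // matching the -x-min/-x-max/-x-step and -y-* options above
        double base = 10.0;
        int count = 0;
        for (double i = -3.0; i <= 4.0; i += 1.0) {
            System.out.println(Math.pow(base, i));
            count++;
        }
        // 8 values per axis, so GridSearch tries 8 * 8 = 64 (C, gamma) pairs
        System.out.println(count + " values per axis");
    }
}
```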

From the result, I cannot see the best gamma and penalty chosen by GridSearch. Does anybody know how to get that output?
Thank you
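For reference, here is a minimal Java sketch of what I am trying to retrieve (the dataset file name is a placeholder, and I am assuming GridSearch's toString()/getBestClassifier() report the selected point after building; this may differ between WEKA versions):

```java
import weka.classifiers.meta.GridSearch;
import weka.core.Instances;
import weka.core.Utils;
import weka.core.converters.ConverterUtils.DataSource;

public class GridSearchBestParams {
    public static void main(String[] args) throws Exception {
        // hypothetical file name; replace with the actual dataset
        Instances data = DataSource.read("dataset.arff");
        data.setClassIndex(data.numAttributes() - 1);

        GridSearch grid = new GridSearch();
        // same grid as the options above: C and kernel.gamma in 10^-3 .. 10^4
        grid.setOptions(Utils.splitOptions(
            "-E ACC "
          + "-y-property kernel.gamma -y-min -3.0 -y-max 4.0 -y-step 1.0 "
          + "-y-base 10.0 -y-expression pow(BASE,I) "
          + "-x-property C -x-min -3.0 -x-max 4.0 -x-step 1.0 "
          + "-x-base 10.0 -x-expression pow(BASE,I) "
          + "-W weka.classifiers.functions.SMO "
          + "-- -K weka.classifiers.functions.supportVector.RBFKernel"));

        grid.buildClassifier(data);

        // toString() on the built GridSearch should include the best point found;
        // getBestClassifier() (if available in this version) is the SMO set up
        // with the winning C and gamma
        System.out.println(grid);
        System.out.println(Utils.toCommandLine(grid.getBestClassifier()));
    }
}
```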

Here is the result that I got.
== Evaluation Info

Classifier meka.classifiers.multilabel.LC
Options [-W, weka.classifiers.meta.GridSearch, --, -E, ACC, -y-property, kernel.gamma, -y-min, -3.0, -y-max, 4.0, -y-step, 1.0, -y-base, 10.0, -y-expression, pow(BASE,I), -x-property, C, -x-min, -3.0, -x-max, 4.0, -x-step, 1.0, -x-base, 10.0, -x-expression, pow(BASE,I), -sample-size, 100.0, -traversal, ROW-WISE, -log-file, C:\Users\Dell\Desktop\meka-release-1.9.1-SNAPSHOT, -num-slots, 1, -S, 1, -W, weka.classifiers.functions.SMO, --, -C, 10000.0, -L, 0.001, -P, 1.0E-12, -N, 0, -V, -1, -W, 1, -K, weka.classifiers.functions.supportVector.RBFKernel -G 0.1 -C 250007, -calibrator, weka.classifiers.functions.Logistic -R 1.0E-8 -M -1 -num-decimal-places 4]
Additional Info K = 200, N = 1542
Dataset MatriksFullImbalanced
Number of labels (L) 15
Type ML
Threshold 1.0
Verbosity 7

== Predictive Performance

Number of test instances (N)
Accuracy 0.859
Jaccard index 0.859
Hamming score 0.969
Exact match 0.828
Jaccard distance 0.141
Hamming loss 0.031
ZeroOne loss 0.172
Harmonic score 0.891
One error 0.125
Rank loss 0.058
Avg precision 0.871
Log Loss (lim. L) 0.084
Log Loss (lim. D) 0.202
Micro Precision 0.907
Micro Recall 0.937
Macro Precision 0.91
Macro Recall 0.946
F1 (micro averaged) 0.922
F1 (macro averaged by example) 0.869
F1 (macro averaged by label) 0.927
AUPRC (macro averaged) 0.873
AUROC (macro averaged) 0.961
Curve Data
Macro Curve Data
Micro Curve Data
Label indices [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ]
Accuracy (per label) [ 0.965 0.941 0.934 0.955 0.967 0.962 0.968 0.947 0.992 0.985 0.979 0.961 0.997 0.982 0.998 ]
Harmonic (per label) [ 0.955 0.940 0.934 0.945 0.952 0.964 0.960 0.937 0.985 0.983 0.949 0.950 0.987 0.963 0.999 ]
Precision (per label) [ 0.891 0.915 0.868 0.938 0.901 0.884 0.947 0.855 0.911 0.855 0.933 0.937 0.975 0.917 0.917 ]
Recall (per label) [ 0.939 0.933 0.934 0.920 0.929 0.968 0.942 0.919 0.976 0.981 0.912 0.927 0.975 0.939 1.000 ]
Empty labelvectors (predicted) 0.008
Label cardinality (predicted) 3.032
Levenshtein distance 0.029
Label cardinality (difference) -0.095
avg. relevance (test set) [ 0.198 0.385 0.319 0.320 0.192 0.237 0.287 0.224 0.063 0.082 0.137 0.292 0.060 0.124 0.017 ]
avg. relevance (predicted) [ 0.208 0.393 0.343 0.314 0.198 0.260 0.285 0.240 0.068 0.094 0.134 0.289 0.060 0.127 0.018 ]
avg. relevance (difference) [ -0.011 -0.008 -0.024 0.006 -0.006 -0.023 0.002 -0.017 -0.005 -0.012 0.003 0.003 0.000 -0.003 -0.002 ]