Hello,
I have a question about the GAdaBoost implementation.
The AdaBoost algorithm computes a different distribution over the training
examples in each round and gives it to the learner for training. Thus, the
learner has to provide a training method that supports weighted examples.
IMHO this is not implemented in GAdaBoost.
My question is whether this is a special version of the AdaBoost algorithm,
or whether I did not understand it correctly.
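For reference, the distribution update I mean is the standard AdaBoost rule

    D_{t+1}(i) = \frac{D_t(i)\,\exp(-\alpha_t\, y_i\, h_t(x_i))}{Z_t}

where h_t is the weak hypothesis from round t, \alpha_t is its coefficient,
y_i \in \{-1, +1\} is the label of example i, and Z_t is a normalizer chosen
so that D_{t+1} sums to one.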
You are right. I approximated example weighting by resampling the training
set. That is, I draw each pattern with probability proportional to its weight.
This is not technically equivalent to AdaBoost; it is just an approximation,
but it works with any algorithm, not just those that support weighted
patterns. Perhaps I should rename it to avoid confusion. Any suggestions? How
about GeneralizedAdaBoost? (I'll bet someone has already published something
about this approach, but I haven't seen it.)
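In outline, the resampling step is just weighted sampling with replacement.
Here is a minimal C++ sketch, assuming a plain vector of weights; the
function name and structure are illustrative, not the actual library code:

    #include <cstddef>
    #include <random>
    #include <vector>

    // Draw a resampled set of row indices in which each pattern is
    // selected with probability proportional to its boosting weight.
    std::vector<size_t> resampleByWeight(const std::vector<double>& weights,
                                         std::mt19937& rng)
    {
        // discrete_distribution normalizes internally, so index i is
        // drawn with probability weights[i] / sum(weights).
        std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
        std::vector<size_t> sample(weights.size());
        for(size_t i = 0; i < sample.size(); i++)
            sample[i] = dist(rng); // sampling with replacement
        return sample; // train the weak learner on these rows
    }

The weak learner is then trained on the selected rows as if they were an
ordinary, unweighted training set.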
Thank you for answering. This solution sounds interesting. An evaluation of
the difference in error between the two variants would be interesting here.
In a short web search, I didn't find an AdaBoost variant with resampling.
I propose to simply name it ResamplingAdaBoost. GeneralizedAdaBoost IMHO
suggests that you just have to set a parameter correctly to get the original
AdaBoost variant.
That sounds good. I'll rename it to ResamplingAdaBoost.
Hello again,
I want to report a small mistake:
During deserialization, GResamplingAdaBoost isn't recognized.
Although you changed the name of the class, GResamplingAdaBoost is still
sorted among the classes whose names come before 'j' in GLearner.cpp,
line 1395. A small re-sort would fix it.
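For illustration only; I don't have the exact code in front of me, so the
names and split point below are guessed, but the failure mode is presumably
of this shape:

    #include <cstring>

    // Hypothetical sketch, not the actual GLearner.cpp code: a
    // deserializer that splits the class-name checks alphabetically.
    // "GResamplingAdaBoost" sorts after "GJ" ('R' > 'J'), so its check
    // must sit in the second branch; if it stays where the old GAdaBoost
    // check was, it is never reached and the name goes unrecognized.
    bool isKnownClass(const char* szName)
    {
        if(strcmp(szName, "GJ") < 0)
        {
            // ...checks for classes that sort before "GJ"...
            return strcmp(szName, "GAdaBoost") == 0;
        }
        // ...checks for classes that sort after "GJ"...
        return strcmp(szName, "GResamplingAdaBoost") == 0;
    }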
Greetings
Thanks for catching this. I have fixed it now.