I have a question about the GAdaBoost implementation.
The AdaBoost algorithm computes different distributions over the training
examples and gives them to the Learner for its training. Thus, the Learner has
to provide a training method that accepts weighted examples.
IMHO this is not implemented in GAdaBoost.
My question is whether this is a special version of the AdaBoost algorithm, or
whether I did not understand it correctly.
You are right. I approximated weighted patterns by resampling the training
set. That is, I draw each pattern with probability proportional to its weight.
This is not technically equivalent to AdaBoost, it's just an approximation,
but it works with any algorithm, not just those that support weighted
patterns. Perhaps I should rename it to remove confusion. Any suggestions? How
about GeneralizedAdaBoost? (I'll bet someone has already published something
about this approach, but I haven't seen it.)
Thank you for answering. This solution sounds interesting. An evaluation of
the difference in error rates between the two approaches would be worthwhile here.
In a short web search, I didn't find an AdaBoost variant with resampling.
I propose simply naming it ResamplingAdaBoost. GeneralizedAdaBoost IMHO
suggests that you just have to set a parameter correctly to get the original
algorithm.
That sounds good. I'll rename it to ResamplingAdaBoost.
I want to report a small mistake:
During deserialization, GResamplingAdaBoost isn't recognized.
Although you changed the name of the class, GResamplingAdaBoost is still
sorted among the classes whose names start with letters before 'j' in
GLearner.cpp line 1395. A small re-sort would fix it.
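To illustrate why the sort order matters, here is a minimal hypothetical sketch of an alphabetically split dispatch chain (the pivot string, class names, and return values are illustrative, not the actual GLearner.cpp code). An entry filed in the wrong half of the alphabet is simply never compared:

```cpp
#include <cstring>

// Hypothetical sketch: a deserializer that splits the class-name space
// at "GK" to shorten its if/else chain. Each class must be listed in
// the branch matching its sort order, or it will never be matched.
const char* deserialize(const char* szClass)
{
	if(strcmp(szClass, "GK") < 0)
	{
		// Classes that sort before "GK" belong here.
		if(strcmp(szClass, "GBag") == 0)
			return "made a GBag";
		// A GResamplingAdaBoost check left here (its old position
		// as "GAdaBoost") would be unreachable, because names
		// starting with "GR" never enter this branch.
	}
	else
	{
		// Classes that sort at or after "GK" belong here.
		if(strcmp(szClass, "GResamplingAdaBoost") == 0)
			return "made a GResamplingAdaBoost";
	}
	return nullptr; // unrecognized class name
}
```

Under this assumption, moving the renamed class's check into the branch that matches its new alphabetical position is exactly the "small resorting" described above.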
Thanks for catching this. I have fixed it now.