Hello,

I have a question about the GAdaBoost implementation.

The AdaBoost algorithm computes a different distribution over the training examples in each round and passes it to the learner for training. Thus, the learner has to provide a training method that accepts weighted examples.
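To make the point concrete, here is a minimal sketch of the standard (discrete) AdaBoost loop I have in mind, not the GAdaBoost code itself. The `train(examples, labels, weights)` interface and the `stump_learner` helper are hypothetical names for illustration; the key part is that the learner is called with the per-example weights `w`:

```python
import math

def adaboost(train, examples, labels, rounds=10):
    """Sketch of discrete AdaBoost with example weights (hypothetical API).

    `train(examples, labels, weights)` must return a classifier
    h(x) -> {-1, +1} fitted to the *weighted* examples; this is the
    weighted training method the learner would need to provide.
    """
    n = len(examples)
    w = [1.0 / n] * n                      # D_1: uniform distribution
    ensemble = []
    for _ in range(rounds):
        h = train(examples, labels, w)
        # weighted error of this round's hypothesis under the current distribution
        err = sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
        if err == 0:                       # perfect hypothesis: keep it and stop
            ensemble.append((1.0, h))
            break
        if err >= 0.5:                     # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # reweight: misclassified examples gain weight, then renormalize
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, examples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    # weighted majority vote of the collected hypotheses
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

def stump_learner(xs, ys, w):
    """Toy weighted learner: a 1-D decision stump minimizing weighted error."""
    best = None
    for t in sorted(set(xs)):
        for s in (1, -1):
            err = sum(wi for wi, x, y in zip(w, xs, ys)
                      if (s if x >= t else -s) != y)
            if best is None or err < best[0]:
                best = (err, t, s)
    _, t, s = best
    return lambda x: s if x >= t else -s
```

For example, `adaboost(stump_learner, [0, 1, 2, 3, 4, 5], [-1, -1, -1, 1, 1, 1])` finds a separating stump in the first round; the point is only that `train` receives `w` each round.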

IMHO this is not implemented in GAdaBoost.

My question is whether this is a special version of the AdaBoost algorithm, or whether I did not understand it correctly.