Dear all, could you publish the results of running some algorithms (for example, RankBoost, AdaRank, and LambdaMART) on some datasets (for example, MQ2008 and OHSUMED) on the website? That would make it more convenient to verify and compare our results against the online ones.
Yours sincerely,
LEEZHONG
Same question as Leezhong! I have tested ListNet on the LETOR datasets, and my results seem disappointing compared with the LETOR baselines; some MAP scores are lower by as much as 0.1.
Thank you!
If you want reference numbers on the LETOR datasets, use the numbers on the LETOR website. I don't think it's a good idea for us to publish another set of results.
In RankLib, each algorithm is implemented as described in its original paper. Since different implementations of the same algorithm usually produce different results, you shouldn't expect the results from RankLib to match those published on the LETOR site. If you really need to reproduce the latter, contact the team behind LETOR and ask for their implementation.
As for anything based on a neural network, tune at least the learning rate. It makes a huge difference.
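For instance, a learning-rate sweep with RankLib's ListNet could be sketched as below. This is only a sketch: the training/validation file names are placeholders for your own data, and the ranker id (7 for ListNet) and the `-lr`/`-epoch` flags are assumed from RankLib's command-line usage.

```shell
#!/bin/sh
# Sketch: sweep the ListNet learning rate and compare validation NDCG@10.
# train.txt / vali.txt are placeholders for your LETOR fold files.
for lr in 0.00001 0.0001 0.001 0.01; do
  echo "learning rate = $lr"
  java -jar RankLib.jar -train train.txt -validate vali.txt \
       -ranker 7 -epoch 1500 -lr "$lr" \
       -metric2t NDCG@10 -save "listnet_lr_${lr}.model"
done
```

Pick the model whose validation NDCG@10 is highest; with gradient-based rankers like ListNet and RankNet, a learning rate that is an order of magnitude off can easily account for a large gap against published baselines.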
MQ2008                         RankBoost  AdaRank  LambdaMART
NDCG@10 (fold1.training.txt)   0.5017     0.4863   0.5322
NDCG@10 (fold2.training.txt)   0.5529     0.5286   0.5656
Excuse me, do these results look right?
Yours sincerely,
LEEZHONG