Abstract
How to directly optimize ranking metrics such as Normalized Discounted Cumulative Gain (NDCG) is an interesting but challenging problem, because ranking metrics are either flat or discontinuous everywhere. Among existing approaches, LambdaRank is a novel algorithm that incorporates ranking metrics into its learning procedure. Though empirically effective, it still lacks theoretical justification. For example, what is the underlying loss that LambdaRank optimizes? Because of this, it is unclear whether LambdaRank will always converge. In this paper, we present a well-defined loss for LambdaRank in a probabilistic framework and show that LambdaRank is a special configuration in our framework. This framework, which we call LambdaLoss, provides theoretical justification for LambdaRank. Furthermore, we propose a few more metric-driven loss functions in our LambdaLoss framework. Our loss functions have a clear connection to ranking metrics and can be optimized efficiently within our framework. Experiments on three publicly available data sets show that our methods significantly outperform state-of-the-art learning-to-rank algorithms. This confirms both the theoretical soundness and the practical effectiveness of the LambdaLoss framework.
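To make the abstract's opening claim concrete, the sketch below computes NDCG and shows why it is flat or discontinuous in the model scores: perturbing scores without changing the induced ordering leaves NDCG unchanged, while a rank swap causes a jump. This is a minimal illustration only; the function names and the gain/discount choices (2^rel − 1 gain, log2 position discount) are standard conventions, not code from the paper.

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain: gain 2^rel - 1, discounted by log2(rank + 1).
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(scores, relevances):
    # Rank items by descending model score, then normalize DCG by the ideal DCG.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked_rels = [relevances[i] for i in order]
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0

# Perturbing a score without changing the ordering leaves NDCG flat;
# swapping two items' ranks produces a discontinuous jump.
print(ndcg([3.0, 2.0, 1.0], [2, 1, 0]))  # ideal ordering -> 1.0
print(ndcg([3.1, 2.0, 1.0], [2, 1, 0]))  # same ordering  -> still 1.0
print(ndcg([1.0, 2.0, 3.0], [2, 1, 0]))  # reversed ordering -> drops below 1.0
```

Because the metric's value depends only on the ordering, its gradient with respect to the scores is zero almost everywhere, which is why LambdaRank resorts to metric-weighted gradients rather than differentiating NDCG directly.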
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Li, Cheng; Bendersky, Michael; Najork, Marc; Wang, Xuanhui; and Golbandi, Nadav, "LambdaLoss: Metric-Driven Loss for Learning-to-Rank", Technical Disclosure Commons (May 31, 2018)
https://www.tdcommons.org/dpubs_series/1216