Multi-Label Learning to Rank through Multi-Objective Optimization

@article{Mahapatra2022MultiLabelLT,
  title={Multi-Label Learning to Rank through Multi-Objective Optimization},
  author={Debabrata Mahapatra and Chaosheng Dong and Yetian Chen and Deqiang Meng and Michinari Momma},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.03060}
}
The Learning to Rank (LTR) technique is ubiquitous in Information Retrieval systems nowadays, especially in search ranking applications. The query-item relevance labels typically used to train the ranking model are often noisy measurements of human behavior, e.g., product ratings for product search. The coarse measurements make the ground-truth ranking non-unique with respect to a single relevance criterion. To resolve the ambiguity, it is desirable to train a model using many relevance criteria…
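
The core idea, training a single ranker against several relevance criteria at once, can be illustrated with a linear-scalarization sketch: each criterion contributes its own listwise loss, and a preference vector weights those losses into one training objective. This is a minimal sketch rather than the paper's method (the paper develops a dedicated multi-objective optimization treatment); the softmax cross-entropy loss and all names below are illustrative assumptions.

```python
# Minimal sketch: scalarize per-criterion listwise losses with a
# preference vector w. Not the paper's algorithm; the loss choice
# and all names are illustrative.
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def listwise_loss(scores, labels):
    """Cross entropy between label-induced and score-induced distributions."""
    p_target = softmax(labels.astype(float))
    p_scores = softmax(scores)
    return -np.sum(p_target * np.log(p_scores + 1e-12))

def multi_label_loss(scores, label_matrix, w):
    """Weighted sum of one listwise loss per relevance criterion."""
    losses = np.array([listwise_loss(scores, y) for y in label_matrix])
    return float(w @ losses), losses

# Toy query with 4 items and 2 relevance criteria (e.g. ratings, clicks).
scores = np.array([2.0, 1.0, 0.5, -0.3])
labels = np.array([[3, 2, 1, 0],     # criterion 1: product ratings
                   [1, 3, 0, 2]])    # criterion 2: click feedback
total, per_label = multi_label_loss(scores, labels, w=np.array([0.6, 0.4]))
print(total, per_label)
```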


References


Multi-Objective Ranking Optimization for Product Search Using Stochastic Label Aggregation

It is demonstrated empirically over three datasets that MORO with stochastic label aggregation provides a family of ranking models that fully dominates the set of MORO models built using deterministic label aggregation.
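
The distinction the summary draws can be made concrete with a short sketch, assuming the obvious reading of the two aggregation modes: deterministic aggregation mixes label types into one fixed value per item, while stochastic aggregation samples a single label type per training example according to the mixture weights. Names are illustrative.

```python
# Hedged sketch of deterministic vs. stochastic label aggregation.
import random

def aggregate_deterministic(labels, weights):
    """Weighted average of the label values (one fixed target)."""
    return sum(w * y for w, y in zip(weights, labels))

def aggregate_stochastic(labels, weights, rng=random):
    """Sample one label type per training example (a randomized target)."""
    return rng.choices(labels, weights=weights, k=1)[0]

item_labels = [3.0, 1.0]   # e.g. [relevance grade, purchase grade]
mix = [0.7, 0.3]
print(aggregate_deterministic(item_labels, mix))   # always 2.4
print(aggregate_stochastic(item_labels, mix))      # 3.0 or 1.0 at random
```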

Learning to rank with multi-aspect relevance for vertical search

This paper proposes a novel formulation in which the relevance between a query and a document is assessed with respect to each aspect, forming the multi-aspect relevance, and studies two types of learning-based approaches to estimate the tradeoff between these relevance aspects.
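
One simple way to estimate such a tradeoff, sketched here as an assumption rather than the paper's exact approach, is to regress overall relevance judgments onto per-aspect grades and read the fitted weights as the tradeoff:

```python
# Illustrative only: fit aspect weights by least squares; the paper
# studies its own learning-based tradeoff estimation methods.
import numpy as np

# rows = query-document pairs; columns = aspect grades
# (e.g. text match, recency, popularity); y = overall judgments
X = np.array([[3, 1, 2],
              [1, 3, 0],
              [2, 2, 1],
              [0, 1, 3]], dtype=float)
y = np.array([2.6, 1.4, 1.9, 1.1])

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # unconstrained fit
w = np.clip(w, 0.0, None)                   # crude non-negativity fix
print("aspect weights:", w / w.sum())
```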

Multi-objective Relevance Ranking

The proposed Augmented Lagrangian based method systematically solves the multi-objective (MO) ranking problem in a constrained optimization framework, and its integration with a popular boosting algorithm is a novel contribution.
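
The constrained-optimization scheme can be pictured with a toy one-dimensional augmented Lagrangian loop: minimize a primary objective subject to a bound on a secondary one, alternating primal gradient steps with multiplier updates. This stand-in uses quadratics where the paper uses ranking objectives inside a boosting learner; everything below is illustrative.

```python
# Toy augmented Lagrangian: minimize f(x) = (x - 3)^2 s.t. g(x) = x - 1.5 <= 0.
df = lambda x: 2.0 * (x - 3.0)   # gradient of the primary objective
g  = lambda x: x - 1.5           # inequality constraint g(x) <= 0
dg = lambda x: 1.0               # gradient of the constraint

def augmented_lagrangian(x=0.0, lam=0.0, rho=2.0, lr=0.05, outer=20, inner=50):
    for _ in range(outer):
        for _ in range(inner):
            m = max(0.0, lam + rho * g(x))   # multiplier term when active
            x -= lr * (df(x) + m * dg(x))    # primal gradient step
        lam = max(0.0, lam + rho * g(x))     # dual (multiplier) update
    return x, lam

x_opt, lam_opt = augmented_lagrangian()
print(x_opt, lam_opt)   # x -> 1.5 (constraint boundary), lam -> 3
```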

Learning to rank with multiple objective functions

This work presents solutions to two open problems in learning to rank and shows how multiple measures can be combined into a single graded measure that can be learned, and investigates these ideas using LambdaMART, a state-of-the-art ranking algorithm.
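
As a toy illustration of folding two measures into one trainable graded target, the sketch below uses a base-K encoding in which relevance dominates and the second measure breaks ties within a grade; this encoding is an assumption chosen for clarity, not the paper's combination.

```python
# Hypothetical combined grade: relevance dominates, freshness breaks ties.
def combined_grade(relevance, freshness, k=5):
    """freshness must lie in 0..k-1 so it never outweighs relevance."""
    return relevance * k + freshness

for rel, fresh in [(3, 1), (3, 4), (2, 4)]:
    print(rel, fresh, "->", combined_grade(rel, fresh, k=5))
```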

An Alternative Cross Entropy Loss for Learning-to-Rank

This work proposes a cross entropy-based learning-to-rank loss function that is theoretically sound, is a convex bound on NDCG (a popular ranking metric), and is consistent with NDCG under learning scenarios common in information retrieval.
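
In the spirit of that loss (details and normalization may differ from the paper), a softmax cross entropy between the model's score distribution and a gain-based label distribution looks like this:

```python
# Sketch of a cross entropy ranking loss with targets proportional
# to the gain 2^y - 1; the exact form in the paper may differ.
import numpy as np

def xent_rank_loss(scores, labels):
    gains = 2.0 ** labels - 1.0
    target = gains / gains.sum()                 # label distribution
    z = scores - scores.max()                    # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -np.sum(target * log_probs)

print(xent_rank_loss(np.array([1.2, 0.3, -0.5]),
                     np.array([2.0, 1.0, 0.0])))
```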

Multi-objective ranking of comments on web

While Hodge decomposition produces a globally consistent ranking, a globally inconsistent component is also present and an active learning strategy is proposed for the reduction of this component.
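
The globally consistent component can be sketched as a least squares fit of item potentials to pairwise preferences; the residual of that fit is the inconsistent component the summary refers to. The data and setup below are illustrative.

```python
# Hedged sketch of Hodge-style ranking: fit potentials s with
# s_i - s_j ~ y_ij in least squares; the residual is the inconsistency.
import numpy as np

edges = [(0, 1, 1.0), (1, 2, 0.5), (0, 2, 2.0), (2, 0, -1.2)]  # (i, j, y_ij)
n = 3
A = np.zeros((len(edges), n))
b = np.zeros(len(edges))
for row, (i, j, y) in enumerate(edges):
    A[row, i], A[row, j], b[row] = 1.0, -1.0, y

s, *_ = np.linalg.lstsq(A, b, rcond=None)
print("potentials:", s - s.mean())                    # global ranking scores
print("inconsistency:", np.linalg.norm(A @ s - b))    # residual component
```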

Improving Relevance Quality in Product Search using High-Precision Query-Product Semantic Similarity

A high-precision cross-encoder BERT model is leveraged to measure semantic similarity between the customer query and products, and its effectiveness is demonstrated for three ranking applications where offline-generated scores can be used: as an offline metric for estimating relevance quality impact, as a re-ranking feature covering head/torso queries, and as a training objective for optimization.
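
The general pattern, scoring (query, product) pairs with a cross-encoder and using the scores offline, can be sketched with a public model as a stand-in; the paper's model is a proprietary high-precision BERT cross-encoder, so the model name and data below are assumptions.

```python
# Stand-in sketch: score query-product pairs with a public cross-encoder
# and re-rank; scores could likewise serve as an offline metric or a
# training target, as the paper describes.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "running shoes for flat feet"
products = ["stability running shoes with arch support",
            "phone charging cable 2m",
            "trail running shoes"]
scores = model.predict([(query, p) for p in products])

for score, product in sorted(zip(scores, products), reverse=True):
    print(f"{score:.3f}  {product}")
```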

Robust ranking models via risk-sensitive optimization

Experiments indicate that ranking models learned this way significantly decreased the worst ranking failures while maintaining strong average effectiveness on par with current state-of-the-art models.
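
A common way to encode that trade-off (one assumed formulation, not necessarily the paper's) is to penalize per-query losses against a baseline more heavily than gains are rewarded:

```python
# Illustrative risk-sensitive utility: losses vs. a baseline count
# (1 + alpha) times as much as wins, discouraging severe failures.
def risk_sensitive_utility(deltas, alpha=2.0):
    """deltas: per-query effectiveness change vs. a baseline ranker."""
    wins = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    return (wins - (1.0 + alpha) * losses) / len(deltas)

deltas = [0.05, 0.02, -0.04, 0.01, -0.01]    # e.g. per-query NDCG deltas
print(risk_sensitive_utility(deltas, alpha=0.0))   # plain average gain
print(risk_sensitive_utility(deltas, alpha=5.0))   # strongly risk-averse
```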

Learning to rank for freshness and relevance

Freshness of results is important in modern web search. Failing to recognize the temporal aspect of a query can negatively affect the user experience and make the search engine appear stale…

On Application of Learning to Rank for E-Commerce Search

The practical challenges in applying learning to rank methods to E-Com search are discussed, including the challenges in feature representation, obtaining reliable relevance judgments, and optimally exploiting multiple user feedback signals such as click rates, add-to-cart ratios, order rates, and revenue.
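
One concrete pattern behind the last point, turning several feedback signals into a single graded training label, can be sketched as follows; the weights and grade thresholds are invented for illustration.

```python
# Hypothetical signal-to-grade mapping; weights and cutoffs are made up.
def feedback_grade(click_rate, add_to_cart_rate, order_rate,
                   weights=(0.2, 0.3, 0.5), cuts=(0.02, 0.05, 0.10)):
    s = (weights[0] * click_rate
         + weights[1] * add_to_cart_rate
         + weights[2] * order_rate)
    return sum(s >= c for c in cuts)   # grade in {0, 1, 2, 3}

print(feedback_grade(0.30, 0.08, 0.02))   # engaged item -> grade 2
print(feedback_grade(0.01, 0.00, 0.00))   # ignored item -> grade 0
```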