Corpus ID: 237592945

Why Don't You Click: Neural Correlates of Non-Click Behaviors in Web Search

Ziyi Ye, Xiaohui Xie, Yiqun Liu, Zhihong Wang, Xuancheng Li, Jiaji Li, Xuesong Chen, M. Zhang, Shaoping Ma
Web search heavily relies on click-through behavior as an essential feedback signal for performance improvement and evaluation. Traditionally, a click is treated as a positive implicit signal of relevance or usefulness, while a non-click (especially a non-click after examination) is regarded as a signal of irrelevance or uselessness. However, there are many cases in which users do not click on any search result yet still satisfy their information need with the content of the results… 


Web Search via an Efficient and Effective Brain-Machine Interface

This work builds an efficient and effective communication system between humans and search engines based on electroencephalogram (EEG) signals, called the Brain-Machine Search Interface (BMSI) system, which provides functions including query reformulation and search result interaction.

Constructing Click Models for Mobile Search

A novel Mobile Click Model (MCM) is proposed that models how users examine and click search results on mobile SERPs and can extract richer information, such as the click necessity of search results and the probability of user satisfaction, from mobile click logs.

Leaving so soon?: understanding and predicting web search abandonment rationales

It is shown that although satisfaction is a common motivator for abandonment, one in five abandonment instances is unrelated to satisfaction; accurate predictions help search providers estimate user satisfaction for queries without clicks, affording a more complete understanding of search engine performance.

When does Relevance Mean Usefulness and User Satisfaction in Web Search?

It is shown that external assessors are capable of annotating usefulness when provided with more search context information and it is suggested that a usefulness-based evaluation method can be defined to better reflect the quality of search systems perceived by the users.

Detecting Good Abandonment in Mobile Search

This paper proposes a solution to the problem of detecting good abandonment in mobile search using gesture interactions, such as reading times and touch actions, as signals for differentiating between good and bad abandonment, rather than relying on query and session signals alone.

Context-aware web search abandonment prediction

This work proposes more advanced methods for modeling and predicting abandonment rationales using contextual information from user search sessions. By analyzing search engine logs, it discovers dependencies between abandoned queries and user behaviors, and builds a sequential classifier using a structured learning framework designed to handle such signals.

Understanding and Predicting Usefulness Judgment in Web Search

This study systematically investigates the effects of a variety of content, context, and behavior factors on usefulness judgments and finds that while user behavior factors are most important in determining usefulness judgments, content and context factors also have significant effects on it.

Investigating Result Usefulness in Mobile Search

The study highlights the differences between desktop and mobile search and sheds light on developing a more user-centric evaluation method for mobile search, confirming the finding that usefulness feedback can better reflect user satisfaction than relevance annotations in mobile search.

Evaluating Retrieval Performance Using Clickthrough Data

A theoretical analysis shows that the method gives the same results as evaluation with traditional relevance judgments under mild assumptions, and an empirical analysis verifies that the assumptions are indeed justified and that the new method leads to conclusive results in a WWW retrieval study.

Learning to Account for Good Abandonment in Search Success Metrics

This work describes how a search success metric can be augmented to account for good abandonment sessions using a machine-learned metric that depends on the user's viewport information, and shows that taking good abandonment into consideration has a significant effect on the overall performance of the online metric.

Click the search button and be happy: evaluating direct and immediate information access

A nugget-based evaluation framework for DIIA is proposed, which takes nugget positions into account in order to evaluate the ability of a system to present important nuggets first and to minimise the amount of text the user has to read.