
Unbiased Learning to Rank: Theory and Practice

Half-day tutorial

Time: Friday, 26 October 2018, 03:00PM-04:30PM & 04:30PM-06:00PM
Room: Praga

Qingyao Ai

University of Massachusetts Amherst, USA

As a main speaker of this tutorial, Qingyao Ai is a fifth-year Ph.D. student advised by Prof. W. Bruce Croft in the Center for Intelligent Information Retrieval (CIIR), College of Information and Computer Sciences, University of Massachusetts Amherst. His research focuses on developing intelligent retrieval systems with machine learning techniques, and he actively works on applying deep learning techniques to IR problems including ad-hoc retrieval, product search/recommendation, and learning to rank. Before joining the CIIR, he obtained his bachelor's degree from the Department of Computer Science and Technology, Tsinghua University, and completed his undergraduate thesis project on click models in the THUIR lab, advised by Prof. Yiqun Liu.

Yiqun Liu

Tsinghua University, China

As a member of the advisory committee for this tutorial, Professor Yiqun Liu serves as co-Chair of the Department of Computer Science and Technology at Tsinghua University. His major research interests are Web search, user behavior analysis, and natural language processing. His work on modeling users' interaction with search engines received the Best Paper Honorable Mention Award at SIGIR 2015 [40] and the Best Student Paper Award at SIGIR 2017 [48]. He also serves as Program Committee Co-chair of SIGIR 2018, co-Editor-in-Chief of Foundations and Trends in Information Retrieval, Program Committee Co-chair of NTCIR-13 and NTCIR-14, and an editor of JASIST and the Information Retrieval Journal.

Jiaxin Mao

Tsinghua University, China

As a main speaker of this tutorial, Jiaxin Mao is a postdoctoral researcher in the Department of Computer Science and Technology at Tsinghua University, advised by Prof. Shaoping Ma and Prof. Yiqun Liu. His research focuses on user behavior analysis for search engines; he has expertise in utilizing user behavior signals to estimate users' preference and satisfaction in Web search, and in building click models to extract unbiased relevance feedback in different search contexts. He also serves as the SIGIR student liaison for the Asia region.

W. Bruce Croft

University of Massachusetts Amherst, USA

As a member of the advisory committee for this tutorial, Professor W. Bruce Croft is the director of the Center for Intelligent Information Retrieval (CIIR) and a Distinguished Professor in the College of Information and Computer Sciences, University of Massachusetts Amherst. He has made major contributions to most areas of information retrieval, including retrieval models, representation, Web search, query processing, cross-lingual retrieval, and search architectures. He has published more than 250 articles (h-index 100) and received many prestigious honors, including ACM Fellowship, the American Society for Information Science and Technology Research Award, the ACM Gerard Salton Award (a lifetime achievement award), the Tony Kent Strix Award, and the IEEE Computer Society Technical Achievement Award.



Abstract

Implicit feedback (e.g., user clicks) is an important source of data for modern search engines. While heavily biased, it is cheap to collect and particularly useful for user-centric retrieval applications such as search ranking. To develop an unbiased learning-to-rank system from biased feedback, previous studies have focused on constructing probabilistic graphical models (e.g., click models) based on user behavior hypotheses to extract unbiased relevance signals and train ranking systems with them. Recently, a novel counterfactual learning framework that estimates and adopts examination propensity for unbiased learning to rank has attracted much attention. Despite its popularity, there is no systematic comparison of the unbiased learning-to-rank frameworks based on counterfactual learning and graphical models. In this tutorial, we aim to provide an overview of the fundamental mechanisms of unbiased learning to rank. We will describe the theory behind existing frameworks and give detailed instructions on how to conduct unbiased learning to rank in practice.
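The counterfactual framework mentioned above corrects position bias by re-weighting clicked results with inverse examination propensities. A minimal sketch of the idea, assuming a position-based examination model; the function and variable names are illustrative and not taken from any specific framework:

```python
def unbiased_ranking_loss(losses, clicks, propensities):
    """Inverse-propensity-weighted (IPW) estimate of a ranking loss.

    losses:       per-document loss values under the current ranker
    clicks:       1 if the document was clicked in the logged ranking, else 0
    propensities: estimated P(examined | displayed rank) for each document

    Up-weighting each clicked document by 1/propensity makes the estimate
    unbiased under the examination hypothesis (click = examined AND relevant).
    """
    return sum(l * c / p for l, c, p in zip(losses, clicks, propensities))
```

Intuitively, a click at a deep rank with low examination propensity counts more, compensating for the fact that deep results are rarely examined at all.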



Detailed Outline

Section: Introduction and Motivation
- Implicit feedback in IR
- Bias in user feedback
- The need for Unbiased Learning to Rank (ULTR)

Section: ULTR Based on Unbiased Relevance Signals
- Examination hypothesis
- Click models
- Parameter estimation and EM algorithms

Section: ULTR Based on Examination Propensity
- Counterfactual learning
- Propensity estimation for online systems

Section: Summary
- Click models vs. counterfactual learning
- Future directions
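The click-model topics in the outline (examination hypothesis, EM-based parameter estimation) can be illustrated with the position-based model (PBM), in which the probability of a click factors into an examination term per rank and a relevance term per document. The following is a minimal EM sketch under that assumption; `em_pbm` and its data layout are illustrative, not taken from the tutorial materials, and a real implementation would add smoothing and handle sparse documents:

```python
def em_pbm(sessions, n_ranks, n_iters=50):
    """EM estimation for the position-based click model (PBM), where
    P(click on doc d at rank r) = exam[r] * rel[d].

    sessions: list of search sessions; each session is a list of
              (doc_id, clicked) pairs ordered by displayed rank.
    Returns (exam, rel): examination probability per rank and
    relevance probability per document.
    """
    exam = [0.5] * n_ranks
    rel = {d: 0.5 for s in sessions for d, _ in s}
    for _ in range(n_iters):
        # E-step: posterior probabilities of examination / relevance.
        e_num = [0.0] * n_ranks
        e_den = [0.0] * n_ranks
        r_num = {d: 0.0 for d in rel}
        r_den = {d: 0.0 for d in rel}
        for s in sessions:
            for r, (d, c) in enumerate(s):
                if c:  # a click implies both examined and relevant
                    p_e, p_r = 1.0, 1.0
                else:  # condition on the observed non-click
                    denom = 1.0 - exam[r] * rel[d]
                    p_e = exam[r] * (1.0 - rel[d]) / denom
                    p_r = rel[d] * (1.0 - exam[r]) / denom
                e_num[r] += p_e
                e_den[r] += 1.0
                r_num[d] += p_r
                r_den[d] += 1.0
        # M-step: replace parameters with posterior averages.
        exam = [n / d if d else e for n, d, e in zip(e_num, e_den, exam)]
        rel = {d: r_num[d] / r_den[d] for d in rel}
    return exam, rel
```

Note that when a document is only ever shown unclicked at one rank, the model cannot separate "unexamined" from "irrelevant"; this identifiability issue is one reason propensity estimation and the choice of behavior model matter in practice.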


Link to External Resources

Tutorial resources