Communications and Signal Processing Seminar

Crowd-Learning: Improving the Quality of Crowdsourcing Using Sequential Learning

Mingyan Liu, Professor, University of Michigan, EECS Dept

A problem facing many crowdsourcing systems is the unknown and uncontrolled quality of their data inputs. In some cases this quality is unknown but has an objective measure. Consider, for instance, labeling massive datasets using Amazon Mechanical Turk (AMT) workers, where each labeler has an unknown annotation quality. In other cases the quality is not only unknown but subjective. Consider, for instance, using online recommendation systems to make decisions about movies, restaurants, shopping, news articles, and so on. In both cases a user (the one handing out the labeling tasks, or the one making a decision based on others' recommendations) wants to know how to select the best labelers for the task, or whose opinions and recommendations to value in making its own choice. We formulate this as a sequential decision and learning problem in which the user, through feedback, learns over time to gravitate toward a select subset, a "best crowd" of labelers or recommenders who provide the most value. This type of online learning falls, in some sense, under the family of multi-armed bandit (MAB) problems, but with a distinct feature not commonly seen: because the labelers' or recommenders' quality is unknown, their input (the reward in the MAB context) cannot be directly verified. We address this by cross-validating each input against the crowd and against the user itself. Our formulation allows us to develop algorithms that work in an online (thus causal) fashion but that can also be used in an offline (non-causal) setting. We will show that they can outperform existing offline solutions, such as matrix factorization-based methods.
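To make the mechanism concrete, the following Python sketch illustrates the general idea described in the abstract; it is not the speaker's actual algorithm. It assumes a UCB-style bandit rule for selecting a crowd of labelers and, since the true label is never observed, scores each selected labeler by agreement with the majority vote of the selected crowd. All names and parameters (crowd size, accuracy range, the exploration bonus) are illustrative assumptions.

    # Sketch: bandit-style labeler selection with crowd cross-validation.
    # Not the speaker's algorithm; parameters and structure are assumptions.
    import math
    import random
    from collections import Counter

    def simulate(num_labelers=10, num_tasks=2000, crowd_size=5, seed=0):
        rng = random.Random(seed)
        # Unknown per-labeler accuracy on binary labeling tasks.
        accuracy = [rng.uniform(0.55, 0.95) for _ in range(num_labelers)]

        counts = [0] * num_labelers    # times each labeler was selected
        agree = [0.0] * num_labelers   # cumulative agreement with crowd vote

        for t in range(1, num_tasks + 1):
            # UCB index: empirical agreement rate plus an exploration bonus.
            def ucb(i):
                if counts[i] == 0:
                    return float("inf")
                return agree[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])

            crowd = sorted(range(num_labelers), key=ucb, reverse=True)[:crowd_size]

            truth = rng.randrange(2)
            answers = {i: truth if rng.random() < accuracy[i] else 1 - truth
                       for i in crowd}

            # Cross-validate each labeler against the crowd's majority vote,
            # since the learner never observes the true label directly.
            vote = Counter(answers.values()).most_common(1)[0][0]
            for i in crowd:
                counts[i] += 1
                agree[i] += (answers[i] == vote)

        best = sorted(range(num_labelers), key=lambda i: accuracy[i], reverse=True)
        chosen = sorted(range(num_labelers), key=lambda i: counts[i], reverse=True)
        print("truly best labelers:", sorted(best[:crowd_size]))
        print("most-selected      :", sorted(chosen[:crowd_size]))

    if __name__ == "__main__":
        simulate()

Under these assumptions the selection rule gradually concentrates on labelers who agree most with the crowd consensus, which is the sense in which the user "gravitates toward a best crowd" without ever verifying individual inputs directly.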

Sponsored by

ECE

Faculty Host

Dave Neuhoff