• Corpus ID: 238259658

Information Elicitation Meets Clustering

@article{Kong2021InformationEM,
  title={Information Elicitation Meets Clustering},
  author={Yuqing Kong},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.00952}
}
  • Yuqing Kong
  • Published 3 October 2021
  • Computer Science
  • ArXiv
In the setting where we want to aggregate people’s subjective evaluations, the plurality vote may be meaningless when a large number of low-effort people always report “good” regardless of the true quality. The “surprisingly popular” method, which picks the answer that is most surprising compared to the prior, handles this issue to some extent. However, it is still not fully robust to people’s strategies. Here, in the setting where a large number of people are asked to answer a small number of multiple-choice questions… 
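The “surprisingly popular” rule mentioned in the abstract can be summarized in a few lines. Below is a minimal sketch, not code from the paper; the function name, data layout, and toy numbers are illustrative. The idea: pick the option whose actual vote share most exceeds the crowd’s average predicted vote share.

```python
from collections import Counter

def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual vote share most exceeds the
    crowd's predicted vote share (the 'surprisingly popular' rule).

    votes       : list of chosen options, one per respondent
    predictions : list of dicts, each mapping option -> that
                  respondent's predicted fraction of votes
    """
    n = len(votes)
    actual = {opt: c / n for opt, c in Counter(votes).items()}
    options = set(actual)
    # Average the respondents' predicted vote shares per option.
    predicted = {
        opt: sum(p.get(opt, 0.0) for p in predictions) / len(predictions)
        for opt in options
    }
    # The SP answer maximizes (actual share - predicted share).
    return max(options, key=lambda opt: actual[opt] - predicted[opt])

# Toy example: most people vote "good", but "bad" is more popular
# than the crowd predicted, so SP selects "bad".
votes = ["good", "good", "good", "bad", "bad"]
predictions = [
    {"good": 0.9, "bad": 0.1},
    {"good": 0.8, "bad": 0.2},
    {"good": 0.9, "bad": 0.1},
    {"good": 0.7, "bad": 0.3},
    {"good": 0.6, "bad": 0.4},
]
print(surprisingly_popular(votes, predictions))  # -> "bad"
```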


References

SHOWING 1-10 OF 23 REFERENCES
Identifying Expertise to Extract the Wisdom of Crowds
TLDR
A new measure of contribution is proposed to assess each judge's performance relative to the group, and positive contributors are used to build a weighting model for aggregating forecasts; it is shown that the model derives its power from identifying experts who consistently outperform the crowd.
Crowdsourced judgement elicitation with endogenous proficiency
TLDR
The main idea behind the mechanism is to use the presence of multiple tasks and ratings to estimate a reporting statistic that identifies and penalizes low-effort agreement: the mechanism rewards an agent for agreeing with a 'reference' report on the same task, but penalizes blind agreement by subtracting out this statistic.
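A hedged sketch of this reward-minus-blind-agreement idea, assuming each agent labels several tasks; the exact statistic and scaling used in the paper differ, and the names below are illustrative.

```python
def multitask_payment(reports_i, reports_j, shared_task):
    """Sketch of a multi-task peer-prediction style payment: reward
    agreement with a peer on a shared task, and subtract an estimate
    of how often the two agents would agree 'by chance', computed from
    reports on distinct, non-shared tasks.

    reports_i, reports_j : dicts mapping task id -> report
    shared_task          : a task both agents answered
    """
    # Bonus: did the two agents agree on the shared task?
    bonus = float(reports_i[shared_task] == reports_j[shared_task])

    # Penalty: pair up reports from *different* tasks to estimate the
    # agreement rate that blind, task-independent reporting would get.
    others_i = [t for t in reports_i if t != shared_task]
    others_j = [t for t in reports_j if t != shared_task]
    pairs = [(a, b) for a in others_i for b in others_j if a != b]
    penalty = sum(
        float(reports_i[a] == reports_j[b]) for a, b in pairs
    ) / len(pairs)  # assumes each agent answered several tasks

    return bonus - penalty
```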
A solution to the single-question crowd wisdom problem
TLDR
This work proposes the following alternative to a democratic vote: select the answer that is more popular than people predict, and shows that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions.
Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks
TLDR
DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, even a small constant number of tasks, and it can be transformed into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants.
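A hedged sketch of a DMI-style pairwise payment, assuming answers lie in a fixed set of C choices and each half of the shared tasks contains at least C tasks (otherwise the determinants are trivially zero); the names are illustrative, not the paper's notation.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, num_choices):
    """Split the shared tasks into two halves, count the joint answers
    of the two agents on each half as a C x C matrix, and pay the
    product of the two determinants (roughly, an unbiased estimator of
    a squared determinant mutual information up to normalization).

    reports_i, reports_j : equal-length lists of answers in {0..C-1}
                           on the same ordered set of tasks
    """
    assert len(reports_i) == len(reports_j)
    half = len(reports_i) // 2

    def joint_counts(lo, hi):
        m = np.zeros((num_choices, num_choices))
        for a, b in zip(reports_i[lo:hi], reports_j[lo:hi]):
            m[a, b] += 1
        return m

    m1 = joint_counts(0, half)
    m2 = joint_counts(half, 2 * half)
    return np.linalg.det(m1) * np.linalg.det(m2)
```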
An Information Theoretic Framework For Designing Information Elicitation Mechanisms That Reward Truth-telling
TLDR
The Mutual Information Paradigm overcomes the two main challenges in information elicitation without verification: how to incentivize high-quality reports while preventing agents from colluding to report random or identical responses, and how to motivate agents who believe they are in the minority to report truthfully.
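As one concrete instance of the paradigm, an agent can be paid an empirical mutual information between her reports and a peer's reports across tasks. The sketch below uses plain Shannon mutual information on the empirical joint distribution; the framework also allows other information measures, and the names here are illustrative.

```python
import numpy as np

def empirical_mutual_information(reports_i, reports_j, num_choices):
    """Estimate the Shannon mutual information between two agents'
    reports from their empirical joint distribution over tasks."""
    joint = np.zeros((num_choices, num_choices))
    for a, b in zip(reports_i, reports_j):
        joint[a, b] += 1
    joint /= joint.sum()
    pi, pj = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in range(num_choices):
        for b in range(num_choices):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log(joint[a, b] / (pi[a] * pj[b]))
    return mi
```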
Surrogate Scoring Rules
TLDR
It is shown that, with a single bit of information about the prior distribution of the random variables, surrogate scoring rules (SSR) in a multi-task setting recover strictly proper scoring rules (SPSR) in expectation, as if they had access to the ground truth.
Informed Truthfulness in Multi-Task Peer Prediction (Working Paper)
TLDR
This paper introduces the multi-task 01 mechanism, which extends the OA mechanism to multiple signals and provides informed truthfulness: no strategy provides more payoff in equilibrium than truthful reporting, and truthful reporting is strictly better than any uninformed strategy.
Eliciting Informative Feedback: The Peer-Prediction Method
TLDR
A scoring system is devised under which honest reporting of feedback is a Nash equilibrium; the system can be scaled to induce appropriate effort by raters and extended to handle sequential interaction and continuous signals.
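A hedged sketch of the basic structure of peer prediction, assuming a known common prior with full support and a log scoring rule; the function and variable names are illustrative.

```python
import math

def peer_prediction_payment(report_i, report_j, joint_prior):
    """Score agent i by applying a proper scoring rule (here the log
    score) to the posterior her report induces over a peer's report,
    evaluated at the peer's actual report.

    joint_prior : dict mapping (signal_i, signal_j) -> probability
    """
    # Posterior over the peer's report given agent i's report.
    norm = sum(p for (a, b), p in joint_prior.items() if a == report_i)
    posterior_j = joint_prior.get((report_i, report_j), 0.0) / norm
    # Log scoring rule: reward the log-probability assigned to what the
    # peer actually reported (tiny floor guards against log(0)).
    return math.log(max(posterior_j, 1e-12))
```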
A Bayesian Truth Serum for Subjective Data
TLDR
A scoring method for eliciting truthful subjective data in situations where objective truth is unknowable, which assigns high scores not to the most common answers but to the answers that are more common than collectively predicted, with predictions drawn from the same population.
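A hedged sketch of a Bayesian-Truth-Serum style score, combining an information score (rewarding answers more common than the geometric mean of the crowd's predictions) with a prediction score (rewarding accurate predictions of the empirical answer shares). The small constants guard against log(0), and the names are illustrative.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """answers     : list of chosen options, one per respondent
       predictions : list of dicts, option -> predicted fraction"""
    n = len(answers)
    options = sorted(set(answers) | {o for p in predictions for o in p})
    xbar = {o: max(answers.count(o) / n, 1e-9) for o in options}
    # Log of the geometric mean of predicted frequencies per option.
    log_pbar = {
        o: sum(math.log(max(p.get(o, 0.0), 1e-9)) for p in predictions) / n
        for o in options
    }
    scores = []
    for a, pred in zip(answers, predictions):
        info = math.log(xbar[a]) - log_pbar[a]  # "surprisingly common"
        prediction = sum(
            xbar[o] * (math.log(max(pred.get(o, 0.0), 1e-9)) - math.log(xbar[o]))
            for o in options
        )
        scores.append(info + alpha * prediction)
    return scores
```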
L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
TLDR
A novel information-theoretic loss function, L_DMI, is proposed, which is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural networks straightforwardly without any auxiliary information.
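A hedged sketch of a determinant-based mutual-information loss on a single batch, using NumPy in place of a deep-learning framework; the function name and argument layout are illustrative.

```python
import numpy as np

def dmi_loss(probs, noisy_labels, num_classes):
    """Build the empirical joint matrix U between the classifier's
    output distribution and the observed (possibly noisy) labels over
    a batch, and return -log|det(U)|.  For instance-independent label
    noise with an invertible transition matrix, the determinant factors
    through that matrix, so the noisy loss differs from the clean-label
    loss only by a constant.

    probs        : (N, C) array of softmax outputs
    noisy_labels : (N,) integer array of observed labels
    """
    one_hot = np.eye(num_classes)[noisy_labels]       # (N, C)
    U = probs.T @ one_hot / len(noisy_labels)         # (C, C) joint matrix
    return -np.log(np.abs(np.linalg.det(U)) + 1e-12)
```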