Manipulation among the Arbiters of Collective Intelligence

@article{Das2016ManipulationAT,
  title={Manipulation among the Arbiters of Collective Intelligence},
  author={Sanmay Das and Allen Lavoie and M. Magdon-Ismail},
  journal={ACM Transactions on the Web (TWEB)},
  year={2016},
  volume={10},
  pages={1--25}
}
Our reliance on networked, collectively built information is a vulnerability when the quality or reliability of this information is poor. Wikipedia, one such collectively built information source, is often our first stop for information on all kinds of topics; its quality has stood up to many tests, and it prides itself on having a “neutral point of view.” Enforcement of neutrality is in the hands of comparatively few, powerful administrators. In this article, we document that a surprisingly…
Automated inference of point of view from user interactions in collective intelligence venues
A combined model of topics and points-of-view on the entire history of English Wikipedia is built, and it is shown how it can be used to find potentially biased articles and visualize user interactions at a high level.
Controversy Detection in Wikipedia Using Collective Classification
This work proposes a stacked model which exploits the dependencies among related pages of controversial topics to improve classification of controversial web pages compared to a model that examines each page in isolation, demonstrating that controversial topics exhibit homophily.
Detecting pages to protect in Wikipedia across multiple languages
The problem of deciding whether a page should be protected in a collaborative environment such as Wikipedia is cast as a binary classification task, and a novel set of features is proposed to decide which pages to protect based on users' page-revision behavior and page categories.
Detecting Biased Statements in Wikipedia
A supervised classification approach is proposed, which relies on an automatically created lexicon of bias words and other syntactic and semantic characteristics of biased statements, and is shown to detect biased statements with an accuracy of 74%.
Social Motivation and Point of View (Doctoral Consortium)
Social media facilitates interaction and information dissemination among an unprecedented number of participants. Why do users contribute, and why do they contribute to a specific venue? Does the…
The Congressional Classification Challenge: Domain Specificity and Partisan Intensity
Surprisingly, it is found that the cross-domain learning performance, benchmarking the ability to generalize from one of these datasets to another, is in general poor, even though the algorithms perform very well in within-dataset cross-validation tests.
Mind Your POV
It is shown that after an article is tagged for NPOV, there is a significant decrease in biased language in the article, as measured by several lexicons, which suggests that NPOV tagging and discussion does improve content, but has less success enculturating editors to the site's linguistic norms.
Concealing Communities Within the Crowd
This study investigates organizational identity and member identification in a hidden organization operating within a crowd-based collective. Specifically, it draws from Scott’s hidden organization…
Telling Apart Tweets Associated with Controversial versus Non-Controversial Topics
It is shown that features specific to Twitter or social media, in general, are more prevalent in tweets on controversial topics than in non-controversial ones, and will inform future investigations into the relationship between language use on social media and the perceived controversiality of topics.
Probabilistic Approaches to Controversy Detection
A probabilistic framework to detect controversy on the web and a language modeling approach to this problem are introduced, based on insights from social science research.

References

Showing 1–10 of 47 references.
Manipulation among the arbiters of collective intelligence: how wikipedia administrators mold public opinion
Neither prior history nor vote counts during an administrator's election can identify those editors most likely to change their behavior in this suspicious manner, and an alternative measure, which gives more weight to influential voters, can successfully reject these suspicious candidates.
“Googlearchy”: How a Few Heavily-Linked Sites Dominate Politics on the Web
Claims about the Web and politics have commonly confounded two different things: retrievability and visibility, the large universe of pages that could theoretically be accessed versus those that…
Who moderates the moderators?: crowdsourcing abuse detection in user-generated content
This paper introduces a framework to address the problem of moderating online content using crowdsourced ratings, and presents efficient algorithms to accurately detect abuse that only require knowledge about the identity of a single 'good' agent, who rates contributions accurately more than half the time.
Mopping up: modeling wikipedia promotion decisions
This paper presents a model of the behavior of candidates for promotion to administrator status in Wikipedia. It uses a policy capture framework to highlight similarities and differences in the…
Collective wisdom: information growth in wikis and blogs
This model is able to reproduce many features of the edit dynamics observed on Wikipedia and on blogs collected from LiveJournal; in particular, it captures the observed rise in the edit rate, followed by 1/t decay.
Finding social roles in Wikipedia
The number of new editors playing helpful roles in a single month's cohort nearly equals the number found in the dedicated sample, suggesting that informal socialization has the potential to provide sufficient role-related labor despite growth and change in Wikipedia.
Mining latent relations in peer-production environments: a case study with Wikipedia article similarity and controversy
A new similarity measure, called expert-based similarity, is proposed to discover semantic relations among Wikipedia articles from the co-editorship perspective and to discern the influence and impact of several factors hypothesized to generate controversies in Wikipedia articles.
Assessing the value of cooperation in Wikipedia
It is shown that the accretion of edits to an article is described by a simple stochastic mechanism, resulting in a heavy tail of highly visible articles with a large number of edits, which validates Wikipedia as a successful collaborative effort.
Automatic Vandalism Detection in Wikipedia: Towards a Machine Learning Approach
Since the end of 2006, several autonomous bots are, or have been, running on Wikipedia to keep the encyclopedia free from vandalism and other damaging edits. These expert systems, however, are far…
Manipulation among the Arbiters of Collective Intelligence
The authors' reliance on networked, collectively built information is a vulnerability when the quality or reliability of this information is poor, and Wikipedia is a good example of this.