Transparency of CHI Research Artifacts: Results of a Self-Reported Survey

@inproceedings{Wacharamanotham2019TransparencyOC,
  title={Transparency of CHI Research Artifacts: Results of a Self-Reported Survey},
  author={Chat Wacharamanotham and Lukas Eisenring and Steve Haroz and Florian Echtler},
  year={2019}
}
Several fields of science are experiencing a “replication crisis” that has negatively impacted their credibility. Assessing the validity of a contribution via the replicability of its experimental evidence and the reproducibility of its analyses requires access to the relevant study materials, data, and code. Failing to share them limits the ability to scrutinize or build upon the research, ultimately hindering scientific progress. Understanding how the diverse research artifacts in HCI impact sharing can…

Citations

Transparency in Measurement Reporting
A prescriptive model of the measurement selection process is proposed, which helps researchers systematically define their constructs, specify operationalizations, and justify why these measures were chosen; it should contribute to more transparency in measurement reporting.
Are You Open? A Content Analysis of Transparency and Openness Guidelines in HCI Journals
The journals in the sample currently do not set or specify clear openness and transparency standards; potential reasons, the aptness of natural-science-based guidelines for HCI, and next steps for the HCI community in furthering openness and transparency are discussed.
Statistical Significance Testing at CHI PLAY: Challenges and Opportunities for More Transparency
Over half of the surveyed papers employ NHST without specific statistical hypotheses or research questions, which may risk the proliferation of false-positive findings; a template for more transparent research and reporting practices is provided.
A Meta-Analysis of Effect Sizes of CHI Typing Experiments
This work presents reference values for small, medium, and large effect sizes in typing experiments, based on a meta-analysis of well-cited papers from the CHI conference; these can be used to conduct a priori power analyses or to assess the magnitude of an observed effect (a minimal power-analysis sketch follows this list).
Prototyping Usable Privacy and Security Systems: Insights from Experts
The challenges faced by researchers in this area are identified, such as the high costs of conducting field studies when evaluating hardware prototypes, the scarcity of open-source material, and the resistance to novel prototypes.
Interactive tools for reproducible science
An empirical understanding of reproducible research practices and the role of supportive tools is built through research in high-energy physics (HEP) and across a variety of scientific fields, and the term “secondary usage forms” of research data management (RDM) tools is introduced.
Studying Reddit: A Systematic Overview of Disciplines, Approaches, Methods, and Ethics
This article offers a systematic analysis of 727 manuscripts that used Reddit as a data source, published between 2010 and 2020. Our analysis reveals the increasing growth in the use of Reddit as a data source…
Interactive Tools for Reproducible Science - Understanding, Supporting, and Motivating Reproducible Science Practices
This thesis paves new ways of interacting with RDM tools that support and motivate reproducible science, and advocates the unique role of HCI in supporting, motivating, and transforming reproducible research practices through the design of tools that enable effective RDM.
The Computational Thematic Analysis Toolkit
The toolkit demonstrates how common analysis tasks like data collection, cleaning and filtering, modelling and sampling, and coding can be implemented within a single visual interface, and how that interface can encourage researchers to manage ethical and transparency considerations throughout their research process.
No Humans Here
Many research communities routinely conduct activities that fall outside the bounds of traditional human-subjects research, yet still frequently rely on the determinations of institutional review boards…
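To make the a priori power analysis mentioned above concrete, here is a minimal sketch in Python using statsmodels, assuming a simple two-group between-subjects typing experiment; the effect size d = 0.5 is a placeholder, not a value taken from the meta-analysis.

# A priori power analysis for a hypothetical two-group typing experiment.
# The effect size below is a placeholder; substitute a reference value
# from the meta-analysis that matches your experimental design.
import math

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,         # hypothesized Cohen's d (placeholder)
    alpha=0.05,              # significance level
    power=0.80,              # desired statistical power
    alternative="two-sided",
)
print(f"Participants needed per group: {math.ceil(n_per_group)}")

Plugging in a larger reference effect size shrinks the required sample, which is why field-specific benchmarks such as those from the meta-analysis are useful.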

References

Showing 1-10 of 67 references
Open Practices in Visualization Research: Opinion Paper
Steve Haroz. 2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), 2018.
The current state of openness in visualization research is described, and suggestions for authors, reviewers, and editors to improve open practices in the field are provided.
What Drives Academic Data Sharing?
It is concluded that research data cannot be regarded as knowledge commons; research policies that better incentivise data sharing are needed to improve the quality of research results and foster scientific progress.
Data Sharing by Scientists: Practices and Perceptions
Large-scale programs, such as the NSF-sponsored DataNet, will both bring attention and resources to the issue and make it easier for scientists to apply sound data-management principles.
Evaluating HCI Research beyond Usability
This SIG invites researchers from all areas of HCI who are interested in debating issues in the process of validating research artefacts and in discussing solutions for this difficult topic.
Is once enough?: on the extent and content of replications in human-computer interaction
The implications of the results for HCI are discussed, including how the reporting of studies could be improved and how conferences and journals might change author instructions to encourage more replications.
Reproducibility, correctness, and buildability: The three principles for ethical public dissemination of computer science and engineering research
A system of three principles of public dissemination (reproducibility, correctness, and buildability) is proposed, and it is argued that these principles are not being sufficiently met by current publications and proposals in computer science and engineering.
Estimating the reproducibility of psychological science
A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency
When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned, and there was no change over time in the low rates of data sharing among comparison journals.
No Replication, No Trust? How Low Replicability Influences Trust in Psychology
In the current psychological debate, low replicability of psychological findings is a central topic. While the discussion about the replication crisis has a huge impact on psychological research, we…
Transparency and Openness Promotion Guidelines for HCI
This special interest group addresses the status quo of HCI research with regard to practices of transparency and openness, and seeks to identify current practices that are more progressive and worth communicating to other disciplines, while evaluating whether practices from other disciplines are likely to apply to HCI research constructively.